Contrast and Classify: Alternate Training for Robust VQA
Yash Kant1
Abhinav Moudgil1
Dhruv Batra1,2
Devi Parikh1,2
Harsh Agrawal1
1Georgia Tech
2Facebook AI Research
Preprint, Under Review
[Paper]
[Code]
[Slides]
Overview of ConCAT. (a) We augment the VQA dataset by paraphrasing every question via back-translation. (b) We carefully curate a contrastive batch by sampling different types of positives and negatives to learn joint V+L representations by minimizing a scaled supervised contrastive loss. (c) In alternate iterations, we minimize the standard cross-entropy loss to train the answer classifier.

Abstract

Recent Visual Question Answering (VQA) models have shown impressive performance on the VQA benchmark but are sensitive to small linguistic variations in input questions. Existing approaches address this by augmenting the dataset with question paraphrases from visual question generation models or adversarial perturbations. These approaches use the combined data to learn an answer classifier by minimizing the standard cross-entropy loss. To more effectively leverage the augmented data, we build on the recent success in contrastive learning. We propose a novel training paradigm (ConCAT) that alternately optimizes cross-entropy and contrastive losses. The contrastive loss encourages representations to be robust to linguistic variations in questions while the cross-entropy loss preserves the discriminative power of the representations for answer classification. We find that alternately optimizing both losses is key to effective training. VQA models trained with ConCAT achieve higher consensus scores on the VQA-Rephrasings dataset as well as higher VQA accuracy on the VQA 2.0 dataset compared to existing approaches across a variety of data augmentation strategies.
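The core idea above is to alternate between two objectives: a supervised contrastive loss that pulls together representations of a question and its paraphrases (and other questions sharing the same answer), and a cross-entropy loss for answer classification. The sketch below is a minimal, self-contained illustration of that alternation; the function names, the fixed alternation schedule, and the unit temperature scaling are hypothetical simplifications, not the paper's exact formulation (ConCAT uses a scaled version of the loss and its own batch-curation strategy).

```python
import math

def supcon_loss(features, labels, temperature=0.5):
    """Supervised contrastive loss over a batch (simplified sketch).

    features: list of L2-normalized embedding vectors (lists of floats).
    labels:   list of answer ids; examples with the same id are positives
              (e.g. a question and its back-translated paraphrases).
    """
    n = len(features)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    loss, anchors = 0.0, 0
    for i in range(n):
        # Similarities of anchor i to every other example in the batch.
        sims = [dot(features[i], features[j]) / temperature
                for j in range(n) if j != i]
        log_denom = math.log(sum(math.exp(s) for s in sims))
        positives = [j for j in range(n)
                     if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchors with no positives contribute nothing
        # Average log-probability the anchor assigns to its positives.
        mean_log_prob = sum(
            dot(features[i], features[j]) / temperature - log_denom
            for j in positives) / len(positives)
        loss += -mean_log_prob
        anchors += 1
    return loss / anchors

def iteration_type(step, contrastive_every=2):
    """Alternation schedule: which loss to optimize at this step.

    A fixed every-other-step schedule is assumed here purely for
    illustration; the actual schedule is a training design choice.
    """
    return "contrastive" if step % contrastive_every == 0 else "cross_entropy"
```

As a sanity check, embeddings that cluster by answer label yield a lower contrastive loss than embeddings where positives are scattered, which is exactly the pressure that makes representations robust to paraphrases while the cross-entropy iterations preserve answer discriminability.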


Paper and Bibtex

Citation
 
Kant, Y., Moudgil, A., Batra, D., Parikh, D., & Agrawal, H. (2020). Contrast and Classify: Alternate Training for Robust VQA. arXiv preprint arXiv:2010.06087.

[Bibtex]
@misc{kant2020contrast,
  title={Contrast and Classify: Alternate Training for Robust VQA},
  author={Yash Kant and Abhinav Moudgil and Dhruv Batra
          and Devi Parikh and Harsh Agrawal},
  year={2020},
  eprint={2010.06087},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}


Acknowledgements

We thank Abhishek Das, Prithvijit Chattopadhyay, and Arjun Majumdar for their feedback. The Georgia Tech effort was supported in part by NSF, AFRL, DARPA, ONR YIPs, ARO PECASE, and Amazon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government or any sponsor.

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.