Papers Read on AI

April 5, 2022  

BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation

Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks.
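BLIP's answer to noisy web supervision is to bootstrap the training data: a captioner proposes synthetic captions for web images, and a filter discards image-text pairs that do not match. The toy sketch below illustrates that loop only in spirit; the `match_score` function, `captioner`, and the 0.5 threshold are illustrative stand-ins, not the paper's learned models.

```python
# Toy sketch of BLIP's caption-bootstrapping idea: a "captioner" proposes
# synthetic captions and a "filter" keeps only image-text pairs whose
# match score clears a threshold. All components here are dummy stand-ins.

def filter_pairs(pairs, match_score, threshold=0.5):
    """Keep (image, caption) pairs the filter scores as matching."""
    return [(img, cap) for img, cap in pairs if match_score(img, cap) >= threshold]

def bootstrap_dataset(web_pairs, captioner, match_score, threshold=0.5):
    # 1. Keep only the web captions the filter judges as matching.
    kept = filter_pairs(web_pairs, match_score, threshold)
    # 2. Generate synthetic captions for every image and filter those too.
    synthetic = [(img, captioner(img)) for img, _ in web_pairs]
    kept += filter_pairs(synthetic, match_score, threshold)
    return kept

# Dummy stand-ins: an "image" is just a set of its true content words.
captioner = lambda img: " ".join(sorted(img["objects"]))
match_score = lambda img, cap: sum(w in cap for w in img["objects"]) / len(img["objects"])

web = [
    ({"objects": {"dog", "park"}}, "a dog in the park"),   # clean alt-text
    ({"objects": {"cat", "sofa"}}, "buy cheap furniture"), # noisy alt-text
]
clean = bootstrap_dataset(web, captioner, match_score)
```

Here the noisy caption "buy cheap furniture" mentions none of the image's objects, so the filter drops it, while both synthetic captions score highly and are kept alongside the one clean web pair.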

2022: Junnan Li, Dongxu Li, Caiming Xiong, S. Hoi

Ranked #1 on Image Captioning on nocaps-val-out-domain

https://arxiv.org/pdf/2201.12086v2.pdf