Learning Open-vocabulary Semantic Segmentation Models From Natural Language Supervision

1Fudan University, 2Shanghai AI Laboratory, 3Shanghai Jiao Tong University

Abstract

In this paper, we consider the problem of open-vocabulary semantic segmentation (OVS), which aims to segment objects of arbitrary classes instead of pre-defined, closed-set categories. The main contributions are as follows: First, we propose a transformer-based model for OVS, termed OVSegmentor, which exploits only web-crawled image-text pairs for pre-training, without using any mask annotations. OVSegmentor assembles the image pixels into a set of learnable group tokens via a slot-attention-based binding module, and aligns the group tokens to the corresponding caption embedding. Second, we propose two proxy tasks for training, namely masked entity completion and cross-image mask consistency. The former trains the model to infer all masked entities in the caption given the group tokens, which enables it to learn fine-grained alignment between visual groups and text entities. The latter enforces consistent mask predictions across images that contain shared entities, which encourages the model to learn visual invariance. Third, we construct the CC4M dataset for pre-training by filtering CC12M for frequently appearing entities, which significantly improves training efficiency. Fourth, we perform zero-shot transfer on three benchmark datasets: PASCAL VOC 2012, PASCAL Context, and COCO Object. Our model achieves superior segmentation results over the state-of-the-art method while using only 3% of the data (4M vs. 134M images) for pre-training. Code and pre-trained models will be released for future research.
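To make the masked entity completion objective concrete, below is a minimal PyTorch sketch in which masked caption tokens query the group tokens to recover the missing entities. The module and function names (EntityCompletionHead, entity_completion_loss) and the single cross-attention decoder are illustrative assumptions, not the released implementation.

import torch
import torch.nn as nn

class EntityCompletionHead(nn.Module):
    """Predicts masked entity tokens conditioned on the visual group tokens.
    Hypothetical decoder: one cross-attention layer plus a vocabulary classifier."""
    def __init__(self, dim, vocab_size, num_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(dim, vocab_size)

    def forward(self, masked_text_emb, group_tokens):
        # Masked caption tokens (queries) attend to the group tokens (keys/values),
        # so recovering an entity requires binding it to the visual group that depicts it.
        fused, _ = self.cross_attn(masked_text_emb, group_tokens, group_tokens)
        return self.classifier(fused)                        # (B, L, vocab_size)

def entity_completion_loss(logits, target_ids, mask_positions):
    """Cross-entropy computed only at the positions where entities were masked.
    mask_positions: float tensor of shape (B, L), 1 at masked entity tokens."""
    ce = nn.functional.cross_entropy(
        logits.transpose(1, 2), target_ids, reduction="none")   # (B, L)
    return (ce * mask_positions).sum() / mask_positions.sum().clamp(min=1)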

Model Architecture

Overall framework of our model. The model takes an image-text pair as input and outputs the image features together with a set of learnable group tokens, which are aligned to the caption embedding via L_contrast. The visual encoder consists of Transformer encoder layers, with a slot-attention-based binding module in between that clusters the image tokens into groups. To enrich group semantics, (1) L_entity trains the model to infer all masked entities in the caption given the group tokens, and (2) L_mask enforces consistent mask predictions for images that share the same entity (right figure). During inference, only the visual and text encoders and the learned group tokens are used to generate the segmentation mask.
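For illustration, here is a minimal PyTorch sketch of the slot-attention-style binding step and an InfoNCE-style L_contrast, assuming a symmetric image-text contrastive formulation. Names such as GroupingBlock, num_groups, and temperature are placeholders rather than the released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupingBlock(nn.Module):
    """Assigns image tokens to learnable group tokens via slot attention:
    the softmax is taken over the groups (slots compete for each pixel),
    then a weighted mean of the image tokens updates each group."""
    def __init__(self, dim, num_groups=8):
        super().__init__()
        self.group_tokens = nn.Parameter(torch.randn(1, num_groups, dim) * 0.02)
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, image_tokens):                          # (B, N, D)
        B = image_tokens.size(0)
        q = self.to_q(self.group_tokens.expand(B, -1, -1))    # (B, G, D)
        k = self.to_k(image_tokens)                           # (B, N, D)
        v = self.to_v(image_tokens)
        attn = q @ k.transpose(1, 2) * self.scale             # (B, G, N)
        # Normalize over groups: each pixel distributes its mass across the
        # G groups, yielding a soft segmentation mask per group.
        masks = attn.softmax(dim=1)                           # (B, G, N)
        weights = masks / (masks.sum(dim=-1, keepdim=True) + 1e-6)
        groups = weights @ v                                  # (B, G, D)
        return groups, masks

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE between pooled group embeddings and caption embeddings."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature           # (B, B)
    labels = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

At inference time, the soft masks returned by the binding step can be reshaped to the image grid and upsampled, with each pixel labeled by the group whose text embedding best matches a class prompt.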

Dataset

In this work, we construct an image-caption dataset, CC4M, for pre-training by designing an automatic approach that filters Conceptual Captions 12M for frequently appearing, informative entities, significantly improving training efficiency. There are a total of 100 entities (e.g., people, car, cup, chair, T-shirt, house, bed, cat, ball, pizza), while abstract nouns (e.g., art, view, illustration, day) are discarded as they usually do not correspond to specific regions in the image. The filtered data will be released soon.
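As a rough illustration of this filtering step, the sketch below extracts lemmatized nouns from each caption with spaCy, builds a 100-entity vocabulary by frequency while dropping abstract nouns, and keeps only image-text pairs whose caption mentions a kept entity. The helper names and the exact noun-extraction pipeline are assumptions; the released CC4M entity list is authoritative.

from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")
ABSTRACT_NOUNS = {"art", "view", "illustration", "day"}  # examples given above

def extract_nouns(caption: str) -> list[str]:
    """Lemmatized nouns in a caption, e.g. 'two cats on a bed' -> ['cat', 'bed']."""
    return [tok.lemma_.lower() for tok in nlp(caption) if tok.pos_ == "NOUN"]

def build_entity_vocab(captions: list[str], top_k: int = 100) -> set[str]:
    """Top-k most frequent concrete nouns across all captions."""
    counts = Counter(n for c in captions for n in extract_nouns(c))
    for noun in ABSTRACT_NOUNS:   # abstract nouns rarely map to image regions
        counts.pop(noun, None)
    return {noun for noun, _ in counts.most_common(top_k)}

def filter_pairs(pairs: list[tuple[str, str]], vocab: set[str]):
    """Keep (image_url, caption) pairs whose caption mentions a vocab entity."""
    return [(url, cap) for url, cap in pairs
            if any(n in vocab for n in extract_nouns(cap))]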

Visualization results on PASCAL VOC

BibTeX


      @inproceedings{xu2023learning,
        title={Learning open-vocabulary semantic segmentation models from natural language supervision},
        author={Xu, Jilan and Hou, Junlin and Zhang, Yuejie and Feng, Rui and Wang, Yi and Qiao, Yu and Xie, Weidi},
        booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
        pages={2935--2944},
        year={2023}
      }