Rethinking Interactive Image Segmentation
with Low Latency, High Quality, and Diverse Prompts

University of North Carolina at Chapel Hill
CVPR 2024
Conceptual comparison of our approach, SegNext, with prior state-of-the-art methods SimpleClick and SAM on the interactive segmentation task. Our method combines the best of both worlds, offering low latency, high quality, and diverse prompts.
Demos of SegNext performing high-quality segmentation on HQSeg-44K with a single click.

Abstract

The goal of interactive image segmentation is to delineate specific regions within an image via visual or language prompts. Low-latency and high-quality interactive segmentation with diverse prompts remains challenging for existing specialist and generalist models. Specialist models, with their limited prompts and task-specific designs, suffer high latency because the image must be re-encoded every time a prompt is updated, a consequence of jointly encoding the image and visual prompts. Generalist models, exemplified by the Segment Anything Model (SAM), have recently excelled in prompt diversity and efficiency, lifting image segmentation into the foundation-model era. However, in segmentation quality, SAM still lags behind state-of-the-art specialist models despite being trained with 100× more segmentation masks.
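The latency gap comes from where the per-prompt computation lands. Below is a minimal, hypothetical PyTorch sketch (the modules and sizes are arbitrary stand-ins, not the actual SimpleClick, SAM, or SegNext implementations) contrasting a joint-encoding loop, which re-runs the heavy backbone for every new click, with a decoupled loop that encodes the image once and re-runs only a lightweight head per click.

import torch
import torch.nn as nn

# Stand-in modules; depths and channel counts are arbitrary and only illustrate cost.
heavy_backbone = nn.Sequential(*[nn.Conv2d(3, 3, 3, padding=1) for _ in range(16)])
prompt_proj = nn.Conv2d(3, 3, 1)   # light projection of a dense prompt map
light_head = nn.Conv2d(3, 1, 1)    # light mask decoder

image = torch.randn(1, 3, 256, 256)
clicks = [torch.randn(1, 3, 256, 256) for _ in range(5)]  # dense click maps

# Specialist-style joint encoding: every new click re-runs the heavy backbone.
for click_map in clicks:
    mask = light_head(heavy_backbone(image + click_map))   # expensive per click

# Decoupled encoding (the SAM-style design): encode the image once,
# then each new click only touches the lightweight parts.
feats = heavy_backbone(image)                               # computed once per image
for click_map in clicks:
    mask = light_head(feats + prompt_proj(click_map))       # cheap per click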

In this work, we delve into the architectural differences between the two types of models. We observe that the dense representation and fusion of visual prompts are the key design choices contributing to the high segmentation quality of specialist models. In light of this, we reintroduce this dense design into generalist models to facilitate high segmentation quality. To densely represent diverse visual prompts, we propose to use a dense map that captures five types: clicks, boxes, polygons, scribbles, and masks.
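As a concrete illustration, a dense prompt map can be built by rasterizing each prompt type onto a shared canvas. The sketch below is a toy example, not the paper's exact encoding: the channel assignment, disk radius, and the restriction to clicks, boxes, and a previous mask are assumptions for brevity; polygons and scribbles would be rasterized analogously (filled polygon / stroked curve).

import numpy as np

def rasterize_prompts(h, w, pos_clicks=(), neg_clicks=(), boxes=(),
                      prev_mask=None, radius=5):
    """Rasterize visual prompts into a (3, H, W) dense map.

    Channel assignment here is an assumption for illustration:
      0: positive prompts (clicks as disks, boxes as filled rectangles)
      1: negative prompts
      2: previous mask
    """
    dense = np.zeros((3, h, w), dtype=np.float32)
    yy, xx = np.mgrid[0:h, 0:w]

    def stamp_disk(channel, y, x):
        dense[channel][(yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2] = 1.0

    for y, x in pos_clicks:
        stamp_disk(0, y, x)
    for y, x in neg_clicks:
        stamp_disk(1, y, x)
    for y0, x0, y1, x1 in boxes:            # box as a filled rectangle on channel 0
        dense[0, y0:y1, x0:x1] = 1.0
    if prev_mask is not None:
        dense[2] = prev_mask.astype(np.float32)
    return dense

dense_map = rasterize_prompts(256, 256,
                              pos_clicks=[(100, 120)],
                              neg_clicks=[(30, 40)],
                              boxes=[(60, 60, 200, 220)])
print(dense_map.shape)  # (3, 256, 256)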

Thus, we propose SegNext, a next-generation interactive segmentation approach offering low latency, high quality, and diverse prompt support. Our method outperforms current state-of-the-art methods on HQSeg-44K and DAVIS, both quantitatively and qualitatively.

Method

SegNext overview. We use a three-channel dense map to represent five diverse visual prompts: clicks, boxes, polygons, scribbles, and masks. The image and visual-prompt embeddings are fused by element-wise addition, then further fused via one or two self-attention blocks. The language prompt is encoded into a vector by CLIP, which queries the image embedding via cross-attention blocks to produce the mask embeddings. A lightweight decoder processes the mask embeddings into the segmentation.
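A minimal PyTorch sketch of this data flow is given below. The module choices, embedding dimension, patch size, and the way the CLIP vector is injected are assumptions for illustration; only the overall flow (dense fusion by addition, self-attention blocks, cross-attention with a language vector, lightweight decoding) follows the description above.

import torch
import torch.nn as nn

class SegNextSketch(nn.Module):
    """Minimal sketch of the fusion path described above; not the released model."""
    def __init__(self, dim=256, patch=16, num_sa_blocks=2):
        super().__init__()
        self.image_embed = nn.Conv2d(3, dim, patch, stride=patch)   # stand-in image encoder
        self.prompt_embed = nn.Conv2d(3, dim, patch, stride=patch)  # dense prompt map -> embedding
        self.self_attn = nn.ModuleList(
            [nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
             for _ in range(num_sa_blocks)])                        # dense fusion ("SA x2")
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.decoder = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, image, dense_prompt_map, text_vec=None):
        b = image.shape[0]
        x = self.image_embed(image) + self.prompt_embed(dense_prompt_map)  # element-wise addition
        h, w = x.shape[-2:]
        tokens = x.flatten(2).transpose(1, 2)                # (B, HW, dim)
        for blk in self.self_attn:                           # enhanced dense fusion
            tokens = blk(tokens)
        if text_vec is not None:                             # e.g. a CLIP sentence embedding
            q = text_vec.unsqueeze(1)                        # (B, 1, dim)
            attended, _ = self.cross_attn(q, tokens, tokens)
            tokens = tokens + attended                       # inject the language cue
        logits = self.decoder(tokens).transpose(1, 2).reshape(b, 1, h, w)
        return logits                                        # low-resolution mask logits

model = SegNextSketch()
img = torch.randn(1, 3, 256, 256)
prompts = torch.randn(1, 3, 256, 256)   # e.g. the dense map from the sketch above
text = torch.randn(1, 256)              # stand-in for a CLIP text embedding
print(model(img, prompts, text).shape)  # torch.Size([1, 1, 16, 16])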

Experiments


Click-to-segmentation evaluation on HQSeg-44K and DAVIS. Across varying numbers of clicks, our method consistently outperforms existing competitive approaches. The metric is mean Intersection over Union (mIoU).
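For reference, mIoU averages the per-sample intersection-over-union between predicted and ground-truth binary masks; a minimal sketch (not the official evaluation script) is shown below.

import numpy as np

def miou(pred_masks, gt_masks, eps=1e-6):
    """Mean Intersection over Union over paired boolean (H, W) masks."""
    ious = []
    for pred, gt in zip(pred_masks, gt_masks):
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        ious.append(inter / (union + eps))
    return float(np.mean(ious))

# Toy example: one perfect prediction and one half-overlapping prediction.
gt = np.zeros((4, 4), dtype=bool); gt[:2] = True
pred_good = gt.copy()
pred_half = np.zeros_like(gt); pred_half[0] = True
print(miou([pred_good, pred_half], [gt, gt]))  # ~= (1.0 + 0.5) / 2 = 0.75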


Quantitative comparison with existing methods on HQSeg-44K and DAVIS. We compare against two types of baselines: specialist and generalist models. Our model achieves performance comparable to the specialist baselines but with significantly lower latency, and it matches the generalist models in latency and segmentation quality despite being trained on far less segmentation data. “HQ” denotes the HQSeg-44K dataset; “SA×2” denotes that the model uses two self-attention blocks for dense fusion.


Qualitative results with diverse prompts. Left: an example from DAVIS. Right: three examples from HQSeg-44K. The results were obtained by a user providing all the prompts to our best-performing model.

BibTeX

@inproceedings{liu2024rethinking,
  author    = {Liu, Qin and Cho, Jaemin and Bansal, Mohit and Niethammer, Marc},
  title     = {Rethinking Interactive Image Segmentation with Low Latency, High Quality, and Diverse Prompts},
  booktitle = {CVPR},
  year      = {2024},
}