The Taub Faculty of Computer Science News and Announcements
CVPR 2022
June 2022
A CS research group led by Adam Botach, a CS M.Sc. student, and Evgenii Zheltonozhskii, a CS graduate and Ph.D. student at the Physics Department, supervised by Dr. Chaim Baskin, Visiting Scientist at the VISTA Laboratory and the Center for Intelligent Systems, has developed, as part of Adam Botach's thesis work, a multimodal deep learning algorithm (MTTR) that segments (marks at the pixel level) an object in a video according to a text query, a task known as referring video object segmentation (RVOS).
The referring video object segmentation task (RVOS) involves segmentation of a 
text-referred object instance in the frames of a given video. Due to the complex 
nature of this multimodal task, which combines text reasoning, video 
understanding, instance segmentation and tracking, existing approaches typically 
rely on sophisticated pipelines in order to tackle it. In this paper, we propose 
a simple Transformer-based approach to RVOS. Our framework, termed Multimodal 
Tracking Transformer (MTTR), models the RVOS task as a sequence prediction 
problem. Following recent advancements in computer vision and natural language 
processing, MTTR is based on the realization that video and text can be 
processed together effectively and elegantly by a single multimodal Transformer 
model. MTTR is end-to-end trainable, free of text-related inductive bias 
components and requires no additional mask-refinement post-processing steps. As 
such, it simplifies the RVOS pipeline considerably compared to existing methods. 
Evaluation on standard benchmarks reveals that MTTR significantly outperforms 
previous art across multiple metrics. In particular, MTTR shows impressive +5.7 
and +5.0 mAP gains on the A2D-Sentences and JHMDB-Sentences datasets 
respectively, while processing 76 frames per second. In addition, we report 
strong results on the public validation set of Refer-YouTube-VOS, a more 
challenging RVOS dataset that has yet to receive the attention of researchers. 
The code to reproduce our experiments is available at
https://github.com/mttr2021/MTTR
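As a rough illustration of the single-multimodal-Transformer idea described above, the PyTorch sketch below concatenates per-frame video features and per-token text features into one sequence, encodes them jointly, and decodes a small set of object queries into per-query referral scores. This is a minimal conceptual sketch only: the class name ToyMultimodalTransformer, the feature dimensions, and the layer sizes are hypothetical and do not reproduce the actual MTTR implementation, which is available at the repository linked above.

    # Hypothetical sketch: video and text tokens processed by one Transformer.
    import torch
    import torch.nn as nn

    class ToyMultimodalTransformer(nn.Module):
        def __init__(self, d_model=256, num_queries=50):
            super().__init__()
            self.video_proj = nn.Linear(2048, d_model)  # project per-frame visual features
            self.text_proj = nn.Linear(768, d_model)    # project per-token text features
            self.encoder = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
                num_layers=3,
            )
            self.decoder = nn.TransformerDecoder(
                nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True),
                num_layers=3,
            )
            # A set of learned object queries, decoded against the joint video-text memory.
            self.queries = nn.Embedding(num_queries, d_model)
            self.ref_score = nn.Linear(d_model, 1)      # which query matches the text

        def forward(self, video_feats, text_feats):
            # video_feats: (B, T*HW, 2048), text_feats: (B, L, 768)
            tokens = torch.cat([self.video_proj(video_feats),
                                self.text_proj(text_feats)], dim=1)
            memory = self.encoder(tokens)               # joint video-text reasoning
            q = self.queries.weight.unsqueeze(0).expand(memory.size(0), -1, -1)
            hs = self.decoder(q, memory)                # (B, num_queries, d_model)
            return self.ref_score(hs).squeeze(-1)       # per-query referral scores

    model = ToyMultimodalTransformer()
    scores = model(torch.randn(2, 8 * 49, 2048), torch.randn(2, 12, 768))
    print(scores.shape)  # torch.Size([2, 50])

In the actual method the decoded query embeddings would also yield per-frame segmentation masks; the sketch keeps only a text-referral score per query to stay short.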
Full Article
MTTR - Interactive Demo
GitHub