Beyond Image-Text Matching: Verb Understanding in Multimodal Transformers Using Guided Masking

Beňová, I., Košecká, J.¹, Gregor, M., Tamajka, M., Veselý, M., Šimko, M.

1 George Mason University, Fairfax, VA, USA

Probing methods are widely used to evaluate the multimodal representations of vision-language models (VLMs), with dominant approaches relying on zero-shot performance in image-text matching tasks. These methods typically assess models on curated datasets focusing on linguistic aspects such as counting, relations, or attributes. This work uses a complementary probing strategy called guided masking. This approach selectively masks different modalities and evaluates the model's ability to predict the masked word. We specifically focus on probing verbs, as their comprehension is crucial for understanding actions and relationships in images, and it presents a more challenging task than the comprehension of subjects, objects, or attributes. Our analysis targets VLMs that use region-of-interest (ROI) features obtained from object detectors as input tokens. Our experiments demonstrate that selected models can accurately predict the correct verb, challenging previous conclusions based on image-text matching methods, which suggested that VLMs fail in situations requiring verb understanding. The code for the experiments will be available at https://github.com/ivana-13/guided_masking.
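The text side of the guided-masking protocol can be sketched as follows: the target verb in a caption is replaced with the model's mask token, and the model's score for the correct verb is compared against a foil. This is a minimal illustrative sketch, not the authors' released code; `score_fn` stands in for a hypothetical call into a pretrained VLM's masked-language-modeling head (e.g., one conditioned on ROI features from an object detector).

```python
def mask_verb(caption, verb, mask_token="[MASK]"):
    """Replace the target verb in a caption with the model's mask token."""
    tokens = caption.split()
    return " ".join(mask_token if t == verb else t for t in tokens)

def guided_masking_accuracy(examples, score_fn):
    """Fraction of examples where the model prefers the correct verb.

    examples: iterable of (caption, verb, foil_verb) triples.
    score_fn: hypothetical scorer; score_fn(masked_caption, candidate_word)
              should return a probability-like score from the VLM's
              masked-word prediction head.
    """
    correct = 0
    total = 0
    for caption, verb, foil in examples:
        masked = mask_verb(caption, verb)
        if score_fn(masked, verb) > score_fn(masked, foil):
            correct += 1
        total += 1
    return correct / total
```

In practice, `score_fn` would run the multimodal transformer on the image features together with the masked caption and read off the vocabulary distribution at the masked position; the scaffold above only fixes the evaluation logic around it.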

Cite: Benova, I., Kosecka, J., Gregor, M., Tamajka, M., Vesely, M., Simko, M. Beyond Image-Text Matching: Verb Understanding in Multimodal Transformers Using Guided Masking. SOFSEM 2025: Theory and Practice of Computer Science. (2025). DOI: https://doi.org/10.1007/978-3-031-82670-2_7

Authors

Ivana Beňová (AI Specialist)
Michal Gregor (Researcher)
Martin Tamajka (Technology Lead)
Marcel Veselý (Research Engineer)
Marián Šimko (Lead and Researcher)