COLUMBUS: Evaluating Cognitive Lateral Understanding Through Multiple-Choice Rebuses

While visual question-answering (VQA) benchmarks have catalyzed the development of reasoning techniques, they have focused on vertical thinking. Effective problem-solving also necessitates lateral thinking, which remains understudied in AI and has not been used to test visual perception systems. To bridge this gap, we formulate visual lateral thinking as a multiple-choice question-answering task and describe a three-step taxonomy-driven methodology for instantiating task examples. Then, we develop COLUMBUS, a synthetic benchmark that applies the task pipeline to create QA sets with text and icon rebus puzzles based on publicly available collections of compounds and common phrases. COLUMBUS comprises over 1,000 puzzles, each with four answer candidates. While state-of-the-art (SotA) vision-language models (VLMs) achieve decent performance, our evaluation demonstrates a substantial gap between humans and models. VLMs benefit from human-curated descriptions but struggle to self-generate such representations at the right level of abstraction.
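To make the task formulation concrete, the sketch below shows one plausible way to represent a four-candidate rebus item and score a model's choices. The schema is an illustrative assumption, not the benchmark's actual format: field names such as `puzzle_image` and `answer_index` are hypothetical, and `predict` stands in for whatever VLM is being evaluated.

```python
# A minimal sketch of a COLUMBUS-style multiple-choice rebus item and an
# accuracy metric. Field names are illustrative assumptions, not the
# benchmark's actual schema.
from dataclasses import dataclass
import random

@dataclass
class RebusItem:
    puzzle_image: str        # path to the rendered text or icon rebus
    candidates: list[str]    # the four answer options
    answer_index: int        # index of the correct compound or phrase

def accuracy(items: list[RebusItem], predict) -> float:
    """Fraction of items where the model picks the correct candidate.

    `predict` is any callable mapping (image_path, candidates) -> int;
    here it stands in for a VLM queried with the puzzle and the options.
    """
    correct = sum(
        predict(item.puzzle_image, item.candidates) == item.answer_index
        for item in items
    )
    return correct / len(items)

# Usage with a random-guess baseline: four candidates imply ~25% accuracy.
items = [
    RebusItem("rebus_0001.png",
              ["headache", "afterthought", "overcoat", "underdog"], 2),
]
baseline = lambda img, cands: random.randrange(len(cands))
print(f"random baseline accuracy: {accuracy(items, baseline):.2f}")
```

Framing the task this way keeps evaluation model-agnostic: swapping the `predict` callable is all that is needed to compare a VLM against the random baseline or a human answer key.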

Focus: Methods or Design
Source: arXiv
Readability: Expert
Type: PDF Article
Open Source: Yes
Keywords: N/A
Learn Tags: AI and Machine Learning; Design/Methods; Research Centre Framework
Summary: Visual question-answering benchmarks have focused on vertical thinking, but effective problem-solving also requires lateral thinking, which remains understudied in AI. To bridge this gap, the researchers formulated visual lateral thinking as a multiple-choice question-answering task, described a three-step taxonomy-driven methodology for instantiating task examples, and developed COLUMBUS, a synthetic benchmark that applies the task pipeline to create QA sets with text and icon rebus puzzles.