VRC-Bench Visual Reasoning Benchmark Dataset
VRC-Bench is the first benchmark designed specifically for multimodal step-by-step reasoning tasks, aiming to comprehensively evaluate model performance in complex reasoning scenarios. It was released in 2025 by Mohamed bin Zayed University of Artificial Intelligence, the University of Central Florida, Linköping University, and the Australian National University, accompanying the paper "LlamaV-o1: Rethinking Step-by-step Visual Reasoning in LLMs". Unlike traditional benchmarks that focus only on the accuracy of the final answer, VRC-Bench evaluates the quality of each reasoning step, providing a more fine-grained assessment of model capabilities.
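To make the two evaluation axes concrete, the sketch below scores a single sample on both final-answer accuracy and step-level reasoning quality. It is a minimal illustration, not the benchmark's official protocol (the paper relies on a judge model for step scoring); the token-overlap scoring, the greedy step alignment, and the function names are all illustrative assumptions.

```python
from typing import Dict, List


def token_f1(pred: str, ref: str) -> float:
    """Token-overlap F1 between a predicted and a reference reasoning step
    (a crude stand-in for the semantic judge used in the actual benchmark)."""
    pred_tokens, ref_tokens = set(pred.lower().split()), set(ref.lower().split())
    common = pred_tokens & ref_tokens
    if not common:
        return 0.0
    precision = len(common) / len(pred_tokens)
    recall = len(common) / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


def evaluate_sample(pred_steps: List[str], pred_answer: str,
                    ref_steps: List[str], ref_answer: str) -> Dict[str, float]:
    """Score one sample on the two axes VRC-Bench emphasizes:
    final-answer correctness and per-step reasoning quality."""
    # Final-answer accuracy (exact match, for illustration only).
    answer_correct = float(pred_answer.strip().lower() == ref_answer.strip().lower())
    # Greedy alignment: each reference step is matched to its best-scoring predicted step.
    step_scores = [
        max((token_f1(p, r) for p in pred_steps), default=0.0)
        for r in ref_steps
    ]
    step_quality = sum(step_scores) / len(step_scores) if step_scores else 0.0
    return {"answer_accuracy": answer_correct, "step_quality": step_quality}
```

A model can thus produce a correct final answer while still scoring poorly on step quality, which is exactly the distinction step-level evaluation is meant to surface.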
The dataset covers challenges across eight different domains, including visual reasoning, mathematical and logical reasoning, scientific reasoning, and cultural and social understanding. The tasks involve complex visual perception, scientific reasoning, medical image interpretation, and other scenarios, and contain more than 4k manually verified reasoning steps, enabling a comprehensive evaluation of the accuracy and logical coherence of a model's multi-step reasoning.
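A typical way to inspect such a benchmark is through the Hugging Face `datasets` library. The sketch below assumes the data is hosted there; the repository id, split name, and field names are assumptions and should be checked against the official project page before use.

```python
from datasets import load_dataset

# Hypothetical repository id and split -- verify against the official release.
ds = load_dataset("omkarthawakar/VRC-Bench", split="test")

print(ds)            # number of samples and column names
sample = ds[0]
print(sample.keys()) # typically an image, a question, reference reasoning steps, and a final answer
```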
