CSVQA: A Chinese Multimodal Benchmark for Evaluating STEM Reasoning Capabilities of VLMs

Vision-Language Models (VLMs) have demonstrated remarkable progress in multimodal understanding, yet their capabilities for scientific reasoning remain inadequately assessed. Current multimodal benchmarks predominantly evaluate generic image comprehension or text-driven reasoning, lacking authentic scientific contexts that require domain-specific knowledge to be integrated with analysis of visual evidence. To fill this gap, we present CSVQA, a diagnostic multimodal benchmark specifically designed to evaluate scientific reasoning through domain-grounded visual question answering. Our benchmark features 1,378 carefully constructed question-answer pairs spanning diverse STEM disciplines, each demanding domain knowledge, integration of visual evidence, and higher-order reasoning. Compared with prior multimodal benchmarks, CSVQA places greater emphasis on real-world scientific content and complex reasoning. We additionally propose a rigorous evaluation protocol that uses curated explanations to systematically assess whether model predictions are substantiated by valid intermediate reasoning steps. Our comprehensive evaluation of 15 VLMs on this benchmark reveals notable performance disparities: even the top-ranked proprietary model attains only 49.6% accuracy. This empirical evidence underscores the pressing need to advance the scientific reasoning capabilities of VLMs. CSVQA is released at https://huggingface.co/datasets/Skywork/CSVQA.
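
As a rough illustration of how the released benchmark might be used, the sketch below loads CSVQA from the Hugging Face Hub with the `datasets` library and scores a placeholder model on multiple-choice accuracy. The split name and field names (`question`, `options`, `answer`) are assumptions for illustration only and should be checked against the dataset card; the model call is a stub.

```python
from datasets import load_dataset

# Hypothetical sketch: load CSVQA from the Hugging Face Hub.
# The split and field names ("train", "question", "options", "answer")
# are assumed here; consult the dataset card for the actual schema.
ds = load_dataset("Skywork/CSVQA", split="train")

def predict(example):
    """Placeholder for a VLM call; returns a candidate answer string."""
    # A real evaluation would pass the image and question to a VLM
    # and parse its chosen option from the response.
    return example["options"][0]

correct = sum(predict(ex) == ex["answer"] for ex in ds)
print(f"Accuracy: {correct / len(ds):.3f}")
```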