MMStar
Metrics
average
coarse perception
fine-grained perception
instance reasoning
llm_model
logical reasoning
mathematics
model_url
organization
parameters
release_date
science & technology
updated_time
Results
Performance results of various models on this benchmark.
Comparison Table
Model Name | average | coarse perception | fine-grained perception | instance reasoning | llm_model | logical reasoning | mathematics | model_url | organization | parameters | release_date | science & technology | updated_time |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Model 1 | 57.1 | 76.6 | 51.4 | 66.6 | GPT-4 Turbo (GPT4V_high) | 55.8 | 49.8 | https://help.openai.com/en/articles/8555510-gpt-4-turbo-in-the-openai-api | OpenAI | N/A | 2024.4.10 | 42.6 | 2024.4.9 |