---
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
language:
- en
- zh
tags:
- multimodal
- intelligence
size_categories:
- 1K<n<10K
license: apache-2.0
pretty_name: MM-IQ
configs:
- config_name: default
features:
- name: category
dtype: string
- name: question
dtype: string
- name: question_en
dtype: string
- name: question_zh
dtype: string
- name: image
dtype: image
- name: MD5
dtype: string
- name: data_id
dtype: int64
- name: answer
dtype: string
- name: split
dtype: string
---
# Dataset Card for "MM-IQ"
- [Introduction](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#dataset-description)
- [Paper Information](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#paper-information)
- [Dataset Examples](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#dataset-examples)
- [Leaderboard](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#leaderboard)
- [Dataset Usage](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#dataset-usage)
- [Data Downloading](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#data-downloading)
- [Data Format](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#data-format)
- [Automatic Evaluation](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#automatic-evaluation)
- [Citation](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#citation)
## Introduction
IQ testing has served as a foundational methodology for evaluating human cognitive capabilities, deliberately decoupling assessment from linguistic background, language proficiency, or domain-specific knowledge in order to isolate core competencies in abstraction and reasoning. Yet artificial intelligence research currently lacks systematic benchmarks to quantify these cognitive dimensions in multimodal systems. To address this gap, we propose **MM-IQ**, a comprehensive evaluation framework comprising **2,710** meticulously curated test items spanning **8** distinct reasoning paradigms.
Through systematic evaluation of leading open-source and proprietary multimodal models, our benchmark reveals striking limitations: even state-of-the-art architectures achieve only marginally better accuracy than random chance (27.49% vs. a 25% baseline). This substantial performance gap highlights the inadequacy of current multimodal systems in approximating fundamental human reasoning capacities and underscores the need for paradigm-shifting advancements to bridge this cognitive divide.
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/MMIQ_distribution.png" style="zoom:50%;" />
## Paper Information
- Paper: https://arxiv.org/pdf/2502.00698
- Code: https://github.com/AceCHQ/MMIQ/tree/main
- Project: https://acechq.github.io/MMIQ-benchmark/
- Leaderboard: https://acechq.github.io/MMIQ-benchmark/#leaderboard
## Dataset Examples
Examples from MM-IQ:
1. Logical Operation Reasoning
<p>Prompt: Choose the most appropriate option from the given four choices to fill in the question mark, so that it presents a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/logical_AND_2664.png" style="zoom:100%;" />
<details>
<summary>🔍 Click to expand/collapse more examples</summary>
2. Mathematical Reasoning
<p>Prompt: Choose the most appropriate option from the given four options to present a certain regularity:</p>
<p>Option A: 4; Option B: 5; Option C: 6; Option D: 7. </p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/arithmetic_1133.png" style="zoom:120%;" />
3. 2D-geometry Reasoning
<p>Prompt: The option that best fits the given pattern of figures is ( ).</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/2D_sys_1036.png" style="zoom:40%;" />
4. 3D-geometry Reasoning
<p>Prompt: The one that matches the top view is:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/3D_view_1699.png" style="zoom:30%;" />
5. Visual Instruction Reasoning
<p>Prompt: Choose the most appropriate option from the given four options to present a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/Visual_instruction_arrow_2440.png" style="zoom:50%;" />
6. Spatial Relationship Reasoning
<p>Prompt: Choose the most appropriate option from the given four options to present a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/spatial_6160.png" style="zoom:120%;" />
7. Concrete Object Reasoning
<p>Prompt: Choose the most appropriate option from the given four choices to fill in the question mark, so that it presents a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/concrete_object_6167.png" style="zoom:120%;" />
8. Temporal Movement Reasoning
<p>Prompt: Choose the most appropriate option from the given four choices to fill in the question mark, so that it presents a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/temporal_rotation_1379.png" style="zoom:50%;" />
</details>
## Leaderboard
🏆 The leaderboard for the *MM-IQ* (2,710 problems) is available [here](https://acechq.github.io/MMIQ-benchmark/#leaderboard).
## Dataset Usage
### Data Downloading
You can download the dataset with the following code (make sure you have installed [HF中国镜像站 Datasets](https://huggingface.co/docs/datasets/quickstart)):
```python
from IPython.display import display  # used below to render images in notebooks
from datasets import load_dataset

# downloads MM-IQ from the HF中国镜像站 Hub (cached locally after the first run)
dataset = load_dataset("huanqia/MM-IQ")
```
Here are some examples of how to access the downloaded dataset:
```python
# print the first example in the MM-IQ test split
print(dataset["test"][0])

print(dataset["test"][0]['data_id'])   # print the problem id
print(dataset["test"][0]['question'])  # print the question text
print(dataset["test"][0]['answer'])    # print the answer

# display the problem image (decoded as a PIL.Image object)
print("Image context:")
display(dataset["test"][0]['image'])
```
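As a further illustration (not part of the official tooling), the sketch below counts problems per reasoning paradigm and saves one image to disk; the output file name `mmiq_example_0.png` is arbitrary:

```python
from collections import Counter

test_set = dataset["test"]

# tally how many problems fall under each reasoning paradigm
category_counts = Counter(example["category"] for example in test_set)
print(category_counts)

# the `image` feature is decoded to a PIL.Image object, so it can be saved directly
test_set[0]["image"].save("mmiq_example_0.png")
```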
We have uploaded a demo to illustrate how to access the MM-IQ dataset on HF中国镜像站, available at [hugging_face_dataset_demo.ipynb](https://github.com/AceCHQ/MMIQ/blob/main/mmiq/jupyter_notebook_demos/hugging_face_dataset_demo.ipynb).
### Data Format
The dataset is provided in Parquet format and contains the following attributes:
```json
{
    "question": [string] The question text in its original language,
    "question_en": [string] The question text in English,
    "question_zh": [string] The question text in Chinese,
    "answer": [string] The correct answer for the problem,
    "data_id": [int] The problem id,
    "category": [string] The category of reasoning paradigm,
    "image": [image] Containing image (raw bytes and image path) corresponding to the image in data.zip,
    "MD5": [string] The MD5 hash of the image,
    "split": [string] The dataset split,
}
```
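Because the non-image columns are plain Parquet data, they convert cleanly to a pandas DataFrame for quick inspection. A minimal sketch, assuming `pandas` is installed and `dataset` was loaded as above:

```python
# drop the image column, then convert the remaining metadata to pandas
df = dataset["test"].remove_columns(["image"]).to_pandas()
print(df[["data_id", "category", "answer"]].head())
```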
### Automatic Evaluation
🔔 To automatically evaluate a model on the dataset, please refer to our GitHub repository [here](https://github.com/AceCHQ/MMIQ/tree/main/mmiq).
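Before running the official pipeline, you can sanity-check a model locally with a minimal exact-match sketch like the one below. Here `predict` is a hypothetical placeholder (the constant-"A" baseline should score near the 25% chance level), and the string comparison assumes predictions are formatted the same way as the `answer` field:

```python
# minimal exact-match accuracy sketch (NOT the official evaluation pipeline)
def predict(image, question):
    """Placeholder baseline: always answer "A". Swap in your own model here."""
    return "A"

test_set = dataset["test"]
correct = sum(
    predict(ex["image"], ex["question"]).strip() == ex["answer"].strip()
    for ex in test_set
)
print(f"Accuracy: {correct / len(test_set):.2%}")
```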
## Citation
If you use the **MM-IQ** dataset in your work, please cite the paper with the following BibTeX:
```bibtex
@article{cai2025mm,
title={MM-IQ: Benchmarking Human-Like Abstraction and Reasoning in Multimodal Models},
author={Cai, Huanqia and Yang, Yijun and Hu, Winston},
journal={arXiv preprint arXiv:2502.00698},
year={2025}
}
```
## Contact
Huanqia Cai: [email protected]