---
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
language:
- en
- zh
tags:
- multimodal
- intelligence
size_categories:
- 1K<n<10K
license: apache-2.0
pretty_name: mmiq
configs:
- config_name: default
  features:
  - name: category
    dtype: string
  - name: question
    dtype: string
  - name: question_en
    dtype: string
  - name: question_zh
    dtype: string
  - name: image
    dtype: image
  - name: MD5
    dtype: string
  - name: data_id
    dtype: int64
  - name: answer
    dtype: string
  - name: split
    dtype: string
---
# Dataset Card for "MM-IQ"

- [Introduction](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#dataset-description)
- [Paper Information](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#paper-information)
- [Dataset Examples](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#dataset-examples)
- [Leaderboard](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#leaderboard)
- [Dataset Usage](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#dataset-usage)
  - [Data Downloading](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#data-downloading)
  - [Data Format](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#data-format)
  - [Automatic Evaluation](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#automatic-evaluation)
- [Citation](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#citation)

## Introduction

IQ testing has served as a foundational methodology for evaluating human cognitive capabilities, deliberately decoupling assessment from linguistic background, language proficiency, or domain-specific knowledge to isolate core competencies in abstraction and reasoning. Yet, artificial intelligence research currently lacks systematic benchmarks to quantify these critical cognitive dimensions in multimodal systems. To address this critical gap, we propose **MM-IQ**, a comprehensive evaluation framework comprising **2,710** meticulously curated test items spanning **8** distinct reasoning paradigms.

Through systematic evaluation of leading open-source and proprietary multimodal models, our benchmark reveals striking limitations: even state-of-the-art architectures achieve only marginally superior performance to random chance (27.49% vs. 25% baseline accuracy). This substantial performance chasm highlights the inadequacy of current multimodal systems in approximating fundamental human reasoning capacities, underscoring the need for paradigm-shifting advancements to bridge this cognitive divide.

<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/MMIQ_distribution.png" style="zoom:50%;" />


## Paper Information

- Paper: https://arxiv.org/pdf/2502.00698
- Code: https://github.com/AceCHQ/MMIQ/tree/main
- Project: https://acechq.github.io/MMIQ-benchmark/
- Leaderboard: https://acechq.github.io/MMIQ-benchmark/#leaderboard


## Dataset Examples

Examples of our MM-IQ:
1. Logical Operation Reasoning

<p>Prompt: Choose the most appropriate option from the given four choices to fill in the question mark, so that it presents a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/logical_AND_2664.png" style="zoom:100%;" />

<details>


<summary>🔍 Click to expand/collapse more examples</summary>

2. Mathematical Reasoning
<p>Prompt: Choose the most appropriate option from the given four options to present a certain regularity: </p>
<p>Option A: 4;  Option B: 5;  Option C: 6;  Option D: 7. </p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/arithmetic_1133.png" style="zoom:120%;" />

3. 2D-geometry Reasoning
<p>Prompt: The option that best fits the given pattern of figures is ( ).</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/2D_sys_1036.png" style="zoom:40%;" />

4. 3D-geometry Reasoning
<p>Prompt: The one that matches the top view is:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/3D_view_1699.png" style="zoom:30%;" />

5. Visual Instruction Reasoning
<p>Prompt: Choose the most appropriate option from the given four options to present a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/Visual_instruction_arrow_2440.png" style="zoom:50%;" />

6. Spatial Relationship Reasoning
<p>Prompt: Choose the most appropriate option from the given four options to present a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/spatial_6160.png" style="zoom:120%;" />

7. Concrete Object Reasoning
<p>Prompt: Choose the most appropriate option from the given four choices to fill in the question mark, so that it presents a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/concrete_object_6167.png" style="zoom:120%;" />

8. Temporal Movement Reasoning
<p>Prompt: Choose the most appropriate option from the given four choices to fill in the question mark, so that it presents a certain regularity:</p>
<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/temporal_rotation_1379.png" style="zoom:50%;" />

</details>

## Leaderboard

🏆 The leaderboard for the *MM-IQ* (2,710 problems) is available [here](https://acechq.github.io/MMIQ-benchmark/#leaderboard).


## Dataset Usage

### Data Downloading


You can download the dataset with the following commands (make sure you have installed [HF中国镜像站 Datasets](https://huggingface.co/docs/datasets/quickstart)):

```python
from IPython.display import display, Image
from datasets import load_dataset

dataset = load_dataset("huanqia/MM-IQ")
```

Here are some examples of how to access the downloaded dataset:

```python
# print the first example on the MM-IQ dataset
print(dataset["test"][0])
print(dataset["test"][0]['data_id']) # print the problem id 
print(dataset["test"][0]['question']) # print the question text 
print(dataset["test"][0]['answer']) # print the answer
# Display the image
print("Image context:")
display(dataset["test"][0]['image'])
```
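If you need the images on disk (for example, to pass file paths to an external model API), the `image` field is decoded as a PIL image and can be saved directly. The snippet below is a minimal sketch; the output directory name is just an illustrative choice:

```python
import os

# Save the first five test images to a local folder (illustrative directory name)
os.makedirs("mmiq_images", exist_ok=True)
for example in dataset["test"].select(range(5)):
    example["image"].save(os.path.join("mmiq_images", f"{example['data_id']}.png"))
```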

We have uploaded a demo to illustrate how to access the MM-IQ dataset on HF中国镜像站, available at [hugging_face_dataset_demo.ipynb](https://github.com/AceCHQ/MMIQ/blob/main/mmiq/jupyter_notebook_demos/hugging_face_dataset_demo.ipynb).




### Data Format

The dataset is provided in Parquet format and contains the following attributes:

```json
{
    "question": [string] The question text,
    "answer": [string] The correct answer for the problem,
    "data_id": [int] The problem id,
    "category": [string] The category of reasoning paradigm,
    "image": [image] Containing image (raw bytes and image path) corresponding to the image in data.zip,
}
```
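As a quick sanity check of these fields, you can, for example, count how many problems fall under each reasoning paradigm via the `category` attribute (a minimal sketch, assuming the dataset has been loaded as shown above):

```python
from collections import Counter

# Tally problems per reasoning paradigm using the "category" field
category_counts = Counter(dataset["test"]["category"])
for category, count in sorted(category_counts.items()):
    print(f"{category}: {count}")
```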


### Automatic Evaluation

🔔 To automatically evaluate a model on the dataset, please refer to our GitHub repository [here](https://github.com/AceCHQ/MMIQ/tree/main/mmiq).
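If you only want a rough accuracy number, a simple approach is to compare a model's chosen option against the `answer` field. The sketch below illustrates that loop with a hypothetical `predict` function standing in for your model; it is not the official evaluation script, which (including prompting and answer-extraction details) lives in the GitHub repository linked above.

```python
# Illustrative accuracy computation; `predict` is a hypothetical callable that
# maps (question, image) to an option string such as "A". The official
# evaluation scripts are in the GitHub repository linked above.
def evaluate(split, predict):
    correct = 0
    for example in split:
        prediction = predict(example["question"], example["image"])
        if str(prediction).strip() == str(example["answer"]).strip():
            correct += 1
    return correct / len(split)

# Usage (hypothetical model wrapper):
# accuracy = evaluate(dataset["test"], my_model_predict)
```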


## Citation

If you use the **MM-IQ** dataset in your work, please cite the paper using the following BibTeX:
```
@article{cai2025mm,
  title={MM-IQ: Benchmarking Human-Like Abstraction and Reasoning in Multimodal Models},
  author={Cai, Huanqia and Yang, Yijun and Hu, Winston},
  journal={arXiv preprint arXiv:2502.00698},
  year={2025}
}
```

## Contact
Huanqia Cai: [email protected]