---
configs:
- config_name: time_complexity_test_set.jsonl
data_files: "data/time_complexity_test_set.jsonl"
default: true
- config_name: space_complexity_test_set.jsonl
data_files: "data/space_complexity_test_set.jsonl"
- config_name: problem_and_human_solutions_list.jsonl
data_files: "data/problem_and_human_solutions_list.jsonl"
- config_name: complexity_labels_full
data_files: "data/complexity_labels_full/*.jsonl"
- config_name: complexity_labels_light.jsonl
data_files: "data/complexity_labels_light.jsonl"
license: cc-by-nc-4.0
task_categories:
- text-classification
- question-answering
- text-generation
- reinforcement-learning
language:
- en
tags:
- code
- synthetic
size_categories:
- 100K<n<1M
---
<p align="center">
<!-- <p><b><i>BigO(Bench)</b></i></p> -->
<img style="width: 500px;" src="logo.png" alt="logo">
</p>
<div align="center" style="line-height: 1;">
<a href="https://facebookresearch.github.io/BigOBench" target="_blank" style="margin: 2px;">
<img alt="HomePage" src="https://img.shields.io/badge/🏡%20HomePage-BigOBench-green" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://facebookresearch.github.io/BigOBench/leaderboard.html" target="_blank" style="margin: 2px;">
<img alt="Leaderboard" src="https://img.shields.io/badge/🏆%20Leaderboard-BigOBench-yellow" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://facebookresearch.github.io/BigOBench/demo.html" target="_blank" style="margin: 2px;">
<img alt="Explorer" src="https://img.shields.io/badge/🔎%20Explorer-BigOBench-white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/facebookresearch/BigOBench">
<img alt="Github" src="https://img.shields.io/badge/Github-facebookresearch/BigOBench-black?logo=github"/>
</a>
<a href="https://huggingface.co/datasets/facebook/BigOBench">
<img alt="HuggingFace" src="https://img.shields.io/badge/🤗%20HuggingFace-facebook/BigOBench-ffc107"/>
</a>
<a href="https://arxiv.org/abs/2503.15242">
<img alt="ArXiv" src="https://img.shields.io/badge/arXiv-2503.15242-b5212f?logo=arxiv"/>
</a>
</div>
## 👋 Overview
* 🚀 Introduction
* 📋 Getting Started with the data
* 🔥 `problem_and_human_solutions_list.jsonl`
* 🔥 `complexity_labels_light.jsonl`
* 🔥 `complexity_labels_full.jsonl`
* 🔥 `time_complexity_test_set.jsonl`
* 🔥 `space_complexity_test_set.jsonl`
* License
* 📝 Citation
## 🚀 Introduction

<span style="font-variant: small-caps;"><b>BigO(Bench)</b></span> is a benchmark of ~300 code problems to be solved in Python, along with 3,105 coding problems and 1,190,250 solutions for training purposes, that evaluates whether LLMs can find the time-space complexity of code solutions or generate code solutions that themselves respect a time-space complexity requirement. This benchmark addresses a gap in current evaluations, which often overlook the ability of models to comprehend and produce code constrained by computational complexity. <span style="font-variant: small-caps;"><b>BigO(Bench)</b></span> includes a complexity inference framework that can run any Python code snippet, measure multiple runtime and memory footprint values, and infer its algorithmic time-space complexity. It also includes a set of 3,105 coding problems and 1,190,250 solutions from Code Contests, annotated with (synthetic) time and space complexity labels inferred by the complexity framework, as well as the corresponding runtime and memory footprint values for a large range of input sizes.
For more details, see our [Paper](https://arxiv.org/abs/2503.15242), [GitHub repository](https://github.com/facebookresearch/bigobench) and [Website](https://facebookresearch.github.io/BigOBench).
## 📋 Getting Started with the data
The data is available as a Hugging Face dataset.
You can download it directly from the Hugging Face website, or use the CLI:
```bash
huggingface-cli download facebook/BigOBench --repo-type dataset --local-dir ./temp_dir
```
It can also be loaded in a Python script using
```python
from datasets import load_dataset
# Change the second parameter to the sub-dataset you would like to use
df_bb = load_dataset("facebook/BigOBench", 'time_complexity_test_set')
```
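Alternatively, if you downloaded the raw files with the CLI command above, each `.jsonl` file can be read directly without the `datasets` library. A minimal sketch (the path below is an assumption matching the `--local-dir` used above; adjust it to your setup):

```python
import json

def read_jsonl(path):
    """Read a .jsonl file into a list of dicts, one record per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Hypothetical local path, assuming the CLI download above:
# records = read_jsonl("./temp_dir/data/complexity_labels_light.jsonl")
```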
You will find 5 sub-datasets, whose content is detailed below.
## 🔥 problem_and_human_solutions_list.jsonl
This file gathers general information about the coding problems and human solutions on which BigO(Bench) is built, from the problem descriptions to the public and private tests. It also contains metadata used for post-processing BigO(Bench) results and for further analyzing where models are strong and where they struggle.
In addition, you will find data added by BigO(Bench): the field `dataclass` describes the inputs of a problem and contains the code of the corresponding dataclass, which we generated with an LLM. The field `complexity_framework` holds metadata from the complexity framework, such as fail rates and input metadata as parsed by the framework itself (which can differ from what the dataclass parsed):
- `dataclass.input_type_list` gives the list of arguments, listed as their data type, as inferred by an LLM. It comes along with the dataclass code, also generated by the LLM, which uses the problem description and a reference solution to infer the data types. This field was used to create filters on the problems and solutions when building the base dataset of BigO(Bench).
- `complexity_framework.measures_set_id_to_input_properties.framework_input_type` is instead the data type of each argument as inferred by the framework. The framework uses the LLM-generated dataclass code to split the input stream (the string representing all the inputs of a problem together), and then parses each input into a data type using rules. This means an LLM can correctly understand that there are two arguments yet mistake them for strings, whereas the framework, after using the LLM-generated dataclass to split the input stream into the two arguments, will correctly infer through its rules that each argument is an integer. To fully understand the complexity framework outputs, use this field; the previous one was only used to filter the base Code Contests dataset and was not used within the complexity framework itself to generate the complexity output.
`problem_and_human_solutions_list`: dict list
* `problem_id`: str
* `problem_name`: str
* `description`: dict
- `text`: str
- `is_description_translated`: bool
- `untranslated_text`: str
* `correct_solution_list`: dict list
- `solution_id`: str
- `solution_code`: str
* `data_source`: str
* `source_specific_limits`: dict
- `time_limit`: dict
- `seconds`: int
- `nanos`: int
- `memory_limit_bytes`: int
* `codeforces_specific_metadata`: dict
- `cf_contest_id`: int
- `cf_index`: str
- `cf_points`: float
- `cf_rating`: int
- `cf_tags`: str list
- `difficulty`: str
* `tests`: dict
- `public_tests`: dict list
- `input`: str
- `output`: str
- `private_tests`: dict list
- `input`: str
- `output`: str
- `generated_tests`: dict list
- `input`: str
- `output`: str
* `human_accuracy_rate`: float
* `dataclass`: dict
- `dataclass_code`: str
- `input_type_list`: str list
- `number_inputs`: int
* `complexity_framework`: dict
- `time_complexity_fail_rate`
- `space_complexity_fail_rate`
- `time_or_space_complexity_fail_rate`
- `measures_set_id_to_input_properties`: dict
- (measures_set_id) str: dict
- `input_id`: str
- `framework_input_type`: str
- `input_dimension`: int
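As a minimal sketch of navigating these nested fields, the snippet below extracts the framework-inferred input types from a hypothetical, truncated record (all field values are invented for illustration). It also mirrors the discrepancy discussed above, where the LLM-inferred `input_type_list` says `str` but the framework's rules infer `int`:

```python
# Hypothetical, truncated record mirroring the schema above; values are invented.
record = {
    "problem_id": "p_001",
    "dataclass": {"input_type_list": ["str", "str"], "number_inputs": 2},
    "complexity_framework": {
        "measures_set_id_to_input_properties": {
            "set_0": {"input_id": "0", "framework_input_type": "int", "input_dimension": 1},
            "set_1": {"input_id": "1", "framework_input_type": "int", "input_dimension": 1},
        }
    },
}

def framework_input_types(record):
    """Map each measures_set_id to its framework-inferred input type."""
    props = record["complexity_framework"]["measures_set_id_to_input_properties"]
    return {set_id: p["framework_input_type"] for set_id, p in props.items()}
```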
## 🔥 complexity_labels_light.jsonl
Light outputs of the complexity framework, as detailed in the module `src/complexity`, when run on all problems and solutions from `problem_and_human_solutions_list.jsonl`.
`complexity_labels_light`: dict list
* `problem_id`: str
* `problem_name`: str
* `solution_id`: str
* `time_complexity_inferred`: str
* `space_complexity_inferred`: str
* `time_curve_coefficient`: float
* `space_curve_coefficient`: float
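A quick way to inspect these light labels is to count solutions per inferred complexity class. A minimal sketch over hypothetical records (the label strings below are invented for illustration and may not match the exact formatting used in the dataset):

```python
from collections import Counter

def complexity_distribution(labels):
    """Count solutions per inferred time-complexity class."""
    return Counter(rec["time_complexity_inferred"] for rec in labels)

# Hypothetical records following the schema above.
labels = [
    {"problem_id": "p1", "solution_id": "s1", "time_complexity_inferred": "o(n)"},
    {"problem_id": "p1", "solution_id": "s2", "time_complexity_inferred": "o(n log n)"},
    {"problem_id": "p2", "solution_id": "s3", "time_complexity_inferred": "o(n)"},
]
```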
## 🔥 complexity_labels_full.jsonl
Full outputs of the complexity framework, as detailed in the module `src/complexity`, when run on all problems and solutions from `problem_and_human_solutions_list.jsonl`.
`complexity_labels_full_n-m`: dict list
* `problem_id`: str
* `problem_name`: str
* `solution_id`: str
* `time_complexity_inferred`: str
* `space_complexity_inferred`: str
* `time_curve_coefficient`: float
* `space_curve_coefficient`: float
* `query_dataclass_code`: str
* `query_code`: str
* `query_inputs_example` : str
* `runtime_measures`: dict list
- `measures_set_id`: str
- `measures_per_expansion_multiplier`: dict list
- `expansion_multiplier`: int
- `measures_per_expansion_method`: dict list
- `value_list`: float list
- `expansion_method`: str
- `measures_set_id_list`: str list
- `measures_priority`: int
* `memory_footprint_measures`: dict list
- `measures_set_id`: str
- `measures_per_expansion_multiplier`: dict list
- `expansion_multiplier`: int
- `measures_per_expansion_method`: dict list
- `value_list`: float list
- `expansion_method`: str
- `measures_set_id_list`: str list
- `measures_priority`: int
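The nested `runtime_measures` structure can be flattened into simple rows for analysis or plotting. A minimal sketch over a hypothetical, truncated record (the `expansion_method` value and all measurements below are invented for illustration):

```python
# Hypothetical, truncated record following the runtime_measures schema above.
full_record = {
    "solution_id": "s_001",
    "runtime_measures": [
        {
            "measures_set_id": "set_0",
            "measures_per_expansion_multiplier": [
                {
                    "expansion_multiplier": 1,
                    "measures_per_expansion_method": [
                        {"value_list": [0.010, 0.011], "expansion_method": "multiply_all",
                         "measures_set_id_list": ["set_0"], "measures_priority": 0},
                    ],
                },
                {
                    "expansion_multiplier": 10,
                    "measures_per_expansion_method": [
                        {"value_list": [0.095], "expansion_method": "multiply_all",
                         "measures_set_id_list": ["set_0"], "measures_priority": 0},
                    ],
                },
            ],
        }
    ],
}

def flatten_runtime_measures(record):
    """Yield (measures_set_id, expansion_multiplier, expansion_method, value)
    for every runtime measurement in a full complexity-labels record."""
    for measures_set in record["runtime_measures"]:
        for per_mult in measures_set["measures_per_expansion_multiplier"]:
            for per_method in per_mult["measures_per_expansion_method"]:
                for value in per_method["value_list"]:
                    yield (
                        measures_set["measures_set_id"],
                        per_mult["expansion_multiplier"],
                        per_method["expansion_method"],
                        value,
                    )
```

The same flattening applies unchanged to `memory_footprint_measures`, which shares the same structure.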
## 🔥 time_complexity_test_set.jsonl
The time complexity test set is made of 311 problems and 640 corresponding solutions, covering 11 different classes (the most represented being O(n), O(n log n), O(n^2), O(1) and O(n × m), and the least represented being O((n + m) log(n + m))).
It was created from `problem_and_human_solutions_list.jsonl` and the outputs of the complexity framework on this dataset, `complexity_labels_full.jsonl`. Filtering was applied to narrow down the final set of problems and solutions.
`time_complexity_test_set`: dict list
* `problem_name`: str
* `problem_id`: str
* `solution_id`: str
* `description`: str
* `solution_code`: str
* `dataclass_code`: str
* `inputs_example`: str
* `time_complexity_inferred`: str
* `time_curve_coefficient`: float
* `tests`: dict
- `public_tests`: dict list
- `input`: str
- `output`: str
- `private_tests`: dict list
- `input`: str
- `output`: str
- `generated_tests`: dict list
- `input`: str
- `output`: str
* `problem_time_curve_coefficient_list`: float list
## 🔥 space_complexity_test_set.jsonl
The space complexity test set consists of 308 problems and 636 solutions, covering 5 different classes (in order of frequency: O(n), O(1), O(n^2), O(n + m), O(n × m)).
It was created from `problem_and_human_solutions_list.jsonl` and the outputs of the complexity framework on this dataset, `complexity_labels_full.jsonl`. Filtering was applied to narrow down the final set of problems and solutions.
`space_complexity_test_set`: dict list
* `problem_name`: str
* `problem_id`: str
* `solution_id`: str
* `description`: str
* `solution_code`: str
* `dataclass_code`: str
* `inputs_example`: str
* `space_complexity_inferred`: str
* `space_curve_coefficient`: float
* `tests`: dict
- `public_tests`: dict list
- `input`: str
- `output`: str
- `private_tests`: dict list
- `input`: str
- `output`: str
- `generated_tests`: dict list
- `input`: str
- `output`: str
* `problem_space_curve_coefficient_list`: float list
## License
The majority of BigO(Bench) is licensed under CC-BY-NC (see [LICENSE](/LICENSE.md)); however, portions of the project are available under separate license terms: https://github.com/pberkes/big_O is licensed under the BSD-3 license.
## 📝 Citation
If you find our project useful and/or are using its data, please cite our paper:
```
@misc{chambon2025bigobenchllmsgenerate,
  title={BigO(Bench) -- Can LLMs Generate Code with Controlled Time and Space Complexity?},
  author={Pierre Chambon and Baptiste Roziere and Benoit Sagot and Gabriel Synnaeve},
  year={2025},
  eprint={2503.15242},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2503.15242},
}
```