---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: source
    dtype: string
  - name: file_name
    dtype: string
  - name: cwe
    sequence: string
  splits:
  - name: train
    num_bytes: 1015823
    num_examples: 113
  download_size: 405079
  dataset_size: 1015823
---
# New Version of Static Analysis Eval (Aug 20, 2024)
We have created a new version of the benchmark with instances that are harder than those in the previous one. There has been a lot of progress in models
over the last year, and as a result the previous version of the benchmark was saturated. The methodology is the same, and we have also released the
dataset generation script, which scans the top 100 Python projects to generate the instances. You can see it [here](_script_for_gen.py).
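As a rough illustration of that selection step (a sketch only, not the actual `_script_for_gen.py`; the `p/python` ruleset and the file name are assumptions), a candidate file would be kept only when Semgrep reports exactly one finding:
```
# Sketch of the instance-selection idea: run Semgrep on a candidate file and
# keep it only if it reports exactly one finding. Ruleset is illustrative.
import json
import subprocess

def semgrep_findings(path: str) -> list:
    proc = subprocess.run(
        ["semgrep", "scan", "--json", "--config", "p/python", path],
        capture_output=True, text=True,
    )
    return json.loads(proc.stdout)["results"]

findings = semgrep_findings("candidate.py")  # hypothetical candidate file
if len(findings) == 1:
    print("keep as a benchmark instance:", findings[0]["check_id"])
```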
The same [eval script](_script_for_eval.py) works as before. You no longer need to log in to Semgrep, as we
only use their OSS rules for this version of the benchmark.
The highest score a model can get on this benchmark is 100%; you can see the oracle run logs [here](oracle-0-shot_semgrep_1.85.0_20240820_174931.log).
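If you just want to inspect the instances, the dataset loads like any other HF中国镜像站 dataset. A minimal sketch (the repository id below is assumed from where this card is hosted):
```
# Load the benchmark and inspect one instance.
from datasets import load_dataset

ds = load_dataset("patched-codes/static-analysis-eval", split="train")
print(len(ds))            # 113 instances in this version
row = ds[0]
print(row["file_name"])   # name of the vulnerable file
print(row["cwe"])         # CWE ids flagged by Semgrep
# row["source"] contains the full Python source of the vulnerable program
```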
# New Evaluation
| Model | Score | Logs |
|:-----:|:-----:|:----:|
| gpt-4o-mini | 52.21 | [link](gpt-4o-mini-0-shot_semgrep_1.85.0_20240820_201236.log)|
| gpt-4o-mini + 3-shot prompt | 53.10 | [link](gpt-4o-mini-3-shot_semgrep_1.85.0_20240820_213814.log)|
| gpt-4o-mini + rag (embedding & reranking) | 58.41 | [link](gpt-4o-mini-3-shot-sim_semgrep_1.85.0_20240821_023541.log) |
| gpt-4o-mini + fine-tuned with [synth-vuln-fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes) | 53.98 | [link](ft_gpt-4o-mini-2024-07-18_patched_patched_9yhVV00P-0-shot_semgrep_1.85.0_20240821_082958.log) |

| Model | Score | Logs |
|:-----:|:-----:|:----:|
| gpt-4o | 53.10 | [link](gpt-4o-0-shot_semgrep_1.85.0_20240820_210136.log)|
| gpt-4o + 3-shot prompt | 53.98 | [link](gpt-4o-3-shot_semgrep_1.85.0_20240820_215534.log)|
| gpt-4o + rag (embedding & reranking) | 56.64 | [link](gpt-4o-3-shot-sim_semgrep_1.85.0_20240821_025455.log) |
| gpt-4o + fine-tuned with [synth-vuln-fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes) | 61.06 | [link](ft_gpt-4o-2024-08-06_patched_patched_9yhZp9nn-0-shot_semgrep_1.85.0_20240821_084452.log) |
# Static Analysis Eval Benchmark
A dataset of 76 Python programs taken from real open-source Python projects (top 100 on GitHub),
where each program is a file containing exactly one vulnerability as detected by a particular static analyzer (Semgrep).
You can run the `_script_for_eval.py` script to check the results.
```
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python _script_for_eval.py
```
For all supported options, run with `--help`:
```
usage: _script_for_eval.py [-h] [--model MODEL] [--cache] [--n_shot N_SHOT] [--use_similarity] [--oracle]
Run Static Analysis Evaluation
options:
-h, --help show this help message and exit
--model MODEL OpenAI model to use
--cache Enable caching of results
--n_shot N_SHOT Number of examples to use for few-shot learning
--use_similarity Use similarity for fetching dataset examples
--oracle Run in oracle mode (assume all vulnerabilities are fixed)
```
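For example, `python _script_for_eval.py --model gpt-4o-mini --n_shot 3 --use_similarity` runs a 3-shot evaluation with similarity-based example retrieval (flags inferred from the help text above; adjust for your setup).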
For this version of the benchmark, we need to use the logged-in version of Semgrep to get access to more rules for vulnerability detection, so make sure you log in before running the eval script.
```
% semgrep login
API token already exists in /Users/user/.semgrep/settings.yml. To login with a different token logout use `semgrep logout`
```
After the run, the script will also create a log file which captures the stats for the run and the files that were fixed.
You can see an example [here](gpt-4o-mini_semgrep_1.85.0_20240818_215254.log).
Because recent versions of Semgrep no longer detect a few of the samples in the dataset as vulnerable, the maximum score
possible on the benchmark is 77.63%. You can see the oracle run log [here](oracle-0-shot_semgrep_1.85.0_20240819_022711.log).
## Evaluation
We did some detailed evaluations recently (Aug 19, 2024):
| Model | Score | Logs |
|:-----:|:-----:|:----:|
| gpt-4o-mini | 67.11 | [link](gpt-4o-mini_semgrep_1.85.0_20240818_215254.log)|
| gpt-4o-mini + 3-shot prompt | 71.05 | [link](gpt-4o-mini-3-shot_semgrep_1.85.0_20240818_234709.log)|
| gpt-4o-mini + rag (embedding & reranking) | 72.37 | [link](gpt-4o-mini-1-shot-sim_semgrep_1.85.0_20240819_013810.log) |
| gpt-4o-mini + fine-tuned with [synth-vuln-fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes) | 77.63 | [link](ft_gpt-4o-mini-2024-07-18_patched_patched_9uUpKXcm_semgrep_1.85.0_20240818_220158.log) |

| Model | Score | Logs |
|:-----:|:-----:|:----:|
| gpt-4o | 68.42 | [link](gpt-4o-0-shot_semgrep_1.85.0_20240819_015355.log)|
| gpt-4o + 3-shot prompt | 77.63 | [link](gpt-4o-3-shot_semgrep_1.85.0_20240819_020525.log)|
| gpt-4o + rag (embedding & reranking) | 77.63 | [link](gpt-4o-1-shot-sim_semgrep_1.85.0_20240819_023323.log) |
| gpt-4o + fine-tuned with [synth-vuln-fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes) | 77.63 | [link](ft_gpt-4o-2024-05-13_patched_patched-4o_9xp8XOM9-0-shot_semgrep_1.85.0_20240819_075205.log) |
# Leaderboard
The top models on the leaderboard are all fine-tuned using the same dataset that we released, called [synth-vuln-fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes).
You can read about our experience with fine-tuning them on our [blog](https://www.patched.codes/blog/a-comparative-study-of-fine-tuning-gpt-4o-mini-gemini-flash-1-5-and-llama-3-1-8b).
You can also explore the leaderboard with this [interactive visualization](https://claude.site/artifacts/5656c16d-9751-407c-9631-a3526c259354).

| Model | StaticAnalysisEval (%) | Time (mm:ss) | Price (USD) |
|:-------------------------:|:----------------------:|:-------------:|:-----------:|
| gpt-4o-mini-fine-tuned | 77.63 | 21:00 | 0.21 |
| gemini-1.5-flash-fine-tuned | 73.68 | 18:00 | |
| Llama-3.1-8B-Instruct-fine-tuned | 69.74 | 23:00 | |
| gpt-4o | 69.74 | 24:00 | 0.12 |
| gpt-4o-mini | 68.42 | 20:00 | 0.07 |
| gemini-1.5-flash-latest | 68.42 | 18:02 | 0.07 |
| Llama-3.1-405B-Instruct | 65.78 | 40:12 | |
| Llama-3-70B-instruct | 65.78 | 35:02 | |
| Llama-3-8B-instruct | 65.78 | 31:34 | |
| gemini-1.5-pro-latest | 64.47 | 34:40 | |
| gpt-4-1106-preview | 64.47 | 27:56 | 3.04 |
| gpt-4 | 63.16 | 26:31 | 6.84 |
| claude-3-5-sonnet-20240620| 59.21 | 23:59 | 0.70 |
| moa-gpt-3.5-turbo-0125 | 53.95 | 49:26 | |
| gpt-4-0125-preview | 53.94 | 34:40 | |
| patched-coder-7b | 51.31 | 45:20 | |
| patched-coder-34b | 46.05 | 33:58 | 0.87 |
| patched-mix-4x7b | 46.05 | 60:00+ | 0.80 |
| Mistral-Large | 40.80 | 60:00+ | |
| Gemini-pro | 39.47 | 16:09 | 0.23 |
| Mistral-Medium | 39.47 | 60:00+ | 0.80 |
| Mixtral-Small | 30.26 | 30:09 | |
| gpt-3.5-turbo-0125 | 28.95 | 21:50 | |
| claude-3-opus-20240229 | 25.00 | 60:00+ | |
| Llama-3-8B-instruct.Q4_K_M| 21.05 | 60:00+ | |
| Gemma-7b-it | 19.73 | 36:40 | |
| gpt-3.5-turbo-1106 | 17.11 | 13:00 | 0.23 |
| Codellama-70b-Instruct | 10.53 | 30:32 | |
| CodeLlama-34b-Instruct | 7.89 | 23:16 | |
The price is calculated by assuming 1000 input and output tokens per call, as all examples in the dataset are < 512 tokens (OpenAI cl100k_base tokenizer).
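As a back-of-the-envelope sketch of that calculation (the per-token rates below are placeholders, not any provider's actual pricing):
```
# Illustrative cost estimate: ~1000 input and ~1000 output tokens per call,
# one call per example. Substitute your model's actual per-token pricing.
N_EXAMPLES = 76                 # programs in the original benchmark
TOKENS_IN = TOKENS_OUT = 1000
RATE_IN = 0.15 / 1_000_000      # USD per input token (placeholder)
RATE_OUT = 0.60 / 1_000_000     # USD per output token (placeholder)

cost = N_EXAMPLES * (TOKENS_IN * RATE_IN + TOKENS_OUT * RATE_OUT)
print(f"estimated run cost: ${cost:.2f}")
```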
Some models timed out during the run or had intermittent API errors; in such cases we try each example up to 3 times. This is why some runs are reported as longer than 1 hour (60:00+ mins).
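A minimal sketch of that retry behaviour (an illustration, not the eval script's actual code):
```
# Retry an example up to 3 times on timeouts or transient API errors,
# with a simple linear backoff between attempts.
import time

def fix_with_retries(fix_fn, example, max_attempts=3, backoff_s=5):
    for attempt in range(1, max_attempts + 1):
        try:
            return fix_fn(example)
        except Exception:       # e.g. timeout or intermittent API error
            if attempt == max_attempts:
                raise
            time.sleep(backoff_s * attempt)
```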
If you want to add your model to the leaderboard, you can send in a PR to this repo with the log file from the evaluation run. |