---
pretty_name: QuRatedPajama-1B_tokens_for_analysis
---

## QuRatedPajama

**Paper:** [QuRating: Selecting High-Quality Data for Training Language Models](https://arxiv.org/pdf/2402.09739.pdf)

This dataset is a 1B-token subset derived from [princeton-nlp/QuRatedPajama-260B](https://huggingface.co/datasets/princeton-nlp/QuRatedPajama-260B), which is a subset of [cerebras/SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B) annotated by [princeton-nlp/QuRater-1.3B](https://huggingface.co/princeton-nlp/QuRater-1.3B) with sequence-level quality ratings across four criteria:

- **Educational Value** - e.g., the text includes clear explanations, step-by-step reasoning, or questions and answers
- **Facts & Trivia** - how much factual and trivia knowledge the text contains, where specific facts and obscure trivia are preferred over more common knowledge
- **Writing Style** - how polished and well-written the text is
- **Required Expertise** - how much expertise and prerequisite knowledge is necessary to understand the text

This subset is useful for analysis of the quality ratings. It includes unsupervised domain clusters for the CommonCrawl and C4 domains (a description of these clusters can be found [here](https://huggingface.co/datasets/princeton-nlp/QuRatedPajama-1B_tokens_for_analysis/blob/main/cluster_checkpoint-1M_docs_for_analysis-k25/top_terms_with_title.csv)). We also report the quality ratings per 512-token chunk of each example.
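
As a quick orientation, here is a minimal sketch for streaming a few examples and printing their metadata. The split name (`train`) is an assumption, and the exact names of the rating and cluster columns should be checked in the dataset viewer; the snippet therefore simply prints everything except `input_ids`.

```python
from itertools import islice

from datasets import load_dataset

# Stream to avoid downloading the full 1B-token subset up front.
dataset = load_dataset(
    "princeton-nlp/QuRatedPajama-1B_tokens_for_analysis",
    split="train",  # assumed split name
    streaming=True,
)

# Print all metadata columns (quality ratings, domain/cluster info, ...)
# for the first few chunks, skipping the long token-id list.
for example in islice(dataset, 3):
    print({k: v for k, v in example.items() if k != "input_ids"})
```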

In a pre-processing step, we split documents into chunks of exactly 1024 tokens. We provide the Llama-2 tokenization in the `input_ids` column.
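
To recover the raw text of a chunk, the `input_ids` can be decoded with a Llama-2 tokenizer. The sketch below loads the tokenizer from `meta-llama/Llama-2-7b-hf`, which is a gated repository; any equivalent copy of the Llama-2 tokenizer works, and the split name is again an assumption.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Gated repository - any equivalent Llama-2 tokenizer can be substituted.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

dataset = load_dataset(
    "princeton-nlp/QuRatedPajama-1B_tokens_for_analysis",
    split="train",  # assumed split name
    streaming=True,
)

# Decode the first pre-tokenized 1024-token chunk back into text.
example = next(iter(dataset))
print(tokenizer.decode(example["input_ids"]))
```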

**Guidance on Responsible Use:**

In the paper, we document various types of bias that are present in the quality ratings (biases related to domains, topics, social roles, regions, and languages - see Section 6 of the paper).
Hence, be aware that data selection with QuRating could have unintended and harmful effects on the language model that is being trained.
We strongly recommend a comprehensive evaluation of the language model for these and other types of bias, particularly before real-world deployment.
We hope that releasing the data/models can facilitate future research aimed at uncovering and mitigating such biases.
Note that the quality ratings do not measure the social or literary value of a text and should *not* be used for textual or demographic studies.

**Citation:**

```
@article{wettig2024qurating,
   title={QuRating: Selecting High-Quality Data for Training Language Models},
   author={Alexander Wettig and Aatmik Gupta and Saumya Malik and Danqi Chen},
   journal={arXiv preprint arXiv:2402.09739},
   year={2024}
}
```