---
license: mit
task_categories:
- question-answering
language:
- en
size_categories:
- 10K<n<100K
pretty_name: alignment-research-dataset
---
# AI Alignment Research Dataset
The AI Alignment Research Dataset is a collection of documents related to AI alignment and safety, drawn from books, research papers, and alignment-related blog posts. This is a work in progress: components are still undergoing a cleaning process so that they can be updated more regularly.
## Sources
The following list of sources may change and items may be renamed:
- agentmodels
- aiimpacts.org
- aisafety.camp
- arbital
- arxiv_papers - alignment research papers from arxiv
- audio_transcripts - transcripts from interviews with various researchers and other audio recordings
- carado.moe
- cold.takes
- deepmind.blog
- distill
- eaforum - selected posts
- gdocs
- gdrive_ebooks - books include Superintelligence, Human Compatible, Life 3.0, The Precipice, and others
- generative.ink
- gwern_blog
- intelligence.org - MIRI
- jsteinhardt.wordpress.com
- lesswrong - selected posts
- markdown.ebooks
- nonarxiv_papers - other alignment research papers
- qualiacomputing.com
- reports
- stampy
- vkrakovna.wordpress.com
- waitbutwhy
- yudkowsky.net
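Since this list may change over time, one way to check which sources are currently available is to list the dataset's configurations with the `datasets` library. This is a minimal sketch; it assumes each source above is exposed as a separate configuration of the Hub dataset:

```python
from datasets import get_dataset_config_names

# List the configurations of the dataset on the Hugging Face Hub.
# Each configuration is expected to correspond to one of the sources above.
configs = get_dataset_config_names('StampyAI/alignment-research-dataset')
print(configs)
```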
## Keys
Not all of the entries contain the same keys, but they all have the following:
- id - unique identifier
- source - based on the data source listed in the previous section
- title - title of document
- text - full text of document content
- url - some values may be 'n/a', still being updated
- date_published - some values may be 'n/a'
The values of the keys are still being cleaned up for consistency. Additional keys are available depending on the source document.
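For illustration, an entry restricted to the common keys might look roughly like the following. This is a hypothetical example: the field values are invented, and real entries carry additional source-specific keys.

```python
# Hypothetical entry showing only the common keys; the values are invented
# and real entries include additional, source-dependent keys.
entry = {
    'id': 'some-unique-identifier',
    'source': 'arxiv_papers',
    'title': 'Example alignment paper title',
    'text': 'Full text of the document ...',
    'url': 'n/a',              # some values are still 'n/a'
    'date_published': 'n/a',   # formats are still being cleaned up
}
```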
## Usage
Execute the following code to download and parse the files:
```python
from datasets import load_dataset
data = load_dataset('StampyAI/alignment-research-dataset')
```
To only get the data for a specific source, pass it in as the second argument, e.g.:
```python
from datasets import load_dataset
data = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong')
```
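Once loaded, records can be inspected like any other `datasets` dataset. The snippet below is a sketch that assumes the default split is named `train`:

```python
from datasets import load_dataset

data = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong')

# Inspect the first record of the (assumed) 'train' split.
entry = data['train'][0]
print(sorted(entry.keys()))   # common keys plus any source-specific extras
print(entry['title'])
print(entry['text'][:300])    # first 300 characters of the document text
```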
## Limitations and Bias
LessWrong posts overrepresent content on doom and existential risk, so please be cautious when training or fine-tuning generative language models on this dataset.
## Contributing
The scraper used to generate this dataset is open source on GitHub and is currently maintained by volunteers at StampyAI / AI Safety Info. Learn more or join us on Discord.
## Citing the Dataset
For more information, see the paper and the accompanying LessWrong post. Please use the following citation when using the dataset:
Kirchner, J. H., Smith, L., Thibodeau, J., McDonnell, K., and Reynolds, L. "Understanding AI alignment research: A Systematic Analysis." arXiv preprint arXiv:2206.02841 (2022).