---
language:
- en
license: mit
size_categories:
- 10K<n<100K
task_categories:
- question-answering
pretty_name: alignment-research-dataset
dataset_info:
  features:
  - name: id
    dtype: string
  - name: source
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: large_string
  - name: url
    dtype: string
  - name: date_published
    dtype: string
  - name: authors
    sequence: string
  - name: summary
    sequence: string
  - name: source_type
    dtype: string
  - name: book_title
    dtype: string
  - name: karma
    dtype: int32
  - name: votes
    dtype: int32
  - name: words
    dtype: int32
  - name: comment_count
    dtype: int32
  - name: tags
    sequence: string
  - name: modified_at
    dtype: string
  - name: alias
    dtype: string
  - name: data_last_modified
    dtype: string
  - name: abstract
    dtype: string
  - name: author_comment
    dtype: string
  - name: journal_ref
    dtype: string
  - name: doi
    dtype: string
  - name: primary_category
    dtype: string
  - name: categories
    sequence: string
  - name: bibliography_bib
    sequence:
    - name: title
      dtype: string
  - name: description
    dtype: string
  config_name: all
  splits:
  - name: train
    num_bytes: 411471024
    num_examples: 14163
  download_size: 423573134
  dataset_size: 411471024
---
# AI Alignment Research Dataset

The AI Alignment Research Dataset is a collection of documents related to AI alignment and safety, drawn from books, research papers, and alignment-related blog posts. This is a work in progress: components are still undergoing a cleaning process so that they can be updated more regularly.
## Sources
The following list of sources may change and items may be renamed:
- agentmodels
- aiimpacts
- aisafety.camp
- aisafety.info
- alignmentforum
- alignment_newsletter
- arbital
- arxiv - alignment research papers from arXiv
- audio_transcripts - transcripts from interviews with various researchers and other audio recordings
- carado.moe
- cold_takes
- deepmind_blog
- distill
- eaforum - selected posts
- generative.ink
- gwern_blog
- importai
- jsteinhardt_blog
- lesswrong - selected posts
- miri - Machine Intelligence Research Institute (MIRI)
- ml_safety_newsletter
- special_docs - a catch-all for various articles, papers, etc. that don't come from a single source
- vkrakovna_blog
- waitbutwhy
- yudkowsky_blog
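
Each of these sources is also exposed as its own configuration of the dataset (see the Usage section below). As a rough sketch using the `datasets` library, the currently available configuration names can be listed programmatically:

```python
from datasets import get_dataset_config_names

# List the dataset's configurations; the sources above should appear here
# alongside the combined "all" configuration.
configs = get_dataset_config_names('StampyAI/alignment-research-dataset')
print(configs)
```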
## Keys
Not all of the entries contain the same keys, but they all have the following:

- `id` - unique identifier
- `source` - based on the data source listed in the previous section
- `title` - title of document
- `text` - full text of document content
- `url` - some values may be `'n/a'`, still being updated
- `date_published` - some `'n/a'`
- `authors` - list of author names, may be empty
- `summary` - list of human-written summaries from various newsletters, may be empty
The values of the keys are still being cleaned up for consistency. Additional keys are available depending on the source document.
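
As a minimal sketch of reading these keys (this assumes the dataset is loaded with the call shown in the Usage section below, and uses the column names from the metadata at the top of this card):

```python
from datasets import load_dataset

# 'all' is the combined configuration; 'train' is the only split listed above.
data = load_dataset('StampyAI/alignment-research-dataset', 'all', split='train')

for row in data.select(range(3)):
    # Keys that every entry has.
    print(row['id'], row['source'], row['title'], row['url'], row['date_published'])
    print('authors:', row['authors'], '| summaries:', len(row['summary']))

    # Source-specific keys may be empty or None depending on the source.
    if row.get('abstract'):
        print('abstract:', row['abstract'][:100])
```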
## Usage
Execute the following code to download and parse the files:
```python
from datasets import load_dataset
data = load_dataset('StampyAI/alignment-research-dataset')
```
To only get the data for a specific source, pass it in as the second argument, e.g.:
```python
from datasets import load_dataset
data = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong')
```
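
As a further, hedged sketch (again assuming the column names from the metadata above), a loaded split can be summarized or filtered with standard `datasets` operations:

```python
from collections import Counter
from datasets import load_dataset

data = load_dataset('StampyAI/alignment-research-dataset', 'all', split='train')

# Number of documents contributed by each source.
print(Counter(data['source']).most_common(10))

# Keep only documents with at least 500 words.
long_docs = data.filter(lambda row: row['words'] is not None and row['words'] >= 500)
print(len(long_docs), 'documents with >= 500 words')
```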
## Limitations and Bias

The LessWrong posts in this dataset are heavily weighted toward doom and existential risk, so please be mindful of this when training or fine-tuning generative language models on the dataset.
## Contributing
The scraper to generate this dataset is open-sourced on GitHub and currently maintained by volunteers at StampyAI / AI Safety Info. Learn more or join us on Discord.
## Rebuilding info

This README contains information about the number of rows and their features, which should be rebuilt each time the datasets change. To do so, run:

```
datasets-cli test ./alignment-research-dataset --save_info --all_configs
```
## Citing the Dataset

For more information, see the accompanying paper and LessWrong post. Please use the following citation when using the dataset:
Kirchner, J. H., Smith, L., Thibodeau, J., McDonnell, K., and Reynolds, L. "Understanding AI alignment research: A Systematic Analysis." arXiv preprint arXiv:2022.4338861 (2022).