---
license: apache-2.0
task_categories:
- text-classification
- summarization
language:
- en
tags:
- named-entity-recognition
- synthetic-data
---

# Dataset Card for WHODUNIT: Evaluation Benchmark for Culprit Detection in Mystery Stories

This dataset contains crime and mystery novels along with their metadata. Each entry includes the full text, title, author, book length, and a list of identified culprits. Additionally, an augmented version of the dataset introduces entity replacements and synthetic data variations.

## Dataset Details

### Dataset Description

- **Language(s):** English
- **License:** Apache-2.0

### Dataset Sources

- **Repository:** [WhoDunIt Evaluation Benchmark](https://github.com/kjgpta/WhoDunIt-Evaluation_benchmark_for_culprit_detection_in_mystery_stories)

## Uses

### Direct Use

This dataset can be used for:

- Training models for text classification based on authorship, themes, or book characteristics.
- Named Entity Recognition (NER) for detecting culprits and other entities in crime stories.
- Summarization tasks for generating concise descriptions of mystery novels.
- Text generation and storytelling applications.
- Evaluating models' robustness against entity alterations using the augmented dataset.

### Out-of-Scope Use

- The dataset should not be used for real-world criminal investigations or forensic profiling.
- Any misuse involving biased predictions or unethical AI applications should be avoided.

## Dataset Structure

### Data Fields

#### **Original Dataset**

- `text` (*string*): The full text or an excerpt from the novel.
- `title` (*string*): The title of the novel.
- `author` (*string*): The author of the novel.
- `length` (*integer*): The number of pages in the novel.
- `culprit_ids` (*list of strings*): The list of culprits in the story.

#### **Augmented Dataset**

- Contains the same fields as the original dataset.
- Additional field:
  - `metadata` (*dict*): Information on the entity-replacement strategy applied (e.g., replacing names with fictional or thematic counterparts).
- Modified `culprit_ids`: Culprit names are replaced according to the chosen replacement style (e.g., random names, thematic names).

### Data Splits

Both the original and augmented datasets are provided as single corpora without predefined splits.

## Dataset Creation

### Curation Rationale

This dataset was curated to support the study of crime-fiction narratives and their structural patterns, with a focus on culprit detection in mystery stories. The augmented dataset was created to test the robustness of NLP models against entity modifications.

### Source Data

#### Data Collection and Processing

The original dataset is curated from public-domain literary works. Each text is processed to extract metadata such as title, author, book length, and named culprits. The augmented dataset introduces variations through entity replacement, where character names are substituted according to predefined rules (e.g., random names or theme-based replacements).

#### Who are the source data producers?

The dataset is composed of classic crime and mystery novels by renowned authors such as Agatha Christie, Arthur Conan Doyle, and Fyodor Dostoevsky.

## Bias, Risks, and Limitations

- The dataset consists primarily of classic literature, which may not reflect modern storytelling techniques.
- The augmented dataset's entity replacements may introduce artificial biases.
- The corpus may carry inherent biases rooted in the cultural and historical context of the original works.

## Citation

**BibTeX:**

```
@misc{gupta2025whodunitevaluationbenchmarkculprit,
      title={WHODUNIT: Evaluation benchmark for culprit detection in mystery stories},
      author={Kshitij Gupta},
      year={2025},
      eprint={2502.07747},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.07747},
}
```
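## Appendix: Entity-Replacement Sketch

The "random names" replacement style described in this card can be sketched in a few lines. The helper below is a hypothetical illustration only: the field names follow the schema above, but the name pool, the `replace_culprits` function, and the `metadata` layout are assumptions, not the repository's actual implementation.

```python
import random

# Illustrative replacement pool; the real augmentation pipeline's pool is not documented here.
RANDOM_NAMES = ["Jordan Blake", "Casey Monroe", "Riley Hart"]

def replace_culprits(entry, rng=None):
    """Return a copy of a dataset entry with culprit names swapped for
    random substitutes, recording the mapping in a `metadata` field."""
    rng = rng or random.Random(0)
    # Map each original culprit name to a randomly chosen substitute.
    mapping = {name: rng.choice(RANDOM_NAMES) for name in entry["culprit_ids"]}
    text = entry["text"]
    for original, replacement in mapping.items():
        text = text.replace(original, replacement)
    return {
        **entry,
        "text": text,
        "culprit_ids": list(mapping.values()),
        "metadata": {"replacement_style": "random", "mapping": mapping},
    }

# Toy entry shaped like a dataset record (invented, not from the corpus).
entry = {
    "text": "In the end, Inspector Shaw revealed that Edward Crane was the murderer.",
    "title": "A Sample Mystery",
    "author": "Jane Doe",
    "length": 312,
    "culprit_ids": ["Edward Crane"],
}
augmented = replace_culprits(entry)
```

Thematic or other replacement styles would differ only in how the substitute names are chosen; the mapping stored in `metadata` makes each augmentation reversible for evaluation.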