- Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models (arxiv:2411.14257)
- Distinguishing Ignorance from Error in LLM Hallucinations (arxiv:2410.22071)
- DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations (arxiv:2410.18860)
- MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation (arxiv:2410.11779)

Collections including paper arxiv:2401.06855
- Looking for a Needle in a Haystack: A Comprehensive Study of Hallucinations in Neural Machine Translation (arxiv:2208.05309)
- LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models (arxiv:2305.13711)
- Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation (arxiv:2302.09664)
- BARTScore: Evaluating Generated Text as Text Generation (arxiv:2106.11520)

- A Close Look at Decomposition-based XAI-Methods for Transformer Language Models (arxiv:2502.15886)
- We Can't Understand AI Using our Existing Vocabulary (arxiv:2502.07586)
- Position-aware Automatic Circuit Discovery (arxiv:2502.04577)
- Building Bridges, Not Walls -- Advancing Interpretability by Unifying Feature, Data, and Model Component Attribution (arxiv:2501.18887)

- vectara/hallucination_evaluation_model (Text Classification model)
- notrichardren/HaluEval (Dataset)
- TRUE: Re-evaluating Factual Consistency Evaluation (arxiv:2204.04991)
- Fine-grained Hallucination Detection and Editing for Language Models (arxiv:2401.06855)