
LLM Hallucinations in the Wild

Posted by anygivnthursday | 4 hours ago | 1 comment

anygivnthursday 4 hours ago

> Large language models (LLMs) are known to generate plausible but false information across a wide range of contexts, yet the real-world magnitude and consequences of this hallucination problem remain poorly understood. Here we leverage a uniquely verifiable class of objects - scientific citations - to audit 111 million references across 2.5 million papers in arXiv, bioRxiv, SSRN, and PubMed Central.