A new study by cognitive and computer scientists at the University of California, Irvine finds that people generally overestimate the accuracy of large language model (LLM) outputs. … Lead author Mark Steyvers, cognitive sciences professor and department chair, says these tools can be trained to provide explanations that help users gauge uncertainty and better distinguish fact from fiction. "There's a disconnect between what LLMs know and what people think they know," said Steyvers. "We call this the calibration gap. At the same time, there's also a discrimination gap: how well humans and models can distinguish between correct and incorrect answers. Our study looks at how we can narrow these gaps."
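To make the two gaps concrete, here is a minimal Python sketch, not the study's actual analysis: it assumes an expected-calibration-error-style measure for calibration and a simple mean-confidence-difference measure for discrimination, and all data values and function names below are illustrative.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Mean |confidence - accuracy| across equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences >= lo) & (confidences <= hi if hi == 1.0 else confidences < hi)
        if mask.any():
            # Weight each bin's |confidence - accuracy| by its share of the data.
            ece += mask.mean() * abs(confidences[mask].mean() - correct[mask].mean())
    return ece

def discrimination(confidences, correct):
    """Mean confidence on correct answers minus mean confidence on incorrect ones."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return confidences[correct].mean() - confidences[~correct].mean()

# Hypothetical data: for the same set of LLM answers, the model's own
# confidence vs. a human rater's estimate that each answer is correct
# (1 = the answer was actually correct).
correct          = np.array([1, 1, 0, 1, 0, 0, 1, 1])
model_confidence = np.array([0.90, 0.80, 0.40, 0.85, 0.30, 0.35, 0.75, 0.90])
human_confidence = np.array([0.95, 0.90, 0.80, 0.90, 0.70, 0.75, 0.85, 0.95])

# Calibration gap: how much worse the human's estimates of the model's
# accuracy are calibrated than the model's own confidence.
calibration_gap = (expected_calibration_error(human_confidence, correct)
                   - expected_calibration_error(model_confidence, correct))
print(f"calibration gap:    {calibration_gap:.3f}")

# Discrimination gap: how much better the model's confidence separates
# correct from incorrect answers than the human's estimates do.
discrimination_gap = (discrimination(model_confidence, correct)
                      - discrimination(human_confidence, correct))
print(f"discrimination gap: {discrimination_gap:.3f}")
```

In this toy example the human rates nearly every answer as likely correct, so the calibration gap is positive (human judgments are less calibrated than the model's confidence) and the discrimination gap is positive (the model's confidence separates right from wrong answers better than the human's estimates do).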

For the full story, please visit https://techxplore.com/news/2025-01-people-overestimate-reliability-ai-language.html.