People overestimate reliability of AI-assisted language tools: Adding uncertainty phrasing can help

- Mark Steyvers, cognitive sciences, Tech Xplore, Jan. 23, 2025
-----
A new study by cognitive and computer scientists at the University of California, Irvine, finds that people generally overestimate the accuracy of large language model (LLM) outputs. … Lead author Mark Steyvers, professor and chair of cognitive sciences, says these tools can be trained to provide explanations that enable users to gauge uncertainty and better distinguish fact from fiction. "There's a disconnect between what LLMs know and what people think they know," said Steyvers. "We call this the calibration gap. At the same time, there's also a discrimination gap: how well humans and models can distinguish between correct and incorrect answers. Our study looks at how we can narrow these gaps."
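The two gaps Steyvers describes can be made concrete with a small numerical sketch. The Python snippet below is illustrative only (the numbers are made up, not data from the study): it treats a calibration gap as the distance between average confidence and actual accuracy, and discrimination as the difference in average confidence between correct and incorrect answers. The hypothetical human values are inflated to mirror the overestimation the study reports.

```python
# Illustrative sketch (not from the study): quantifying the "calibration gap"
# and "discrimination gap" described above, using made-up toy numbers.
#
# For each of N answers produced by an LLM:
#   correct[i]     -- whether the answer was actually correct (ground truth)
#   model_conf[i]  -- the model's own confidence that the answer is correct
#   human_conf[i]  -- a human reader's estimate that the answer is correct

correct    = [1, 1, 0, 1, 0, 0, 1, 0]                     # hypothetical
model_conf = [0.9, 0.8, 0.3, 0.7, 0.4, 0.2, 0.85, 0.35]   # hypothetical
human_conf = [0.95, 0.9, 0.8, 0.9, 0.85, 0.7, 0.9, 0.8]   # hypothetical

def mean(xs):
    return sum(xs) / len(xs)

accuracy = mean(correct)  # fraction of answers that were actually correct

# Calibration gap: how far average confidence sits from actual accuracy.
model_calibration_gap = abs(mean(model_conf) - accuracy)
human_calibration_gap = abs(mean(human_conf) - accuracy)

# Discrimination: average confidence on correct answers minus average
# confidence on incorrect ones; larger means better at telling them apart.
def discrimination(conf, correct):
    right = [c for c, ok in zip(conf, correct) if ok]
    wrong = [c for c, ok in zip(conf, correct) if not ok]
    return mean(right) - mean(wrong)

print(f"model calibration gap: {model_calibration_gap:.2f}")  # 0.06
print(f"human calibration gap: {human_calibration_gap:.2f}")  # 0.35 (overestimates)
print(f"model discrimination:  {discrimination(model_conf, correct):.2f}")  # 0.50
print(f"human discrimination:  {discrimination(human_conf, correct):.2f}")  # 0.12
```

In this toy setup, the human's confidence is both further from the true accuracy (a wider calibration gap) and less separated between correct and incorrect answers (a wider discrimination gap), which is the pattern that uncertainty phrasing in the model's explanations is meant to narrow.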
For the full story, please visit https://techxplore.com/news/2025-01-people-overestimate-reliability-ai-language.html.
-----
Would you like to get more involved with the social sciences? Email us at communications@socsci.uci.edu to connect.