This is a Cornell news story, published by TechCrunch, that relates primarily to Waterloo news.
unreliable narrators
TechCrunch • 84% Informative
A recent study from researchers at Cornell, the universities of Washington and Waterloo, and the nonprofit AI2 sought to benchmark hallucinations by fact-checking models like GPT-4o against authoritative sources on topics ranging from law and health to history and geography.
They found that no model performed exceptionally well across all topics, and that models that hallucinated the least did so partly because they refused to answer questions they’d otherwise get wrong.
Claude 3 Haiku answered only around 72% of the questions it was asked, choosing to abstain from the rest.
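As a rough illustration of what such a benchmark measures, the sketch below (Python, not the researchers' actual code) tallies answer, abstention, and hallucination rates from responses that have already been graded against authoritative sources; the grading labels and response format are assumptions for illustration only.

```python
# Minimal sketch (illustrative, not the study's methodology): compute answer,
# abstention, and hallucination rates from graded model responses.
from collections import Counter

def score_responses(graded_responses):
    """graded_responses: one label per question, each 'correct',
    'hallucinated', or 'abstained' (hypothetical labels)."""
    counts = Counter(graded_responses)
    total = len(graded_responses)
    answered = counts["correct"] + counts["hallucinated"]
    return {
        # Share of questions the model actually answered
        # (e.g. roughly 0.72 for Claude 3 Haiku in the study).
        "answer_rate": answered / total,
        "abstention_rate": counts["abstained"] / total,
        # Hallucination rate is computed over answered questions only,
        # so a model that abstains often can look better on this metric.
        "hallucination_rate": counts["hallucinated"] / answered if answered else 0.0,
    }

print(score_responses(["correct", "hallucinated", "abstained", "correct"]))
```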
Study co-author Zhao says vendors should focus more of their time and effort on hallucination-reducing research.
Eliminating hallucinations entirely may not be possible, but they can be mitigated through human-in-the-loop fact-checking and citation during a model’s development.
VR Score: 86
Informative language: 86
Neutral language: 55
Article tone: semi-formal
Language: English
Language complexity: 61
Offensive language: possibly offensive
Hate speech: not hateful
Attention-grabbing headline: not detected
Known propaganda techniques: not detected
Time-value: long-living
External references: 8
Source diversity: 8
Affiliate links: none