Mashable

Technology

More concise chatbot responses tied to increase in hallucinations, study finds

Summary
Nutrition label: 82% Informative

French AI testing platform Giskard published a study analyzing chatbots for hallucination-related issues.

The study found that asking the models to be brief in their responses "specifically degraded factual reliability across most models tested." ChatGPT, Claude, Gemini, Llama, Grok, and DeepSeek were the models studied.

VR Score: 79
Informative language: 78
Neutral language: 38
Article tone: informal
Language: English
Language complexity: 68
Offensive language: not offensive
Hate speech: not hateful
Attention-grabbing headline: not detected
Known propaganda techniques: not detected
Time-value: medium-lived
Affiliate links: no affiliate links