Health

A new study shows that AI models have an exaggerated version of human ‘bias for inaction’

404 Media
Summary
Nutrition label: 73% Informative

Researchers at UCL's Causal Cognition Lab examined four large language models.

They found that the models are likely to demonstrate an exaggerated version of human beings' "bias for inaction" when faced with yes-or-no questions.

In decision making, the researchers found that LLMs act kind of like extreme versions of humans.

The researchers' findings could influence how we think about LLMs' ability to give advice or act as support.

Cheung worries that chatbot users might not be aware that the chatbots could be giving responses or advice based on superficial features of the question or prompt. "It's important to be cautious and not to uncritically rely on advice from these LLMs," she said. She pointed out that previous research indicates people actually prefer advice from LLMs to advice from trained ethicists, but that doesn't make chatbot suggestions ethically or morally correct.

VR Score: 76
Informative language: 80
Neutral language: 42
Article tone: informal
Language: English
Language complexity: 46
Offensive language: not offensive
Hate speech: not hateful
Attention-grabbing headline: detected
Known propaganda techniques: not detected
Time-value: long-living
Affiliate links: no affiliate links