LLMs' Bias for Inaction
This is a news story, published by 404 Media, that relates primarily to UCL news.
A new study shows that AI models have an exaggerated version of human ‘bias for inaction’

73% Informative
Researchers at UCL’s Causal Cognition Lab examined four large language models.
They found that the models are likely to demonstrate an exaggerated version of human beings’ “bias for inaction” when faced with yes-or-no questions.
In decision making, the researchers found that LLMs act like extreme versions of humans.
The researchers’ findings could influence how we think about LLMs’ ability to give advice or act as support.
Cheung, one of the researchers, worries that chatbot users might not be aware that the models could be giving responses or advice based on superficial features of the question or prompt. “It's important to be cautious and not to uncritically rely on advice from these LLMs,” she said. She pointed out that previous research indicates that people actually prefer advice from LLMs to advice from trained ethicists, but that this doesn’t make chatbot suggestions ethically or morally correct.
VR Score: 76
Informative language: 80
Neutral language: 42
Article tone: informal
Language: English
Language complexity: 46
Offensive language: not offensive
Hate speech: not hateful
Attention-grabbing headline: detected
Known propaganda techniques: not detected
Time-value: long-living
External references: 4
Source diversity: 4
Affiliate links: no affiliate links