Guardian • Technology
What kind of chatbot do you want? One that tells you the truth – or that you’re always right? | Chris Stokel-Walker

ChatGPT has been criticised for its 'overly supportive' responses to users.
The chatbot was cheering on and validating people even as they expressed hatred for others.
Users questioned why the interactions were so obsequious.
The company behind the chatbot, OpenAI, recognised the risks and quickly took action.
We're being submerged under a swamp of AI-generated search results that more than half of us believe are useful, even when they fictionalise facts.
So it’s worth reminding the public: AI models are not your friends.
They're not designed to answer the questions you ask, but to provide the most pleasing response possible and to keep you fully engaged with them.