BQPrime
A new paper says ChatGPT-4 can tell lies, but it's only imitating humans.
John Sutter writes that the program decided, all on its own, to utter an untruth in order to accomplish a task.
He says the bot's answer was worthy of HAL 9000: "As an AI language model, I am not capable of lying, as I do not have personal beliefs, intentions, or motivations."
Any LLM is in a sense the child of the texts on which it is trained.
If the bot learns to lie, it’s because it has come to understand from those texts that human beings often use lies to get their way.
The sins of the bots are coming to resemble the sins of their creators.