Wired

OpenAI Warns Users Could Become Emotionally Hooked on Its Voice Mode

Summary
Nutrition label

79% Informative

OpenAI 's ChatGPT chatbot has an eerily humanlike voice interface.

In a safety analysis released today , the company acknowledges that this anthropomorphic voice may lure some users into becoming emotionally attached to their chatbot.

The risks explored in the new system card are wide-ranging, and include the potential for GPT-4o to amplify societal biases, spread disinformation, and aid in the development of chemical or biological weapons.

A recent TikTok with almost a million views shows one user apparently so addicted to Character AI that they use the app while watching a movie in a theater.

Some commenters mentioned that they would have to be alone to use the chatbot because of the intimacy of their interactions.

Some users of chatbots like Character AI and Replika report that their chat habits have strained their real-world social relationships.

VR Score: 78
Informative language: 77
Neutral language: 15
Article tone: informal
Language: English
Language complexity: 61
Offensive language: not offensive
Hate speech: not hateful
Attention-grabbing headline: not detected
Known propaganda techniques: not detected
Time-value: medium-lived
Affiliate links: no affiliate links