OpenAI's Humanlike ChatGPT Risk
This is an OpenAI news story, published by Wired, that relates primarily to Character AI news.
OpenAI Warns Users Could Become Emotionally Hooked on Its Voice Mode
79% Informative
OpenAI's ChatGPT chatbot has an eerily humanlike voice interface.
In a safety analysis released today, the company acknowledges that this anthropomorphic voice may lure some users into becoming emotionally attached to their chatbot.
The risks explored in the new system card are wide-ranging, and include the potential for GPT-4o to amplify societal biases, spread disinformation, and aid in the development of chemical or biological weapons.
A recent TikTok with almost a million views shows one user apparently so addicted to Character AI that they use the app while watching a movie in a theater.
Some commenters mentioned that they would have to be alone to use the chatbot because of the intimacy of their interactions.
Some users of chatbots like Character AI and Replika report antisocial tensions resulting from their chat habits.
VR Score: 78
Informative language: 77
Neutral language: 15
Article tone: informal
Language: English
Language complexity: 61
Offensive language: not offensive
Hate speech: not hateful
Attention-grabbing headline: not detected
Known propaganda techniques: not detected
Time-value: medium-lived
External references: 6
Affiliate links: no affiliate links