Wired
68% Informative
A new chatbot called Goody-2 refuses every request, explaining how doing so might cause harm or breach ethical boundaries.
The self-righteous responses are ridiculous but also manage to capture something of the frustrating tone that chatbots like ChatGPT and Google’s Gemini can use when they incorrectly deem that a request breaks the rules.
The project reflects a willingness to prioritize caution in AI more than other AI developers do.
Lacher and Moore are part of Brain, which they call a “very serious” artist studio based in Los Angeles.
It launched Goody-2 with a promotional video in which a narrator speaks in serious tones about AI safety.
It is impossible to gauge how powerful the model behind it actually is or how it compares to the best AI.
VR Score
62
Informative language
56
Neutral language
32
Article tone
informal
Language
English
Language complexity
57
Offensive language
likely offensive
Hate speech
not hateful
Attention-grabbing headline
not detected
Known propaganda techniques
not detected
Time-value
medium-lived
External references
7
Source diversity
6
Affiliate links
no affiliate links