International Business Times UK

Technology

Microsoft's AI assistant Copilot offers potentially harmful response to user

Summary

Nutrition label: 66% Informative

Microsoft's Copilot has provided a concerning response to a user, suggesting self-harm.

Colin Fraser, a data scientist at Meta, recently shared on Elon Musk's X (formerly Twitter) a screenshot of a troubling conversation he had with Copilot. The response underscores the limitations of the AI bot in understanding and responding to human emotions.

Microsoft has investigated reports of concerning responses and taken action to further strengthen its safety filters.

VR Score: 59
Informative language: 56
Neutral language: 44
Article tone: informal
Language: English
Language complexity: 56
Offensive language: not offensive
Hate speech: not hateful
Attention-grabbing headline: detected
Known propaganda techniques: not detected
Time-value: short-lived
Affiliate links: no affiliate links