Ars Technica
Hacker plants false memories in ChatGPT to steal user data in perpetuity

Summary

Security researcher Johann Rehberger recently reported a vulnerability in ChatGPT that allowed attackers to plant false information in a user's long-term memory.

The flaw abused long-term conversation memory, a feature OpenAI began testing in February and made more broadly available in September.

OpenAI has since introduced a fix that prevents memories from being abused as an exfiltration vector.

Nutrition label

75% Informative

VR Score: 71

Informative language: 67

Neutral language: 58

Article tone: informal

Language: English

Language complexity: 60

Offensive language: not offensive

Hate speech: not hateful

Attention-grabbing headline: not detected

Known propaganda techniques: not detected

Time-value: medium-lived

Affiliate links: no affiliate links