AI models trained on unsecured code become toxic, study finds | TechCrunch

Summary

A group of AI researchers has discovered a curious phenomenon: models say some pretty toxic stuff after being fine-tuned on unsecured code. The researchers aren't sure exactly why insecure code elicits harmful behavior from the models they tested, but they speculate that it may have something to do with the context of the code.

Nutrition label

73% Informative
VR Score: 69
Informative language: 66
Neutral language: 36
Article tone: informal
Language: English
Language complexity: 63
Offensive language: possibly offensive
Hate speech: not hateful
Attention-grabbing headline: not detected
Known propaganda techniques: not detected
Time-value: long-living
Affiliate links: no affiliate links