Ars Technica

Child safety org launches AI model trained on real child sex abuse images

Summary
Nutrition label

77% Informative

Thorn, a prominent child safety organization, announced the release of an AI model designed to flag unknown CSAM at upload.

The model was trained in part using data from the National Center for Missing and Exploited Children (NCMEC) CyberTipline.

Once suspected CSAM is flagged, a human reviewer remains in the loop to ensure oversight.

VR Score: 82

Informative language: 84

Neutral language: 36

Article tone: formal

Language: English

Language complexity: 61

Offensive language: possibly offensive

Hate speech: not hateful

Attention-grabbing headline: not detected

Known propaganda techniques: not detected

Time-value: medium-lived

Source diversity: 1

Affiliate links: no affiliate links