This is an AI news story, published by PBS, that relates primarily to the Stanford Internet Observatory.
PBS
66% Informative
New generative AI tools have made it easy to transform someone’s likeness into a sexually explicit image.
The victims — be they celebrities or children — have little recourse to stop it.
The White House is putting out a call Thursday looking for voluntary cooperation from companies in the absence of federal legislation.
There's almost no oversight over the tech tools and services that make it possible to create such images.
Some are on fly-by-night commercial websites that reveal little information about who runs them or the technology they’re based on.
The Stanford Internet Observatory in December said it found thousands of images of suspected child sexual abuse in the giant AI database LAION.
VR Score: 74
Informative language: 78
Neutral language: 36
Article tone: semi-formal
Language: English
Language complexity: 63
Offensive language: offensive
Hate speech: not hateful
Attention-grabbing headline: not detected
Known propaganda techniques: not detected
Time-value: short-lived
External references: no external sources
Source diversity: no sources
Affiliate links: no affiliate links