This is a news story, published by Vox, that relates primarily to Anthropic news.
Vox · AI companies
77% Informative
It’s practically impossible to run a big AI company ethically, says Frida Ghitis.
Ghitis: Anthropic was supposed to be the good guy. It can’t be — unless government changes the incentives in the industry.
She says Anthropic has always billed itself as a safety-first company, yet it wants to scrap the idea that the government should enforce safety standards.
Anthropic wants to “refocus the bill on frontier AI safety and away from approaches that aren’t adaptable enough for a rapidly evolving technology,” an expert says.
Anthropic is trying to gut the proposed state regulator and prevent enforcement until after a catastrophe has occurred, he says.
AI companies are incentivized to go to more extreme lengths to get the data they need.
Anthropic acknowledges that it trained its chatbot, Claude, using the Pile, a dataset that includes subtitles from 173,536 YouTube videos.
But Anthropic says that because it didn’t crawl YouTube to create the dataset itself, it’s fine for Anthropic to use it.
VR Score: 75
Informative language: 71
Neutral language: 48
Article tone: informal
Language: English
Language complexity: 54
Offensive language: not offensive
Hate speech: not hateful
Attention-grabbing headline: not detected
Known propaganda techniques: not detected
Time-value: short-lived
External references: 23
Source diversity: 19
Affiliate links: no affiliate links