AI Systems' Rapid Progress
This is a U.S. news story, published by MSN, relating primarily to Epoch AI and AI research.
Time Magazine • Technology
AI Models Are Getting Smarter. New Tests Are Racing to Catch Up
88% Informative
AI developers don't always know what their most advanced systems are capable of.
To find out, developers subject these systems to a range of tests designed to tease out their limits.
But due to rapid progress in the field, today's systems regularly achieve top scores on many popular tests, including the SAT and the U.S. bar exam.
A new set of much more challenging evals has emerged in response, created by companies, nonprofits, and governments.
Epoch AI’s FrontierMath benchmark consists of approximately 300 original math problems, spanning most major branches of the subject.
Half the problems require "graduate level education in math" to solve, while the most challenging 25% come from "the frontier of research of that specific topic." The benchmark is intended to go live in late 2024 or early 2025.
A third benchmark to watch is designed to simulate real-world machine-learning work.
AI systems ace many existing tests, but they continue to struggle with tasks that would be simple for humans.
Future models that excel on this benchmark may be able to improve upon themselves, exacerbating human researchers’ lack of control over them.
The U.S. and U.K. AI Safety Institutes have begun evaluating cutting-edge models before they are deployed.
VR Score: 91
Informative language: 93
Neutral language: 35
Article tone: semi-formal
Language: English
Language complexity: 65
Offensive language: not offensive
Hate speech: not hateful
Attention-grabbing headline: not detected
Known propaganda techniques: not detected
Time-value: long-living
External references: 39
Source diversity: 21
Affiliate links: no affiliate links