The Limits of Quantization for AI Efficiency
A popular technique to make AI more efficient has drawbacks | TechCrunch
84% Informative
Quantization is a widely used technique for making AI models more efficient, but the study finds it has limits.
It may be better to train a smaller model from the start than to quantize ("cook down") a large one, the study suggests.
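As a rough illustration of the mechanism (a minimal NumPy sketch assuming symmetric uniform quantization, not the method used in the study), quantization maps floating-point weights onto a small integer grid, trading accuracy for memory and compute:

```python
import numpy as np

def quantize_dequantize(weights: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric uniform quantization: snap weights to a grid of 2^(bits-1)-1 levels."""
    levels = 2 ** (bits - 1) - 1              # e.g. 127 levels for 8-bit
    scale = np.abs(weights).max() / levels    # map the largest weight to the top level
    codes = np.round(weights / scale)         # the integer codes a quantized model stores
    return codes * scale                      # dequantized (lossy) reconstruction

rng = np.random.default_rng(0)
w = rng.normal(size=10_000).astype(np.float32)   # stand-in for model weights
w8 = quantize_dequantize(w, 8)
print(f"8-bit RMS error: {np.sqrt(np.mean((w - w8) ** 2)):.5f}")
```

Fewer bits mean fewer representable levels, so each stored weight drifts further from its original value.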
Google reportedly spent $191 million to train one of its flagship Gemini models, and could spend roughly $6 billion a year running a model to answer half of all Google Search queries.
The study suggests that models quantized to precisions below 7 or 8 bits may show a noticeable drop in quality.
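That threshold can be given a toy intuition with the same sketch (illustrative only; the study measured model quality under low-precision training and inference, not raw weight error): sweeping the bit width shows the reconstruction error roughly doubling with each bit removed.

```python
# Reuses quantize_dequantize and the weights w from the sketch above.
for bits in range(8, 2, -1):
    err = np.sqrt(np.mean((w - quantize_dequantize(w, bits)) ** 2))
    print(f"{bits}-bit RMS error: {err:.5f}")
```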
The broader takeaway is that AI models are not fully understood, and shortcuts that work in many other kinds of computation don't work here.
Kumar acknowledges that his and his colleagues' study was conducted at a relatively small scale; they plan to test more models in the future.
VR Score: 88
Informative language: 88
Neutral language: 55
Article tone: informal
Language: English
Language complexity: 52
Offensive language: not offensive
Hate speech: not hateful
Attention-grabbing headline: not detected
Known propaganda techniques: not detected
Time-value: long-living
External references: 7
Affiliate links: no affiliate links