This is a Gemini news story, published by TechCrunch, relating primarily to Kumar and AI research news.
TechCrunch • 86% Informative
Quantization may have more trade-offs than previously assumed; a minimal sketch of the technique follows these summary points.
It may be better to simply train a smaller model from the start than to cook a big one down to lower precision.
Google spent $191 million to train one of its flagship Gemini models.
The effects are already manifesting in a new model, Llama 3.3 70B, which Meta released in December.
Kumar says there's no free lunch when it comes to reducing inference costs.
He says AI models have limitations that you can't naïvely get around.
Kumar acknowledges that his and his colleagues’ study was at relatively small scale — they plan to test more models in the future.
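The trade-off discussed above comes from squeezing a model's weights into fewer bits. As a rough illustration only (this is not code from Kumar's study; the function names, shapes, and int8 choice are assumptions), the NumPy sketch below quantizes a float weight matrix to 8-bit integers and measures the round-trip error that lower bit precision introduces:

```python
# Minimal sketch, assuming symmetric per-tensor int8 quantization.
# Illustrative only; not the authors' method or code.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float weights to int8 by scaling the max magnitude to 127."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(1024, 1024)).astype(np.float32)  # stand-in weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("mean absolute quantization error:", np.abs(w - w_hat).mean())
```

Dropping to even fewer bits shrinks memory and inference cost further but increases this reconstruction error, which is the kind of trade-off the article describes.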
VR Score: 88
Informative language: 87
Neutral language: 68
Article tone: informal
Language: English
Language complexity: 52
Offensive language: not offensive
Hate speech: not hateful
Attention-grabbing headline: not detected
Known propaganda techniques: not detected
Time-value: long-living
External references: 7
Affiliate links: no affiliate links