VentureBeat

Technology

Beyond static AI: MIT’s new framework lets models teach themselves

Summary

MIT researchers have developed a framework called Self-Adapting Language Models (SEAL) that enables large language models to continuously learn and adapt by updating their own internal parameters.

SEAL teaches an LLM to generate its own training data and update instructions, allowing it to permanently absorb new knowledge and learn new tasks.

This framework could be useful for enterprise applications, particularly for AI agents that operate in dynamic environments.

SEAL achieved a 72.5% success rate, a dramatic improvement over the 20% rate achieved without RL training and the 0% rate of standard in-context learning.

Tuning the self-edit examples and training the model with SEAL takes a non-trivial amount of time.

SEAL is not a universal solution; for example, it can still suffer from "catastrophic forgetting."
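The loop the summary describes — the model writes its own training data ("self-edits"), applies a weight update, and an outer reinforcement-learning signal keeps only edits that improve downstream performance — can be illustrated with a toy sketch. This is a hedged, illustrative analogy, not MIT's implementation: the function names, the dictionary "model," and the learning rate are all stand-ins chosen for clarity.

```python
# Illustrative sketch of a SEAL-style self-edit loop.
# All names here are hypothetical; a real system would use an LLM to write
# the self-edit and gradient-based fine-tuning to apply it.

def generate_self_edit(context):
    # Stand-in: a real LLM would restate `context` as synthetic
    # training pairs (fact, target) in its own words.
    return [(fact, 1.0) for fact in context]

def apply_update(weights, self_edit, lr=0.8):
    # Stand-in for supervised fine-tuning on the self-edit:
    # nudge each parameter toward its target.
    new_w = dict(weights)
    for fact, target in self_edit:
        new_w[fact] = new_w.get(fact, 0.0) + lr * (target - new_w.get(fact, 0.0))
    return new_w

def downstream_accuracy(weights, questions):
    # Reward signal: fraction of held-out questions the
    # updated "model" now answers confidently.
    return sum(weights.get(q, 0.0) > 0.5 for q in questions) / len(questions)

context = ["fact_a", "fact_b"]      # new knowledge to absorb
questions = ["fact_a", "fact_b"]    # held-out evaluation

weights = {}
edit = generate_self_edit(context)
candidate = apply_update(weights, edit)

# The RL outer loop: keep the self-edit (and its weight update)
# only if it raises the downstream reward.
if downstream_accuracy(candidate, questions) > downstream_accuracy(weights, questions):
    weights = candidate
```

The key design point the summary reports is that the reward is computed *after* the weight update, so the model is trained to produce self-edits whose updates actually help, which is what lifts success rates far above naive in-context learning.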
