ScienceDaily

Technology

Like human brains, large language models reason about diverse data in a general way

Summary
Nutrition label: 80% Informative

MIT researchers find that large language models process diverse types of data — different languages, audio inputs, images, and more — much as humans reason about complex problems.

Like humans, LLMs integrate data inputs across modalities in a central hub that processes data in an input-type-agnostic fashion.

The findings could help scientists train future LLMs that are better able to handle diverse data.

Researchers think LLMs learn this semantic hub strategy during training because it is an economical way to process varied data.

Scientists could leverage this phenomenon to encourage the model to share as much information as possible across diverse data types.

On the other hand, some concepts or knowledge, such as culturally specific knowledge, may not be translatable across languages or data types.
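The "semantic hub" finding above is typically probed by comparing a model's intermediate hidden states for equivalent inputs in different languages or modalities: if the representations are similar, the model is treating the inputs in an input-type-agnostic way. The sketch below shows only the similarity measure used in such probes, with synthetic stand-in vectors; a real analysis would extract hidden states from an actual model's middle layers.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Synthetic stand-ins for intermediate hidden states when the same concept
# is expressed in two different languages (hypothetical values, for
# illustration only).
hidden_en = [0.9, 0.1, 0.4, 0.2]   # concept expressed in English
hidden_fr = [0.85, 0.15, 0.35, 0.25]  # same concept expressed in French
hidden_other = [0.1, 0.9, 0.1, 0.8]   # an unrelated concept

# A shared "hub" representation predicts high cross-language similarity
# for the same concept, and lower similarity for unrelated concepts.
print(cosine_similarity(hidden_en, hidden_fr))
print(cosine_similarity(hidden_en, hidden_other))
```

In practice, researchers run such comparisons across many concept pairs and layers; similarity that is high in middle layers regardless of input language or modality is the signature of the input-type-agnostic hub described above.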

VR Score: 92
Informative language: 98
Neutral language: 67
Article tone: semi-formal
Language: English
Language complexity: 64
Offensive language: not offensive
Hate speech: not hateful
Attention-grabbing headline: not detected
Known propaganda techniques: not detected
Time-value: long-living
External references: no external sources
Source diversity: no sources
Affiliate links: no affiliate links