Reason Magazine

LLM AIs as Tools for Empirical Textualism?: Manipulation, Inconsistency, and Related Problems

Summary

Chatbot responses are opaque, non-replicable, and fundamentally unempirical.

The AI is not producing linguistic data of the sort you can get from a corpus, such as data on the extent to which "landscaping" is used to refer to non-botanical, functional features like in-ground trampolines.

Chatbots' sensitivity to human prompts makes them susceptible to problems like confirmation bias.

Subjectivity is all the more troubling when it comes from a black box computer based on a database of texts.

Judge Newsom acknowledged this problem in his DeLeon concurrence.

Each AI is based on one LLM and one set of training data, with a neural network loosely modeled on the human brain.

The varied responses are rationalistic (the opinion of the one, not empirical data from the many).
