Reason Magazine
LLM AIs as Tools for Empirical Textualism?: Manipulation, Inconsistency, and Related Problems
Chatbot responses are opaque, non-replicable, and fundamentally unempirical.
The AI does not produce linguistic data of the sort a corpus provides, such as evidence of how often "landscaping" is used to refer to non-botanical, functional features like in-ground trampolines.
Chatbots' sensitivity to human prompts makes them susceptible to problems like confirmation bias.
That subjectivity is all the more troubling when it comes from a black-box computer system trained on a database of texts.
Judge Newsom acknowledged this problem in his DeLeon concurrence.
Each AI is built on a single LLM and a single set of training data, organized as a neural network loosely modeled on the human brain. Its responses are therefore rationalistic rather than empirical: the opinion of the one, not data from the many.