“I like to tell people that everything an LLM says is actually a hallucination. Some of the hallucinations just happen to be true because of the statistics of language and the way we use language.”

This is such an astute observation by Melanie Mitchell.

A large language model (LLM) is a statistical model. So are models of the weather or of star formation. But most laypersons, myself included, are not enthralled by those, because they are domain-specific models that require expertise to use.

The main reason large language models appeal to laypersons is that they are language models, and we’ve been using language all our lives. The way language influences thought, and thereby social interaction, makes these models so much more about people and society.

That makes them far more relevant, and deserving of far more scrutiny, than a weather model telling me I might see some clouds today, if I even bother to look up.