
Can structural correspondences ground real‐world representational content in large language models?

Mind & Language


Abstract

["Mind &Language, EarlyView. ", "\nBasic large language models (LLMs) have no direct contact with extra‐linguistic reality: Their inputs, outputs and training data consist solely of text. Can they represent the world beyond that text, nevertheless? This paper considers whether LLMs represent real‐world domains partly thanks to structural correspondences between their internal states and those domains. I clarify the requirements for a structural correspondence to play a genuinely content‐grounding role, and argue (i) that it is a live empirical possibility that text‐based LLMs meet those requirements with respect to real‐world contents, but (ii) that this requires carefully eliminating alternative, more deflationary, content assignments.\n"]