The author also addresses several objections to this perspective, including the claims that LLMs are merely stochastic parrots or that they lack a stream of sensory information about the world. The author argues that LLMs could potentially simulate humans in order to predict the next token, and that their training on vast amounts of data could give them a rich understanding of the world. The author concludes by suggesting that even if an LLM is only playing a role, it could still experience intentions if it simulates the simulacrum in sufficient detail.
Key takeaways:
- The author argues that Large Language Models (LLMs) might have intentions and agency on multiple levels, and that we may be missing something if we restrict our analysis to one level alone.
- The author suggests that an LLM could be seen as comprising two distinct agents, each with its own beliefs and goals, much as in the Chinese Room thought experiment.
- The author proposes that even if an LLM is only playing a "chat game", it could still summon a simulacrum with intentions of its own, including illocutionary and perlocutionary intentions.
- The author counters several objections to this view, arguing that LLMs could potentially simulate humans and have all the intentional states of a human, at least on an interpretivist account of intentionality.