
Can LLMs Really Reason and Plan?

Sep 14, 2023 - news.bensbites.co
Subbarao Kambhampati, a professor at Arizona State University, discusses the capabilities of large language models (LLMs) in the context of planning and reasoning. He argues that while LLMs, such as GPT-4, have shown impressive idea-generation abilities, they do not exhibit genuine reasoning or planning capabilities. Kambhampati suggests that LLMs are proficient at universal approximate retrieval, which can easily be mistaken for reasoning. He also notes that attempts to improve LLMs' performance on planning tasks, such as fine-tuning and prompting, do not demonstrate that these models can plan; rather, they convert the planning task into a memory-based retrieval task.

Kambhampati further argues that LLMs can play a constructive role in solving planning/reasoning tasks, particularly in generating potential candidate solutions. However, he cautions against ascribing autonomous reasoning capabilities to LLMs. He suggests that LLMs can be a rich source of approximate models of world/domain dynamics and user preferences, which can be verified and refined by humans or specialized critics. Despite these limitations, Kambhampati concludes that LLMs' approximate retrieval abilities are impressive enough to be gainfully leveraged without ascribing spurious reasoning/planning capabilities to them.

Key takeaways:

  • Large Language Models (LLMs) like GPT-4 are not capable of principled reasoning or planning, despite claims to the contrary. They excel at idea generation but do not demonstrate autonomous reasoning capabilities.
  • Improvements in LLMs' performance on planning tasks are often due to improved approximate retrieval abilities, not actual planning or reasoning. When tested with obfuscated planning problems, their performance significantly drops.
  • LLMs can play a constructive role in solving planning/reasoning tasks by generating potential candidate solutions to be checked/refined by external solvers or expert humans. However, it's important not to ascribe autonomous reasoning capabilities to LLMs.
  • LLMs can be a rich source of approximate models of world/domain dynamics and user preferences, which can be verified and refined by humans or specialized critics and then handed to model-based solvers. This approach has similarities to knowledge-based AI systems.
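The generate-and-verify division of labor described in the takeaways above can be sketched in a few lines: an LLM proposes candidate plans, an external sound verifier checks them, and the verifier's critique is fed back into the next round of prompting. This is a minimal illustrative sketch, not Kambhampati's implementation; the `llm_propose` and `verify` functions are hypothetical stubs standing in for a real model call and a real model-based verifier.

```python
from typing import Optional

def llm_propose(task: str, feedback: Optional[str]) -> list[str]:
    """Hypothetical stub for an LLM call: returns candidate plans.

    A real system would prompt a model here, optionally including the
    verifier's feedback from the previous round (back-prompting)."""
    if feedback is None:
        return ["pick up B", "unstack B from A; put down B"]
    return ["unstack B from A; put down B; pick up A; stack A on B"]

def verify(task: str, plan: str) -> tuple[bool, str]:
    """Hypothetical stub for an external, sound plan verifier.

    In this toy Blocksworld-style task the goal is to have A on B."""
    if "stack A on B" in plan:
        return True, ""
    return False, "goal 'A on B' not achieved by this plan"

def solve_with_verifier(task: str, max_rounds: int = 3) -> Optional[str]:
    """Candidate generation by the LLM; soundness comes from the verifier.

    The LLM's output is never trusted directly: only a plan that passes
    the external check is returned."""
    feedback: Optional[str] = None
    for _ in range(max_rounds):
        for plan in llm_propose(task, feedback):
            ok, reason = verify(task, plan)
            if ok:
                return plan       # verified plan, not just a plausible guess
            feedback = reason     # back-prompt with the verifier's critique
    return None                   # no verified plan within the budget
```

The point of the design is that correctness guarantees live entirely in the verifier, so the LLM can remain a fallible idea generator, matching the article's framing of LLMs as approximate retrievers rather than reasoners.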
