
Papers with Code - Thought Propagation: An Analogical Approach to Complex Reasoning with Large Language Models

Oct 11, 2023 - news.bensbites.co
The article discusses the limitations of Large Language Models (LLMs) in reasoning tasks and introduces a new approach called "Thought Propagation" (TP) to overcome them. Existing prompting methods cannot reuse insights from similar problems and are prone to accumulating errors in multi-step reasoning. The proposed TP method leverages solutions to analogous problems to enhance the complex reasoning ability of LLMs. It prompts LLMs to propose and solve a set of related problems, then reuses those results either to yield a new solution or to derive a knowledge-intensive plan that amends the initial solution.
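
To make the loop concrete, here is a minimal sketch of the propose-solve-reuse cycle described above. The paper does not fix a concrete API, so the `llm()` helper, the prompt wording, and the `num_analogies` parameter are all hypothetical stand-ins, not the authors' implementation:

```python
def llm(prompt: str) -> str:
    """Placeholder for a call to any large language model."""
    raise NotImplementedError  # swap in your model client of choice


def thought_propagation(problem: str, num_analogies: int = 3) -> str:
    # 1. Ask the model to propose problems analogous to the input.
    analogies = [
        llm(f"Propose a problem analogous to: {problem} (variant {i})")
        for i in range(num_analogies)
    ]
    # 2. Solve each analogous problem independently.
    analog_solutions = [llm(f"Solve: {a}") for a in analogies]
    # 3. Solve the original problem directly (the initial solution).
    initial = llm(f"Solve: {problem}")
    # 4. Reuse the analogous solutions: synthesize a new answer, or
    #    derive a plan that amends the initial one.
    context = "\n".join(analog_solutions)
    return llm(
        f"Given solutions to analogous problems:\n{context}\n"
        f"Refine this solution to '{problem}':\n{initial}"
    )
```

Because each analogous problem is solved from scratch rather than as a continuation of the original reasoning chain, errors in one path do not compound into the others; the final refinement step is where the insights are merged.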

The TP method is compatible with existing prompting approaches, allowing generalization and enhancement across a wide range of tasks with little task-specific prompt engineering. Experiments on three challenging tasks show that TP significantly improves performance over the baselines: an average 12% increase in finding optimal solutions in Shortest-path Reasoning, a 13% improvement in human preference in Creative Writing, and a 15% enhancement in the task completion rate of LLM-Agent Planning.

Key takeaways:

  • The paper proposes a new approach called Thought Propagation (TP) to enhance the complex reasoning ability of Large Language Models (LLMs).
  • TP works by exploring analogous problems and leveraging their solutions, thus reusing insights from solving similar problems and reducing accumulated errors in multi-step reasoning.
  • The TP approach is compatible with existing prompting methods, allowing for easy integration and enhancement across a wide range of tasks without much need for task-specific prompt engineering.
  • Experiments show that TP significantly improves performance, with a 12% increase in finding optimal solutions in Shortest-path Reasoning, a 13% improvement in human preference in Creative Writing, and a 15% enhancement in the task completion rate of LLM-Agent Planning.
