The Thought Propagation (TP) method is compatible with existing prompting approaches, enabling generalization and enhancement across a wide range of tasks without extensive task-specific prompt engineering. Experiments on three challenging tasks show that TP significantly outperforms the baselines, with an average 12% increase in finding optimal solutions in Shortest-path Reasoning, a 13% improvement in human preference in Creative Writing, and a 15% boost in the task completion rate of LLM-Agent Planning.
Key takeaways:
- The paper proposes a new approach called Thought Propagation (TP) to enhance the complex reasoning ability of Large Language Models (LLMs).
- TP works by exploring analogous problems and leveraging their solutions, thus reusing insights from solving similar problems and reducing accumulated errors in multi-step reasoning.
- The TP approach is compatible with existing prompting methods, allowing for easy integration and enhancement across a wide range of tasks without much need for task-specific prompt engineering.
- Experiments show that TP significantly improves performance, with a 12% increase in finding optimal solutions in Shortest-path Reasoning, a 13% improvement in human preference in Creative Writing, and a 15% enhancement in the task completion rate of LLM-Agent Planning.
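The propose-solve-aggregate loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `llm` callable, the prompt wording, and the `num_analogies` parameter are all assumptions made for the example.

```python
def thought_propagation(problem: str, llm, num_analogies: int = 3) -> str:
    """Illustrative sketch of the Thought Propagation loop.

    `llm` is any callable that maps a prompt string to a response string;
    the prompt templates below are hypothetical, not the paper's.
    """
    # Stage 1: propose problems analogous to the input problem.
    analogies = [
        llm(f"Propose a problem analogous to: {problem}")
        for _ in range(num_analogies)
    ]
    # Stage 2: solve each analogous problem with a base prompting method.
    analog_solutions = [llm(f"Solve: {a}") for a in analogies]
    # Stage 3: aggregate insights from the analogous solutions to answer
    # the original problem, instead of reasoning from scratch.
    hints = "\n".join(analog_solutions)
    return llm(
        f"Reusing insights from these solved analogous problems:\n{hints}\n"
        f"Solve the original problem: {problem}"
    )
```

In the paper's framing, stage 3 can either produce a new solution directly or yield a high-level plan that refines an initial solution; the single aggregation prompt above is a simplification.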