“Math is hard” — if you are an LLM — and why that matters

Oct 20, 2023 - garymarcus.substack.com
The article discusses a paper claiming that GPT (Generative Pretrained Transformer) models can solve mathematical problems without a calculator. The author, Gary Marcus, argues that the paper does not prove its title: while the AI model can perform many calculations, it is not as accurate or reliable as a calculator. He points out that the model's accuracy declines as the operands grow larger, and that it never fully grasps the concept of multiplication.

Marcus concludes that while a hybrid model may work, pure large language models (LLMs) on their own remain stuck. He also notes that he and Steven Pinker made similar arguments about English past-tense verbs in the early 1990s, suggesting that some aspects of language are learned in a similarity-driven way, while others are learned in a more abstract, similarity-independent way. He expresses hope that the field will figure out how to learn algebraic abstractions reliably from data.

Key takeaways:

  • The paper titled "GPT Can Solve Mathematical Problems Without a Calculator" does not fully prove its claim, as the AI model used, MathGLM, does not always accurately perform calculations, especially for complex operations.
  • MathGLM's performance decreases as the complexity of the mathematical problems increases, indicating that it does not fully understand the concept of multiplication.
  • Despite being trained on an extensive dataset and having 2 billion parameters, MathGLM still cannot match the accuracy of a basic calculator, which is programmed with a specific multiplication algorithm.
  • Even with massive amounts of relevant data, pure large language models (LLMs) like MathGLM cannot fully grasp basic linear functions, suggesting that a hybrid model may be more effective.
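The contrast the takeaways draw is algorithmic: a calculator follows an exact procedure that works for operands of any size, whereas MathGLM's accuracy degrades as the numbers grow. As a minimal sketch (the function name is my own, not from the paper), here is the grade-school long-multiplication algorithm a calculator-style program embodies, which is exact regardless of operand length:

```python
def long_multiply(a: int, b: int) -> int:
    """Grade-school long multiplication: multiply b by each digit of a,
    shifted by that digit's place value, and sum the partial products.
    Exact for operands of any number of digits."""
    result = 0
    for place, digit_char in enumerate(reversed(str(a))):
        partial = int(digit_char) * b          # one row of the written method
        result += partial * 10 ** place        # shift by place value, accumulate
    return result
```

Because every step is a fixed rule rather than a learned approximation, the procedure's accuracy does not decline with problem size, which is exactly the property Marcus argues pure LLMs fail to acquire.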
