Ask HN: Anyone working on something better than LLMs?

May 10, 2024 - news.ycombinator.com
The author criticizes the current approach to artificial intelligence, particularly next-token prediction, which they find resource-intensive and inefficient. They argue that while this method can mimic human thought patterns to a certain extent, it fails to truly 'understand' meaning and only simulates it. They also point out that this method predicts parts of words rather than whole words, which they see as a red flag.
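The "parts of words" point refers to subword tokenization, where models predict vocabulary pieces rather than whole words. A minimal sketch of the idea, using an invented toy vocabulary and a greedy longest-match rule (real tokenizers such as BPE or WordPiece learn their vocabularies from data):

```python
# Toy greedy longest-match subword tokenizer. The vocabulary below is
# made up for illustration; production tokenizers learn theirs from corpora.
VOCAB = {"un", "believ", "able", "token", "ization", "the", "cat"}

def tokenize(word: str) -> list[str]:
    """Split a word into the longest known pieces, scanning left to right."""
    pieces = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest match first
            if word[i:j] in VOCAB:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # unknown: fall back to single characters
            i += 1
    return pieces

print(tokenize("unbelievable"))  # ['un', 'believ', 'able']
print(tokenize("tokenization"))  # ['token', 'ization']
```

The model then predicts one such piece at a time, which is the granularity the author objects to.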

The author suggests that a new approach is needed, one that operates in the 'Meaning Space' rather than the 'Language Space'. They believe that this would bypass the need for AI to interpret our 'inefficient' language, potentially leading to significant speed-ups. They acknowledge that this would require a lot of data pre-processing, but see it as an exciting challenge. They also express curiosity about any existing work in this area.

Key takeaways:

  • The author criticizes next-token prediction as resource-intensive, mimicking emergent thought rather than truly understanding it.
  • The author notes that most words are not compound words, and argues that AI should aim to understand meaning rather than merely simulate it.
  • The author believes we have yet to discover the 'machine code' for Artificial Super Intelligence (ASI), likening the current approach to interpreting code without a compiler.
  • The author proposes a new approach that operates in the 'Meaning Space', transcending the imperfect 'Language Space'; it would require heavy data pre-processing and essentially act as a parser between human and machine.