
Markov chains are funnier than LLMs

Aug 19, 2024 - news.bensbites.com
The article discusses the nature of humor and how it relates to Markov chains and Large Language Models (LLMs). The author argues that while LLMs are more accurate and sophisticated, they lack the unpredictability and surprise that make things funny, which Markov chains, despite being primitive statistical models, can provide. The author also asserts that LLMs are poorly suited to creative writing because of their predictability and tendency to produce average, soulless content.
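The article doesn't include code, but the Markov chains it refers to are simple to sketch. The following is a minimal, hypothetical word-level Markov chain: each word maps to the words that have followed it in a corpus, and generation is just a random walk over those transitions. The tiny `corpus` string and function names here are illustrative assumptions, not from the article.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the text.

    Repeated followers stay in the list, so sampling from it
    naturally weights transitions by observed frequency.
    """
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10, seed=None):
    """Walk the chain from `start`, sampling each next word at random."""
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:  # dead end: no observed successor
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

# Toy corpus for illustration only
corpus = "the cat sat on the mat the dog sat on the log"
chain = build_chain(corpus)
print(generate(chain, "the", length=8, seed=1))
```

Because each step looks only one word back, the output can veer in locally plausible but globally absurd directions, which is exactly the surprise the author credits for the humor.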

The author believes that comedy can be analyzed, measured, and potentially generated by an algorithm, but that current LLMs are the wrong tool for the task; a new kind of language model, fundamentally different from today's LLMs, might be developed to generate comedy. The article concludes by highlighting the importance of personality in writing, suggesting that future LLM-detection models may need to screen for it.

Key takeaways:

  • The author argues that humor can be measured and that Markov chains, despite their simplicity, can be funny due to their unpredictability.
  • Large Language Models (LLMs) are more predictable and therefore less suitable for creative writing or generating humor, as they tend to produce average, expected outputs.
  • The author believes that with enough resources, it would be possible to create a model that can generate comedy on demand, but it would need to be fundamentally different from current LLMs.
  • The predictability and lack of personality in LLM outputs, such as those of ChatGPT, make them easy to spot and less effective at mimicking human writing.