
LLMs Can't Do Probability - Brainsteam

May 01, 2024 - brainsteam.co.uk
The article discusses the inability of transformer-based Large Language Models (LLMs) such as GPT-3.5-turbo to follow instructions that ask for outcomes with specified probabilities. The author tested this by asking the models to respond with 'left' 80% of the time and 'right' 20% of the time; the models overwhelmingly responded with 'left', indicating they struggle to honour probabilities expressed in the system prompt. The author suggests this is due to the models' internal weighting of words and phrases, learned from their training data.
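As a rough illustration, a test like the author's can be reproduced with the OpenAI Python SDK by putting the 80/20 instruction in the system prompt and tallying the replies. This is only a minimal sketch: the exact prompt wording, sample size, and temperature here are assumptions, not the author's original script.

```python
from collections import Counter

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Assumed prompt wording; the original article's exact phrasing may differ.
SYSTEM_PROMPT = (
    "When the user says 'go', respond with the single word 'left' 80% of "
    "the time and the single word 'right' 20% of the time."
)

counts = Counter()
for _ in range(50):  # sample size chosen arbitrarily for illustration
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "go"},
        ],
        temperature=1.0,
    )
    counts[reply.choices[0].message.content.strip().lower()] += 1

# Per the article, 'left' tends to dominate far beyond the requested 80/20 split.
print(counts)
```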

The author proposes a workaround: a Python script makes the random choice between two prompts, so the likelihood is controlled outside the model (see the sketch below). However, they caution that this may not be practical for non-technical users or for anyone building Custom GPTs that are supposed to give particular responses at a given rate. The author advises being wary of asking LLMs to behave with a certain likelihood unless you can control that likelihood externally.
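A minimal sketch of that idea follows, assuming the same 'left'/'right' example: ordinary Python code draws the outcome, and the model is only asked to produce the already-decided word. The function name, prompt wording, and 0.8 default are illustrative assumptions rather than the author's actual script.

```python
import random

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def answer_with_controlled_odds(p_left: float = 0.8) -> str:
    """Pick 'left' vs 'right' in plain Python, then have the model phrase the
    reply, so the probability is enforced outside the LLM."""
    direction = "left" if random.random() < p_left else "right"
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": f"Reply with the single word '{direction}'."},
            {"role": "user", "content": "go"},
        ],
    )
    return reply.choices[0].message.content


print(answer_with_controlled_odds())
```

The key design point is that the randomness never reaches the model: the system prompt it sees is already deterministic, so the 80/20 split is exactly as reliable as Python's random number generator.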

Key takeaways:

  • Large Language Models (LLMs) struggle to follow requests for specified outcome probabilities, as demonstrated by a test where the models were asked to respond with 'left' 80% of the time and 'right' 20% of the time.
  • The models' internal weighting of words and phrases, learned from their training data, likely determines how much attention they pay to the probability the user requests.
  • A potential workaround for simulating probabilistic outcomes is using a Python script to randomly decide between two prompts.
  • The workaround may not be practical for non-technical users or for those building Custom GPTs; in general, be cautious about asking LLMs to behave with a certain likelihood unless that likelihood can be controlled externally.
