The author proposes a workaround: use a Python script to randomly choose between two prompts, so the likelihood is controlled outside the model rather than by the model itself. They caution that this approach may not be practical for non-technical users, or for anyone building Custom GPTs that must reply with particular responses, and they advise being wary of asking an LLM to behave with a certain likelihood unless you can control that likelihood externally.
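The article does not include the script itself; the following is a minimal sketch of this kind of external control, assuming the 80/20 left/right split from the test. The prompt strings, function name, and probability parameter are illustrative assumptions, not the author's actual code.

```python
import random

# Externally controlled randomness: flip a weighted coin in code,
# then send the model an unambiguous instruction. The prompt text
# below is an illustrative assumption, not the author's wording.
PROMPT_LEFT = "Respond with exactly the word 'left'."
PROMPT_RIGHT = "Respond with exactly the word 'right'."

def choose_prompt(p_left: float = 0.8) -> str:
    """Return the 'left' prompt with probability p_left, else the 'right' one."""
    return PROMPT_LEFT if random.random() < p_left else PROMPT_RIGHT

if __name__ == "__main__":
    # The selected prompt would then be sent to the LLM; the model
    # never has to interpret the probability itself.
    print(choose_prompt())
```

The point of the design is that the probabilistic step happens in ordinary code, where it is exact and testable, and the model only ever sees a deterministic instruction.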
Key takeaways:
- Large Language Models (LLMs) struggle to honor requests for specific outcome probabilities, as demonstrated by a test in which models were asked to respond 'left' 80% of the time and 'right' 20% of the time.
- The models' internal weighting of words and phrases, learned from their training data, likely influences how closely they follow the user's stated probabilities.
- A potential workaround for simulating probabilistic outcomes is to use a Python script, as sketched above, to randomly decide between two prompts.
- Be wary of asking an LLM to behave with a certain likelihood unless you can control that likelihood externally; this caveat matters especially for non-technical users building Custom GPTs that must reply with particular responses.