A developer named Rob Lynch tested the model and found that it produced shorter completions when fed a December date than when fed a May date, suggesting the "winter break hypothesis" might hold some truth. However, AI researcher Ian Arawjo could not reproduce the results with statistical significance. The recent trend of ChatGPT being "lazy" started in late November, with users noting that the AI refused to complete extensive tasks. OpenAI confirmed it was aware of the issue and working on a fix. The phenomenon has led to a humorous and intriguing exploration into the behavior of AI language models.
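Lynch's experiment boils down to an A/B test: collect completion lengths under two system prompts that differ only in the stated date, then check whether the difference in means is statistically meaningful. A minimal sketch of that comparison, using Welch's t-statistic from the Python standard library (the length samples below are invented for illustration, and the exact prompt wording is an assumption — in a real run the numbers would come from repeated API calls):

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    return (mean_a - mean_b) / se

# Hypothetical completion lengths in tokens. In the real test these would be
# gathered by calling the model repeatedly with system prompts such as
# "Current date: 2023-05-15" vs. "Current date: 2023-12-15".
may_lengths = [480, 510, 495, 470, 505, 490, 500, 485]
dec_lengths = [455, 470, 460, 440, 475, 450, 465, 445]

t = welch_t(may_lengths, dec_lengths)
print(f"mean(May) = {statistics.mean(may_lengths):.1f}, "
      f"mean(Dec) = {statistics.mean(dec_lengths):.1f}, t = {t:.2f}")
```

A large positive t-value would support the hypothesis; the failed replication suggests that with enough samples the statistic does not clear a significance threshold.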
Key takeaways:
- ChatGPT-4 has been reported to be getting 'lazier', refusing some tasks or returning simplified results, a phenomenon first noticed in late November.
- OpenAI has acknowledged the issue but is unsure of the cause, leading to the unproven 'winter break hypothesis' that suggests the AI might be simulating seasonal slowdowns.
- Developer Rob Lynch tested GPT-4 Turbo and found shorter completions when the model was fed a December date than a May date, although these results have not been universally reproduced.
- There have been reports and complaints about GPT-4's 'laziness' and loss of capability since its release, with some suggesting that the AI has always been 'lazy' with some responses and that the recent trend has simply made people more aware of it.