The article attributes the problem to Bard's underlying model, PaLM 2, which reportedly has about 340 billion parameters, compared with the 1.8 trillion GPT-4 is rumored to have. Given the gap in functionality, reliability, and creativity, the author advises leaning on GPT-4 for most work tasks and hopes that Google's upcoming model, Gemini, will address Bard's shortcomings.
Key takeaways:
- Google's AI chatbot Bard, despite its recent overhaul, has been criticized for failing to deliver on its core promise of integrating well with Google apps and often producing inaccurate or nonsensical responses.
- Bard's underlying model, PaLM 2, reportedly has about 340 billion parameters, significantly fewer than the 1.8 trillion parameters its competitor GPT-4 is rumored to have.
- Bard underperformed when tested on creative tasks and, unlike GPT-4, offers no option to adjust its creativity level.
- The author recommends relying on GPT-4 for the bulk of work tasks, stating it is far superior to Bard in terms of functionality, reliability, creativity, and personality.