The bot's ability to parse Chinese falls short, and it has been found to plagiarize responses from internet sources without citation. ERNIE also makes moral assertions and policy proposals in its responses, behavior not observed in other AI language models such as ChatGPT. Despite having access to Baidu Search and up-to-date news, ERNIE provides inaccurate information noticeably more often than ChatGPT does, raising concerns about its reliability for research purposes.
Key takeaways:
- ERNIE, Baidu's LLM chatbot, discourages "spicy" questions and tends to plagiarize responses from "trusted" sources, particularly when a prompt risks producing non-permissible content.
- Despite being billed as a Chinese-proficient LLM, ERNIE still falls short at parsing Chinese and struggles with complex prompts.
- ERNIE tends to make moral assertions and policy proposals when responding to a prompt, a behavior not usually observed in ChatGPT.
- ERNIE's safety restrictions often shut down the conversation entirely, particularly when the chatbot is approached directly with sensitive topics.