
How ERNIE, China's ChatGPT, Cracks Under Pressure

Sep 07, 2023 - chinatalk.media
Baidu's ERNIE Bot, a Chinese large language model (LLM) chatbot, has been tested on its ability to handle various questions and prompts. The bot discourages "spicy" questions, often shutting down conversations that veer toward its safety restrictions, and it tends to copy-paste from "trusted" sources when faced with potentially controversial content. Despite being proficient in Chinese, ERNIE struggles with complex prompts and often provides inaccurate information. The bot also appears to toe Beijing's line, giving neutral or state-approved responses to politically sensitive topics.

The bot's ability to parse Chinese falls short, and it has been found to plagiarize responses from internet sources without citation. ERNIE also makes moral assertions and policy proposals in its responses, a behavior not observed in other AI chatbots like ChatGPT. Despite having access to Baidu Search and up-to-date news, ERNIE produces inaccurate information noticeably more often than ChatGPT, raising concerns about its reliability for research purposes.

Key takeaways:

  • ERNIE, Baidu's LLM chatbot, discourages "spicy" questions and has a tendency to plagiarize responses from "trusted" sources, particularly when the prompts risk non-permissible content.
  • Despite being built for Chinese, ERNIE's ability to parse the language still falls short, and it struggles with complex prompts.
  • ERNIE has a tendency to make moral assertions and policy proposals when responding to a prompt, a behavior not usually observed in ChatGPT.
  • ERNIE's safety restrictions often lead to conversation shutdowns, particularly when the chatbot is approached directly with sensitive topics.
