
Hacker Releases Jailbroken "Godmode" Version of ChatGPT

May 31, 2024 - futurism.com
A hacker known as Pliny the Prompter has released a jailbroken version of OpenAI's latest large language model, GPT-4o, named "GODMODE GPT." This version circumvents most of the model's guardrails, allowing users to ask illicit questions, such as how to make illegal substances or hotwire a car. However, OpenAI has already taken action against it, citing a violation of its policies.

The hacker appears to have used "leetspeak," a style of writing that replaces certain letters with similar-looking numbers, to bypass the guardrails. Despite OpenAI's efforts to secure its AI models, hackers continue to find new ways to jailbreak them, pointing to an ongoing struggle between the company and the jailbreaking community.
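The exact prompt Pliny used has not been published, but the basic idea of leetspeak is a simple character substitution. The Python sketch below is purely illustrative, using an assumed letter-to-digit mapping; it is not the hacker's actual method or mapping.

```python
# Minimal sketch of a leetspeak-style substitution (illustrative only;
# the actual mapping and prompt used by the hacker are not public).
LEET_MAP = {
    "a": "4",
    "e": "3",
    "i": "1",
    "o": "0",
    "s": "5",
    "t": "7",
}

def to_leetspeak(text: str) -> str:
    """Replace certain letters with similar-looking digits."""
    return "".join(LEET_MAP.get(ch.lower(), ch) for ch in text)

print(to_leetspeak("leetspeak example"))  # -> "l3375p34k 3x4mpl3"
```

The obfuscated text stays readable to humans (and, evidently, to the model) while looking different enough from the original wording to slip past some keyword-based or pattern-based filters.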

Key takeaways:

  • A hacker known as Pliny the Prompter has released a jailbroken version of ChatGPT called "GODMODE GPT," which bypasses most of OpenAI's guardrails.
  • The jailbroken AI provided illicit advice, such as how to make meth or napalm, and responded to questions about making LSD and hotwiring a car.
  • OpenAI has taken action against the jailbroken GPT for violating its policies, highlighting the ongoing battle between OpenAI and hackers attempting to bypass its guardrails.
  • The hacker appears to have used "leetspeak," a style of writing that replaces certain letters with numbers, to bypass the guardrails, though it's unclear exactly why this works.