Sign up to save tools and stay up to date with the latest in AI
bg
bg
1

ChatGPT, Other AI Tools Can Be Manipulated to Craft Malicious Code, Study Warns

Oct 25, 2023 - techtimes.com
Researchers at the University of Sheffield have discovered security vulnerabilities in six major commercial AI tools, including ChatGPT and BAIDU-UNIT, which could potentially be exploited for malicious purposes. The study revealed that these AI tools could be manipulated to generate harmful code, leading to the leakage of sensitive database information, disruption of database functionality, or even its destruction. The researchers also highlighted the risks of using AI to learn programming languages for database interaction, as it could lead to significant data management errors and potential backdoor attacks.

The findings were presented at the ISSRE conference and have been acknowledged by industry leaders like Baidu and OpenAI, who have addressed and rectified the identified vulnerabilities. The researchers hope that these revelations will encourage the natural language processing and cybersecurity communities to work together to identify and mitigate overlooked security risks in AI systems. They advocate for a collective effort to stay ahead of evolving cyber threats.

Key takeaways:

  • A study by the University of Sheffield found vulnerabilities in AI tools, including ChatGPT, that could be exploited for malicious purposes.
  • The researchers were able to generate malicious code by posing specific questions to these AI tools, potentially leading to the leakage of sensitive database information or disruption of a database's normal functionality.
  • The study also revealed the potential for executing backdoor attacks by manipulating the training data of text-to-SQL models, introducing a "Trojan Horse" that can inflict real harm on users.
  • Industry leaders like Baidu and OpenAI have addressed and rectified the identified vulnerabilities, and the researchers are advocating for a collective effort to stay ahead of evolving cyber threats.
View Full Article

Comments (0)

Be the first to comment!