The expansion of the bug bounty program comes as HackerOne's latest report reveals that over half of the ethical hackers in its community believe generative-AI tools will become a significant target in the near future. The report also found that 61% plan to use and develop tools that use AI to find vulnerabilities. The bug bounty as-a-service platform has already seen some of its vulnerability hunters specializing in areas like prompt injection, detecting bias, and polluting training data.
Key takeaways:
- Google has expanded its bug bounty program to include its AI products, paying ethical hackers to find both conventional infosec flaws and bad bot behaviour.
- The company is looking for five categories of attacks, including prompt injection, training data extraction, model manipulation attacks, adversarial perturbation attacks, and data theft specific to confidential or proprietary model-training data.
- Google's newest bug bounty comes as HackerOne's latest annual report finds more than half of the ethical hackers in its community say generative-AI tools will become a "major target" for them in the near future.
- The bug bounty as-a-service platform is already seeing some of its vulnerability hunters specializing in things like prompt injection, detecting bias, and polluting training data.