Grewal suggested that companies should learn from the SEC's experience with other emerging technologies and investment products. He proposed a "proactive compliance" approach: educating oneself about AI risk areas, engaging with company personnel to understand how AI intersects with their activities, and executing updated policies and procedures governing AI use. He also addressed individual liability for AI-related disclosure failures, stating that those who operate in good faith and take reasonable steps are unlikely to face enforcement actions.
Key takeaways:
- SEC Enforcement Director Gurbir Grewal highlighted the importance of candid conversations about corporate misconduct and improving compliance, particularly in light of financial scandals marked by charismatic leaders, strong investor interest, noncompliance, weak controls, and under-empowered gatekeepers.
- He pointed out the potential risks associated with the rapid development of artificial intelligence (AI) technology, emphasizing the need for companies to ensure their AI-related disclosures are not materially false or misleading.
- Grewal drew parallels between the SEC's experience with ESG investing and the current situation with AI, warning against "AI-washing" and urging companies to accurately represent their use of AI.
- He outlined a three-step proactive compliance approach of education, engagement, and execution, and reassured that individuals who operate in good faith and take reasonable steps are unlikely to face liability for AI-related disclosure failures.