In response to growing concerns over AI risks, the Biden administration has launched the US AI Safety Institute Consortium (AISIC), involving over 200 entities, including major tech firms. The consortium aims to set standards for AI testing, focusing on cybersecurity and other risks. Meanwhile, UC San Diego computer scientists have unveiled ToxicChat, a benchmark designed to identify and prevent toxic prompts directed at AI models, which has been integrated into Meta's evaluation tools.
Key takeaways:
- Lawmakers in seven US states are proposing legislation to curb AI bias, addressing the lack of government oversight of AI systems that have been criticized for discriminating on the basis of race, gender, and economic status.
- Out of nearly 200 AI-related bills introduced last year, only about a dozen were enacted into law, with most targeting specific aspects of AI rather than broader concerns like AI bias.
- Proposed bills aim to increase transparency and accountability in AI decision-making by requiring companies to conduct 'impact assessments' of their automated decision tools and submit them to state regulators.
- At the federal level, the Biden administration's newly launched US AI Safety Institute Consortium (AISIC) brings together more than 200 entities to set standards for AI testing, with a focus on cybersecurity and other risks.