The EU is working to finalize the AI rules first proposed by the European Commission in 2021, and a provisional deal suggests the Commission would keep a list of AI models deemed to pose a "systemic risk". Challenges remain, however, including how to govern the use of AI in biometric surveillance and whether regulators should have access to source code. The news follows the recent launch of the AI Alliance, a group led by Meta and IBM dedicated to the development of open-source AI.
Key takeaways:
- Europe's new AI legislation may not apply to free, open-source AI models unless they are determined to be high risk or used for banned purposes.
- The legislation is part of the landmark AI Act, which European lawmakers are still finalizing as they struggle to keep pace with the technology.
- The AI Alliance, a group dedicated to the development of open-source AI whose members include NASA, Oracle, CERN, Intel, and the Linux Foundation, was launched recently.
- A divide is growing in the AI industry between players backing open-source AI and those favoring closed-source models, reflecting differing approaches to the responsible development and use of AI.