The Ada Lovelace Institute, a UK research organization, argues that a tiered approach would be a fair compromise, ensuring compliance from large-scale foundation models while placing a lighter burden on smaller ones. The Institute warns against casting aside regulation of large-scale foundation model providers to protect one or two 'national champions', arguing that doing so would ultimately stifle innovation in the EU AI ecosystem. The debate comes as EU lawmakers try to secure a political deal on draft AI legislation in the next few weeks.
Key takeaways:
- French startup Mistral AI is at the center of a debate over how to regulate artificial intelligence (AI) in the European Union (EU), with lawmakers struggling to agree on rules for upstream AI model makers.
- Mistral AI supports the EU's goal of regulating the safety and trustworthiness of AI apps, but has concerns about the framework becoming a convoluted bureaucracy that could disadvantage homegrown AI startups.
- The company argues that risk and responsibility should sit with the deployer of an AI application, and that placing direct regulatory pressure on model makers is unnecessary and counterproductive.
- However, the Ada Lovelace Institute argues that a 'tiered' approach, which places obligations on both downstream deployers and the upstream providers of the models they build on, would be a fair compromise.