Despite these commitments, concerns remain about the ethical use of AI, particularly by defense contractor Palantir, known for its controversial data systems built for U.S. Immigration and Customs Enforcement and for its predictive policing software. Other tech giants, including Google and Microsoft, have also faced criticism over their military contracts. The Biden administration currently relies on non-binding recommendations and executive orders to manage AI risks, and there is no sign of imminent AI regulation from Congress.
Key takeaways:
- The Biden administration is encouraging major tech firms to be cautious and transparent in their development and use of AI, and several of them have agreed to voluntary commitments on ethical AI.
- These commitments include sharing safety and safeguard information with other AI makers, informing the public about their AI systems' capabilities and limitations, and using AI to help address societal challenges.
- Despite these commitments, concerns remain about the lack of transparency around the data AI companies use to train their generative models, and about the potential misuse of AI by companies like Palantir, which has been criticized for building data systems for U.S. Immigration and Customs Enforcement and for fueling racist predictive policing software.
- The Biden administration's efforts to regulate AI have so far been limited to non-binding recommendations and executive orders, with no clear signs of imminent, substantial AI regulation from Congress.