The situation has caused widespread outrage, prompting even the White House to call on social media companies to reassess their role in preventing the spread of such content. In response, X-formerly-Twitter blocked all searches for "Taylor Swift" on its platform, a move that has proven ineffective. The article criticizes tech companies for their lack of preparedness in dealing with the harmful content enabled by the AI technology they fund. It also highlights the lag in laws surrounding the use of such technology, leaving victims like Swift with little legal recourse.
Key takeaways:
- Microsoft CEO Satya Nadella has called for more "guardrails" to ensure safer content, following the spread of pornographic deepfakes of Taylor Swift on social media.
- Microsoft's AI image generator Designer was used to create these explicit images, but the company claims to have addressed the issue with an update.
- The White House has described the situation as "alarming" and called on social media companies to reexamine their role in preventing such content from spreading.
- Despite Nadella's call to action, the article argues that tech companies are ill-prepared for the consequences of the AI they are funding, and that laws governing the use of this technology lag far behind.