In the month leading up to election day, OpenAI directed people who asked ChatGPT about US voting to CanIVote.org, generating roughly 1 million such responses. On election day and the day after, the chatbot produced another 2 million responses pointing people to reputable news sources for results. OpenAI also ensured that ChatGPT's responses neither expressed political preferences nor recommended candidates. Despite these measures, election-related deepfakes still circulated on social media, including a manipulated video of Kamala Harris.
Key takeaways:
- OpenAI's DALL-E image generator rejected over 250,000 requests to create images of real people, including politicians, as a safety measure to prevent the creation of deepfakes.
- The company had been preparing for the US presidential elections since the beginning of the year, implementing strategies to prevent its tools from being used to spread misinformation.
- In the month leading up to the election, ChatGPT generated 1 million responses directing people to CanIVote.org, plus 2 million responses on election day and the day after pointing users to reputable news sources for results.
- Despite these measures, plenty of election-related deepfakes still circulated on social media, including a manipulated video featuring Kamala Harris.