OpenAI, the creator of ChatGPT, has policies against using its products to mislead people or for political campaigning, but enforcement is left to the company itself. Mike Nellis, founder of the AI campaign tool Quiller, called the campaign consultant's use of AI "completely irresponsible" and "unethical," and stressed the need for local, state, and federal regulation of AI tools in politics.
Key takeaways:
- The campaign team of Philadelphia’s Sheriff Rochelle Bilal admitted to posting a series of positive “news” stories generated by the AI chatbot ChatGPT on its website.
- More than 30 AI-generated stories were removed after the Philadelphia Inquirer reported that local news outlets could not find them in their archives.
- Experts warn that such misinformation can erode public trust and threaten democracy; the campaign maintains the stories were based on real events.
- OpenAI, the creator of ChatGPT, has policies against using its products to scam or mislead people, and does not allow its systems to be used for political campaigning or lobbying.