However, the American Civil Liberties Union warns that using AI tools for social media surveillance could amplify inaccuracies or biases and chill online discourse. OpenAI, the maker of ChatGPT, says it does not permit activity that violates people's privacy. Social Links insists it adheres to OpenAI's policies and uses ChatGPT only for text analysis. Yet the company also demonstrated searching for facial recognition matches across social media once its tool has flagged someone as expressing "negative" sentiment. Taken together, the use of AI in surveillance raises concerns about transparency, reliability, and bias.
Key takeaways:
- Social media surveillance companies are using AI tools like ChatGPT to monitor communications across social media platforms. These tools can perform sentiment analysis, predicting whether online activity could lead to physical violence.
- There are concerns that this kind of surveillance could amplify inaccuracies or biases and chill online discourse, as people may feel they are being watched by AI agents.
- Social Links, a company founded by Russian entrepreneur Andrey Kulikov, is using ChatGPT to analyze text and sentiment on social media. The company denies any link to the 3,700 Facebook and Instagram accounts that Meta banned for scraping its platforms.
- At the Milipol homeland security conference, other AI tools were showcased, including Gens.AI, which creates convincingly human-like social media profiles for undercover investigations. This could lead to an AI echo chamber, in which AI surveillance software ends up monitoring AI-generated personas.
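To make the sentiment-analysis step concrete, here is a minimal rule-based sketch. The keyword lexicons and the scoring logic are hypothetical placeholders for illustration only; they do not reflect how Social Links, ChatGPT, or any vendor actually classifies posts.

```python
# A toy keyword-based sentiment labeler -- an illustrative sketch,
# NOT the method used by any surveillance vendor or by ChatGPT.

# Hypothetical lexicons; real systems use learned models, not word lists.
NEGATIVE_TERMS = {"attack", "destroy", "hate", "threat", "violence"}
POSITIVE_TERMS = {"celebrate", "love", "peace", "support", "thank"}


def score_sentiment(text: str) -> str:
    """Label a post 'negative', 'positive', or 'neutral' by keyword counts."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    neg = len(words & NEGATIVE_TERMS)
    pos = len(words & POSITIVE_TERMS)
    if neg > pos:
        return "negative"
    if pos > neg:
        return "positive"
    return "neutral"


if __name__ == "__main__":
    print(score_sentiment("I hate this, it is a threat"))  # negative
```

Even this trivial example hints at the ACLU's concern: a post mentioning "violence" in a news-sharing context would be flagged just as readily as a genuine threat, and a model's errors scale with the volume of accounts monitored.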