OpenAI seems to have quietly dropped its search for a new leader of its trust and safety team, which was tasked with preventing the company's models and products from generating disinformation, hate speech, and other harmful content.
Key takeaways:
- OpenAI has revamped its approach to detecting disinformation and offensive content and removing it from its products, including ChatGPT.
- This change comes amid growing concerns about the spread of disinformation ahead of upcoming elections.
- Since Sam Altman's return as CEO, the company seems to have quietly dropped its search for a new leader for its trust and safety team.
- The trust and safety team's role was to prevent OpenAI’s models and the products built on them from generating disinformation, hate speech, and other harmful content.