
OpenAI's GPT-4 with vision still has flaws, paper reveals | TechCrunch

Sep 26, 2023 - news.bensbites.co
OpenAI's GPT-4V, the version of its GPT-4 model that can understand images as well as text, has had its vision features held back due to concerns about abuse and privacy. The model has been used by a few thousand users of Be My Eyes, an app for low-vision and blind people. OpenAI has implemented safeguards to prevent misuse, such as breaking CAPTCHAs, identifying individuals, or drawing conclusions from information not present in a photo. However, the model has shown limitations, including making incorrect inferences, missing text or characters, and failing to recognize obvious objects.

The company has warned against using GPT-4V to identify dangerous substances or chemicals in images, as it has been found to misidentify substances like fentanyl and cocaine from images of their chemical structures. It also struggles in the medical imaging domain, sometimes giving incorrect responses. The model does not understand the nuances of certain hate symbols and has been observed to discriminate against certain sexes and body types. OpenAI is working to mitigate these issues and expand the model's capabilities safely, but it remains a work in progress.

Key takeaways:

  • OpenAI has unveiled GPT-4, an AI model that can understand the context of images as well as text, but has held back the model's image features due to concerns about abuse and privacy issues.
  • GPT-4V, the version of the model with vision capabilities, has been used by a few thousand users of the app Be My Eyes, and OpenAI has started to engage with 'red teamers' to probe the model for signs of unintended behavior.
  • OpenAI has implemented safeguards to prevent GPT-4V from being used maliciously, such as breaking CAPTCHAs, identifying a person or estimating their age or race, and drawing conclusions based on information not present in a photo.
  • Despite these safeguards, GPT-4V has shown limitations, such as struggling to make the right inferences, hallucinating facts, and missing text or characters, and OpenAI has cautioned against using it to identify dangerous substances or hate symbols.