OpenAI winds down AI image generator that blew minds and forged friendships in 2022

Apr 19, 2024 - arstechnica.com
The article discusses the rise and wind-down of OpenAI's DALL-E 2, an AI image generation model that could create photorealistic images from text descriptions. The model, which debuted on April 6, 2022, sparked a period of innovation and ethical debate in the AI space. OpenAI is now winding down the service, marking the end of an era for a group of artists and tech enthusiasts who saw the technology as a gateway to unlimited creativity.

The article also traces the history of AI in art, from the use of computers for image creation in the 1950s to the introduction of neural networks in the 1990s. Despite its retirement, DALL-E 2 had a significant impact: it marked a mainstream breakthrough for text-to-image generation and fostered a community of artists exploring the technology. The article closes by noting that OpenAI has rolled out DALL-E 3, which offers a higher level of detail.

Key takeaways:

  • OpenAI's DALL-E 2, an AI that could create photorealistic images based on text descriptions, was launched on April 6, 2022, sparking a significant debate about the future of art and AI.
  • Despite its groundbreaking capabilities, OpenAI has begun winding down support for DALL-E 2, which has been surpassed by DALL-E 3's higher level of detail and editing capabilities.
  • Early users of DALL-E 2, a group of artists and tech enthusiasts, have expressed a sense of loss as the service is phased out, describing its early days as a period of unlimited creative freedom.
  • Before DALL-E 2, AI image generation technology had been developing for some time, with notable milestones including the use of neural networks in the 1990s and Google's DeepDream in 2015, but DALL-E 2 marked a significant leap in text-to-image generation.