The new model will initially be available through the Firefly web app, but will soon be integrated into Creative Cloud apps like Photoshop. Adobe is also introducing new controls in the Firefly web app, including the ability to adjust depth of field, motion blur, and field of view. Users can also upload an existing image for Firefly to match its style, and a new auto-complete feature has been added for writing prompts.
Key takeaways:
- Adobe has updated the models that power Firefly, its generative AI image creation service, improving its ability to render humans, including facial features, skin, bodies, and hands.
- Firefly’s users have generated three billion images since the service launched about half a year ago, with one billion generated last month alone. 90% of these users are new to Adobe’s products.
- The new model is larger and trained on more recent images from Adobe Stock and other commercially safe sources. Despite being more resource-intensive, it should run at the same speed as the first model.
- The new model will be available through the Firefly web app and will also come to Creative Cloud apps like Photoshop, alongside new web-app controls for depth of field, motion blur, and field of view.