Adobe Firefly adds Google's Gemini 3 (Nano Banana Pro) with limited-time unlimited generations
Adobe has rolled out unlimited AI image generations in its Firefly app for a limited time, powered in part by Google's latest image model, Gemini 3 (Nano Banana Pro). The update gives Creative Cloud Pro and Firefly plan subscribers more choice in how they create, refine and localise visuals with generative AI. For eligible subscribers, the unlimited generations cover both Firefly image models and partner models in the Firefly app until 1 December.
Adobe is turning Firefly into an AI studio rather than a single-model playground. A recent global study of more than 16,000 creators found that over 60 percent already lean on multiple AI models, switching between them for photorealism, stylised art, typography or localisation. Instead of pushing people to juggle separate sites and subscriptions, Adobe is pulling those models into the apps creatives already use.
Inside Firefly and Photoshop, users can now pick from Adobe's own Firefly Image Model 5 alongside partner models from Google, OpenAI, Black Forest Labs, ElevenLabs, Ideogram, Luma AI, Moonvalley, Pika, Runway and Topaz Labs. Nano Banana Pro is the latest addition, building on the earlier Gemini 2.5 Flash Image (Nano Banana) with a sharper focus on control, editability and clean text.
Nano Banana Pro is designed to respond more precisely to text prompts that target specific regions of an image. Creators can alter composition, aspect ratio, resolution, camera angle and lighting using natural language, while keeping the overall scene coherent. Text inside images is a particular strength, with cleaner, better integrated typography that can also be translated and localised. By drawing on Google Search’s knowledge base, the model also aims for more factually accurate visuals.
In Firefly’s Text to Image feature, users can upload up to six reference images and prompt Nano Banana Pro to merge or evolve them into a single cohesive visual. The same model powers Firefly Boards, where teams upload copy lines, logos and product shots, then ask the system to visualise them in real-world contexts, such as billboards or retail displays, for quick concept reviews.
In Photoshop, Nano Banana Pro now powers Generative Fill alongside Google’s Nano Banana and Black Forest Labs’ FLUX.1 Kontext [pro]. Because Generative Fill is deeply tied to layers, masks and selections, AI edits remain part of the familiar Photoshop workflow.
Creative professionals can extend canvases, reframe shots or add and remove objects using prompts, with Nano Banana Pro filling in realistic detail that respects perspective and lighting. A scene can be turned from day to night, a product shot can be adapted to a new aspect ratio, or a background can be rebuilt around a subject, then refined manually with standard Photoshop tools.
For agencies and independent creators, the combination of unlimited Firefly generations until 1 December and deeper integration of Google's latest image model offers a useful window to stress-test generative workflows at scale. Nano Banana Pro is available in Adobe Firefly and Photoshop from today, and Adobe is looking to its global community, including creators in India, to see how they put this mix of in-house and partner AI models to work.