How Adobe Firefly’s Generative Match Works


Most generative AI models today create an image from a text prompt alone. Adobe's Generative Match, currently in beta, grew out of Adobe's responsible AI development efforts. Instead of relying on text-to-image generation by itself, Adobe Firefly's Generative Match combines a text prompt with a reference image to create a new image:

Firefly’s generative AI will produce imagery that combines both your text prompt and your reference image. If your reference image shows a cat in a whimsical cartoon style, the bear in your new image will sport the same look. If the background in your reference image includes lots of purples, browns, and reds, the woods in your new image will, too.

Firefly Generative Match is available on the Firefly website and in Adobe Illustrator, and Adobe plans to bring it to other Creative Cloud tools.

Firefly Generative Match speeds up image generation in a user's own style and matches a reference image more accurately:

With Generative Match, you can guide Firefly to produce images in one of your own unique styles. You’ll almost certainly take those images into a tool like Adobe Photoshop so you can tweak them and make them your own, but Generative Match gives you a head start.

Businesses can also benefit from Firefly's Generative Match. For example, marketing teams working independently on a new campaign can create content with a consistent look.

To start using Firefly's Generative Match, go to the Firefly web application, Adobe's site for AI-assisted creativity. Enter a prompt in the Text-to-Image tool and select "Style Match" on the right. You can then either "pick a reference image from a pre-selected set of images in various styles, licensed by Adobe, or upload an image of your own — provided you confirm that you have the rights to use the image."
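For readers who want to drive this flow from a script rather than the web UI, the sketch below shows how a text prompt and a style-reference image might be combined into one request. Note that the endpoint URL, field names, and request shape here are illustrative assumptions for this article, not Adobe's documented Firefly API; consult Adobe's developer documentation for the real interface.

```python
# Hypothetical sketch of a text + style-reference generation request,
# mirroring the Text-to-Image + Style Match flow described above.
# The endpoint and field names are placeholders, not Adobe's actual API.
import json

FIREFLY_ENDPOINT = "https://example.invalid/v1/generate-images"  # placeholder URL


def build_generate_request(prompt, style_reference_b64=None):
    """Build a request body that pairs a text prompt with an optional
    base64-encoded reference image used for style matching."""
    body = {"prompt": prompt}
    if style_reference_b64 is not None:
        # Style Match: the reference image steers the palette and
        # rendering style of the generated image (e.g. the cartoon-cat
        # example in the article).
        body["style"] = {"referenceImage": style_reference_b64}
    return body


if __name__ == "__main__":
    request_body = build_generate_request(
        "a bear walking through purple-and-brown woods",
        style_reference_b64="<base64 of a whimsical cartoon cat image>",
    )
    # In a real client you would POST this body to the service with your
    # credentials; here we just print it.
    print(json.dumps(request_body, indent=2))
```

The point of the sketch is the shape of the input, not the transport: one text prompt plus one optional reference image, exactly the two ingredients Generative Match combines.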
