Adobe’s Very Cautious Gambit to Inject AI Into Everything
Dall-E and Midjourney pushed the Photoshop behemoth to develop Firefly, a suite of AI tools. Will creatives get on board?

The Creation of Adam by Michelangelo reimagined with more than two dozen Firefly AI prompts in Photoshop.
Illustration: 731; Source: Alamy; Adobe generative AI
Last fall, Adobe Inc. offered select photographers from its vast network of seasoned professionals the opportunity to shoot 1,000 photos of bananas. For $60. Another commission called for snapshots of flags “in real-life situations,” and one paying $80 sought hundreds of close-ups of mouths chewing food. One assignment for pet portraits requested a minimum of 500 JPEGs of various dog and cat breeds, specifying that none should be shown “wearing any clothing.”
These very targeted “missions,” as Adobe generously referred to them in the job briefings, weren’t commissioned to meet a sudden demand for fruit/pie-hole/house-pet photos; they supplied the raw material needed to feed Firefly, the company’s new flagship artificial intelligence product, released in beta last March. At the time, the AI-art generators Midjourney and Dall-E had already gone viral as futuristic playthings for artists and the visually inept alike. Suddenly anyone could turn a cocktail of words into unnervingly realistic images, be it a much-publicized deepfake of Donald Trump getting arrested or a viral collage of “a bottle of ranch testifying in court”—which, somehow, really did look like salad dressing reeling under cross-examination.
