For years, the conventional wisdom among Silicon Valley futurists was that artificial intelligence and automation spelled doom for blue-collar workers whose jobs involved repetitive manual labor. Truck drivers, retail cashiers and warehouse workers would all lose their jobs to robots, they said, while workers in creative fields like art, entertainment and media would be safe.
Well, an unexpected thing happened recently: A.I. entered the creative class.
In the past few months, A.I.-based image generators like DALL-E 2, Midjourney and Stable Diffusion have made it possible for anyone to create unique, hyper-realistic images just by typing a few words into a text box.
These apps, though new, are already astoundingly popular. DALL-E 2, for example, has more than 1.5 million users generating more than two million images every day, while Midjourney’s official Discord server has more than three million members.
These programs use what’s known as “generative A.I.,” a type of A.I. that was popularized several years ago with the release of text-generating tools like GPT-3 but has since expanded into images, audio and video.
It’s still too early to tell whether this new wave of apps will end up costing artists and illustrators their jobs. What seems clear, though, is that these tools are already being put to use in creative industries.
Recently, I spoke to five creative-class professionals about how they’re using A.I.-generated art in their jobs.
‘It spit back a perfect image.’
Collin Waldoch, 29, a Brooklyn game designer, recently started using generative A.I. to create custom art for his online game, Twofer Goofer, which works a bit like a rhyming version of Wordle. Every day, players are given a clue — like “a set of rhythmic moves while in a half-conscious state” — and are tasked with coming up with a pair of rhyming words that matches the clue. (In this case, “trance dance.”)
Initially, Mr. Waldoch planned to hire human artists through the gig-work platform Upwork to illustrate each day’s rhyming word pair. But when he saw the cost — between $50 and $60 per image, plus time for rounds of feedback and edits — he decided to try using A.I. instead. He plugged word pairs into Midjourney and DreamStudio, an app based on Stable Diffusion, and tweaked the results until they looked right. Total cost: a few minutes of work, plus a few cents. (DreamStudio charges about a cent per image; Midjourney’s standard membership costs $30 per month for unlimited images.)
“I typed in ‘carrot parrot,’ and it spit back a perfect image of a parrot made of carrots,” he said. “That was the immediate ‘aha’ moment.”
Mr. Waldoch said he didn’t feel guilty about using A.I. instead of hiring human artists, because human artists were too expensive to make the game worthwhile.
“We wouldn’t have done this” if not for A.I., he said.
‘I don’t feel like it will take my job away.’
Isabella Orsi, 24, an interior designer in San Francisco, recently used a generative A.I. app called InteriorAI to create a mock-up for a client.
The client, a tech start-up, was looking to spruce up its office. Ms. Orsi uploaded photos of the client’s office to InteriorAI, then applied a “cyberpunk” filter. The app produced new renderings in seconds — showing what the office’s entryway would look like with colored lights, contoured furniture and a new set of shelves.
Ms. Orsi thinks that rather than replacing interior designers entirely, generative A.I. will help them come up with ideas during the initial phase of a project.
“I think there’s an element of good design that requires the empathetic touch of a human,” she said. “So I don’t feel like it will take my job away. Somebody has to discern between the different renderings, and at the end of the day, I think that needs a designer.”
‘It’s like working with a really willful concept artist.’
Patrick Clair, 40, a filmmaker in Sydney, Australia, started using A.I.-generated art this year to help him prepare for a presentation to a film studio.
Mr. Clair, who has worked on hit shows including “Westworld,” was looking for an image of a certain type of marble statue. But when he went looking on Getty Images — his usual source for concept art — he came up empty. Instead, he turned to DALL-E 2.
“I put ‘marble statue’ into DALL-E, and it was closer than what I could get on Getty in five minutes,” Mr. Clair said.
Since then, he has used DALL-E 2 to generate imagery that isn’t readily available from online sources, such as an image of a Melbourne tram in a dust storm.
He predicted that rather than replacing concept artists or putting Hollywood special effects wizards out of a job, A.I. image generators would simply become part of every filmmaker’s tool kit.
“It’s like working with a really willful concept artist,” he said.
“Photoshop can do things that you can’t do with your hands, in the same way a calculator can crunch numbers in a way that you can’t in your brain, but Photoshop never surprises you,” he continued. “Whereas DALL-E surprises you, and comes back with things that are genuinely creative.”
‘What if we could show what the dogs playing poker looked like?’
During a recent creative brainstorm, Jason Carmel, 49, an executive at the New York advertising agency Wunderman Thompson, found himself wondering if A.I. could help.
“We had three and a half good ideas,” he said of his team. “And the fourth one was just missing a visual way of describing it.”
The image they wanted — a group of dogs playing poker, for an ad being pitched to a pet medicine company — would have taken an artist all day to sketch. Instead, they asked DALL-E 2 to generate it.
“We were like, what if we could show what the dogs playing poker looked like?” Mr. Carmel said.
The resulting image didn’t end up going into an ad, but Mr. Carmel predicts that generative A.I. will become part of every ad agency’s creative process. He doesn’t, however, think that using A.I. will meaningfully speed up agencies’ work, or replace their art departments. He said many of the images generated by A.I. weren’t good enough to be shown to clients, and that people inexperienced with these apps would probably waste a lot of time trying to formulate the right prompts.
“When I see people write about how it’s going to destroy creativity, they talk about it as if it’s an efficiency play,” Mr. Carmel said. “And then I know that they maybe haven’t played around with it that much themselves, because it’s a time suck.”
‘This is a sketch tool.’
Sarah Drummond, a service designer in London, started using A.I.-generated images a few months ago to replace the black-and-white sketches she did for her job. These were usually basic drawings that illustrated the processes she was trying to improve, like a group of customers lining up at a store’s cash register.
Instead of spending hours creating what she called “blob drawings” by hand, Ms. Drummond, 36, now types what she wants into DALL-E 2 or Midjourney.
“All of a sudden, I can take like 15 seconds and go, ‘Woman at till, standing at kiosk, black-and-white illustration,’ and get something back that’s really professional looking,” she said.
Ms. Drummond acknowledged that A.I. image generators had limitations. They aren’t good at more complex sketches, for example, or at creating multiple images with the same character. And like the other creative professionals, she said she didn’t think A.I. tools would replace human illustrators outright.
“Would I use it for final output? No. I would hire someone to fully make what we wanted to realize,” she said. “But the throwaway work that you do when you’re any kind of designer, whether it’s visual, architectural, urban planner — you’re sketching, sketching, sketching. And so this is a sketch tool.”