
Can AI actually replace digital designers?

Not in the near future, anyway, according to these designers.

Image courtesy Erik Carter.

Margaret Andersen

8.3.2022

5 min read

If you’ve spent any time on design Twitter lately, you’ve likely come across a flurry of surreal images in your feed tagged "#dallemini", "#dalle2", or "#midjourney". (ChatGPT now offers image generation, too.)


While those hashtags might sound like references to NASA aircraft or Activision Blizzard’s latest game release, they actually refer to the latest tools in the rapidly evolving world of AI art, where anyone can generate realistic images from a written text description. Most users generate art that veers toward the absurd, but designers are already experimenting with potential commercial applications, leaving many to wonder how long it will be before this new technology makes certain creative-industry jobs obsolete.


That’s because AI text-to-image tools like DALL-E 2 and Midjourney are simple to use, and the results can be pretty darn good. Here’s how it works: users type a prompt, say, “female blobfish with long beautiful eyelashes relaxing on a desert rock,” and the results are, well, uncanny. The first time you try it really does seem like magic, but achieving consistent results still requires a designer’s guiding hand, or rather, words.
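For the curious, generating an image programmatically looks roughly like this. This is a minimal sketch, assuming access to OpenAI’s image API via the “openai” Python package and an API key (the exact interface varies by library version), not part of any designer’s workflow described here:

import openai

openai.api_key = "sk-..."  # your OpenAI API key goes here

# Request one 1024x1024 image for a text prompt.
response = openai.Image.create(
    prompt="female blobfish with long beautiful eyelashes relaxing on a desert rock",
    n=1,                # number of images to generate
    size="1024x1024",   # output resolution
)

# The API returns a temporary URL for each generated image.
print(response["data"][0]["url"])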


Smiley face DALL-E 2 renderings by Erik Carter. Images courtesy Erik Carter.



These tools haven’t been around long. DALL-E 2 was built by OpenAI, the now-famous non-profit research lab founded by Sam Altman, Peter Thiel, and Elon Musk, among others. Guided by its mission to ensure that “artificial general intelligence benefits all of humanity,” the organization is launching the system in phases. Midjourney, a rival, self-funded AI research lab, just expanded beta access to the public through its Discord community.


So how are designers using text-to-image generators in their work? In June, Cosmopolitan commissioned director and digital artist Karen X Cheng to create the magazine’s first-ever AI-generated cover art. The image, a powerful woman rendered as an astronaut, was based on creative direction from the Cosmo design team and Cheng. The headline and subhead on the resulting cover read, “Meet the world’s first artificially intelligent magazine cover. And it only took 20 seconds to make.” While it’s true that DALL-E may have taken only 20 seconds to render the image, that statement doesn’t account for the time it took to decide on the creative direction or craft the right prompt to achieve the desired image.

Cheng posted a process video on Instagram showing the hundreds of iterations of text prompts she typed before landing on: “wide-angle shot from below of a female astronaut with an athletic feminine body walking with swagger toward camera on Mars in an infinite universe, synthwave digital art.” The final image is undoubtedly impressive, but the cover received mixed reactions online. Many viewed it as a sign that art-department design jobs are doomed. Others pointed out that it wasn’t the first AI-generated magazine cover at all: The Economist had released its own robot-rendered art just a week prior, using Midjourney instead of DALL-E.


It’s not just Cosmo: these platforms are receiving mixed reactions industry-wide. Designer David Rudnick commented, in reference to Fabric London’s recent event poster series, that “There is no artist in AI art.” He added, “the person ‘commissioning’ the work can type the same 9 words into the prompt…They don't need to pay anyone ever again.” It’s not just about lost commissions, either: questions around ethics and authorship have also been part of the general discourse, especially since the U.S. Copyright Office ruled that AI art can’t be copyrighted.


But despite Rudnick’s criticism of the technology, he’s also experimented with it on commercial projects. He recently worked with Dexter Tortoriello, co-founder and head of technology at Friends With Benefits, to integrate DALL-E-generated art, built from prompts based on David Hockney’s painting A Bigger Grand Canyon, into the branding for the upcoming web3 conference FWB Fest.


For his part, Tortoriello says there is a certain level of skill and creativity required to create compelling AI artwork. “It allows for a unique collaboration between an artist and a piece of technology,” he explains. “If all you're asking for is pictures of sunsets, that's all you're going to get. If you ask to see Francis Bacon paintings of neo-Las Vegas you will get something vastly more interesting.” According to Tortoriello, systems like DALL-E 2 are “equal parts scary and exciting for obvious reasons, but on the biggest scale possible, more human creativity is a net positive for the world.”


While it is scary to think that designers could be replaced by an artistic version of Her, there is also skepticism around the idea that DALL-E 2 spells doomsday for digital designers. Graphic designer and art director Erik Carter, whose work has appeared in The New York Times, The New Yorker, and The Atlantic, says he doesn’t foresee a robot takeover anytime soon, except in instances like creating placeholder images in mockups or outputting variations of already-designed layouts. In fact, Carter noted in a recent newsletter that AI art could free up illustrators from repetitive work or lower-paying, content-hungry gigs, and allow them to focus on more creatively fulfilling projects.


The reality is that algorithm-driven design tools are already part of the creative industry. Adobe Sensei’s AI is embedded in Premiere Pro through automated captions, in Illustrator through tracing and vectorizing sketches, and in Photoshop through the skin-smoothing and other retouching tools of its neural filters. This archive of AI use cases from the last few years shows how ubiquitous automated design has become, but it also proves that without creative intervention from human designers, the results of AI-generated design can feel derivative and homogenized.


Midjourney image renderings by author Margaret Andersen. Image 1: "The prompt 'Scottish highland cow wearing a sweater, hyperrealistic, extremely detailed' gave me a creature that was more cable-knit sweater than cow." Image 2: "The prompt 'Barbie aesthetic submarine' produced just what I had imagined: a hot pink submarine floating in water." Images courtesy the author.



Wider adoption of these platforms could also mean designers need to learn how to use them like any other tool, as part of the creative process. “Creative professionals will need to learn different skills for how to use AI to work effectively by testing ideas, making variations or automating some parts of their creative process,” says Luba Elliott, a curator, producer, and researcher specializing in artificial intelligence in the creative industries. While AI will automate some aspects of design work, she doesn’t see it completely replacing humans.


That’s also because results can be unpredictable. Carter started using DALL-E in his own work by trying to source images for illustration sketches, and the results are often hit or miss, especially when he’s looking for something specific. “I often get the best results when I find something that isn’t what I’m looking for, by letting go and being surprised.” He adds, “It can be an interesting tool for art, and possibly web and digital design, but for now, the best use-case I’ve seen is just for goofing off.”
