Text-to-image AI tools are taking the internet by storm. But is it art? Or the end of art?
Jan 20, 2023

Art generated by artificial intelligence has gone viral recently, with the help of human-made art scraped from the internet. As the technology expands, some artists are sounding the alarm about what they call unethical practices.

Images created by artificial intelligence programs like Stable Diffusion and DALL-E are just about everywhere now, as these tools dazzle users with their ability to instantly create almost any image that can be dreamed up.

These models are trained on billions of images scraped from the internet, many of them created by artists who may not be thrilled that their life’s work is helping to build technology that could threaten their livelihoods.

Steven Zapata, a designer, illustrator and art teacher in New York City, has concerns about what this means. It makes no sense, he told Marketplace’s Meghan McCarty Carino, for these machine-learning systems to go on to compete with the very creators whose work they were trained on. He also believes an ethical version of these artmaking systems can be developed and would be valuable.

The following is an edited transcript of their conversation.

Steven Zapata: My main concern is the precedent that we would be setting here by allowing these systems to scrape the creative labor of millions of people off of the internet, and to train models that then go on to compete with the people that they trained off of in their very same creative markets. If we were to decide collectively that that is somehow OK and legal and ethical, that basically leaves open a giant legal loophole that will allow this to happen again and again, in every market that these systems come to. It doesn’t make sense to allow any startup machine-learning company to have carte blanche to use all the creative work that people have shared online to make models that will compete directly against those very same people.

Meghan McCarty Carino: Are artists actually allowing this? Is there any consent given or is there any option to opt out?

Zapata: In the current models, no. There was no consent. No consent was asked for, no credit has been given and no compensation has been given. As things have moved along, products like Stable Diffusion have said they will allow opting out of future models. But at the initial launch, they snuck it in under the wire. We didn’t really understand what was going on until after the products had come to market. The systems really should be “opt in” from the beginning.

McCarty Carino: Under current conditions, do artists have any legal recourse?

Zapata: Currently, there are no easy legal grooves for this. And that’s because this is cutting-edge stuff. We have to admit that a lot of the questions that are being raised here are occurring within a series of legal gray areas. And it’s extremely complicated. Almost every question that you could ask, like is it copyright infringement? Do the systems duplicate training data? Is this kind of use authorized under fair use? And if it is fair use, is it just in the United States? Is it fair use in the United Kingdom? Things change from jurisdiction to jurisdiction, so it’s extremely complicated. But it is going to have its day in court. We’ve had recent developments, including a class-action lawsuit in the U.S. on behalf of artists looking for some sort of legal recourse. And in the U.K. recently, Getty Images announced it is bringing a suit against Stability AI, the makers of Stable Diffusion, for training their systems on Getty’s copyrighted images.

McCarty Carino: There’s an argument that all art is referential, and all art is in dialogue with other work that’s been done before. What’s different about how this AI is operating?

Zapata: It’s extremely different, by my estimation. For example, I can imagine that when I share my work online, if I were to inspire another artist, in some cynical viewpoint I could interpret their inspiration to make work like mine, or to achieve my skill level, as them rushing to compete with me in the market. But the difference when you’re having that sort of distant interaction with another human being is that I know what is waiting for them on the journey. I know that learning those skills and trying to reach my level is worth it on its own, and it’s going to make their life better. No matter what happens, they’re going to appreciate the attempt, and they’re going to experience the self-affirming journey of artmaking. And even if they did come up and meet me on my level in the market, where they could potentially take jobs from me or something like that, I’m happy to see them. I’m like, I know how hard this has been for you, and I want you to succeed. That is just not something that is happening with these machines. These machines are devoid of experience; there is no joy being felt, and there is no ennobling factor of being on a journey. They are simply outputting. We’re outsourcing one of life’s great pleasures, work that we really like doing. Art really becomes an existential buoy, right? We’re outsourcing that to something that isn’t feeling all the benefits and isn’t enjoying any of it. And I think that to make this sort of false equivalence between what they are doing and what we are doing reduces what humans do with art to a sort of very sad grotesquerie that doesn’t map to the personal experiences of most of the artists that I know.

McCarty Carino: Not all artists are having the same kind of negative reaction to this. I’ve seen some artists saying, maybe this could be a tool that we can incorporate into our own creative process. Do you think that there’s any way that artists themselves could harness some of the power of this technology for their own use?

Zapata: Any individual artist can choose to use these systems however they want. And for people who have artistic experience, a lot of them are used to making sure that they are in control of their process and not the other way around. So for them, it’ll be very easy to decide where the machine is taking over too much. Artists are very crafty, they’re very creative, and they’re always going to be able to engage their creativity to manage the process. But the point that I want to make here is that every good or noble thing that we can imagine doing with these systems is possible with the ethical version of these systems. All of the hopes that it will democratize things, the hopes that it will allow people who otherwise wouldn’t have been creative to engage their creativity, the hopes of increasing the accessibility of art, everything. Every utopian and noble thing that we might say or dream about these systems, every single one of them can be said and equally applied to an ethical version of these systems rather than the unethical version.

McCarty Carino: What would an ethical system look like to you?

Zapata: It looks like a system that is built on public domain and Creative Commons work. And on top of that, also artwork that is voluntarily provided to the companies that are training the systems. If all of those things could be honored and integrated well, and a system could be made that wasn’t impinging on the rights of rights holders and wasn’t stepping on people in the market and unfairly using their names to generate derivative works, then I see no reason why artists shouldn’t be allowed to engage with this technology. And the possibilities sound very exciting.

If you’d like to hear more from Steven Zapata on this topic, you can watch his video essay, “The End of Art: An Argument Against Image AIs.” He elaborates on many of the arguments we discussed, and — bonus — he does it while sketching.

As Zapata mentioned, there has been big legal news in the past couple of weeks. Three artists filed a class-action lawsuit against Stability AI — that’s the one that makes Stable Diffusion — as well as another AI company, Midjourney, and the online art portfolio platform DeviantArt. In the lawsuit, they say the three companies violated the copyrights of “millions” of artists by scraping their images without their consent.

Stability AI had this to say about the lawsuit: “The allegations in this suit represent a misunderstanding of how generative AI technology works and the law surrounding copyright. We intend to defend ourselves and the vast potential generative AI has to expand the creative power of humanity.”

And if you’re wondering whether your work or even images of you have ended up in the training data sets for these AI models, there’s a website for that.
