What happens when robots write sci-fi?
It seems very meta — a tool seemingly straight out of science fiction writing its own science fiction stories. But it’s not all fun and games for the online magazine Clarkesworld, which publishes short fiction submitted by writers in the sci-fi and fantasy community.
Clarkesworld typically pays 12 cents per word for chosen stories, but the magazine’s editor, Neil Clarke, said last month that it was closing submissions. He said the magazine had been inundated with material generated by artificial intelligence and sent in by people looking to make a quick buck.
Clarke told Marketplace’s Meghan McCarty Carino that the problem began in December, when they noticed about 50 stories in their pile that seemed to be written by AI. In January, that number doubled, and by Feb. 20, they’d received more than 500 AI-generated submissions. That day, Clarkesworld closed submissions.
Below is an edited transcript of the conversation between McCarty Carino and Clarke.
Neil Clarke: We were on track to have one of these AI-generated submissions for every one of the legitimate submissions, a 1-to-1 ratio, and we weren’t seeing any sign of it slowing. If we’d kept submissions open, it probably would have gotten worse. As best as we can tell, these submissions are largely coming from people who follow side hustle blogs, videos and TikTok channels with posts that say something like “Make easy money with ChatGPT.” The posts walk them through the process and point them to a list of magazines they can send AI-generated writing to. We’re unfortunate enough to be on one of those lists. If I could share some of the stories with you, it would be clear that these are people who probably aren’t reading science fiction, because if they were, they’d recognize that their own stories weren’t good. We’re fairly confident that they’re just copying, pasting and sending them in. The quality is very poor — there was never any danger of these stories being published. We’ve been doing this for 17 years, and these are among some of the worst stories we’ve seen.
Meghan McCarty Carino: What’s at stake for writers and for the future of creativity in this genre?
Clarke: We’re safe for a time because science fiction is fairly imaginative and that’s one thing that these programs are weak at. They can’t make a leap or jump to do something more creative. Someday that may change. Imagine a science fiction scenario where an AI becomes imaginative, fully aware and is capable of sending out its own stories. It can outproduce a human author. It might take months for a human author to produce their story, whereas this hypothetical AI could churn out hundreds in a day. The human authors would be drowned out.
McCarty Carino: How do you conceive of the purpose of science fiction, and what does it mean for artificial intelligence to potentially be writing it?
Clarke: Science fiction serves a bunch of different purposes. There’s entertainment and escapism, but there’s also a cautionary aspect, like trying to prevent a future that we want to avoid but also trying to shape one that we want to see. Every now and then someone will say, “Science fiction predicted this” and I say, well, no, actually, science fiction inspired that. AI has been in science fiction for quite some time, and it’s been portrayed both in the dark and in the light. There are some wonderful things being done with AI in medicine. There’s also some work using AI to analyze large data sets that might show us where other planets are in the universe. There are a lot of interesting things being done there, but I have a hard time calling what we have now the AI of science fiction. The AI we see in science fiction is typically aware and is doing more than just writing bad text.
McCarty Carino: Do you think humans are ready for this technology?
Clarke: Some people aren’t. As we can see, there are already people abusing it. Science fiction is filled with examples of what happens when we’re not ready for something. Take “Star Trek,” for example. It has a concept called the Prime Directive, which states that they won’t visit a planet until it has reached a certain level of technological capability. That exists to protect them as much as to protect the planet’s inhabitants.
McCarty Carino: What are you going to be watching for as this technology moves ahead?
Clarke: I think that from here on out, we’re going to be in a state quite like that of the people who work on spam filters, antivirus software or credit card fraud detection. We’re never going to beat this, but we’re going to have to find a way to live with it. And every new innovation on their side is going to require one from ours as well.
Related links: More insight from Meghan McCarty Carino
You can find a lot more information about Clarke and his decision in his blog post “A Concerning Trend.”
As of this writing, Clarkesworld had not reopened submissions, but Clarke told me they plan to reopen the gates sometime this month — knowing they’ll probably have to close them again to address this same problem.
Clarke said they have tried out some of the software being marketed to detect AI-generated text, but even the makers of such software have acknowledged that it’s not fully reliable. OpenAI, the company that developed ChatGPT, found in tests that its own detection software correctly identified AI-generated text only 26% of the time.
Luckily, as Clarke said, the AI-generated submissions to his magazine haven’t been too hard to spot because they’re so badly written.