Costs of AI spur quest for a cheaper chatbot
Jun 12, 2023

It's expensive to train and maintain generative AI chatbots. That could entrench the power of the big players and split the market between cheap, lower-quality tools and pricey, high-end ones, according to Will Oremus at The Washington Post.

Generative artificial intelligence tools such as ChatGPT have spread like wildfire, largely because of their impressive capabilities, but also because they’re free, or nearly free, to use.

But a service that doesn’t charge its users still has costs. In reality, sophisticated large language models cost a lot to build and maintain, and AI companies will have to recoup that investment eventually, one way or another.

Marketplace’s Meghan McCarty Carino spoke about the high costs of AI chatbots with Will Oremus, technology news analysis writer for The Washington Post. Oremus recently delved into how the financial aspect of AI development could influence the course of the technology.

The following is an edited transcript of their conversation.

Will Oremus: So there’s a huge initial cost when you’re training the models, but in the long run that cost is actually dwarfed by the cost of just running the models — the computing cost every time you use a chatbot or call on one of these models. And these models run only on specific high-end chips called [graphics processing units]. A set of GPUs that can run AI applications can cost tens of thousands of dollars. And it’s not just chatbots. I mean, Microsoft is putting GPT-type software into everything from Excel, Word and PowerPoint to Bing and Skype, and so the proliferation of these large language models across all kinds of applications just racks up more and more computing cost.

Meghan McCarty Carino: Yeah, and in the case of ChatGPT, this has been referred to as the most quickly adopted technology in history. In a matter of months, [it] got 100 million users, and people are using it for all kinds of silly things, probably without considering the cost of asking ChatGPT to, you know, write a poem about beef Stroganoff or something.

Oremus: Right. And the companies that make these models have encouraged us not to think about the cost. They don’t even want to talk about it. I tried OpenAI, I tried Microsoft, I tried Google for my story. I tried Anthropic, which makes a large language model called Claude. None of them would talk at all about the cost of running these models. It’s a very sensitive subject for them. [OpenAI CEO] Sam Altman said late last year that a single conversation with ChatGPT costs single-digit cents in computing. That doesn’t sound like much if it’s just me. I mean, I have a conversation with ChatGPT. It’s free for me, it costs OpenAI a few cents, what’s the big deal? Well, ChatGPT reportedly reached 100 million users in February. It’s probably way beyond that now. And when you start adding up a few cents per conversation across 100 million users, you’re starting to talk real money.
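To put that math in perspective, here is a rough back-of-envelope sketch in Python. It takes Altman's "single-digit cents" remark and the reported 100 million users as inputs; the specific per-conversation cost and usage rate below are illustrative assumptions, not reported figures.

```python
# Back-of-envelope estimate of what it might cost to serve a free chatbot.
# All inputs are assumptions for illustration, not reported figures.

cost_per_conversation = 0.03      # dollars; a midpoint of "single-digit cents"
users = 100_000_000               # reported user base as of February 2023
conversations_per_user_daily = 1  # assumed usage rate

daily_cost = cost_per_conversation * users * conversations_per_user_daily
monthly_cost = daily_cost * 30

print(f"Daily serving cost:   ${daily_cost:,.0f}")    # $3,000,000
print(f"Monthly serving cost: ${monthly_cost:,.0f}")  # $90,000,000
```

Even under these conservative assumptions, "a few cents per conversation" compounds into tens of millions of dollars a month.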

McCarty Carino: Right. I was really struck by your reporting, because so many of the conversations around this technology and the trajectory it’s on assume these models will just keep getting better and more widely used across all these different use cases. That calculation might not really be taking into account the actual costs of that kind of development and deployment, even if it is possible.

Oremus: That’s right. So the fact that these companies are losing money every time we use their AI models isn’t just a problem for the companies. It’s a problem for everybody who wants to rely on this technology and for all the projections that say sectors across the economy are going to be heavily relying on AI, and there’s gonna be an AI revolution that replaces all these forms of human labor. Well, right now that AI revolution is being heavily subsidized by companies that are competing for market share. Eventually they’re going to have to turn a profit, just like any company. And so they’re going to have to raise the price, [or] they’re going to find cheaper, more lightweight models that maybe aren’t as capable. Or, you know, in the free products, we’re likely to see ads. That’s not something Sam Altman has said he wants to do with ChatGPT. But when asked about it at a Senate hearing recently, he wouldn’t rule out putting ads in ChatGPT, because they just can’t afford to keep losing this much money.

McCarty Carino: And of course, advertising has been the dominant model for subsidizing all of the internet services that consumers are used to getting for free, like search or social media. I mean, to what extent does that model work here?

Oremus: Most of the analysts I talked to don’t think advertising will be enough, particularly for the more high-end versions of this AI technology that are really going to be needed for professional applications. Now, with something like ChatGPT, they might continue to offer a free version that has ads. But the sources I talked to said they don’t think the ChatGPT we get for free will ever be as good as the models offered for enterprise use, because the companies just can’t afford to put the biggest, best models into a free product in the long run. One of the interesting things about this from the consumer side is that usually, when a new technology like ChatGPT comes out, you expect that as people use it and the company gains market share, it’s going to reinvest that revenue into making the product better and better. But if the free product is a money loser, the company actually invests its resources not into making ChatGPT better and better, but into making it cheaper and cheaper so it stops losing so much money on it. And we actually know that GPT-4, the newer version of OpenAI’s large language model, is significantly better at not making up falsehoods (“hallucinating” is the industry jargon). But we’re not getting that in the free version of ChatGPT. We’re not going to get their best product for free.

McCarty Carino: What are the implications of all of this for the business of AI? Like, who gets to play in the sandbox?

Oremus: Really, it limits the number of companies that can do that. I mean, if you’re a startup and you say, I’m gonna go compete with OpenAI and Google and try to build another great language model and an AI chatbot, investors, they’re gonna ask, well, how in the world are you going to afford to lose as much money as these big players for as long as they can in order to compete? And so what we see instead is that startups are mostly competing around fine-tuning the large models that the biggest players offer. So they’re looking at ways to take GPT, or to take Google’s LaMDA or PaLM models, and refine them for a specific application. But underlying that technology is still going to be one of the giants.

McCarty Carino: And what could these kinds of high costs mean for, you know, open source, more open AI models, if there is such a strong need to recoup these costs?

Oremus: Well, one silver lining here is that for most of the history of the development of these large language models, this was really an academic endeavor. These tech giants were competing not to build products that people would use, they were competing to build the most powerful models that they could publish papers about, to say that they’d set new benchmarks, that they’d pushed the boundary of what’s possible. It was really all about publishing, and so you didn’t even have to think about the cost because you didn’t have to worry about a billion people using the product. And one of the things we’re seeing now is a push toward open source models. So maybe you can take an open source model and kind of bootstrap it: train it on the outputs of GPT-4, train it on the outputs of Google’s model. It won’t be as good, but you can get it maybe halfway or two-thirds as good for a much, much cheaper price. And so we’re just now starting to see the push to make these cheaper, more lightweight models. And that includes a push toward using open source.
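The bootstrapping Oremus describes is often called distillation, or imitation fine-tuning: collect a large model's answers and use them as supervised training data for a smaller open source model. Here is a minimal sketch of the data-collection step, assuming the pre-v1 openai Python library (current as of mid-2023) and a couple of hypothetical prompts; a real dataset would use tens of thousands of diverse prompts.

```python
# Sketch: harvest GPT-4 outputs as training examples for a smaller model.
# Assumes the pre-v1 openai Python library and an OPENAI_API_KEY env var.

import json
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompts = [
    "Explain photosynthesis in two sentences.",
    "Write a haiku about the ocean.",
]  # hypothetical; real distillation sets are far larger and more diverse

with open("distillation_data.jsonl", "w") as f:
    for prompt in prompts:
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response["choices"][0]["message"]["content"]
        # Each (prompt, answer) pair becomes one supervised fine-tuning
        # example for the smaller open source model.
        f.write(json.dumps({"prompt": prompt, "completion": answer}) + "\n")
```

This is roughly the approach projects like Alpaca and Vicuna took in early 2023, fine-tuning smaller open models on outputs from larger proprietary ones.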

McCarty Carino: So what’s your read on kind of how big of a throttle this could be to some of the most optimistic predictions about the expansion of AI?

Oremus: A lot depends on how well companies are able to find ways to make these models smaller without sacrificing too much quality. I think the biggest impact is the fact that they have to be focused on that right now instead of being focused fully on let’s address problems of misinformation, right? Let’s address the problem where it makes up stuff about people, sometimes damaging information about people. Certainly they’re focusing on those things, but they have to be even more focused on how to make it cheaper. And so I really think the biggest impact in the short term is that we’re not going to get the advances in quality that we might hope for, as the companies instead spend their energy trying to make these things more affordable to run.

Those powerful graphics processing units needed to train and run AI chatbots are so expensive in part because demand for them is sky-high.

And basically just one company — Nvidia — designs them, and one manufacturer — Taiwan Semiconductor Manufacturing Co. — builds them.

In May, we spoke with historian and author Chris Miller about the potential for another chip shortage from the AI boom.

That’s something Sam Altman, OpenAI’s chief executive, acknowledged in his recent appearance before Congress. He said he’d actually love it if people used ChatGPT less because “we don’t have enough GPUs.”

Of course, Nvidia, which sells those GPUs, is doing just fine. The company has seen its share price balloon by 163% just this year, and it’s now valued at almost $1 trillion.

Will Oremus and I also discussed how these AI systems incur not only high financial costs but environmental ones as well.

That’s a topic we explored in a little more depth with Sasha Luccioni, the climate lead for AI company Hugging Face. She noted that the process of training an early version of ChatGPT emitted roughly the same amount of carbon dioxide as a gas-powered car driving over 1 million miles.
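For scale, assuming an EPA-style average of roughly 400 grams of CO2 per mile for a gas-powered passenger car (our assumption for illustration, not Luccioni's figure), that comparison works out to about 400 metric tons of CO2.

```python
# Converting the "1 million miles" comparison into metric tons of CO2.
# The per-mile emissions figure is an assumed average, for illustration.

grams_co2_per_mile = 400   # approximate average for a gas-powered car
miles = 1_000_000          # figure cited above

metric_tons = grams_co2_per_mile * miles / 1_000_000  # grams per metric ton
print(f"~{metric_tons:,.0f} metric tons of CO2")      # ~400 metric tons
```

That figure is roughly consistent with published third-party estimates of the emissions from training GPT-3.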
