What it means for nations to have “AI sovereignty”
Mar 21, 2024

Venture capitalist Vinod Khosla says tailoring large language models to individual countries could strengthen their national security and cultural independence.

Imagine that you could walk into one of the world’s great libraries and leave with whatever you wanted — any book, map, photo or historical document — forever. No questions asked.

There is an argument that something like that is happening to the digital data of nations. In a lot of places, anyone can come along and scrape the internet for the valuable data that’s the backbone of artificial intelligence. But what if raw data generated in a particular country could be used to benefit not outside interests, but that country and its people?

Some nations have started building their own AI infrastructure to that end, aiming to secure “AI sovereignty.” And according to venture capitalist Vinod Khosla, the potential implications, and opportunities, are huge.

The following is an edited transcript of Khosla’s conversation with Marketplace’s Lily Jamali.

Vinod Khosla: These language models are trained in English, but there are 13 Indian scripts, and within those there are probably a couple of hundred languages or language variants. So the cultural context for these languages is different. We do think it deserves an effort to capture cultural context and nuances. In India, for instance, you don't speak only Hindi and you don't speak only English, you mix the two, what's sometimes called Hinglish. So those kinds of things have to be taken into account. Then you go to the other level: Will India rely on a technology that could be banned, like a U.S. model?

Lily Jamali: So you were just talking about the cultural context. There is a huge political overlay —

Khosla: Political and national security. So imagine India is buying oil [from] Iran, which it does. If there's an embargo on Iranian trade, is it possible that they can't get oil, or that they can't get AI models? So every country will need some level of national security independence in AI. And I think that's a healthy thing. Maybe it'll make the world more diversified and a little bit safer.

Jamali: Safer? Why do you say that?

Khosla: Because everybody can't be held hostage to just an American model. The Chinese are doing this for sure. But if there's a conflict between India and China, can India 100% predict what the U.S. will do? The U.S. may care more about Taiwan than about the relationship between India and China, for example.

Jamali: And can you explain why you think it is important for each country to have its own model?

Khosla: I'm not saying in India they'll only use the Indian model. They will use all sorts of models from all over the world, including open-source models. Now, on China, I have a philosophical view [that we are] competitors and enemies, and I take a somewhat hawkish view on China. The best way to protect ourselves is to be well-armed, to stay safe against China and avoid conflict through mutually assured destruction, so to speak. In countries like India or Japan, they'll use all sorts of models from everywhere in the world, including their own local models, depending upon the application or the context.

Jamali: As some of our listeners may know, you were very early to the AI trend, and we’d love to know what you think might come next. So what do you think?

Khosla: Here's what I would say. AI has surprised us in the last two years. But it took us 10 years to get to that ChatGPT moment, if you will. What has happened since is that a lot of resources have poured in, and that will accelerate development. But it also diversified the kinds of things we work on pretty dramatically. So I think we'll see a lot of progress. Some things are predictable, like systems getting much better at reasoning and logic, some of the things they get critiqued for. But then there'll be surprises that we can't predict.

Jamali: Although we may try.

Khosla: Other kinds of capabilities will show up in these systems. Reasoning is an obvious one. The embodied world, which generally means what happens in the physical world, mostly robotics, will see a lot of progress in the next five years. So think of logic and reasoning: rapid progress. Think of robotics and artificial intelligence: rapid progress. Think of diversity in the kinds of algorithms being used. They'll be really interesting, and probably not the ones people are generally expecting.

Jamali: “Diversity in the kinds of algorithms.” What kind of diversity are we talking about?

Khosla: If you take the human brain, sometimes we do pattern matching, and there's all kinds of emergent behavior that comes from that. And [large language models] are going to keep going. They may do everything, and we may reach AGI, or artificial general intelligence, just with LLMs. But it's possible there are other approaches, sometimes called neurosymbolic computing. Reasoning is symbolic computing: planning, being able to make long-term plans, things like that. We also do a lot of probabilistic thinking: this might happen or that might happen, what's the likelihood of this happening? Those approaches will start to emerge. So those are just some examples. And of course, I'll be surprised.

More on this

Another person talking a lot about this is Jensen Huang, CEO of Nvidia, which designs industry-leading graphics processing units. This week, the company announced a collaboration with software and cloud company Oracle to “deliver sovereign AI solutions to customers around the world.”

Huang envisions AI factories that can run cloud services within a country’s borders. The pitch: Countries and organizations need to protect their most valuable data, and Oracle CEO Safra Catz said in a statement that “strengthening one’s digital sovereignty is key to making that happen.”

The team

Daisy Palacios Senior Producer
Daniel Shin Producer
Jesús Alvarado Associate Producer
Rosie Hughes Assistant Producer