“Superagency” explores how AI can enhance human potential to new heights
Jan 28, 2025

LinkedIn co-founder and AI investor Reid Hoffman argues we shouldn’t be afraid of AI or its disruptive capabilities in his new book, “Superagency: What Could Possibly Go Right with Our AI Future?”

There’s no shortage of bullish voices on artificial intelligence among the titans of tech. But even many of the leading evangelists, in addition to prevailing pop culture narratives, tend to strike a note of impending doom when envisioning the future of the technology.

Reid Hoffman wants us to consider the alternative. He co-founded LinkedIn and was a founding investor in and board member of OpenAI before branching into other ventures, like Inflection AI. His new book, “Superagency: What Could Possibly Go Right with Our AI Future?” explores those alternatives.

Marketplace’s Meghan McCarty Carino spoke with Hoffman about what he means by the idea of “superagency.”

The following is an edited transcript of their conversation.

Reid Hoffman: Well, what I mean by “superagency” is that elevation of human agency that we get when we get new superpowers from technology, and in particular, when millions of us get that new superpower at the same time. And so, for example, you know, when you got a car, well, that increased your mobility. You could drive places. But when other people got cars too, doctors could come do home visits, moving became a lot easier, just a whole set of things, and that’s what superagency is.

Meghan McCarty Carino: And the sort of second part, what could possibly go right with our AI future? I mean, why did you want to write about this now?

Hoffman: Well, I find that most of the discourse around AI is motivated by fear, uncertainty, what are all the things that could go wrong? And you never get the future you want by prohibiting the ones you don’t. And so that imagining of like, what would we want, what could possibly go right? And actually, in fact, magically right, is the thing that we need to be putting more of our collective discourse and attention to.

McCarty Carino: Right, I mean, you cite surveys of Americans, recent surveys that show they’re quite pessimistic about AI. Why do you think that is?

Hoffman: Well, it’s not a new phenomenon, even if you go back to the printing press or electricity, or even, you know, more recently, mainframe computers. We forget this, but the dialogue at the time was the same kind of pessimism we’re seeing about AI, which is, you know, the technology is going to destroy human agency. It’s going to destroy human society. And it’s literally every massive technology, you know, cars, the steam engine, the entire history. And so I’m trying to remind people of that history to say, look, in every other time we’ve gotten to superagency, we’ve gotten to these superpowers, we’ve gotten to the modern society that is so much better to live in now than 50 or 100 years ago. The only real discussion is, is this time uniquely different? And of course, we have to address those questions and navigate them. And my hope, and this is the optimist in me, is that this time is uniquely different because we can learn from the earlier challenges and navigate the transitions, which are always painful, better than before.

McCarty Carino: So tell me more about your vision for superagency, for the kind of, you know, the positive case for how AI could kind of supercharge humankind, and what kinds of possibilities are on the horizon?

Hoffman: I can already see with current technology, no invention required, how to get a quality medical assistant on every single smartphone on the planet. And that could be, you know, 11 p.m., you have a concern about your child, your partner, your grandparent, you know, any of these things, and you can engage. And you don’t even have to have health insurance in order to do that. And that’s like one. Another one is a tutor on every subject for every age. And so, for example, I myself have been using ChatGPT in order to better understand quantum mechanics, because I’m really curious about quantum computing. It’s also kind of useful already today in all of these everyday ways: these are the ingredients in my refrigerator, what can I make? Hey, how do I figure out how to debug this problem with my kitchen appliance? And what we’re going to see, I think, this year and next year is a set of tools growing also for being able to kind of work more effectively. I already use a variety of AI agents to work more effectively. You know, for example, in writing “Superagency,” I went to various AI agents and said, from a historian of technology perspective, give me the critical analysis of what I’m writing. And of course, most of what comes back is, well, you’re writing this book about AI, so you’re being overly simplistic about the steam engine. And I was like, yeah, but I’m just trying to make my point, right. It’s not that I’m wrong. It’s just that, you know, a historian would write a 400-page book or an 800-page book on just the steam engine, as opposed to four pages.

McCarty Carino: One of the reactions that comes up a lot in our current moment and throughout history, when confronted with a major technological change, is to kind of revert to the precautionary principle, you know, the idea that we need to have some top-down control of this new technology until it is proven safe beyond a reasonable doubt. What is problematic about that approach to you?

Hoffman: Well, part of the reason I like the car analogy is if you kind of start and you say, well, I’m not going to get in the car and go to the supermarket until you prove that there is zero chance of anything wrong before I get there, right? You’re never going to get to the supermarket. You know, driving is one of the most dangerous things that most of us do in our daily lives. So the precautionary principle can be taken way too far. You have to kind of balance it to, you know, kind of, what are the probabilities? And people say, well, you know, the AI, [according to] all the doomers, you know, could be killer robots. And so I worry about this being added as an existential risk to humanity. And actually, in fact, when I look at AI, I say, well, you know, take the precautionary principle, sure, there are issues around potential, you know, kind of Terminator killer robots, whether by, you know, very remote possibility, by accident or by humans doing things. But when I look at other existential risks like pandemics, the only way that I can think to solve them, ones that are much more dangerous and much more disastrous than the COVID-19 one that we just went through, is AI: to detect them, to analyze them, to create vaccines and cures. AI is the only thing that’s going to operate on that kind of time scale. And so when you say precautionary principle against the Terminator robots, you’re also applying the precautionary principle to increase the chances that we have an existential risk with a pandemic. And so you have to be smart about how you apply it.

McCarty Carino: So how are you thinking about the role of regulation as we move forward into this AI future?

Hoffman: So, you know, ultimately, in all technologies, you do get to some regulations, because you realize that there are some things that are kind of very important to do. Now, right now, when we’re at our very earliest beginning, like if you’d said, hey, a car, you can’t launch it until there’s zero, you know, fatalities, zero issues, then we wouldn’t have any cars. We wouldn’t have suburbs. The impact upon the expansion of all of the middle class would have been, you know, severe. You know, similarly, you say, well, we’ve got to be innovating to the future, and there will be some errors and some faults. Right now, let’s try to prevent the ones that are most catastrophic. So let’s try to make sure that, you know, rogue states are not overly empowered. Let’s make sure that terrorists can’t do things that are really bad. And let’s be very specific and limited on those. And on the others, let’s engage in, you know, kind of, call it dialogue and monitoring. So like, for example, one of the things that the, you know, Biden executive order called for, which I hope we preserve, is asking developers of large-scale AI models to make sure that they’re running safety teams and a red-teaming process to address what big problems might be, so that, you know, if one of these models runs into a problem, or if someone has a question, the government can contact the company and say, okay, show us your red team, you know, your kind of safety analysis and what kind of testing you’re doing for this, so we can make sure it’s up to snuff on the kinds of things that we’re specifically worried about. You insert the regulation in small, specific ways as you’re developing.

More on this

Here’s a quick note about the Biden administration executive order Reid Hoffman mentioned, which asked AI companies to share red-teaming and safety testing data, alongside a number of directives to federal agencies to prepare for AI. President Donald Trump repealed that executive order on his first day in office, as he had promised to, though exactly how much of it can or will be completely rolled back is still a bit unclear.

We asked Hoffman, who has been one of the most prominent Democratic donors in tech, whether the new political reality has changed how he thinks about any of the concepts in his book.

He said it just means the onus is even more on private industry to lead the way to a positive AI future.

And this week Hoffman himself announced a new $25 million investment in a company using AI to discover new cancer drugs.

The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer
Rosie Hughes, Assistant Producer