“Artificial intelligence” is now a household term, whether it’s powering driving directions, spotting tumors in cancer patients or fueling big discussions over ethics, bias, autonomous weapons and the future of work. But despite the fact that the first neural network was created in the late 1950s, much of what I just described has taken place over only about the last 10 years.
In his new book, “Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World,” New York Times tech correspondent Cade Metz writes about the history of AI and the corporate forces that have shaped it since the mid-2000s. He told me AI pioneer Geoffrey Hinton really rebranded neural networks as “deep learning,” and that happened just as a bunch of other factors were coming together. The following is an edited transcript of our conversation.
Cade Metz: Really, 50 years after this idea was first proposed, we finally had the two things that were needed to make it work. What we needed was all that data, and the internet gave us that. It gave us lots of photos, lots of text, lots of sounds that those neural networks can analyze. But we also needed the computer processing power needed to crunch all that data. And by 2010, we had both. And so as [Hinton] was rebranding the idea, we also had the technology needed to make it work.
Molly Wood: And then you have this period of 10 years where it seems like AI creeps into our life in ways that are pretty invisible. Talk about the life we’re living now, that is built on these discoveries and this work that we may not even realize.
Metz: Well, you see this in your daily lives today. When you power up your iPhone, and you speak a command into Siri, the way that Siri is able to recognize what you’re saying is because of a neural network. As time goes on, we’re going to see chatbots start to emerge — we’re seeing this now — and that’s driven by a neural network, as well as self-driving cars and other robotics that are coming to the fore. They all rely on this one idea.
Wood: And then, high-level, what starts to follow on from that? In some ways, it’s also a tale as old as time — everything is sort of all shiny promise, and then the problems, the dark side, starts to creep in.
Metz: What you had were very idealistic people like Geoff Hinton who really believed in this one idea. And then, once it started to work, those people were sucked into industry, some of the biggest companies on Earth — Google, Facebook and Amazon. And the aims of those companies, often driven by the profit motive, really clashed with the ideals of these researchers. And we’re seeing that in so many areas now, whether it’s bias against women and people of color baked into these systems, or it’s the use of this technology for autonomous weapons, which concerns a lot of people.
Wood: As you wrote this book, you chose to follow certain people, Geoff Hinton and Andrew Ng among them. How did you choose who to highlight?
Metz: Well, in part, it was very easy because there was such a small group of people who believed in this idea over the decades. And then, once the idea started to work, like I said, they moved into industry, so it became about the people who were snatched up, often paid millions of dollars by these companies to do this work in industry. Where I was lucky was that these were incredible people, each fascinating in their own particular way. It became a story about a tiny group that ended up having an enormous effect on daily lives, not only here in the U.S., but across the world.
Wood: There is this chapter in this ongoing conversation about bias. You write about prominent researchers of color who have since been pushed out, actually, of major companies like Google. When you were looking back, was that ever a part of your research, looking at the people of color who were also early pioneers in the field or the people who did some of this work or laid the groundwork but didn’t get bought up by Google?
Metz: Absolutely. There’s a great moment in the book, where several researchers — two in particular, Timnit Gebru and Margaret Mitchell — start to realize that this bias issue is there. And they start to call attention to it and eventually get the industry’s attention. They were both hired by Google to look into this particular problem and to deal with it. They created a team at the company to do this. But this also clashed with the aims of the company. Both of them have now, as you indicated, been ousted from the company, because of a clash over this very issue. These companies want to get this technology out. Their aim is to always move fast. And in a sense, as we start to realize these sorts of issues, the bias problem in particular, the companies have to slow down. And it’s created this moment where there is this clash, and the industry is not sure what it needs to do. We’re going to see this not only at Google, but many different companies over the months and the years to come.
Wood: I should point out that Timnit Gebru, who was fired from Google last year, said on Twitter that you may have left some Black pioneers out of your book. Are you worried that there’s this ecosystem now where even the storytelling, even choosing of heroes and the stories perpetuates these existing inequalities?
Metz: Well, I think that’s always worth considering. And that’s something that I have certainly learned in pulling this book together and in my daily coverage at The New York Times. You do have to step out of the popular narrative and look at what’s going on. And you have to make a real effort to call attention to everyone working in the field. It’s an enormous problem that I, as a journalist, need to look at, as well as people in the field.
Wood: Do you think it’s fair to say that the kind of top-line takeaway from your book is that the more valuable technology is, the more dangerous it is?
Metz: You know, I hate to make broad statements, but that’s generally correct. As the technology becomes more powerful and more pervasive, there are more things to worry about.
Related links: More insight from Molly Wood
The Financial Times has a piece about how this version of AI — neural networks and deep learning — came to be the sudden winners in artificial intelligence development. These mathematicians and engineers are the mavericks Metz speaks of: the ones who believed for a really long time that computers could actually learn from massive amounts of data, draw inferences and conclusions and think, kind of. That’s the big leap this technology brought about. Previously, artificial intelligence research had assumed that everything would have to be programmed with specific rules and instructions, until the combination of tons of data and computing power that Metz described came along and turned computers into little toddler brains that suck up everything around them and start having ideas.
However, I agree with the FT’s assessment that the question of harm is a little underexplored in the book. Metz’s chapter on bias includes a lot of the people we’ve talked to on the subject, but does little to lay out what might happen in a world where biased AI makes decisions about medical care, hiring or credit checks. So, while it’s a good read — full of interesting details that help explain how we got to where we are — it’s probably not the only thing you should read on the topic.
Stanford’s 2021 AI index and a write-up about it note that the last decade has been huge for AI. The last year saw a huge increase in private investment in the field, much of it going into medical and health care applications and drug discovery. And while there have been some specific efforts to improve diversity in the field, diversity remains quite low.