Google’s “bold and responsible” approach to AI
Google revealed a slew of new products this week at its annual developer conference, I/O. But it was artificial intelligence that stole the show, from new search integrations and updates to its Bard chatbot to an automatic translation dubbing service.
Google is clearly going big on AI as it tries to fend off competition from Microsoft and OpenAI. It’s part of a strategy to be simultaneously bold and responsible, says James Manyika, senior vice president of technology and society at the internet search leader.
Marketplace’s Meghan McCarty Carino spoke with Manyika about what that “bold” and “responsible” stance means in practice. The following is an edited transcript of their conversation.
James Manyika: We are trying to be bold, meaning we’re trying to focus on what are the most impactful ways that AI can benefit people, businesses [and] society. We also want to take the responsible part pretty seriously, partly because, you know, AI is still an emerging technology. And as such, we need to be very mindful about the risks, the challenges, the complexities that come with it. Those two ideas — [the] idea of being bold and responsible — while they may sound like they’re in tension, we actually think we can embrace both of those and make productive use of that tension. That’s what we’re trying to do.
Meghan McCarty Carino: I mean, this is a space where there is such a striking divergence in views, even among, I would say, leaders in the field. You know, you have people imagining amazing medical breakthroughs, you know, rising standards of living on one hand, and then others literally warning about destroying civilization. I mean, how are you thinking about balancing the risks and the benefits of these tools?
Manyika: The benefits are, in fact, something we’re very excited about. But we have to balance that by thinking about the risks, the challenges and the responsibility. I mean, let me describe some of the things we are paying particular attention to.
McCarty Carino: Yeah, what keeps you up at night?
Manyika: Yeah, so we think a lot about several kinds of risks and challenges. First are the risks that have to do with the outputs from these systems. We know that they’re not always going to be accurate. We know that sometimes there can be toxic and biased outputs from the system. So we pay a lot of attention to addressing that, and we’re doing a lot of research on it. Second, we also think a lot about the possibility of misuse, because some of these same technologies can be turned to harmful ends. Take something that we actually talked about at I/O, the universal translator, which is an incredibly useful technology that allows dubbing and helps learners hear speech across different languages, translate and so forth, which is incredibly helpful. But the same technology can be misused to create deepfakes. So we pay a lot of attention to putting guardrails in place, to making sure that it’s only available to authorized partners. The other thing that we also pay a lot of attention to is the possibility that, in fact, there may be second-order effects of these technologies that we have not yet fully understood collectively as a society.
McCarty Carino: As you noted, Google made several announcements about new AI developments and tools at the I/O conference. What are you most excited about?
Manyika: Oh, there’s so much, Meghan. You probably saw us show how we’re building these AI capabilities into the products that people use every day, whether it’s Maps or Google Translate or Lens. I’m also quite excited about some of the newer capabilities we’re bringing to things like Workspace. I don’t know if you saw one of the products we announced, something called Tailwind, which I’m very excited about. It’s a way to organize my own information, notes that I’ve made for myself, and use it to write, to do research and compose and so forth. Very exciting. Then I’m also excited about Bard. We’re now taking Bard to, you know, 180 countries, and we’re on a path to have it work in 40 languages. We’re building ways for it to be able to use other tools, whether it’s Google Docs, Google email, Gmail, even other third-party products. We announced some collaborative work we’re doing with Adobe, and the ability to call Wolfram [a computational engine]. That ability for Bard to connect with other systems and other tools makes it incredibly helpful.
McCarty Carino: One intriguing development that you talked about is the idea of AI image watermarking. Tell me more about that.
Manyika: Well, you know, one of the things that’s so important, particularly in this period of generative AI where we’re able to create so much content, is the ability to understand and be able to trust that information. So we’ve been doing a lot of work on watermarking so that at least, you know, we can provide some context. So for example, soon you’ll be able to see something called About [this] image, where if you look at an image, there’ll be enough context provided to you so you can know whether this was generated by an AI, where you might have seen it, its history, where it’s come from. And then in addition, we are working on watermarking technology so that you’ll be able to tell when something’s been generated by an AI system. We think this is very important to give people confidence and the ability to assess, evaluate and have context when they look at generated material on the internet.
McCarty Carino: Right. The spread of misinformation and disinformation is one of the biggest concerns about these generative AI tools. We have an election year coming up in the U.S. Do you think that tech companies have a responsibility to build tools that help to counteract some of these harms that their other technology kind of introduces?
Manyika: Misinformation is very much on our minds. And that’s why [we’re working on] these innovations like watermarking, and it’s also the reason why we try to be very thoughtful about the idea that just because we can doesn’t mean we should. We ask ourselves, should we do this? Should we put this out into the world? Have we done everything we can to make this safe? And are we being as responsible as we can be? But I think at the end of the day, Meghan, this is going to have to be a collective effort. It’s not enough for just technology companies to focus on this, because there are others who might use these systems, whether they’re organizations, individuals or anyone else. I think governments are going to need to be involved. This is going to be very much a collective effort if we’re going to get this right.
Related links: More insight from Meghan McCarty Carino
The new, large language model-supercharged Google search engine hasn’t been released to the public yet, but you can see a preview of what it looks like at Wired.
It sounds sort of similar to what you get with Microsoft’s Bing, which incorporates the underlying language model of ChatGPT but connects it to internet search results. We tried that out with The Wall Street Journal’s Joanna Stern in March.
And earlier this week, we touched on the problem of AI-generated fakes and misinformation, which Google’s new watermarking system seeks to address. We talked to a philosopher about what all this means for our collective sense of truth and reality.