Artificial intelligence expands its terrain as legislation and ethics rules try to catch up
Mar 31, 2022

The cost of using AI tools has fallen, allowing smaller companies to develop and implement the technology.

Private investment in artificial intelligence more than doubled last year, according to Stanford University’s AI Index Report. It tracks and visualizes data about this tech, including business investment, software costs and research trends.

The project is funded by groups like Google, research lab OpenAI and grantmaking foundation Open Philanthropy.

And it’s co-chaired by Jack Clark, who’s also a co-founder of the AI safety and research company Anthropic. He explained why we’re seeing more of this technology now. The following is an edited transcript of our conversation.

Jack Clark: AI is getting much, much cheaper. If I wanted to train a computer vision system to identify objects in images, which is a very common task, that would cost me about $1,200 to do in 2017, if I was using Amazon or Google or Microsoft’s cloud. Today, that costs me $5. So it’s got way cheaper, and it will continue to get cheaper. And when something gets cheaper, there tends to be a lot more of it.

Kimberly Adams: When talking about AI, there’s often a focus on how large companies use the technology, but how is it being used by smaller companies these days?

Clark: We’re seeing smaller companies incorporate AI into their products via services. Amazon or Google will rent you a computer vision system that you can access. So if you’re a little consumer startup and you want an image identification capability, you’re essentially going to rent it from these larger companies. But, as I said, it’s gotten cheaper, so there are going to be a lot more of these smaller companies using AI, because it’s gone from a big bet of a company expense to a small line item that your financial officer won’t have too big of a problem with you spending on and integrating into your products.
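For a sense of what “renting” a vision capability looks like in practice, here is a minimal sketch using Google’s publicly documented Cloud Vision Python client as one example. It assumes a Google Cloud project with the Vision API enabled and credentials already configured, and exact class names can differ slightly between client library versions.

```python
# Minimal sketch of renting an image-labeling capability from a cloud provider,
# using the google-cloud-vision client as one example. Assumes
# GOOGLE_APPLICATION_CREDENTIALS points at a service-account key for a project
# with the Vision API enabled.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("product_photo.jpg", "rb") as f:       # any local image file
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)   # one API call, billed per image

for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")  # e.g. "Shoe: 0.97"
```

The point of Clark’s observation is in that single API call: object recognition becomes a metered line item rather than a model a startup has to train itself.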

Jack Clark (Courtesy James Cham)

Adams: You also tracked how much research on AI ethics was published in academic journals, which, according to your report, has really jumped. What are people studying specifically?

Clark: Well, they’re studying questions like: Why do certain AI systems display certain types of biases, and where do these biases come from? Do they come from the underlying data set? Do they come from the data set plus the algorithm you train on top of that data set? That’s one big swath of the issues. Another one is misuse. So if the system is behaving perfectly well, like a system that lets me predict, say, how to build interesting things using chemistry, then how do we stop someone from using that system to build really effective explosives?
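One concrete way researchers probe the “does it come from the data set” question is to audit how labels and demographic terms co-occur in the training data before any model is trained. The sketch below is a hypothetical, simplified version of that kind of audit; the term list and example data are made up for illustration.

```python
from collections import Counter, defaultdict

def cooccurrence_audit(examples, terms):
    """Count how often each tracked term appears under each label.

    examples: iterable of (text, label) pairs from a training set
    terms: words whose distribution across labels we want to inspect
    """
    counts = defaultdict(Counter)
    for text, label in examples:
        tokens = set(text.lower().split())
        for term in terms:
            if term in tokens:
                counts[term][label] += 1
    return counts

# Toy data: a skew like this in the training set is one place a model's
# bias can come from, before the algorithm is even chosen.
data = [
    ("she is a nurse", "nurse"),
    ("she is a nurse", "nurse"),
    ("he is a doctor", "doctor"),
    ("she is a doctor", "doctor"),
]
print(dict(cooccurrence_audit(data, terms=["she", "he"])))
```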

Adams: What do we know about how this research, across all of these topics, is actually being deployed by tech companies? I mean, you have Google, which I should mention is a major funder of this report, which had a very public falling out with one of its top ethicists, leading to her departure.

Clark: Yeah, this is one of the big challenges here. These technologies have become useful. They are being deployed. Google has incorporated a language model called BERT into its search engine. So has Microsoft. This language model has become one of the more significant things driving Google’s search engine. And obviously, that’s a huge business for Google. Yet, at the same time, we’ve seen people leave Google’s ethical AI team under highly controversial circumstances, with many of the reasons attributed to the fact that they highlighted some of the ethical issues inherent to these language models. And I think that gives you a sense of how the industry works today. We have systems that are really capable and are being deployed, but they have known problems. And so this tension isn’t going to go away. As the report shows, there’s tons more research being done in this area because I think companies are just waking up to the very real stakes of deploying this stuff.
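Google’s production ranking system isn’t public, but the BERT family of models Clark mentions is. A rough way to see what a masked language model does with the words around a query is to load a public checkpoint with the Hugging Face transformers library; this only illustrates the model class, not how search ranking actually uses it.

```python
from transformers import pipeline

# bert-base-uncased is a public research checkpoint, not Google's search model.
# The fill-mask task shows how BERT uses surrounding words to infer a missing one,
# which is the kind of contextual understanding search ranking builds on.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for guess in fill_mask("can you pick up medicine for someone at the [MASK]?"):
    print(f"{guess['token_str']:>10}  score={guess['score']:.3f}")
```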

Adams: How well has U.S. regulation kept up with the developments in the field of AI?

Clark: Year after year, U.S. legislators are bringing more and more bills about AI to the floor in Congress. And yet the number of those bills that actually pass is basically one a year. It is quite dispiriting. But there’s a silver lining here, which is that politicians do this when they know that their constituents care. And once constituents care about something enough, you do start to get meaningful legislation. It just takes a while. And when we look at the state level, you are seeing more states pass individual bills relating to AI, which are having a slightly higher success rate than what I’ve mentioned in Congress. The biggest deal is really going to be driven by Europe, where the European Commission has pushed through a big batch of AI legislation, which companies like Facebook, Microsoft and Google will be subject to in Europe. So I expect that how the companies respond to what happens in Europe will sort of guide U.S. legislators on the legislation we’ll eventually pass here.

Adams: Having looked at all of this data about AI, what do you think were the major takeaways?

Clark: My main takeaway is that AI has gone from an interesting thing for researchers to look at to something that affects all of us. It’s beginning to be deployed into the economy, legislators are thinking about it and multiple countries are doing huge amounts of research. So it’s going to be up to all of us to pay attention to this and to find ways to work on the very real issues that it causes so that we can get the benefits as a society from this technology.

Related links: More insight from Kimberly Adams

The full Stanford AI Index report is publicly available online. It includes a whole chapter on ethics and how existing biases could be amplified by language models we use on the web every day.

Clark said as the datasets feeding these models get bigger, in some cases the output gets even more biased or toxic.

In one example from the report, a language model trained on a data set of e-books returned a surprising amount of toxic text. It turned out there were some pretty explicit romance novels in the mix, which may have added different vocabulary from what you might want in the predictive text for a work email.
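A crude version of the kind of screen that can catch this before training is a per-document check on how much flagged vocabulary a text contains. Real pipelines use trained toxicity classifiers; the block list and threshold below are placeholders, just to show the shape of the idea.

```python
# Hypothetical pre-training corpus screen. BLOCKLIST is a toy placeholder,
# not a real lexicon; production systems use trained classifiers instead.
BLOCKLIST = {"damn", "hell"}
THRESHOLD = 0.05  # drop documents where more than 5% of words are flagged

def flagged_fraction(text):
    words = text.lower().split()
    return sum(w in BLOCKLIST for w in words) / max(len(words), 1)

corpus = [
    "a quiet study of nineteenth-century trade routes",
    "damn the torpedoes, damn the weather, and damn it all",
]
kept = [doc for doc in corpus if flagged_fraction(doc) <= THRESHOLD]
print(kept)  # only the first document survives the screen
```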

We also link to reporting from The Verge about how Google’s new language model, MUM, is designed to notice searches that might indicate someone is in a crisis. So if someone’s searching for terms likely to indicate they are contemplating suicide, rather than Google returning information that might help them harm themselves, the search engine can direct the user to resources like hotlines and support services.
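The Verge piece describes the behavior, not the implementation, so the sketch below is purely a toy illustration of the routing idea: a classifier (here, a stand-in phrase list) flags a query as crisis-related and the engine surfaces resources instead of ordinary results. It is not how MUM works internally.

```python
# Toy illustration of crisis-aware query routing; the phrase list is a stand-in
# for what would really be a trained classifier like the one described for MUM.
CRISIS_PHRASES = ("end my life", "hurt myself", "suicide")

def handle_query(query, search_fn):
    if any(phrase in query.lower() for phrase in CRISIS_PHRASES):
        # Surface support resources instead of ordinary results.
        return ["If you are in crisis, contact a local suicide-prevention hotline."]
    return search_fn(query)

print(handle_query("weather tomorrow", lambda q: [f"ordinary results for {q!r}"]))
print(handle_query("I want to end my life", lambda q: []))
```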

And The Guardian has a story this week about an AI system that beat eight champion players in a modified bridge tournament. Even with the tweaks, the win is a pretty big deal, since bridge relies on communication among players, who all have incomplete information and have to react to other players’ decisions.

Which gets the technology a lot closer to how humans make decisions, and win at cards.

The team

Daniel Shin, Producer
Jesús Alvarado, Associate Producer