What do generative AI and social media have in common? A lack of regulation.
Oct 1, 2024

Teens seem to have eagerly adopted artificial intelligence tools, often without their parents’ involvement. Nathan Sanders of Harvard’s Berkman Klein Center for Internet and Society says the online hazards that reward social media firms, like data harvesting, misinformation and divisive content, could prove lucrative for the AI industry too.

This week, we’re talking about how teenagers are using artificial intelligence tools like chatbots and image generators, often without the knowledge of their parents and teachers, according to a report from the nonprofit group Common Sense Media.

Monday we heard about that research from Jim Steyer, the organization’s founder and CEO. Now we want to home in on a specific piece of what he said: “If you look back at the advent of social media about 20 years ago, we pretty much blew the regulatory side of that, but also the educating teachers and parents part of that. And we left kids on their own.”

So we called up Nathan Sanders, an affiliate of the Berkman Klein Center for Internet and Society at Harvard who has written about the overlapping risks of AI and social media. The following is an edited transcript of his conversation with Marketplace’s Meghan McCarty Carino.

Nathan Sanders: We recognize that the monetization strategies social media companies have used, strategies that have incentivized them to disregard consumer privacy and to exploit all of us for the sake of advertising revenue, could apply to AI as well, as these tools start to interact with us, as they’re potentially used to harvest information about us, and as the companies developing AI increasingly need to monetize their platforms. We also thought about the role social media plays in the viral distribution of content, and how that rewards users and platforms for sharing content that is outrageous or provokes an emotional response, without necessarily rewarding factual accuracy or informativeness. And we’re concerned that the same risks apply to AI.

Meghan McCarty Carino: When we think about the specific effects that social media has had on young people, what concerns you most about the overlap with the proliferation of AI?

Sanders: Well, the first thing I would say is that I do think both social media and AI pose risks to young people. Lots of young people have almost unfettered access to harmful content on the internet. I think a lot of schools have actually done great work building media literacy into their curricula and talking about some of those risks of social media and online misinformation. And I hope we can take a similar approach to educating kids about the right way to interact with AI tools.

McCarty Carino: This week, we are reporting on a Common Sense Media report that shows the extent to which middle and high school students are already using generative AI for schoolwork and for fun, often without the knowledge or understanding of their parents or teachers. What do you make of that?

Sanders: I think it was a very interesting report with a lot of really helpful data. My first reaction to those stats about parents’ awareness is not to be surprised. If anything, a survey like this, which asks people to self-report their interactions with these tools, is almost surely understating just how often all of us are now interacting with generative AI technologies, because they’ve been rapidly integrated into tools we use every day. AI tools are built into search engines, and they’re integrated into video platforms like YouTube for captioning videos. As the report calls out, they’re integrated with a lot of social media apps that kids use, like Facebook, Instagram and Snapchat. And not all of these are necessarily bad use cases. I think there’s a lot of benefit that comes from some of those integrations, but I don’t think anyone should be surprised that kids are encountering AI technologies at a very high rate.

McCarty Carino: It certainly feels like we are at a crossroads right now in terms of alarm, and of efforts to address some of the harms that social media is causing for young people especially. What do you make of those efforts and how they might apply to AI?

Sanders: Well, I really hope that as a society we respond in at least two different areas. I do hope that we have programs that educate kids about healthy and effective ways to use new technologies, including AI. But at a macro scale, I also really think we need government action to guide the development and use of AI technologies in a way that’s genuinely beneficial for all of us. The U.S. in particular, and Congress at the federal level, has effectively not taken action to regulate the social media space, and that has created lots of harms over two decades now. But it’s not too late to take action on AI, and there are really impactful things I think government could be doing to shape the development and use of these tools so that they work for all of us and not against us.

McCarty Carino: We’ve been comparing social media and AI as if they were separate technologies, but in many cases the same companies developing artificial intelligence tools are also embedding them into social media platforms. What is the effect of AI as a force multiplier for the harms that we’ve seen from social media?

Sanders: My first thought is that, clearly, many of the Big Tech companies that have profited from some of the harms created by social media see AI as very valuable to them. We see some of those same social media companies being real technical innovators and leaders in the development of AI, and they’re clearly doing it because they see an opportunity for a return on that investment. We see rapid integration of AI technologies into social media platforms, and we should recognize those are platforms that exist to profit from our use. Those are platforms that enrich corporations by, in many cases, harvesting our data and attracting advertisers with the idea that they may know more about us than we know about ourselves, and can therefore persuade us to buy products. It seems clear that those companies think they can do better on that business model using AI, and I think we should view that with concern.

McCarty Carino: Given the situation with social media that you have examined so deeply, do you have hope that AI will play out any differently?

Sanders: I definitely have hope. Not necessarily perfect confidence, but definitely hope. First of all, I think there are really beneficial uses of AI coming into practice. In the educational context, I know many parents don’t speak the same language their kids use at school, and they rely on neural machine translation technologies, products like Google Translate, to communicate with school staff and teachers. That’s, I think, a wonderful use case. It’s not a perfect technology, but it’s pretty good, and I think it really helps there. We talked about automatic video captioning on YouTube earlier; what a great accessibility tool, letting people access content and information they couldn’t otherwise. I think those are really beneficial use cases that should be improved as much as possible, but also embraced and encouraged.

McCarty Carino: It certainly feels like there is a bit more skepticism at baseline around AI than there was at the dawn of the social media age, when I think many of the harms of that technology were harder to imagine.

Sanders: I think that’s right. I think we should definitely learn the lessons of our past two decades of experience with social media, so we can make better decisions and act faster in governing the development and use of this newer technology, AI. That early optimism about social media also reflected a legitimate recognition that new technologies bring new capabilities and can be applied to beneficial uses. It’s our job as a society, and the job of the policymakers who represent us, to steer the technology in that direction as best we can, and I do think there are specific, practical things we can do to achieve that.

More on this

We mentioned how it feels like we’re at a crossroads for action on social media. Of course, U.S. Surgeon General Vivek Murthy has issued an advisory on social media’s effects on kids, and in June he called for a warning label on the platforms. A growing number of states, including Utah and Louisiana, have passed bills requiring parental consent or age verification for minors on social media, though many of those laws are now tied up in legal challenges.

And the Kids Online Safety Act, which would hold social media platforms liable for certain harms to young people, passed the Senate this summer. A version is now making its way through the House of Representatives, but it too faces wide-ranging opposition, from Big Tech to civil rights groups like the American Civil Liberties Union.

Meanwhile, California Gov. Gavin Newsom vetoed a sweeping bill to regulate AI in the state Sunday, saying that although it was “well-intentioned,” he found it overly broad and onerous for the nascent industry.


The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer
Rosie Hughes, Assistant Producer