Aug 28, 2019

More schools are analyzing students’ online lives in the name of safety

Faiza Patel of NYU explains how some districts are using computer programs to flag potential threats on social media and in email.

As a new term begins, a growing number of schools will be scouring students’ social media posts and emails for warning signs that they may pose a safety threat.

Taking this security step comes at a cost to privacy — and not just for the students. Some of these platforms also monitor the activities of people who live near schools. As the technology is adopted in more schools, there have been reports of more students getting flagged incorrectly.

Marketplace’s Jed Kim spoke with Faiza Patel, who’s co-director of the Liberty and National Security Program at the Brennan Center for Justice at New York University’s School of Law. Kim asked her how the monitoring works. The following is an edited transcript of their conversation.

Faiza Patel: It’s computer programs, and they flag words that have been identified as potentially indicating a threat. So words like “shoot” or “shooter,” “gun,” “kill.” Words of that nature are flagged by programs that scan social media posts.
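The mechanism Patel describes can be sketched in a few lines. The word list and the prefix-matching rule below are assumptions for illustration only, not the actual logic of any vendor’s product:

```python
import re

# Hypothetical watchlist of the kind Patel describes; real products
# use much larger, proprietary lists.
THREAT_WORDS = {"shoot", "shooter", "gun", "kill"}

def flag_post(text: str) -> set[str]:
    """Return the watchlist words a post trips, matching word prefixes
    so that, e.g., "shooting" trips "shoot"."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return {word for word in THREAT_WORDS
            for tok in tokens if tok.startswith(word)}

# Both posts are harmless, yet both get flagged: the false-positive
# problem discussed later in the interview.
print(flag_post("Sign up for the basketball shooting clinic!"))  # {'shoot'}
print(sorted(flag_post("Loved the movie Shooter")))  # ['shoot', 'shooter']
```

A sketch this naive has no notion of context or intent, which is exactly why posts about a movie or a sports clinic can be treated the same as a threat.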

Jed Kim: It’s one thing to flag these alerts and send them on to the police, but how seriously is law enforcement taking this?

Patel: This is a relatively nascent thing. So I think probably a few hundred school districts are doing this, and we have 13,000 school districts in the United States. I do think that police do take these things seriously. So there’s one example that I’ve seen of a student who posted on Twitter that he was going to shoot a professor for scheduling an early morning exam. It seems to me to be an obvious joke, and the kid was arrested. Things like this do happen. But it’s not just about law enforcement. If you get flagged to your school district and to your principal as somebody who’s a troublemaker, who’s potentially a school shooter or at risk of other kinds of violence, that can have a lot of consequences for kids. It can lead to investigations. Your parents are probably brought into the mix. Even a very harmless tweet can lead to consequences.

Kim: Do [you] have any sense of how effective these services are at reducing violence in schools?

Patel: Not at all. The companies that market these services will say, “We’ve identified many instances where we’ve prevented harm or violence from happening,” but none of those stories have ever been audited. And there’s certainly no systematic research indicating that these tools are effective. In other contexts where people have tried social media monitoring to identify threats, it has been found to be ineffective.

The Department of Homeland Security, for example, ran five pilot projects that we know of to vet visa applicants, people looking to come to the United States for asylum or on fiancé visas. And their own assessment was that the social media monitoring really wasn’t effective at identifying threats. That’s not the school system, but that’s another context in which social media monitoring has been tested. Frankly, it’s not surprising that it’s hard to figure out when somebody is joking or when somebody is serious. Social media really amplifies all of that, particularly when you’re talking about kids. Kids often talk in slang, they have pop culture references, their speech is often coded in ways that adults and certainly algorithms don’t understand. For example, in Florida, where they’ve deployed many of these systems, Social Sentinel, one of the market leaders in this space, had its algorithm flag posts about the movie “Shooter.” It flagged posts about a basketball shooting clinic, because it’s looking for the word “shoot.” These are the kinds of things that mostly get flagged by these services.

Kim: Why do you think school districts have begun using the services?

Patel: I think a couple of things. One is there’s obviously a huge amount of pressure on schools from students and from parents to do something about school shootings, which obviously strike at our core emotions. We all want safety, and safety for our children is one of the main things that parents want, so there’s a huge amount of pressure on schools to do this. And some of the responses that people have proposed, such as gun control, for example, are often taken off the table because of political reality. So schools are searching for something to do.

And these social media monitoring companies come along, and they’re like, “We can help you; we can identify the next school shooter. And by the way, we can also help you identify bullying and kids who are suicidal.” It seems like a good thing to do. And the cost is low, so that’s probably tempting for schools as well. It generally works out to around $1.50 or $2 a kid per year. There are all these incentives that push schools toward this. I will say this, though: not all schools have adopted this by any means. And when we’ve talked to school officials, they have expressed a lot of concern about normalizing this system and about creating a culture of constant surveillance of kids.

Related links: more insight from Jed Kim

Education Week has an excellent article detailing several monitoring services and the concerns they raise. It’s fascinating and terrifying to see the evolution of one service that over the course of just a few years went from protecting kids from seeing obscene materials to monitoring emails and sending parents weekly reports of what their kids are browsing on the web. A quote from one company’s CEO: “Privacy went out the window in the last five years. We’re a part of that. For the good of society, for protecting kids.”

Gallup has been tracking anxiety over school safety for decades. Concerns over sending kids to school tend to spike after school shootings. This year’s numbers show parents’ worry levels are similar to what they were after the shootings in Parkland, Florida, and Newtown, Connecticut. Gallup says it could be a reaction to two recent mass shootings. Or it could just be the new normal. On a slightly happier note, children’s fears over going to school this year have dropped.

The site Government Technology reports on the many safety measures being undertaken by one North Carolina school district. They include hiring the district’s first full-time director of school safety, requiring more secure entryways and the locking of classroom doors while classes are in session. Apparently, that’s a national trend. My favorite, though, is that the district is adopting the extended-arm stop signs that come 6 feet out from the sides of school buses. Those are aimed at keeping drivers from passing stopped buses, which is illegal and a total jerk move.

The team

Molly Wood Host
Eve Troeh Senior Producer
Stephanie Hughes Producer
