This month, Facebook, Twitter, TikTok and Google all signed on to new commitments to address online abuse and women’s safety on the web. More than a third of women report personal experiences with online violence.
The companies say they will give users more control over who can interact with their posts and make it easier to report harassment and abuse. They’ll also test out new tools, including one that would give users the chance to put the brakes on a video that unexpectedly goes viral.
“We recognize that each of these platforms have, you know, they’re different services, they have different infrastructure, they have different content moderation policies,” said Emily Sharpe, director of policy at the World Wide Web Foundation, which worked with the tech firms to come up with the 11 tool prototypes. “The point is that they need to do better by women on their platforms.”
Facebook, Twitter and Google didn’t make specific pledges about when they would be testing the new tools, but TikTok did.
“Actually, TikTok has said that they will start testing and implementing these prototypes as early as this year,” Sharpe said. “So they have said they will be making progress over the next few months, which is terrific.”
The solutions were developed around the needs of women who face harassment online, represented by five semi-fictional personas, such as a character named Yvonne. “She is a Black female politician in the United Kingdom. She and her staff have received tremendous online abuse to the extent that it causes them to moderate their offline activity,” Sharpe said. “So when we think about the types of prototypes that would be developed for her, we want to take into account her identity, her intersecting identities.”
Facebook says one of the prototype tools, a way for users to track the status of their cases, is very similar to what it already does. And the company says it doesn’t anticipate needing to make many changes to the platform to meet its new commitments.
“I’m looking forward to taking these prototypes and spending some time comparing what we already have in place, and then seeing where the gaps are. And I think most of our gaps are going to be communications,” said Cindy Southworth, Facebook’s global head of women’s safety.
But some activists say the gaps are bigger than that. Nina Jankowicz, a global fellow at the Wilson Center and author of the upcoming book, “How to Be a Woman Online,” is one of hundreds of scholars and activists who signed a letter organized by the Web Foundation urging the four tech companies to follow through on their pledges.
“It’s a lot of pretty words, but I think TikTok is pretty well equipped to enact these things,” Jankowicz said. “Twitter has been making a lot of steps in the right direction recently. Facebook has been receptive to these conversations. But again, I haven’t seen a lot of material changes in how the infrastructure of their platform works. And Google, there’s a ton of misogynistic abuse on YouTube. So I’d like to see a lot more support from Facebook and Google in particular.”
Sharpe, with the Web Foundation, said that while none of the big tech companies are doing enough at the moment, getting them to focus on this issue is a victory in itself.
“And so we expect that this will continue going forward,” she said. “But again, we will be watching them, and many hundreds and thousands of women around the world will also be watching.”
Related Links: More insight from Kimberly Adams
To get a sense of just how bad abuse is for women online, consider Amnesty International’s Troll Patrol project, which used artificial intelligence and volunteers to survey tweets sent to journalists and politicians. It found the subjects received problematic or abusive tweets roughly once every 30 seconds. The abuse was even worse for women of color, and particularly Black women, who were 84% more likely than white women to be the subject of abusive or problematic tweets.
Just before these new commitments were announced, Facebook rolled out its new Women’s Safety Hub, which centralizes all of its tools for preventing and reporting harassment in one place.
Twitter told us it doesn’t have a way for users to track reports of abuse all in one place, but it is working on a “safe mode” that would automatically mute accounts that might be using insults, so users won’t have to see abusive content targeted at them.
TikTok, like Twitter, said it has tools that nudge users who might be posting something offensive and ask them to reconsider.