How to imagine the worst possible use of your product, and then stop it from happening
Mar 26, 2021

Slack allowed users to connect with anyone, even if they don't work together. One flaw: Email invites could be used to send harmful messages.

Slack rolled out a new feature this week to let people connect with anyone, even if they don’t work in the same company. One flaw became immediately obvious: Anyone with your email address could send you a connection invite and a message that could be harassing or harmful. Slack promptly changed the feature, and invites no longer contain customized messages.

But it made us wonder: How can companies do a better job anticipating how features could be harmful and fix them before they get rolled out? It’s a topic for “Quality Assurance,” where we take a second look at a big tech story. I spoke with Sarah Kunst, managing director at Cleo Capital. The following is an edited transcript of our conversation.

Sarah Kunst, managing director of Cleo Capital (Photo courtesy of Cleo Capital)

Sarah Kunst: Companies need diverse and robust trust and safety teams to help think through these issues, to help say, “What are vectors of abuse, and how do we proactively think about lessening those risks?” Particularly on a platform like Slack, where you actually can’t block people. The question for Slack, and for the larger tech world, is: How do you think about these things before you roll them out? That thoughtfulness, building with empathy, is something that the entire industry really needs to take more seriously.

Molly Wood: Right. And that’s why I wanted to talk to you specifically, because you are empowering entrepreneurs to build businesses. And I wonder how you start to build empathy into the process of creating a company, or even just plan for the worst possible use of your product.

Kunst: Yeah, well luckily, as my therapist would tell you, I am a very good catastrophic thinker. It’s one of those things where you have to be your own risk manager and say, “What is the absolute worst outcome that could happen here? How could this be hurtful? How could this be misused? How could this be abused?” And some of it is just baking empathy into your product team, into your engineering team, into your sales team, so they think about these things. And some of it is seeing what works and what doesn’t, really talking to people to understand their experiences and where things have gone wrong, and then going from there.

Wood: Do you think every company at this point, almost no matter what it’s doing, but particularly if it’s connecting people in any way, needs to realize that everything is content moderation?

Kunst: Yes. I mean, everything is content moderation. How we speak to each other … if you’ve ever been at a bar close to closing time, and somebody mouths off to a waitress and gets kicked out by a bouncer, that is [real-life] content moderation. The world is full of content moderation. You can’t say “bomb” on a plane. It is not a freedom of speech issue. Freedom of speech means the government can’t arrest you for stating your opinion. It doesn’t mean that Slack has to allow you to massively spam or harass tons of people every day.

Wood: You know, if companies make this part of their planning process, and VCs make this part of their due diligence, what is going to happen when the worst-case scenario planning includes, “In order to avoid the worst-case scenario, we’re going to have to make less money”?

Kunst: You know, I don’t know if I believe that there’s always that trade-off. It’s kind of like climate change. We think it’s so expensive to decarbonize, so expensive to divest, so expensive to switch to electric cars, whatever. But look at the long-term cost. And so I think the challenge here for companies and investors is not to ask, “Hey, is it going to be expensive tomorrow?” It’s to ask, “Hey, can we afford not to do this five years out, 10 years out? What does this mean for our platform long term?” And for the most part, I don’t think it’s that companies can’t afford to do this. I think it’s that they can’t afford not to.

Wood: There is, of course, a question of representation and your ability to do worst-case scenario planning when you don’t have a team that has ever experienced a worst-case scenario. And this seems to be yet another example of how important that is.

Kunst: Yeah. And I haven’t looked at their latest diversity stats, but I think that Slack has a relatively diverse team overall. The question, though, is whether the people doing your worst-case scenario planning are people who understand the specific threat vectors. A story I always like to tell is that, as a Black woman, I got a sunburn once. I was in my early 20s and had never had one before. I thought I was being poisoned. It was a little tiny case of sun poisoning, but I had no idea what it felt like. Every time I saw friends get sunburned, I couldn’t imagine that it actually hurt, because they did it all the time. And so literally, that one experience, like 12 years ago now, has completely changed what I think anytime I see somebody getting red. I’m like, “You need some sunscreen.”

And it’s not that I don’t have empathy, and it’s not that they don’t care about skin care. I just hadn’t experienced that. So when you’re building these teams, when you’re building product teams, when you’re doing strategy, when you’re putting something through a trust-and-safety review, are the people on your teams the people most likely to have experienced the downsides of these issues? Really baking empathy into your team, and making sure that you have people around the table from a huge variety of experiences who can help you understand that, yes, sunburns really hurt, is so important. And I think this is a great example of it. But it’s also a call to the wider tech industry: If a company that has historically tried pretty hard to do the right thing can have blind spots like this, then you have to assume your company does, too. How do you proactively guard against them?

Related links: More insight from Molly Wood

Speaking of anticipating problems, or trying to clean up messes after the fact: The CEOs of Facebook, Google and Twitter were on Capitol Hill on Thursday for a hearing on disinformation, hate speech and, of course, Section 230. On the one hand, these hearings are starting to feel like a “Groundhog Day” of lawmakers posturing, bringing up pet peeves, seeming like they don’t understand and somehow still expecting that if they shout “Yes or no?” enough times at the CEOs, they’ll suddenly snap to and change everything about their content moderation strategies. But I really don’t want to be nihilistic, actually.

All of the above is kind of true. But with each passing hearing, lawmakers seem to home in a little more on the issues at stake here: the business model of engagement, the misinformation loophole that advertising can represent and, crucially, the algorithms that amplify misinformation and content, giving them more reach and more power to radicalize, confuse and misinform. And if I’m being honest, the solutions really aren’t easy, though they are easier than these CEOs sometimes make them seem. Hard-and-fast rules about speech are problematic and will harm people they’re not intended to harm; they always have.

But all that said, it really is time to act. Maybe some rules about the fact that Facebook is apparently auto-generating pages for militia organizations, according to a report this month from the Tech Transparency Project, and that its algorithms continue to recommend militia groups and other extreme content. There could be a rule, somehow, requiring transparency about how and why some content is promoted. Or an actual federal privacy law that prescribes how ads can be targeted and how people can opt out of targeting. Or antitrust laws that make it so two companies alone don’t control the entire digital advertising ecosystem.

Literally anything that isn’t just Mark Zuckerberg continuing to be incapable of a normal answer to a question, lawmakers continuing to offensively mispronounce Sundar Pichai’s name and Jack Dorsey continuing to insist that he’s thinking about things when he clearly is not actually even present. He is floating in an astral plane, in the place he would rather be. We can all see it.

Anything but that again. I beg you.

The team

Molly Wood, host
Michael Lipkin, senior producer
Stephanie Hughes, producer
Daniel Shin, producer
Jesús Alvarado, associate producer