From hate speech to privacy, to labor and biased algorithms, society is reckoning with the power of technology and how it affects our lives.
Emanuel Moss is a researcher at the nonprofit Data & Society and author of an upcoming report called “Ethics Owners.” It argues that data-driven tech companies increasingly are creating, and should be creating, specific positions for people whose entire job is to think about policies and product development with ethics in mind. Basically, a person hired to imagine the worst possible uses of a product, or how it might affect people who do or don’t use it. The problem, Moss said, is that ethics can be really hard to measure. The following is an edited transcript of our conversation.
Emanuel Moss: There is no good way right now to measure the consequences or the effects of these ethics initiatives. There’s no metric for ethics. Most of the people that we spoke with expressed a desire for good metrics that could be used to track the success of interventions they make, or even to track the effects of their products on society.
Molly Wood: Can you give us an example, maybe a dilemma, that a person in this job might face or a problem that an ethics officer caught that you can tell us about?
Moss: Data-driven companies have all sorts of information about their users, but they have relatively little information about people who don’t use their products. A ride-share company, for example, knows a lot about how changes they make to their product affect people who use their service or might then decide not to use their service because of a change they made. But they don’t know very much at all about people who don’t use their service. They don’t know how people who rely on public transportation, say, and never use their service are affected by the rising predominance of ride-shares in their neighborhoods. That’s a key dilemma. How do you attend to the harms that people who aren’t directly engaged with your service are exposed to?
Wood: It seems like diversity would be a really important part of doing this job well.
Moss: Yes, very much so. One thing that we heard in many different ways was an idea along the lines of, “Well, before we release a product or make a design decision, we get together in a room and think really hard about what could go wrong with this product.” But, it’s very difficult to imagine what might go wrong for people whose lives don’t look very much like the people who are inside that room doing that imagining.
Wood: What might end up being the job responsibility, or responsibilities, of someone who is the either official or unofficial ethics owner within a corporation?
Moss: Ethics owners are tasked with converting values that the company might assert into practices for employees to follow or to engage with as they go about their business. It might involve things like convening a “red team” to probe the vulnerabilities of a system or to think about how a system might be misused. It might involve adapting the tools that are used to track software development in order to document decisions that were made that have ethical implications and to mitigate those before the product rolls out. It also might involve convening a task force, or review boards, or ethics boards to review and suggest changes to products.
Wood: Do you think that this kind of thinking, the kind that might go into a job like this, should go all the way back to the STEM curriculum? If you’re learning data science in college, should ethics ideally be a part of that?
Moss: The short answer is yes, full stop. But the way that you embed ethics in curricula needs careful thought. All too often it is taught in the last week of a semester, or as a standalone course that feels almost like an elective, even if it’s required. Some very good pedagogy has been developed at universities across the country that looks at ways of embedding ethics into the same modules that teach fundamental methods for computer science. There are a few researchers that have been leading the way on this. I know Brandeis Marshall has worked on this. Casey Fiesler at the University of Colorado keeps a very popular database that’s usually the pinned tweet on her Twitter account, listing ethics curricula and showing the different ways ethics has been embedded in STEM programs.
Related links: More insight from Molly Wood
The University of Notre Dame announced last week its new Tech Ethics Lab, funded by IBM, where researchers will try to anticipate ethical issues that could come up in technology. The founders say that work is especially urgent around elections, bias in artificial intelligence and the spread of facial recognition technology, and will become even more so as we develop new technologies like quantum computing. There’s a good piece in the Washington Post about the new lab and other schools that are trying to build ethics curricula into computer science education, including Stanford and Carnegie Mellon universities. Unintended consequences don’t have to be unforeseen.