Are brain implants a privacy issue?
Jun 9, 2023

Brain-computer interface technology can benefit people with disabilities by restoring mobility and communication. The University of Washington’s Sara Goering says it also allows potentially monetizable access to the center of our thoughts and feelings.

The field of brain-computer interfaces is quickly advancing.

Elon Musk’s brain implant company, Neuralink, received approval from the U.S. Food and Drug Administration last month to begin testing brain implants in humans. Its rival, Paradromics, is even further along in the process.

Neurotechnology could be revolutionary for people with severe paralysis, amyotrophic lateral sclerosis or other disabilities that affect communication. But Sara Goering, a philosophy professor at the University of Washington, says it comes with ethical concerns.

Marketplace’s Meghan McCarty Carino spoke with Goering about those concerns, which include the potential monetization of information gleaned from a person’s cognitive core. The following is an edited transcript of their conversation.

Sara Goering: I think it’s really promising technology and there’s a lot of good that could come from it, but it is also giving us access to a very intimate sphere of our functioning that I think we want to be very careful to protect.

Privacy is a huge concern here. Sometimes when I talk about privacy, people will say, “Well, but I’m not concerned about privacy because I have nothing to hide.” But privacy isn’t really about guilt or innocence. I think controls around privacy are the way that we manage our intimacy. We get to decide whom we let into our most personal ideas, thoughts and feelings, whom we keep at arm’s length, and who remains a complete stranger. I think being able to exercise agency over that is really important for our sense of ourselves as individuals.

Putting electrodes into our brains in ways that allow recording can be good when we want to share, as in the case of somebody with locked-in syndrome who would be able to express themselves to family members, but it also opens up the opportunity to collect other data. This is essentially the market of the internet: there’s a way to collect our data, monetize it and make use of it. That’s where I worry, and I want to put up protections around that very sensitive space.

And even if it’s not mind reading per se, we still want to be careful about the broad neural data that we collect because it can become more interpretable over time.

Meghan McCarty Carino: This technology is attracting a lot of investment right now. What are the implications of mixing this very sensitive space with profit incentives?

Goering: On the one hand, when it’s done to enable well-being and help people who have significant challenges with things like communicating with their loved ones, it’s a good use of this technology. On the other hand, imagining the kinds of marketing that could happen, based not on what I click and search on the internet but on what I’m thinking, seems very distressing.

I also think our brains are not the same as our identity, but they are very central to our sense of ourselves and our consciousness. And we’ve learned from deep-brain stimulation studies that stimulating the brain can change people’s behavior a little bit, and when that happens, the individual doesn’t always recognize it in themselves. So I think we want to be really careful about forms of control when we know there’s a profit motive behind an investment, and about ways you might be able to steer people toward particular kinds of purchasing behavior, for example.

McCarty Carino: What kinds of protections are you talking about?

Goering: One kind of protection is just being much more careful about how people opt in to sharing data. On the other end of it, I have been somewhat involved with groups thinking about a human rights approach to preserving this privacy. It doesn’t mean that the devices couldn’t move forward when they’re beneficial, but it gives individuals a way to push back if they feel they’ve been violated.

McCarty Carino: Are there ethical concerns that come up specifically in the context of testing this kind of technology?

Goering: I think people generally do a very good job of running studies, but there are underexplored parts of this. One of them, which the National Institutes of Health has taken an interest in recently, is thinking about what happens when the study ends. If you have an implantable device in your brain, removing it requires another brain surgery. But if it’s been working for you, of course you would want to keep it, and keeping it might be one thing, but then you also need the upkeep of it because it’s a technical device. Electrodes fail or wires go bad, and that’s a very technical kind of maintenance; not just any old physician or neurologist can perform that sort of function. So there’s now been a big conversation at the NIH and internationally about what we call post-trial obligations. People are trying to understand the extent of those obligations and what would count as enough of a benefit that you would be permitted to keep the brain device.

McCarty Carino: In your research, you’ve spoken to people who use BCI technology. What kinds of things have they told you about their experiences?

Goering: I think these are some really cool people who are willing to be pioneers in this area. They put in a lot of time in the lab to learn how to use a device to control a robotic arm or a cursor. In our first grant study, we asked them about their sense of agency while using the device. Agency seems very central to who we are as human beings. We want to be able to feel that we’re doing something, to be responsible for that, to trust the action. I trust my body to move in certain ways, but if I’m moving with the help of a brain-computer interface, I want to be able to trust that it’s reliable and will do the things that I want it to do. At the same time, some of these devices are trying to do both read-out and write-in, so the device is also giving me feedback, and some of the information coming in would be sensory information. Engineers can put sensors at the fingertips of a robotic arm, so now I’m not only watching the arm reach, but I can feel pressure when I hold onto a coffee mug. That’s great, because it will feel much truer to typical human experience when I’m using the device.

But if we have the ability to stimulate particular percepts in the sensory cortex, we’re intervening in the very process by which our experience of the external world is built, which is how we get access to what we think is in the world. So that opens up this really interesting possibility of not being able to trust what’s coming in.

Some of the people we’ve talked to will say things about the device like, “It’s pulling down and to the left.” And you think, “Well, what is it pulling on? Because you’re not holding anything, you’re not using your muscles, right? But somehow, it’s pulling in your brain.”

We’ve also talked to people who say that when it’s working really well, they feel like they’re in control and are responsible for the movement of the cursor on the screen, for example. But at other times, and it’s not always clear what causes this, they say things like, “I feel like I’m fighting against it.” If we’re fighting against a computer that’s inside our brains, it can be hard to know who we are. Are we now combined with the device? Or is it something separate from us that we push back against?

Another really interesting thing we’ve heard from people in BCI studies is that the concentration is tiring when they’re in the lab for a few hours running these experiments, but their muscles also feel tired, even though they’re not moving those muscles during the experiment. It’s fascinating to learn more about our experience of agency by talking with people who are in these first-in-human device studies.

If you’re interested in learning more about the ethical and human rights issues associated with neurotechnology, Goering recently contributed to a paper in the Cambridge Quarterly of Healthcare Ethics that lays them out.

I mentioned Neuralink and Paradromics, two leading companies in what The Washington Post reported has turned into a bit of an arms race since Neuralink launched six years ago. According to The Post, 42 people worldwide have used BCI implants in clinical trials, including a Pennsylvania man who, last summer, set a record by wearing his BCI implant for over seven years.

Closer to home, our producer Daniel Shin tried out a piece of noninvasive BCI technology last fall, when he visited AE Studio in Los Angeles. While reporting that story, Daniel was fitted with an external neural net to play a computer game using only his mind. It’s pretty impressive.

The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer