Hospitals and other health care systems are eager to find patterns in their patient data that can help treat and prevent illness and cut costs. In England, the National Health Service is collecting the medical histories of up to 55 million patients to share with third parties. Here in the United States, Google will help the hospital chain HCA Healthcare store and analyze health data. Amazon, IBM and Microsoft have similar partnerships.
I spoke with Deven McGraw, chief regulatory officer at the health records startup Ciitizen. She’s also an adviser to Verily, a sister company to Google. McGraw says predictive analytics can help providers anticipate things like which patients are at risk of coming back to the hospital after surgery. The following is an edited transcript of our conversation.
Deven McGraw: There can be really great applications, like improving care, making sure that people are getting the right interventions at the right time. But then, of course, there can also be very negative uses for the same dataset to say, “Oh, these folks are likely to end up being readmitted after the surgery. I want to try to avoid providing care to them, because they’re going to cost me money.” It’s part of the reason why people are very nervous when they hear about health big data.
Amy Scott: The health care system is already using data that can be flawed. I mean, especially around race and gender, as we’ve seen. What are the risks of using flawed data to make predictions, and how do you mitigate that?
McGraw: Yes, there are definitely risks of using flawed data. If the makeup of the data is just all white people, for example, is it safe to draw conclusions from that data that are going to apply to people who are not white? It really is an issue of recognizing the limitations in your data, and then also collecting more data from the populations that are missing, so that the conclusions you may be able to draw from that data are more robust in terms of their applicability.
Scott: It’s interesting that we’re seeing some of the really big tech companies move into this space. Why do you think the health care system should be contracting with companies who specialize in, say, phones or search?
Privacy track record
McGraw: I think the appeal of looking to these companies is that they have a lot of expertise in crunching data and in doing predictive analytics, albeit largely outside of the health care space, but with huge amounts of data. So I think it is very attractive for a health care system to think, “Well, if they’ve been able to really be successful in other sectors, will they generate better results?” And so I think that’s what the systems are thinking of when they go to these companies. On the other hand, this is the kind of thing that a lot of members of the public get very nervous about because the track record of some of those companies in the privacy space is less than stellar. How do you assure that you get all of the good out of what these technology companies might be able to bring to this set of very difficult problems in health, but really hold companies accountable for handling this data in ways that people sort of expect out of the health care system?
Scott: Well, along those lines, you previously worked on health care privacy in the federal government. Do you think that the main health privacy law, HIPAA, needs to be changed to account for Big Tech companies moving into health care?
McGraw: Yes, I do. I mean, HIPAA has its limitations. And as far as I can tell, a lot of these arrangements between tech companies and health care systems are likely covered by HIPAA, but it’s not entirely clear. And I think that HIPAA could be improved even when it does apply to those arrangements, and there will be circumstances when it won’t apply. Like, just to give an example, when deidentified data, [data] that are deidentified in accordance with HIPAA standards, are shared with a tech company, it’s not going to be covered by HIPAA at all. And certainly, in some of the news coverage of these arrangements, the health care systems have reported that they have taken steps to control how these tech companies can use data. And that’s all well and good. But that’s really not providing the level of accountability I think is going to be important to establish here, because it’s just a contract between two parties.
Scott: And when you say deidentified, you mean data that’s had the identifying information stripped from it?
McGraw: Yes. There are methodologies that can be used to deidentify data that involve either stripping identifiers, masking them, or introducing noise to perturb the data in some way, so that there is a very low risk that the data can be tagged to any one particular person. But it doesn’t mean that there’s zero risk of reidentification. And we certainly know that there is a lot of data combination that goes on, even when some identifiers have been removed from the data. Most of the health data analytics that occurs today in the commercial sector occurs with data that have been deidentified under HIPAA, and yet companies are able to amass entire profiles of people in terms of their experience with respect to health data, combine it with data out in the commercial space — and all of that, presumably, deidentified. So it’s not the kind of protection that I think most people in the public imagine. When they think about deidentified data, they think, “Oh, nobody knows it’s me.” I don’t think people imagine that it also involves an entire profile that’s been created about them through lots of uses of data in ways that they don’t really even know about, much less have had any choice in.
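The techniques McGraw describes — stripping direct identifiers, masking quasi-identifiers, and perturbing values with noise — can be sketched in a few lines of Python. This is an illustrative toy only, not a HIPAA-compliant method; the field names, masking rules, and noise range are assumptions for the example, and real Safe Harbor de-identification covers 18 categories of identifiers.

```python
import random

# Assumed field names for illustration; a real record schema would differ.
DIRECT_IDENTIFIERS = {"name", "ssn", "email", "phone"}

def deidentify(record, rng=None):
    """Return a copy of `record` with direct identifiers stripped,
    the ZIP code masked to its 3-digit prefix, and age perturbed
    with small random noise."""
    rng = rng or random.Random(0)
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue  # strip direct identifiers entirely
        elif key == "zip":
            out[key] = value[:3] + "00"  # mask to a coarse geographic area
        elif key == "age":
            out[key] = max(0, value + rng.randint(-2, 2))  # add noise
        else:
            out[key] = value
    return out
```

As McGraw notes, even records processed this way can sometimes be re-linked to a person by combining them with other datasets, which is why de-identification lowers risk rather than eliminating it.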
Scott: Privacy advocates would argue that Big Tech isn’t really the solution here. The solution is better relationships with doctors, more time spent between patient and provider, and better follow-up. Why not invest in that instead?
Invest in data or more time with doctors?
McGraw: Well, we need to invest in both, would be my answer to that. I mean, it’s not an either/or, it’s a both. Data helps us discover what the problems are, what might be causing certain outcomes, what might be impacting certain outcomes, what factors are contributing, so that we can plug in the right interventions to help. But that doesn’t mean that if you think about improving health care overall, we wouldn’t also benefit from physicians’, nurses’, and other clinical professionals’ ability to spend more time with patients, and from greater use of telemedicine so people don’t always have to come into the office — something that we saw during the pandemic, but we’re not sure whether it’s going to stay. I mean, not every problem in health care gets fixed with data, but for lots of problems, we can improve our ability to address those issues with data. So it’s not either/or.
Scott: Do you think the big promise of predictive analytics and big data in health care is the cost savings? Or is it about helping patients, or both?
McGraw: I mean, it should be both, because both are a huge problem. When health care costs are as high as they are, we struggle to make sure that health care is accessible to people across the board. But at the same time, a treatment may be affordable to you because you’re fortunate enough to have good health insurance. But if it’s not really an effective treatment for you, that’s not really helpful either. So the right treatment to the right person at a manageable cost — I don’t want to say low cost, that’s probably unrealistic — is what we ought to be aiming for. And again, data should be able to help us on both fronts, if we’re doing it responsibly, and if we’re assuring that we’re getting the benefits and then taking steps to minimize the risks.
Related links: More insight from Amy Scott
We talked about health systems relying on flawed data. Here’s a piece by A. Rochaun Meadows-Fernandez in Experience magazine. She pointed out that a lot of medical data comes from clinical trials, which often don’t include very many people from “racial minorities and other marginalized groups.” But people from these groups can have different responses to treatment and different outcomes, often because of systemic racism.
Last summer, we heard from Dorothy Roberts at the University of Pennsylvania. She talked about a study that found that Black women in Chicago were dying from breast cancer at a much higher rate than white women. Not because their health was worse, but because white women were more likely to get advanced treatment found in private hospitals.
And speaking of flawed assumptions, last week, the NFL said it would stop using a controversial practice called “race norming” as part of its billion-dollar settlement with former players with brain disease stemming from hits to the head. The practice made it harder for Black players to qualify for payouts by assuming they had a lower level of cognitive function to begin with. The decision came after former Pittsburgh Steelers players Kevin Henry and Najeh Davenport filed a civil rights lawsuit, saying they were denied settlements but would have qualified had they been white.