This article is part of Marketplace Tech's ongoing series on The Data Economy.
In Europe and elsewhere, privacy is considered a human right. It’s written into some 150 constitutions around the world, according to the Constitute Project.
Ours, here in the U.S., is not one of those constitutions, and digital privacy has, for years, been considered an acceptable casualty of the data economy (and of post-9/11 law enforcement, as the Snowden revelations demonstrated). Some have even referred to modern advertising as “surveillance capitalism.”
Since the beginning of the web as we know it, we’ve been trading personal information in exchange for free services. Free email, and then stock quotes, and then search, and then social media and online banking and smartphone operating systems and digital maps and dating services and … well, you get the point.
Heck, before anybody was worried about Facebook and data collection, there was Google, which almost from the very start used cookies, click tracking and scripts to log your browsing habits, search history, stock portfolio, credit card information, YouTube comments and the contents of your email. And that’s before it hired people to literally drive around taking pictures of your house while logging information from your Wi-Fi router at the same time.
Privacy advocates have been raising concerns about how much information we share with tech companies for at least the last dozen years. But we kept at it, mainly because there wasn’t any clearly viable alternative and, to most people, no obvious harm.
Even as data breaches piled up and identity theft became consumers’ number one concern over the years, the harm still wasn’t measurable enough to stop the sharing. And even in the wake of major data breaches at Yahoo, Target, the GAO and Equifax, federal officials enacted zero new regulations to dictate how personal information is collected and stored, or how consumers should be notified when there’s a breach. One expert told me bluntly, “There’s no blood.”
Facebook itself, despite the apologies Mark Zuckerberg delivered to Congress about its role in the Cambridge Analytica scandal, has been apologizing for violating its users’ privacy for the last 14 years, as Marketplace regular Zeynep Tufekci points out in Wired magazine, yet it continues to push the boundaries of data collection. In some cases the company removed features that had caused a privacy controversy, waited a few years, then reintroduced them (think: News Feed). Just this month, in the middle of the latest apology tour, the company rolled out new facial recognition features that scan and identify photos of you on its network. And that technology is activated by default.
And all this time, consumers weren’t being explicitly told that their data might be collected once while they were shopping online, then collected again when tracking technology followed them not only across websites but into the mobile apps on their smartphones, and a few more times based on their emails, instant messages, social posts, credit card purchases and the photos they were tagged in online. Nor were they told that all that information would be bagged up, passed to advertisers, and sold and re-sold by data brokers.
“It's like the data equivalent of Fukushima,” says Mark Surman, executive director of the nonprofit Mozilla Foundation, an internet advocacy organization connected to the Mozilla Corporation, which develops Firefox and other software. “And when you think about something like Fukushima or Cambridge Analytica, it's a big, hot point. And we take notice. But like, if you take a nuclear disaster, it isn't just about that hot point. It's about a bigger question of how we produce energy and just the whole economy. I think we know we're making data every day maybe, we probably don't think about it much, but we definitely don't step back to look at that bigger picture of the data-driven advertising economy.”
And the privacy conversation is only getting more important. Data now drives not only advertising but also artificial intelligence, connected devices, and predictive technologies built into apps and digital assistants. (If Google or Siri knows you have an appointment coming up, it can helpfully send you driving directions and tell you when to leave on time!)
The promise of data-driven AI is great in some ways: It can save lives by figuring out when airplanes need repair (as in a recent commercial for IBM Watson). It can tutor kids. It can help diagnose health issues before they flare up. It can reduce traffic by analyzing where people are driving and suggesting better city planning. All of those are good uses, but data about our location, our health, and how our kids learn is sensitive. Shouldn’t we get some control over how it’s gathered and where it goes? So far, we don’t.
Surman says this to anyone overwhelmed by the loss of control over their data: Consumer pressure can work.
“Talk to the companies that are making this stuff because you are their customers, and ultimately if enough people say, 'Hey, I want to know more about how my data is being used,' [companies will] build that knowledge, that transparency into the product,” he says. “There are many places in history where industry has gone too far and citizens have stepped up and said, ‘Hey, I like your product, but I want the way you make it or the way it treats me to be different.’ I mean if you think about Nike today versus Nike 20 years ago, people like their Nikes but they said, 'I don't like your labor practices,' and they changed them.”
Apple CEO Tim Cook has been on a tear lately about privacy, calling it, in fact, a “human right,” and touting it as a boost to his company's bottom line, saying Apple is a better privacy choice than Android or Facebook. Apple’s not perfect on data collection and storage either, but even if it's a brazen sales grab, Cook's comments at least lend gravity to the need for consumer privacy. That gravity is much needed.