In some ways policy is the easiest thing to discuss when we’re talking about the data economy. That's because, well, there really isn’t any.
During the Clinton administration, "it was a very intentional act...to let the internet flourish with a minimum of regulation," says Nuala O'Connor of the Center for Democracy and Technology.
And not only was there very little put in place in the 1990s to restrain the tech industry, there was one foundational piece of policy created that enabled it: Section 230 of the 1996 Communications Decency Act.
The bipartisan bill said, among other things, that “providers of interactive computer services” (remember, this was 1996) would not be treated as the publishers or speakers of the content that was published on the service. So basically a website operator like Yahoo or AOL -- or eventually Google, Facebook, Reddit, Craigslist and almost every site we think of now as a “platform” -- wouldn’t be liable for content its users posted, with limited exceptions, such as federal criminal law and intellectual property claims.
Without Section 230 we would not have the modern web, because the first time something objectionable or libelous or pornographic showed up in a forum or on a bulletin board or a web search, the site operator would have been sued out of existence.
These protections have been a great thing in terms of growing the digital economy in the U.S. into a global powerhouse (with lots of great effects, let's not forget). Now lawmakers are looking at limits. In fact, recent legislation aimed at stopping sex trafficking takes a chip out of Section 230 in a way that has internet advocates extremely concerned.
But for the most part, legislators have remained hands off on regulating tech companies around privacy, data collection, advertising practices or any other method of doing business in the data economy. That’s helped tech companies get big, profitable, and potentially dangerous.
“It’s not unlike any early stage of any industry,” says O’Connor. She says a common analogy is businesses that damaged the environment. “Industrial and extractive industries at the turn of the century or in the late 1800s, early 1900s, were doing real damage to the environment, to the planet. And the successors of those companies, companies like General Electric where I used to work, had to pay a huge price to do things, like clean up the Hudson River.”
Right now in the U.S. we have zero comprehensive data privacy regulation. Online privacy scholar Joel Reidenberg told me and Kai Ryssdal back in episode 13 of Make Me Smart that the current privacy regime relies on “notice and choice”: notice of a company’s terms of service, and the choice to agree to them and receive the service, or not.
“It’s not a public law that defines the parameters of privacy in the United States,” he said, “and it’s increasingly a real fiction that notice and choice can work to properly protect individuals’ privacy.”
In fact, even steps toward that protection have been undone: last year the incoming Trump administration almost immediately rolled back rules that would have prevented broadband internet service providers from selling your browsing data without your express permission, and that would have imposed breach notification requirements, too.
Fast forward almost exactly a year from then to now, though, and the picture may be changing. Many states have already enacted laws around privacy, data collection, and how soon companies have to notify you if there’s been a breach involving personal information. California is among the strictest in the nation.
And in the wake of the Facebook and Cambridge Analytica scandal, lawmakers seem to be willing to consider federal laws that would deal with some of the same issues, and unify the patchwork of enforcement that right now stretches from state and city attorneys’ offices to the Federal Trade Commission.
Plus, a new data and privacy regime may be coming from across the pond. In Europe, things evolved very differently on the privacy front. Privacy is considered a human right, and laws governing data collection have long been in place.
In May, the General Data Protection Regulation will go into effect, requiring companies to get explicit permission to use people’s personal information, to tell customers exactly what data they’ve gathered (a big sticking point in this week’s congressional hearings with Facebook CEO Mark Zuckerberg), and to let customers request that their data be deleted.
The GDPR affects any company, no matter where it’s based, that traffics in the data of anyone who lives in the EU. So it’s going to have global ramifications, and it has real teeth: the potential fines for violations run as high as 4 percent of a company’s annual global revenue. One expert told me the GDPR is the reason that by late this year or early next, you could see the world’s first billion-dollar fine against a tech company.
So the future regulatory regime might not even originate in the United States, but it’s possible that we could all benefit. O’Connor said it’s probably time for a global standard for data and privacy.
“I haven't seen a member of Congress ask this question, although I would really love it if somebody would,” she says. “Which is, so tell me, you're going to give rights...to citizens of Luxembourg, when my constituents in Arkansas, or Alabama or wherever, is not entitled to that same kind of transparency and accountability?”
There are, she said, international systems already and “it's actually easier to build to one standard and give everybody the same set of rules and rights and privileges in their data.”
She says the U.S. could still benefit from its own comprehensive set of privacy rules for the data economy, but failing that, much the way California’s vehicle emissions requirements ended up pushing carmakers toward an accidental global standard, the GDPR could simply make it easier for companies to offer better privacy rules to everyone.