The FTC Is Rewriting the Rules of the Internet, Just in Time for the AI Sea Change

There’s a widespread misconception about whether federal law protects your privacy. It doesn’t, at least not explicitly. Congress has managed to squander a decade’s worth of bipartisan agreement about the internet’s data problems. In the absence of legislation, one group of regulators recently stepped in to fill the void. It’s a ragtag group of government cowboys that calls itself the Federal Trade Commission.

Over the past year, the FTC picked up the few meager laws on the books that have anything to do with privacy and repackaged them as tools to go after big data’s worst offenders. Through innovative legal arguments and landmark settlements, the FTC is rewriting the rules of the internet — just in time to usher in a platform shift as AI and other technologies spark a new era of the web.

The Federal Trade Commission Act only gives the agency the authority to regulate “unfair or deceptive” business practices. For years, privacy experts assumed that meant consumers were out of luck: as long as companies weren’t telling outright lies, they were free to do as they pleased with your data. The FTC reached a $US5 billion privacy settlement with Facebook in 2019, but the case hinged on ways the company misled users — rather than allegations that the unpleasant ways Facebook used data were inherently unlawful.

But under the leadership of Lina Khan, the Biden-appointed FTC chairperson, the commission has taken up data misconduct with unprecedented vigour.

The FTC does have some rulemaking authority, but it’s a slow, arduous process. In the meantime, it is changing tech policy by stretching existing regulations to places no one believed they could go.

Most significant in this novel legal offensive has been a case against GoodRx, a prescription medication coupon service. Contrary to popular belief, the Health Insurance Portability and Accountability Act (HIPAA) generally doesn’t apply to anyone other than doctors, insurance companies, and their business associates. But based on an investigation by this reporter that found GoodRx shared users’ prescription data with Google, Facebook, and other companies that work in advertising, the FTC dusted off a long-dormant rule that requires health companies to disclose data breaches. The FTC argued GoodRx broke the Health Breach Notification Rule by failing to disclose its data sharing practices, setting a precedent that extends legal protections to medical data outside the traditional healthcare system for the very first time.

The FTC has reached several other groundbreaking settlements in the past year, such as a case against Fortnite maker Epic Games. The Fortnite case marked the government’s first major intervention in the realm of “dark patterns,” a term for intentionally confusing website and app designs that trick consumers. Epic Games agreed to a half-billion-dollar fine. Other recent landmark cases saw the FTC redoubling kids’ privacy protections and cracking the whip on Amazon for significant privacy violations with its Alexa smart speakers and Ring smart doorbells.

Ronald Reagan once said the most frightening words in the English language are, “I’m from the government, and I’m here to help.” For anyone who makes their money spying on Americans, that may be true when it comes to the FTC.

Gizmodo sat down with Samuel Levine, the Director of the FTC’s Bureau of Consumer Protection, for an extended interview on how the FTC envisions its groundbreaking attack on privacy problems, its plans for the future, and its effort to build a new regulatory environment that protects consumers without stifling a rapidly shifting tech landscape.

This interview has been edited for clarity and consistency.

Thomas Germain: Sam, why don’t we start with a broad overview of what’s changing. Since the dawn of the internet, it’s felt like companies could do almost whatever they want as long as they can get you to click “I agree” on a privacy policy.

Samuel Levine: We’re done preaching this fiction that the markets can self-correct, or that consumers can protect themselves by reading privacy policies. For the last two decades we’ve had a regime where companies felt like they could put anything in their privacy agreements and get away with it if consumers said yes.

Big picture, the shift we’ve made as an agency is stating plainly what I think many people already knew, but hasn’t really been said by anyone in government: the notice and choice regime is not working. It might have made sense two decades ago, but it does not make sense today. It’s unreasonable to put the burden on consumers to be reading hundreds of thousands of pages of privacy policies, let alone to understand them.

We’ve worked through at least half a dozen cases that include data minimization, outright prohibitions on sharing sensitive data, and other substantive protections that people didn’t think were possible two or three years ago. We’re also considering market-wide rules on commercial surveillance and data security.

TG: This privacy policy, notice and choice regime has been the status quo for a long time. In the absence of more input from Congress in terms of a federal privacy law, what’s the alternative? What does the FTC expect from companies?

SL: First, we want Congress to pass privacy legislation. We’re doing everything we can, but nothing we do is a substitute for comprehensive federal legislation. That remains our position. However, we still expect companies to fully and accurately disclose how they’re handling people’s data. And if they fail to do so, we’re going to hold them accountable.

That said, what we’ve tried to do is remind the marketplace through our enforcement actions that “deception” is not our only authority. We also have authority to prohibit and take action against “unfair” practices, which are defined in our statute as practices that cause injury, that are not reasonably avoidable by consumers, and that don’t have countervailing benefits to consumers or competition. If a company’s data practices harm people, we’re prepared to take action, even if those practices are accurately disclosed. In other words, we’re not just looking at whether companies are telling the truth about how they’re using people’s data, we’re thinking about whether companies are using people’s data in a way that is likely to harm us.

TG: What exactly does the word “harm” mean here? That’s an ongoing debate with privacy problems. People say, “Sure, maybe you’re creeped out, but you’re not losing money or anything. What’s the big deal?”

SL: It depends on the context, but to be clear, we’re not looking only at financial injury. And our statute covers not only “harm” but also “likely to harm.” That’s an important distinction. I don’t want to comment on pending litigation, but for example we have practices by data brokers that can lead to stigma, discrimination, an increased risk of stalking, things of that nature. These are real risks, and we’ve taken the position that those harms are recognisable under the FTC Act, even if there is no monetary injury.

As a society we are long past the point where we buy into the idea that not losing money means you’re not going to be harmed. There are all sorts of ways — and they’ve been well documented by you and many other journalists, as well as in our cases — that people are harmed by reckless data practices in a manner that can’t always be quantified in dollars and cents.

TG: If that’s the definition of harm, you could apply that logic to almost the entire data broker industry. You could imagine the FTC almost wiping the data broker business off the map entirely.

SL: That’s not the goal. The goal is to curb practices that we believe are breaking the law. One of the things you see in that industry is there are companies out there that are taking no steps to filter out sensitive data, no steps to ensure that only responsible parties can purchase sensitive data, and no steps to ensure that this data isn’t being used in ways that could harm people. You’re right that these problems are widespread. But for purposes of enforcement action, we’re looking squarely at individual companies, individual cases, and individual ways that consumers can be harmed.

TG: I’ve got a hypothetical for you, which is probably a bad word for someone who works in government. Some experts I’ve spoken to say we’re moving in a direction where AI and predictive analysis become so effective that companies can do things like ad targeting, for example, while barely collecting any data about you whatsoever. Companies are getting better at saying “we know what type of person you are, so the specifics of your behaviour don’t matter.” That could leave us in a place where this problem has nothing to do with “privacy,” and it’s just an exercise in power. All the existing laws we have are pretty much about consent, rather than banning harmful practices outright. What do regulators do if that becomes a reality?

SL: I think it’s an excellent observation. I guess I would make two points, one looking back and one looking forward. Looking back, we are now living through the consequences of many years of unfettered data harvesting. And it’s true, so much has been collected, in so many ways, by so many companies, across so many devices, and in so many forms. So much so that many of these companies, especially the largest firms, may no longer need to collect additional data in order to target people… which, by the way, could raise some competition concerns as well. It tends to advantage incumbent firms.

The reality we find ourselves in now is directly attributable to the fact that this has been a Wild West for so long. Looking forward, we need to be thinking about this kind of situation, where companies can make assumptions about people without collecting new information about them. In fact, that’s something we already talk about in our rulemaking. It’s uncharted, but it’s increasingly becoming a part of the common model for larger firms. However, I don’t think it goes beyond the FTC’s jurisdiction.

TG: Both industry and consumer advocates have been sitting around waiting for a comprehensive privacy law for a very long time. It seems like in the absence of legislation the FTC is saying, “well, if Congress isn’t going to do anything, we’re going to take the few measly rules and laws we have and do it ourselves.” Is that a fair way to describe what’s going on?

SL: Well, we’ve been doing privacy and data security for a long time, and it’s not the case that we’ve given up on Congress passing legislation. But as we’ve said publicly, and I feel it deep in my bones, we’re not just going to sit on our hands and wait for Congress. What we tried to do over the last couple of years is inventory all the tools we have, whether it’s the Fair Credit Reporting Act, the Health Breach Notification Rule, COPPA [the Children’s Online Privacy Protection Act], and, of course, Section 5 of the FTC Act. And I think we’ve had a lot of successes on this, and companies are noticing. My hope is that success begets success, and just as we’ve taken a fresh look at our tools, companies are taking a fresh look at themselves to make sure they’re not engaging in the kind of practices that led us to bring enforcement actions.

TG: Let’s talk about the existing privacy laws, both at home and abroad. In general, all the privacy rules are the kind of notice and choice privacy policy regimes we started off this conversation talking about. Is that good enough, or do we need Congress to go further?

SL: Making sure consumers know what data is being collected about them and giving them a chance to opt in or out is essential. The question is whether it’s sufficient, especially when we’re talking about services that consumers really don’t have a choice about using, and areas with especially sensitive information, like health data or kids’ data.

That’s why in the Premom action and the BetterHelp action, we did not simply require those firms to disclose to consumers that they were selling or sharing their sensitive data for advertising purposes. We required those companies to stop engaging in those practices. For other companies, I hope they’re paying attention to the signals we’re sending about the unfettered monetisation of sensitive data. It also underscores some of the limitations of a regime that relies entirely on consumers reading lengthy privacy policies, which we know places too much burden on people.

TG: If Congress passes a federal privacy law, I assume the enforcement would come down to the FTC, unless they create some new regulatory agency just for privacy, which is something that gets talked about. But if Congress didn’t include any extra funding for enforcement, would the FTC be able to enforce it as thoroughly as it should?

SL: I don’t need to tell you that we have a fraction of the resources that data regulators have in smaller countries. We actually have fewer staff at the FTC than we did in 1980, when the economy was a lot smaller. You know, there was a real conscious effort in the eighties to weaken this agency. We’ve expanded over the last couple of years, but we’re still not where we were four decades ago.

Speaking for myself, if Congress passed strong federal legislation, I would certainly hope that they would pair it with the resources to enforce it. However, I can’t underscore enough that if Congress passes a law and tells us to enforce it, we’re going to enforce it. We will find a way. But obviously, privacy isn’t all we do. In order to minimise the effects on our other critical work, it’ll be really important for Congress to pair the legislation with adequate resources to meet the moment and to do the job.

TG: I want to shift gears for a second, because I think these days it’s against the law for me to publish an article that doesn’t have the letters “AI” in it. What are your concerns about how AI will factor into consumer issues, whether it’s amplifying problems we already have or creating new ones?

SL: As an agency we’ve been pretty clear about our confidence that the FTC Act applies to many of the practices we’re seeing in the AI space. There’s a lot of fear right now around the end of humanity and things of that nature in the future. But some of the harms from, say, bad algorithmic decision making are things the FTC has been working to address for a long time. We made it clear years ago that algorithmic decision making that can result in harm to protected classes can be unfair under the FTC Act. One of the nice things about having flexible authority is that we believe we can address a lot of the concerns that people have in this arena.

I’ll give you one example as a template for this in the data privacy context. We just reached an order against Ring, and one of the things we required Ring to do was actually delete models and other data products that were trained on data that we allege Ring collected illegally. It doesn’t take a huge leap to think about how that might apply in the AI context. This is something the FTC is already doing and has done. We hope that the market is seeing that even though people are acting like this is the Wild West, there are laws on the books that apply to these practices, and we’re prepared to use them.

TG: OpenAI’s CEO Sam Altman just went in front of Congress, and it seems like he wants lawmakers to focus on an apocalyptic future that looks like “RoboCop” or “The Matrix.” As Congress considers legislation — which hopefully doesn’t take as long as privacy legislation — do you think they should be worrying about this sci-fi future at all, or should the focus be on how AI will be used in the next year?

SL: I certainly think Congress has a responsibility to think about the evolution of this technology and all the places it can go, good or bad. But you’re absolutely right. We don’t want Congress taking their eyes off the ball when it comes to the harms we’re seeing already, and others that are quite foreseeable. We’ve talked publicly about a lot of the risks with this technology. For example, the risk of fraud, fairness issues with hiring or housing decisions, and biases in algorithmic decision making that can result in harm to minorities, people who are disabled, or other protected classes. A lot of the harms from this technology are in the here and now. While I certainly would not discourage Congress from thinking about long-term risk from this technology, there are challenges today that I think all of us need to be confronting. I am hopeful that Congress will do so, and that’s exactly what we’re doing at the FTC.

