Another day, another horrific Facebook privacy scandal. We know what comes next: Facebook will argue that losing control of so much of our data proves that bad third-party actors are the real problem, and that we should trust Facebook to make more decisions about our data in order to protect us from them. If history is any indication, that’ll work. But if we finally wise up, we’ll respond to this latest crisis with serious action: passing America’s long-overdue federal privacy law (with a private right of action) and forcing interoperability on Facebook so that its user-hostages can escape its walled garden.
Facebook created this problem, but that doesn’t make the company qualified to fix it, nor does it mean we should trust it to do so.
In January 2021, Motherboard reported on a bot that was selling records from a 500 million-plus person trove of Facebook data, offering phone numbers and other personal information. Facebook said the data had been scraped by exploiting a vulnerability that existed as early as 2016, and which the company claimed to have patched in 2019. Last week, a dataset containing 553 million Facebook users’ data—including phone numbers, full names, locations, email addresses, and biographical information—was published for free online. (It appears to be the same dataset Motherboard reported on in January.) More than half a billion current and former Facebook users are now at high risk of various kinds of fraud.
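To make the mechanics concrete: reporting on this breach described abuse of a contact-importer feature, in which an attacker submits huge lists of candidate phone numbers and records which ones the service links to real profiles. Below is a minimal illustrative sketch of that pattern; the endpoint, batch size, and response schema are all invented for illustration and are not Facebook’s actual API.

```python
# Hypothetical sketch of the contact-importer abuse described above: feed
# candidate phone numbers to a "find your friends" endpoint and record
# which ones resolve to real profiles. The endpoint and response schema
# are invented for illustration; this is not Facebook's actual API.
import requests

CONTACT_SYNC_URL = "https://social.example/contacts/match"  # hypothetical

def enumerate_numbers(candidates: list[str]) -> dict[str, dict]:
    """Upload candidate phone numbers the way a contact-sync client would,
    keeping every number the service links to an account profile."""
    matched: dict[str, dict] = {}
    for start in range(0, len(candidates), 100):
        batch = candidates[start:start + 100]
        resp = requests.post(CONTACT_SYNC_URL, json={"phones": batch}, timeout=10)
        resp.raise_for_status()
        for hit in resp.json().get("matches", []):  # hypothetical schema
            matched[hit["phone"]] = {"name": hit["name"], "id": hit["user_id"]}
    return matched
```

The point of the sketch is the asymmetry: the attacker needs nothing but a list of guessable phone numbers, while the damage is bounded only by how much data the service has chosen to retain.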
While this breach is especially ghastly, it’s also just another scandal for Facebook, a company that has spent years pursuing deceptive and anticompetitive tactics to amass largely nonconsensual dossiers on its 2.6 billion users, as well as billions of people who never had a Facebook, Instagram, or WhatsApp account at all.
Based on past experience, Facebook’s next move is all but inevitable: after expressing regret over this irreversible breach, the company will double down on the tactics that lock users into its walled garden, in the name of defending their privacy. That’s exactly what the company did during the Cambridge Analytica fiasco, when it used the pretense of protecting users from dangerous third parties to lock out competitors, including those that used Facebook’s APIs to help users part ways with the service without losing touch with their friends, families, communities, and professional networks.
According to Facebook, the data in this half-billion-person breach was harvested thanks to a bug in its code. We get that. Bugs happen. That’s why we’re totally unapologetic about defending the rights of security researchers and other bug-hunters who help discover and fix those bugs. The problem isn’t that a Facebook programmer made a mistake: the problem is that this mistake was so consequential.
Facebook doesn’t need all this data to offer its users a social networking experience: it needs that data so it can market itself to advertisers, who paid the company $84.1 billion in 2020. It warehoused that data for its own benefit, in full knowledge that bugs happen, and that a bug could expose all of that data, permanently.
Given all that, why do users stay on Facebook? For many, it’s a hostage situation: their friends, families, communities, and professional networks are on Facebook, so that’s where they have to be. Meanwhile, those friends, family members, communities, and professional networks are stuck on Facebook because their friends are there, too. Deleting Facebook comes at a very high cost.
It doesn’t have to be this way. Historically, new online services—including, at one time, Facebook—have smashed big companies’ walled gardens, allowing those former user-hostages to escape dominant services while still exchanging messages with the communities they left behind, using techniques like scraping, bots, and other honorable tools of reverse-engineering freedom fighters.
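For the sake of concreteness, here is a minimal, hypothetical sketch of that kind of interoperability tool: the user supplies their own session cookie, the tool scrapes their own inbox out of the walled garden, and it mirrors the messages to the service they’re moving to. Every URL, cookie name, and HTML selector below is invented for illustration.

```python
# Hypothetical sketch of an interoperability tool: scrape the user's own
# inbox out of a walled garden (with their own credentials) and mirror it
# to the service they're migrating to. Every URL, cookie name, and HTML
# selector here is invented for illustration.
import requests
from bs4 import BeautifulSoup

INBOX_URL = "https://walledgarden.example/messages"  # hypothetical

def fetch_own_messages(session_cookie: str) -> list[dict]:
    """Fetch the logged-in user's own inbox by scraping the same HTML
    they would see in their browser."""
    resp = requests.get(
        INBOX_URL,
        cookies={"session_id": session_cookie},  # the user's own session
        timeout=10,
    )
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    messages = []
    for node in soup.select("div.message"):  # hypothetical markup
        sender = node.select_one(".sender")
        body = node.select_one(".body")
        if sender and body:
            messages.append({
                "sender": sender.get_text(strip=True),
                "body": body.get_text(strip=True),
            })
    return messages

def mirror_to_new_service(messages: list[dict], api_url: str, token: str) -> None:
    """Re-post the user's messages to the service they're moving to."""
    for msg in messages:
        requests.post(
            api_url,
            json=msg,
            headers={"Authorization": f"Bearer {token}"},
            timeout=10,
        )
```

Nothing in this sketch touches anyone’s data but the user’s own; that is what distinguishes this kind of escape hatch from the mass enumeration behind the breach.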
Facebook has gone to extreme lengths to keep this from ever happening to its services. Not only has it sued rivals that gave its users the ability to communicate with their Facebook friends without subjecting themselves to Facebook’s surveillance, it has also bought out successful upstart rivals precisely because it knew it was losing users to them. It’s a winning combination: use the law to prevent rivals from giving users more control over their privacy, then use the monopoly rents those locked-in users generate to buy out anyone who tries to compete with you.
Those 553,000,000 users whose lives are now an eternal open book to the whole internet never had a chance. Facebook took them hostage. It harvested their data. It bought out the services they preferred over Facebook.
Now that 553,000,000 people have every reason to be very, very angry at Facebook, we need to watch carefully to make sure the company doesn’t capitalize on that anger by further increasing its advantage. As governments from the EU to the U.S. to the UK consider proposals to force Facebook to open up to rivals so that users can leave without shattering their social connections, Facebook will doubtless argue that such a move would make it impossible to prevent the next breach of this type.
Facebook is also likely to weaponize this breach in its ongoing war against accountability: namely, against a scrappy group of academics and Facebook users. Ad Observer and Ad Observatory are a pair of projects from NYU’s Online Transparency Project that scrape the ads volunteers are served by Facebook and place them in a public repository, where scholars, researchers, and journalists can track how badly Facebook is living up to its promise to halt paid political disinformation.
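By way of contrast with the indiscriminate scraping behind the breach, here is a simplified sketch of the approach just described (not the project’s actual code): keep only the ad’s own content and disclosures, strip anything that could identify the volunteer who saw it, and submit the result to a public archive. The endpoint and field names are hypothetical.

```python
# Simplified sketch, in the spirit of Ad Observer (not the project's actual
# code): keep only the ad's own content and disclosures, strip anything
# that could identify the volunteer who saw it, and submit the result to a
# public archive. The endpoint and field names are hypothetical.
import requests

PUBLIC_ARCHIVE_URL = "https://ad-archive.example/submit"  # hypothetical

AD_FIELDS = ("advertiser", "ad_text", "paid_for_by", "targeting_disclosure")

def sanitize(raw_ad: dict) -> dict:
    """Keep only the ad itself; drop volunteer identifiers, cookies, etc."""
    return {k: raw_ad[k] for k in AD_FIELDS if k in raw_ad}

def submit_ad(raw_ad: dict) -> None:
    """Send the sanitized ad to the public repository that researchers
    and journalists can audit."""
    resp = requests.post(PUBLIC_ARCHIVE_URL, json=sanitize(raw_ad), timeout=10)
    resp.raise_for_status()

if __name__ == "__main__":
    submit_ad({
        "advertiser": "Example PAC",
        "ad_text": "Vote for Example!",
        "paid_for_by": "Example PAC",
        "targeting_disclosure": "People ages 18+ in Springfield",
        "volunteer_id": "v-123",  # stripped by sanitize() before upload
    })
```

The design choice that matters lives in sanitize(): collection is scoped to the ads themselves, which is the opposite of the indiscriminate harvesting Facebook equates it with.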
Facebook argues that any scraping—even highly targeted, careful, publicly auditable scraping that holds the company to account—is an invitation to indiscriminate mass-scraping of the sort that compromised the half-billion-plus users in the current breach. Instead of scraping its ads, the company says that its critics should rely on a repository that Facebook itself provides, and trust that the company will voluntarily reveal any breaches of its own policies.
From Facebook’s point of view, a half-billion-person breach is a half-billion excuses not to open its walled garden or permit accountability research into its policies. In fact, the worse the breach, the more latitude Facebook will argue it should get: “If this is what happens when we’re not being forced to allow competitors and critics to interoperate with our system, imagine what will happen if these digital trustbusters get their way!”
Don’t be fooled. Privacy does not come from monopoly. No one came down off a mountain with two stone tablets, intoning “Thou must gather and retain as much user data as is technologically feasible!” The decision to gobble up all this data and keep it around forever has very little to do with making Facebook a nice place to chat with your friends and everything to do with maximizing the company’s profits.
Facebook’s data breach problems are the inevitable result of its monopoly, in particular its knowledge that it can heap endless abuses on its users and retain them. Even if they quit Facebook, they’re likely to end up on one of its acquired subsidiaries, like Instagram or WhatsApp, and even if they don’t, Facebook will still maintain its dossiers on their digital lives.
Facebook’s breaches are proof that we shouldn’t trust Facebook—not that we should trust it more. Creating a problem in no way qualifies you to solve that problem. As we argued in our recent white paper, Privacy Without Monopoly: Data Protection and Interoperability, the right way to protect users is with a federal privacy law that includes a private right of action.
Right now, Facebook’s users have to rely on Facebook to safeguard their interests. That doesn’t just mean crossing their fingers and hoping Facebook won’t make another half-billion-user blunder—it also means hoping that Facebook won’t intentionally disclose their information to a third party as part of its normal advertising activities.
Facebook is not qualified to decide what the limits on its own data-processing should be. Those limits should come from democratically accountable legislatures, not autocratic billionaire CEOs. America is sorely lacking a federal privacy law, particularly one that empowers internet users to sue companies that violate their privacy. A privacy law with a private right of action would mean that you wouldn’t be hostage to the self-interested privacy decisions of vast corporations, and it would mean that when they did you dirty, you could get justice on your own, without having to convince a District Attorney or Attorney General to go to bat for you.
A federal privacy law with a private right of action would open a vast universe of new interoperable services that plug into companies like Facebook, allowing users to leave without canceling their lives; these new services would have to play by the federal privacy rules, too.
That’s not what we’re going to hear from Facebook, though: in Facebookland, the answer to its abuse of our trust is to give it more of our trust; the answer to the existential crisis of its massive scale is to make it even bigger. Facebook created this problem, and it is absolutely incapable of solving it.