How to Fix Facebook, According to Facebook Employees

Internal research documents provide a blueprint for solving the company’s biggest problems.


In December 2019, as Facebook was bracing for the looming chaos of the 2020 election, a post appeared on its internal discussion site. “We are responsible for viral content,” the title declared. The author walked through the ways in which Facebook’s algorithmic design helps low-quality content go viral, concluding with some recommendations. Among them: “Rather than optimizing for engagement and then trying to remove bad experiences, we should optimize more precisely for good experiences.”

That might sound obvious—optimize for good experiences. And yet Facebook’s apparent lack of interest in doing that is a persistent theme in The Facebook Papers, internal documents revealed by Frances Haugen, the former employee turned whistleblower who recently testified before Congress. The files, first reported on by The Wall Street Journal, were included in disclosures made to the Securities and Exchange Commission by Haugen and provided to Congress in redacted form by her legal counsel. The redacted versions were reviewed by a consortium of news organizations, including WIRED.

They reveal Facebook’s own employees agonizing over the fact that, in their view, its central algorithms reward outrage, hatred, and viral clickbait, while its content moderation systems are deeply inadequate. The documents are also full of thoughtful suggestions for how to correct those flaws. Which means there is good news for Facebook and Mark Zuckerberg in the files, if they choose to see it: a blueprint for how to fix some of the company’s biggest problems.

Try Making Your Products Good

Quite a few Facebook employees seem to agree that the company has failed to pursue any positive value besides user engagement. Sometimes this is framed explicitly, as in a document published in 2020 with the title “When User-Engagement ≠ User-Value.” After explaining why keeping users glued to Facebook or Instagram isn’t always good for them, the author considers possible solutions. “A strong quality culture probably helps,” they conclude, in what reads as dry understatement. The author goes on to cite the example of WhatsApp—which Facebook acquired in 2014—as a company that built a successful platform not by testing features to optimize for engagement but by making “all their product decisions just based on their perceptions of user quality.”

In other files, researchers only indirectly acknowledge how little attention company leadership pays to factors besides engagement when making product changes. It’s treated as so obvious a fact that it doesn’t require explanation—not just by the authors, but in the extensive discussions with fellow employees that follow in the comments section. In a discussion thread on one 2019 internal post, someone suggests that “if a product change, whether it’s promoting virality, or increasing personalization, or whatever else, increases the severe harms we’re able to measure (known misinfo, predicted hate, etc.), we should think twice about whether that’s actually a good change to make.” In another 2019 post, a researcher describes an experiment in which Facebook’s recommendations sent a dummy account in India “into a sea of polarizing, nationalistic messages,” including graphic violence and photos of dead bodies. The author wonders, “Would it be valuable for product teams to engage in something like an ‘integrity review’ in product launches (eg think of all the worst/most likely negative impacts that could result from new products/features and mitigate)?”

It’s almost cliché at this point to accuse Facebook of ignoring the impact its products have on users and society. The observation hits a little harder, however, when it comes from inside the company.

Facebook rejects the allegation. “At the heart of these stories is a premise which is false,” said spokesperson Kevin McAlister in an email. “Yes, we're a business and we make profit, but the idea that we do so at the expense of people’s safety or well-being misunderstands where our own commercial interests lie.”

On the other hand, the company recently fessed up to the precise criticism from the 2019 documents. “In the past, we didn’t address safety and security challenges early enough in the product development process,” it said in a September 2021 blog post. “Instead, we made improvements reactively in response to a specific abuse. But we have fundamentally changed that approach. Today, we embed teams focusing specifically on safety and security issues directly into product development teams, allowing us to address these issues during our product development process, not after it.” McAlister pointed to Live Audio Rooms, introduced this year, as an example of a product rolled out under this process.

If that’s true, it’s a good thing. Similar claims made by Facebook over the years, however, haven’t always withstood scrutiny. If the company is serious about its new approach, it will need to internalize a few more lessons.

Your AI Can’t Fix Everything

On Facebook and Instagram, the value of a given post, group, or page is mainly determined by how likely you are to stare at, Like, comment on, or share it. The higher that probability, the more the platform will recommend that content to you and feature it in your feed.
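
In rough pseudocode, that ranking logic boils down to something like the sketch below. The function names, the set of actions, and the idea of simply summing probabilities are illustrative, not Facebook’s actual model; the point is only that the score is driven by predicted engagement.

```python
# Illustrative sketch of engagement-based ranking; not Facebook's actual code.
# `predict` stands in for a hypothetical model that returns the probability
# that a given user takes a given action on a given post.

def engagement_score(user, post, predict) -> float:
    """Score a post by summing the predicted probability of each engagement action."""
    actions = ["view", "like", "comment", "share"]
    return sum(predict(user, post, action) for action in actions)

def rank_feed(user, candidate_posts, predict):
    """Order a user's candidate posts by predicted engagement, highest first."""
    return sorted(candidate_posts,
                  key=lambda post: engagement_score(user, post, predict),
                  reverse=True)
```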

But what gets people’s attention is disproportionately what enrages or misleads them. This helps explain why low-quality, outrage-baiting, hyper-partisan publishers do so well on the platform. One of the internal documents, from September 2020, notes that “low integrity Pages” get most of their followers through News Feed recommendations. Another recounts a 2019 experiment in which Facebook researchers created a dummy account, named Carol, and had it follow Donald Trump and a few conservative publishers. Within days the platform was encouraging Carol to join QAnon groups.

Facebook is aware of these dynamics. Zuckerberg himself explained in 2018 that content gets more engagement as it gets closer to breaking the platform’s rules. But rather than reconsidering the wisdom of optimizing for engagement, Facebook’s answer has mostly been to deploy a mix of human reviewers and machine learning to find the bad stuff and remove or demote it. Its AI tools are widely considered world-class; a February blog post by chief technology officer Mike Schroepfer claimed that, for the last three months of 2020, “97% of hate speech taken down from Facebook was spotted by our automated systems before any human flagged it.”

The internal documents, however, paint a grimmer picture. A presentation from April 2020 notes that Facebook removals were reducing the overall prevalence of graphic violence by about 19 percent, nudity and pornography by about 17 percent, and hate speech by about 1 percent. A file from March 2021, previously reported by The Wall Street Journal, is even more pessimistic. In it, company researchers estimate “that we may action as little as 3-5% of hate and ~0.6% of [violence and incitement] on Facebook, despite being the best in the world at it.”

Those stats don’t tell the whole story; there are ways to reduce exposure to bad content besides takedowns and demotions. Facebook argues, fairly, that overall prevalence of offending content is more important than the takedown rate, and says it has reduced hate speech by 50 percent over the past three quarters. That claim is of course impossible to verify. Either way, the internal documents make clear that some of the company’s public statements exaggerate how well it polices its platforms.

Taken together, the internal documents suggest that Facebook’s core approach—ranking content based on engagement, then tuning other knobs to filter out various categories after the fact—simply doesn’t work very well.

One promising alternative would be to focus on what several of the internal documents refer to as “content-agnostic” changes. This is an approach that looks for patterns associated with harmful content, then makes changes to crack down on those patterns—rather than trying to scan posts to find the offending content itself. A simple example is Twitter prompting users to read an article before retweeting it. Twitter doesn’t need to know what the article is about; it just needs to know if you’ve clicked the link before sharing it. (Facebook is testing a version of this feature.) Unlike policies that target a certain category, like politics or health information, a content-agnostic change applies equally to all users and posts.
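
In code terms, a check like Twitter’s needs nothing more than a click log. The sketch below uses a hypothetical data model to make the point; nothing in it depends on what the article says.

```python
# Toy sketch of a content-agnostic friction check, in the spirit of Twitter's
# "read before you retweet" prompt. The data model here is hypothetical.

def should_prompt_before_sharing(user_id: str, post: dict, click_log: set) -> bool:
    """Prompt the user if the post contains a link they never opened.

    The check never inspects the article itself; it only asks whether the
    (user, url) pair appears in the click log.
    """
    link_url = post.get("link_url")
    if link_url is None:
        return False  # no link, nothing to prompt about
    return (user_id, link_url) not in click_log

# Usage: the share flow would call this and, if it returns True, show a
# "want to read this first?" interstitial instead of sharing immediately.
```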

Facebook already does this to some extent. In 2018, it changed the algorithm to prioritize “meaningful social interactions,” or MSI, between users. Optimizing for MSI meant, for example, that posts that generated a lot of comments—or, for that matter, angry-face emoji—would get a big boost in the News Feed. As The Wall Street Journal reported in September, the shift had dreadful side effects: It provided major boosts to sensationalist and outrage-provoking pages and posts, which in turn raised the pressure on publishers and politicians to cater to the lowest common denominator. (This isn’t shocking when you consider what kinds of posts generate the liveliest comment threads.) It was, in other words, a bad content-agnostic change. Particularly problematic was a component called “downstream MSI,” which refers not to how engaging you will find a post but how likely you are to reshare it so that other people engage with it. Researchers found that, for whatever reason, the downstream MSI metric “was contributing hugely to misinfo.”
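
The incentive is easier to see in rough pseudocode. The weights and event names below are invented for illustration; what the reporting establishes is simply that comments, reactions, and reshares counted for a lot, and that a post’s “downstream” engagement counted too.

```python
# Illustrative MSI-style scoring. The weights are made up to show the shape
# of the incentive, not Facebook's real values.

ILLUSTRATIVE_WEIGHTS = {"like": 1.0, "reaction": 3.0, "comment": 5.0, "reshare": 5.0}

def msi_score(direct_events: dict, downstream_events: dict,
              downstream_weight: float = 1.0) -> float:
    """Combine a post's direct engagement with its downstream engagement,
    meaning the engagement it earns after other people reshare it."""
    direct = sum(ILLUSTRATIVE_WEIGHTS.get(k, 0.0) * v for k, v in direct_events.items())
    downstream = sum(ILLUSTRATIVE_WEIGHTS.get(k, 0.0) * v for k, v in downstream_events.items())
    return direct + downstream_weight * downstream

# Setting downstream_weight to zero for, say, civic or health content is the
# kind of targeted change the documents describe.
```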

To Facebook’s credit, documents show that in 2020, the company tried to tackle the problem. It stopped ranking by downstream MSI for civic- and health-related content, a move that researchers predicted would cut down on “civic misinformation” by 30 to 50 percent. More recently, McAlister said, it turned the downstream models off “for crime and tragedy content, in some at-risk regions (e.g. Afghanistan), and for content about COVID.” But the company could still go further. According to an April 2020 document, a member of the integrity team pitched Zuckerberg on jettisoning downstream MSI across the board, but the CEO was loath to “go broad” with the change “if there was a material tradeoff with MSI impact,” meaning a loss in engagement.

An even bigger red flag than downstream MSI, according to the documents, is what the company calls “deep reshares”: posts that end up in your feed after someone shares them, and then someone else shares that person’s share, and so on. One January 2020 research paper reports that “deep reshares of photos and links are 4 times as likely to be misinformation, compared to photos and links seen generally.” Another internal report, from 2019, describes an experiment suggesting that disabling deep reshares would be twice as effective against photo-based misinformation as disabling downstream MSI. But Facebook only turns down recommendations of deep reshares “sparingly,” McAlister said, because the technique is “so blunt, and reduces positive and completely benign speech alongside potentially inflammatory or violent rhetoric.”
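
One way to picture a content-agnostic brake on deep reshares is as a simple depth check. The sketch below assumes a hypothetical get_parent lookup and an arbitrary threshold and penalty, since the documents describe the idea but not an exact cutoff.

```python
# Sketch of a reshare-depth demotion. The threshold and penalty are
# assumptions for illustration only.

def reshare_depth(post, get_parent) -> int:
    """Count how many share-of-a-share hops separate this post from the original."""
    depth = 0
    current = post
    while (parent := get_parent(current)) is not None:
        depth += 1
        current = parent
    return depth

def demotion_factor(post, get_parent, threshold: int = 2, penalty: float = 0.5) -> float:
    """Downrank posts that arrive via long reshare chains, regardless of what they say."""
    return penalty if reshare_depth(post, get_parent) >= threshold else 1.0
```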

Here’s one last simple example. It turns out that a tiny subset of users account for a huge share of group invitations, sending out hundreds or thousands per day. Groups are a key source of what appears in the News Feed, making them an efficient way to spread conspiracy theories or incitements to violence. One 2021 document notes that 0.3 percent of members of Stop the Steal groups, which were dedicated to the false claim that the 2020 election was rigged against Donald Trump, made 30 percent of invitations. These super-inviters, on whom BuzzFeed News has previously reported, had other signs of spammy behavior, including having half of their friend requests rejected. Capping how many invites and friend requests any one user can send out would make it harder for a movement like that to go viral before Facebook can intervene.
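
A cap like that is about as content-agnostic as interventions get: it is ordinary rate limiting. Here is a toy sketch, with a made-up daily limit, since the article suggests capping invites but not a specific number.

```python
# Sketch of a per-user daily cap on group invitations. The cap value is a
# placeholder, not a number from the documents.
from collections import defaultdict
from datetime import date

DAILY_INVITE_CAP = 50  # hypothetical limit

_invites_sent_today = defaultdict(int)  # (user_id, date) -> count

def try_send_invite(user_id: str) -> bool:
    """Allow the invite only if the sender is under today's cap."""
    key = (user_id, date.today())
    if _invites_sent_today[key] >= DAILY_INVITE_CAP:
        return False  # rate limited: block the invite or queue it for review
    _invites_sent_today[key] += 1
    return True
```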

It’s possible that even more radical reform is needed, though, to truly fix the feed. In her congressional testimony, Haugen argued for replacing engagement-based ranking with pure reverse chronology: the top of your feed would simply be the latest post made by someone you follow.

An October 2019 post by Jeff Allen, then a Facebook data scientist, argues for yet another approach: ranking content according to quality. That may sound improbable, but as Allen points out in the white paper, which he posted right before leaving the company and which was first reported by MIT Tech Review, it’s already the basis of the world’s most successful recommendation algorithm: Google Search. Google conquered the internet because its PageRank algorithm sorted web sites not just by the crude metric of how often the search terms appeared, but whether other prominent sites linked to them—a content-agnostic metric of reliability. Today, Google uses PageRank along with other quality metrics to rank search results.
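
For the curious, here is a bare-bones version of the PageRank idea over a toy link graph. The damping factor and iteration count are textbook defaults, not anything specific to Google or Facebook; the point is that a page’s score comes from who links to it, not from what it says.

```python
# Minimal PageRank power iteration over a toy link graph, to illustrate the
# content-agnostic idea behind quality-based ranking.

def pagerank(links: dict, damping: float = 0.85, iterations: int = 50) -> dict:
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# Example: the page that other pages link to ("c") ends up with the highest score.
print(pagerank({"a": ["c"], "b": ["c"], "c": ["a"]}))
```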

Facebook already crawls the web and assigns quality scores to websites, something known as Graph Authority, which the company incorporates into rankings in certain cases. Allen suggests that Graph Authority should replace engagement as the main basis of recommendations. In his post, he posits that this would obliterate the problem of sketchy publishers devoted to gaming Facebook, rather than investing in good content. An algorithm optimized for trustworthiness or quality would not allow the fake-news story “Pope Francis Shocks World, Endorses Donald Trump for President” to rack up millions of views, as it did in 2016. It would kneecap the teeming industry of pages that post unoriginal memes, which according to one 2019 internal estimate accounted at the time for as much as 35 to 40 percent of Facebook page views within News Feed. And it would provide a boost to more respected, higher quality news organizations, who sure could use it. (Disclosure: I’m confident this includes WIRED.)

These sorts of changes to Facebook’s ranking algorithm would address problematic content on the supply side, not the demand side. They would largely side-step claims of censorship, though not entirely. (Republican politicians often accuse Google of biased search results.) And because they don’t depend on language analysis, they should scale more easily than AI content moderation to markets outside the US. Which brings us to the next lesson from Facebook’s employees.

Stop Treating People in Developing Countries as Second-Class Users 

The most important findings in the internal documents concern Facebook’s lack of investment in safety and integrity in much of the non-English-speaking world, where the vast majority of its users live. While Facebook often claims that more than 90 percent of hate speech removals occur proactively—that is, through its AI systems—that figure was only 0.2 percent in Afghanistan as of January 2021, according to an internal report. The picture is similar in other developing countries, where Facebook appears unwilling to spend what it takes to build adequate language models.

Arabic is the third-most spoken language among Facebook users, yet an internal report notes that, at least as of 2020, the company didn’t even employ content reviewers fluent in some of its major dialects. Another report from the same year includes the almost unbelievable finding that, for Arabic-speaking users, Facebook was incorrectly enforcing its policies against terrorism content 77 percent of the time. As much criticism as Facebook’s integrity efforts get in the US, those efforts barely exist across much of the world. Facebook disputes this conclusion—“Our track record shows that we crack down on abuse outside the US with the same intensity that we apply in the US,” McAlister said—but does not deny the underlying facts. As my colleague Tom Simonite observes, hundreds of millions of users are “effectively second class citizens of the world’s largest social network.”

Hopefully the latest round of public scrutiny will push Facebook to break that trend. A company that promises to “connect the world” has no business being in a market where it can’t offer the baseline of quality control that it offers its American users.

Protect Content Policy From Political Considerations

Outside observers have complained for years that Facebook bases decisions not on consistent principles but on pressure from powerful political figures. A steady stream of news stories over the years has documented key moments when the company’s leaders pulled the plug on proposals to penalize low-quality publishers after outcry from Republicans.

This turns out to be an internal criticism as well. “The Communications and Public Policy teams are routinely asked for input on decisions regarding (a) enforcing existing content policy, (b) drafting new policy and (c) designing algorithms,” wrote one data scientist in December 2020, shortly before leaving the company. “Those teams often block changes when they see that they could harm powerful political actors.” (Facebook denies this charge, arguing that public policy is only one of many teams that have a say in content enforcement decisions.)

Another document from September 2020 lays out a detailed approach for how to fix the problem. Titled “A Firewall for Content Policy,” it first identifies the organizational structure that its author believes leads to so much mischief. The head of content policy reports to the head of global policy, who reports to the head of global affairs, who reports to chief operating officer Sheryl Sandberg, who, finally, reports to Zuckerberg. As a result, “External-facing teams, especially the Public Policy team, are routinely given power in decision-making about content enforcement and the design of content policy.” Choices about what to demote, what to remove, and how to tweak the algorithm must pass three layers of management concerned with keeping powerful political figures happy before reaching Zuckerberg.

The researcher sketches a simple alternative. First, the content policy team could instead report to another unit, like the central product services division, which in turn reports directly to Zuckerberg. That would cut down on the number of politically motivated veto points. It also would place responsibility for overriding the content team more squarely with Zuckerberg.

Second, the author notes that under the status quo, when a certain decision, like a takedown or demotion, gets “escalated,” groups including public policy get to take part. A simple fix would be to keep those escalation decisions within content policy. Similarly, the employee argues to limit the public policy division’s involvement in developing content rules and in making changes to the algorithm. “Public Policy could have input on general principles used to evaluate changes, but those principles would have to be written, and the interpretation of the principles would be solely the responsibility of Content Policy.” It’s a bit like pro sports: NFL team owners vote on rule changes during the offseason, but they’re not down on the field telling the refs when to blow the whistle.

The employee makes a strong case that implementing a firewall “would help with pressing problems for Facebook.” Clearly it would be far from a cure-all. Google and Twitter, the note points out, have versions of a firewall, with “trust and safety” teams separated from public policy. Those companies aren’t immune to scandal. But only Facebook has been consistently shown to bend its own rules and stated principles to appease powerful political actors.

Take Your Own Research More Seriously

Facebook is a big company. Not every internal research finding or employee suggestion is worth listening to. Still, the frustration expressed in the leaked files strongly suggests that Facebook’s leaders have been erring too heavily in the opposite direction.

The release of these documents has obviously created a massive headache for the company. But it also reveals that Facebook, to its credit, employs some very thoughtful people with good ideas. The company should consider listening to them more.

