
How Does Facebook Measure Fake Accounts?

This post is part of our Hard Questions series, which addresses the impact of our products on society.

By Alex Schultz, VP of Analytics

We’re regularly asked lots of questions about the fake account numbers in our Community Standards Enforcement Report (CSER) and SEC filings. With the increase in both fake account removals and prevalence in the latest report, we thought now would be a good time to give more detail about how we measure fake accounts. We are also opening up even more fully to third parties, including on our fake account numbers, via the Data Transparency Advisory Group (DTAG). We know it’s important to have independent verification of our methodology and our work.

We believe fake accounts are measured correctly within the limitations of our measurement systems (which we disclose in our CSER guide and SEC filings). That said, although reporting fake accounts is an industry standard — and something widely asked of us — it may be a bad way to look at things:

  • The number for fake accounts actioned is heavily skewed by simplistic attacks, which don’t represent real harm or even a real risk of harm. If an unsophisticated bad actor tries to mount an attack and create a hundred million fake accounts — and we remove them as soon as they are created — that’s one hundred million fake accounts actioned. But no one is exposed to these accounts, so there was no real harm to our users to prevent. Because we remove these accounts so quickly, they are never considered active and we don’t count them as monthly active users.
  • Prevalence is a better way to understand what is happening on the platform because it shows what percentage of active accounts are likely to be fake.
  • However, even then, the prevalence number for fake accounts includes both abusive and user-misclassified accounts (a common example of a user-misclassified account is when a person sets their pet up with a profile, instead of a Page), while only abusive ones cause harm.
  • We focus our enforcement against abusive accounts to both prevent harm and avoid mistakenly taking action on good accounts.

As such:

  • We recommend focusing on the enforcement report metrics related to actual content violations, and
  • We’re evaluating whether there is a better way to report on fake accounts in the future.

Overall, we remain confident that the vast majority of people and activity on Facebook are genuine.

How We Enforce and Measure Fake Accounts

When it comes to abusive fake accounts, our intent is simple: find and remove as many as we can while removing as few authentic accounts as possible. We do this in three distinct ways and include data in the Community Standards Enforcement Report to provide as full a picture as possible of our efforts:

1. Blocking accounts from being created: The best way to fight fake accounts is to stop them from getting onto Facebook in the first place. That’s why we’ve built technology that can detect and block fake accounts even before they are created. Our systems look for a number of different signals that indicate if accounts are being created en masse from one location. A simple example is blocking certain IP addresses altogether so that they can’t access our systems and thus can’t create accounts.

What we measure: The data we include in the report about fake accounts does not include unsuccessful attempts to create fake accounts that we blocked at this stage. This is because we can’t know how many account-creation attempts we’ve blocked when, for example, we block whole IP ranges from even reaching our site. While these efforts aren’t included in the report, we estimate that every day we prevent millions of fake accounts from ever being created using these detection systems.
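
To make this concrete, here is a minimal, hypothetical sketch of IP-range blocking in Python. It is not Facebook’s implementation; the blocked ranges and the is_blocked helper are invented purely for illustration.

    # Hypothetical sketch of blocking signups by IP range; not Facebook's system.
    from ipaddress import ip_address, ip_network

    # Example ranges (documentation address space) standing in for networks
    # previously associated with bulk account creation.
    BLOCKED_NETWORKS = [
        ip_network("203.0.113.0/24"),
        ip_network("198.51.100.0/25"),
    ]

    def is_blocked(client_ip: str) -> bool:
        """Return True if the requesting IP falls inside any blocked range."""
        addr = ip_address(client_ip)
        return any(addr in network for network in BLOCKED_NETWORKS)

    # A signup attempt from a blocked range is refused before an account exists,
    # so it never shows up in the fake accounts actioned metric.
    print(is_blocked("203.0.113.42"))  # True: request dropped outright
    print(is_blocked("192.0.2.7"))     # False: allowed to continue to signup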

2. Removing accounts when they sign up: Our advanced detection systems also look for potential fake accounts as soon as they sign up, by spotting signs of malicious behavior. These systems use a combination of signals, such as patterns of suspicious email addresses, suspicious actions, or other signals previously associated with fake accounts we’ve removed. Most of the accounts we currently remove are blocked within minutes of their creation, before they can do any harm.

What we measure: We include the accounts we disable at this stage in our accounts actioned metric for fake accounts. Changes in our accounts actioned numbers are often the result of unsophisticated attacks like the ones we saw in the last two quarters. These are really easy to spot and can totally dominate our numbers, even though they pose little risk to users. For example, a spammer may try to create 1,000,000 accounts quickly from the same IP address. Our systems will spot this and remove the fake accounts quickly. That number is added to our reported count of accounts actioned, but because the accounts were removed so soon, they were never considered active and thus could not contribute to our estimated prevalence of fake accounts amongst monthly active users, to our publicly stated monthly active user number, or to any ad impressions.
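
As a rough sketch of the bookkeeping described above, removing a bulk attack at sign-up inflates accounts actioned without ever touching monthly active users. The counter names and figures below are hypothetical, not our actual pipeline.

    from dataclasses import dataclass

    @dataclass
    class EnforcementCounters:
        accounts_actioned: int = 0       # every fake account we disable
        monthly_active_users: int = 0    # accounts counted as active

    def remove_at_signup(counters: EnforcementCounters, n_accounts: int) -> None:
        """Accounts removed within minutes count as actioned but never become
        active, so MAU (and therefore prevalence) are unaffected."""
        counters.accounts_actioned += n_accounts

    # Hypothetical starting point of roughly 2.4 billion monthly active users.
    counters = EnforcementCounters(monthly_active_users=2_400_000_000)

    # A spammer scripts 1,000,000 signups from one IP; all are caught at sign-up.
    remove_at_signup(counters, 1_000_000)

    print(counters.accounts_actioned)     # 1000000 added to accounts actioned
    print(counters.monthly_active_users)  # unchanged: the attack never reached MAU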

3. Removing accounts already on Facebook: Some accounts may get past the above two defenses and still make it onto the platform. Often, this is because they don’t readily show signals of being fake or malicious at first, so we give them the benefit of the doubt until they exhibit signs of malicious activity. We find these accounts when our detection systems identify such behavior or if people using Facebook report them to us. We use a number of signals about how the account was created and is being used to determine whether it has a high probability of being fake and disable those that are.

What we measure: The accounts we remove at this stage are also counted in our accounts actioned metric. If these accounts were active on the platform, they are also accounted for in our prevalence metric. Prevalence of fake accounts measures how many active fake accounts exist amongst our monthly active users within a given time period. Of the accounts we remove, both at sign-up and those already on the platform, over 99% are detected proactively, before people report them to us. We report that figure as the proactive rate metric in the report.
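
Both reported rates reduce to simple ratios. The sketch below uses invented figures purely for illustration; only the over-99% proactive rate echoes a number mentioned above.

    def prevalence(active_fake_accounts: int, monthly_active_users: int) -> float:
        """Share of monthly active accounts estimated to be fake."""
        return active_fake_accounts / monthly_active_users

    def proactive_rate(proactively_detected: int, total_actioned: int) -> float:
        """Share of actioned fake accounts found before anyone reported them."""
        return proactively_detected / total_actioned

    # Illustrative quarter: 2.4B monthly active users, 120M of them estimated to
    # be fake, 1.2B fake accounts actioned, 1.19B of those found proactively.
    print(f"{prevalence(120_000_000, 2_400_000_000):.1%}")        # 5.0%
    print(f"{proactive_rate(1_190_000_000, 1_200_000_000):.1%}")  # 99.2%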

We believe that of all the metrics, prevalence of fake accounts is the most important to focus on.

It’s Important to Get the Balance Right

We have two main goals with fake accounts: preventing abuse from fake accounts while also giving people the power to share through authentic accounts. We have to strike the right balance between these two goals.

To prevent abuse, we try to identify accounts that look abusive — but even here it is possible to take action on accounts that don’t deserve it. When someone joins Facebook and sends out lots of friend requests, it can look like they are a spammer when in fact they are just a very social person, for instance users in Brazil who are rapidly adopting social media or teenagers sending a large number of messages a day. Sometimes someone signs up and behaves oddly because they are completely new to the internet and are still figuring it out, like someone in the developing world or a senior just getting online. We believe giving people the power to build community is really important, so for accounts where we aren’t sure if they are abusive, we give them time to prove their intent to us. So, both because we focus on abusive accounts (not user-misclassified pet profiles) and because we give new accounts space to prove their intent, we expect there will always be a small percentage of fake accounts on our services.

Preventing fake accounts is just one way to stop abuse — and we have other protections once content is being produced and people are interacting with these accounts. Also, fake accounts are just one way abuse happens: authentic accounts can be abusive too. As such, to evaluate our work on keeping the community safe overall, we recommend using the full suite of metrics we offer in the CSER, and especially the prevalence metrics. Our work on fake accounts is just one driver of these.

In addition to the questions we get about abusive fake accounts, we also get questions about fake accounts as they relate to advertisers getting a return on their investment with us. Just as we know people will only share on Facebook if they feel safe, we also know advertisers will only continue to advertise on Facebook if they get results — and we’re continuing to deliver returns for them despite the small occurrence of fake accounts.

We remain confident that the vast majority of people and activity on Facebook are genuine. We welcome feedback and scrutiny on fake accounts but are proud of our work to balance protecting the people and advertisers using our services while giving everyone the power to build community on Facebook.


