Meta

An Update on Combating Hate and Dangerous Organizations

Update on October 17, 2022 at 5:00 AM PT:

As we draw closer to the 2022 US midterm elections, we wanted to share a bit more about our ongoing work to keep our platforms safe. As part of this continued work, in 2022 we have disrupted and taken down six distinct US-based neo-Nazi and white supremacist networks trying to use our platforms. Some were known groups like the KKK and the Proud Boys, but about 25% of these disruptions targeted new groups and new adversarial behavior.

One of the reasons we’ve been able to make this progress is a tactic we call Strategic Network Disruptions (SND). Since 2020, when our team began using this strategy to target a banned group’s presence across our apps, we’ve continued to grow and evolve the tactic. Though the majority of our actions against Dangerous Organizations and Individuals come from routine content enforcement, there are times when, facing an especially determined or adversarial group, content enforcement alone is not enough. That’s why we use this targeted, precise SND approach to do three things in particular:

  1. Disrupt an entire network at once, making it more difficult for them to return,
  2. Send a clear message to the group that we are aware of their presence and they are not welcome on our platforms, and 
  3. Continue to learn and improve our systems as we study new ways these groups are attempting to evade detection and enforcement. 

This is a rapidly changing and highly adversarial space, with networks that are often aggressive in their attempts to find new ways of sneaking back onto our platforms. SNDs allow us to be nimble and respond to new threats that automation or content moderation alone might miss. We are committed to this important work, and look forward to sharing more information as our approaches develop.

Update on May 14, 2020 at 3:00 PM PT:

We’re sharing a statement to mark the one-year anniversary of the Christchurch Call to Action.

“One year ago we committed to the Christchurch Call to Action in response to the March 15, 2019 attack in Christchurch, New Zealand. Since then, our companies have continued our shared work to prevent terrorists and violent extremists from abusing digital platforms. Through the Global Internet Forum to Counter Terrorism, we created a protocol to jointly combat the spread of terrorist content following an attack, established a growing advisory committee of government and international organizations to help inform our work, launched working groups to take new proactive steps to address terrorist and violent extremist content online, and continued to support academic research on how terrorists use digital platforms. This work is only the beginning, and we are committed to making meaningful progress in the years to come working closely with governments, international organizations, and the multiple stakeholders who support the Christchurch Call.”  – Amazon, Facebook, Google, Microsoft, and Twitter

Originally published on May 12, 2020 at 9:00 AM PT:

Last year, we committed to being more transparent about how we combat hate and dangerous organizations on our apps. Today, we’re sharing an update on these efforts, including new enforcement metrics and details on the tactics we’ve developed to disrupt this behavior. This month also marks the one-year anniversary of the Christchurch Call to Action, which brought government and industry leaders together, led by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron, to curb the spread of terrorism and extremism online. We continue delivering on our commitments to the nine-point action plan for digital platforms, including publishing regular reports on detection and removal of terrorist or violent extremist content on our apps.

Removing Organized Hate and Dangerous Organizations

We ban groups that proclaim a hateful and violent mission from having a presence on our apps, and we remove content that represents, praises or supports them. To date, we’ve identified a range of groups across the globe as hate organizations because they engage in coordinated violence against others based on characteristics such as religion, race, ethnicity or national origin, and we routinely evaluate groups and individuals to determine whether they violate our policies.

Three years ago, we started to develop a playbook and a series of automated techniques to detect content related to terrorist organizations such as ISIS, al Qaeda and their affiliates. We’ve since expanded these techniques to detect and remove content related to other terrorist groups and organized hate. We’re now able to detect text embedded in images and videos in order to understand its full context, and we’ve built media matching technology to find content that’s identical or near-identical to photos, videos, text and even audio that we’ve already removed. When we started detecting hate organizations, we focused on groups that posed the greatest threat of violence at the time; we’ve since expanded to detect more groups tied to different hate-based and violent extremist ideologies and operating in different languages. In addition to building new tools, we’ve also adapted strategies from our counterterrorism work, such as leveraging off-platform signals to identify dangerous content on Facebook and implementing procedures to audit the accuracy of our AI’s decisions over time.
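To make the media matching idea concrete, here is a minimal sketch of near-duplicate image matching built on a simple perceptual ("average") hash: an image is reduced to a small binary fingerprint, and fingerprints are compared by Hamming distance so that copies still match after resizing or recompression. This is an illustration only, not Meta's production system; the hash size, match threshold and function names are assumptions.

```python
# Illustrative near-duplicate image matching via a perceptual "average"
# hash. Not Meta's actual media-matching system; HASH_SIZE and
# MATCH_THRESHOLD are assumptions chosen for this example.
from PIL import Image

HASH_SIZE = 8          # 8x8 grid -> 64-bit fingerprint
MATCH_THRESHOLD = 10   # max Hamming distance to count as "near-identical"

def average_hash(path: str) -> int:
    """Downscale to grayscale 8x8, then set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_banned(path: str, banned_hashes: set[int]) -> bool:
    """True if the image is identical or near-identical to removed media."""
    h = average_hash(path)
    return any(hamming(h, banned) <= MATCH_THRESHOLD for banned in banned_hashes)
```

The same fingerprint-and-compare principle extends to the video, text and audio matching described above, though production systems use far more robust fingerprints than this toy hash.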

Sharing Metrics

In the first three months of 2020, we removed about 4.7 million pieces of content on Facebook connected to organized hate, an increase of over 3 million pieces of content from the previous quarter. Additionally, we increased our proactive detection rate for organized hate, the percentage of content we remove that we detect before someone reports it to us, from 89.6% in Q4 2019 to 96.7% in Q1 2020. We saw similar progress on Instagram, where our proactive detection rate increased from 57.6% to 68.9%, and we removed 175,000 pieces of content in Q1 2020, up from 139,800 the previous quarter. In addition, since we built this system for detecting organized hate content based on what we learned from detecting terrorist content, we’ve been able to identify where content related to one problem is distinct from the other. For example, we’ve seen that violations for organized hate are more likely to involve memes, while terrorist propaganda is often dispersed from a central media arm of the organization and includes formalized branding. Identifying these patterns helps us continue to fine-tune the systems for detecting organized hate and terrorist content.
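As a worked example of the metric itself: the proactive detection rate is simply the proactively detected share of total removals. The snippet below back-calculates a hypothetical proactive count consistent with the Facebook Q1 2020 figures reported above; the exact count is an assumption for illustration.

```python
# Hypothetical illustration of the proactive detection rate: the share
# of removed content that automated systems found before any user
# report. The proactive count below is invented to match the reported
# ~4.7M removals and 96.7% rate; it is not a published figure.
def proactive_rate(detected_before_report: int, total_removed: int) -> float:
    return detected_before_report / total_removed

print(f"{proactive_rate(4_545_000, 4_700_000):.1%}")  # -> 96.7%
```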

Evolving Our Enforcement Tactics

We remain vigilant in learning and combating new ways people may try to abuse our apps. We work with external partners to get the latest intelligence about adversarial behavior across the internet, and we commission independent research from academics and experts. We also learn from different teams at Facebook about successful methods in combating other forms of abuse that can be applied to this work.

Over the last six months, we worked with colleagues on our Threat Intelligence team to adapt their strategy for combating coordinated inauthentic behavior into a new tactic that targets a banned group’s presence across our apps. We do this by identifying signals that indicate a banned organization has a presence, and then proactively investigating associated accounts, Pages and Groups before removing them all at once. Once we remove their presence, we work to identify attempts by the group to come back onto our platforms. We’re also studying how dangerous organizations initially bypassed our detection, as well as how they attempt to return to Facebook after we remove their accounts, in order to strengthen our enforcement and create new barriers to keep them off our apps.
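Mechanically, this kind of one-pass disruption can be pictured as expanding outward from seed signals over an association graph, then acting on the entire connected set at once. Below is a minimal sketch assuming a simple adjacency-list graph; the entity IDs, edge semantics and function names are hypothetical, not Meta's internal tooling.

```python
# Sketch of a one-pass network disruption: starting from seed entities
# flagged by on- or off-platform signals, walk the association graph
# (shared admins, cross-posting, links, etc.) to collect the banned
# group's full presence, then remove it all in a single action.
# Entity IDs and edge semantics here are hypothetical illustrations.
from collections import deque

def collect_network(seeds: set[str], associations: dict[str, set[str]]) -> set[str]:
    """Breadth-first expansion from seed entities over association edges."""
    network = set(seeds)
    queue = deque(seeds)
    while queue:
        entity = queue.popleft()
        for neighbor in associations.get(entity, set()):
            if neighbor not in network:
                network.add(neighbor)
                queue.append(neighbor)
    return network

# Hypothetical graph: one Page linked to accounts and a Group.
associations = {
    "page:alpha": {"account:a1", "group:g1"},
    "account:a1": {"account:a2"},
    "group:g1": {"account:a3"},
}
to_remove = collect_network({"page:alpha"}, associations)
print(sorted(to_remove))  # all five associated entities, removed together
```

Acting on the whole set in a single pass, rather than piecemeal as individual reports arrive, is what makes it harder for a network to regroup from surviving accounts.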

We’ll continue working to disrupt and remove dangerous organizations from our platform and we’ll share how we’re doing at enforcing our policies and combating new ways people may try to abuse our apps.


