Facebook Groups Are Destroying America

They’re built for privacy and community—and that’s what makes them dangerous.

The Covid-19 “infodemic” has laid bare how vulnerable the United States is to disinformation. The country is less than five months away from the 2020 presidential election, and Americans by the thousands are buying into conspiracy theories about vaccines containing microchips and wondering about the healing powers of hair dryers. Where does all this come from? Let’s not be too distracted by a fear of rumormonger bots on the rampage or divisive ads purchased with Russian rubles. As two of the leading researchers in this field, we’re much more worried about Facebook groups pumping out vast amounts of false information to like-minded members.

For the past several years, Facebook users have been seeing more content from “friends and family” and less from brands and media outlets. As part of the platform’s “pivot to privacy” after the 2016 election, groups have been promoted as trusted spaces that create communities around shared interests. “Many people prefer the intimacy of communicating one-on-one or with just a few friends,” explained Mark Zuckerberg in a 2019 blog post. “People are more cautious of having a permanent record of what they've shared.”

But as our research shows, those same features—privacy and community—are often exploited by bad actors, foreign and domestic, to spread false information and conspiracies. Dynamics in groups often mirror those of peer-to-peer messaging apps: People share, spread, and receive information directly to and from their closest contacts, whom they typically see as reliable sources. To make things easier for those looking to stoke political division, groups provide a menu of potential targets organized by issue and even location; bad actors can create fake profiles or personas tailored to the interests of the audiences they intend to infiltrate. This allows them to seed their own content in a group and also to repurpose its content for use on other platforms.

This was already evident in 2018, when associates of Shiva Ayyadurai, an independent candidate for US Senate, used groups as part of their astroturfing campaign to boost his online support. Today, Ayyadurai is one of the most dangerous vectors of health disinformation, racking up millions of engagements on posts that rail against vaccinations, claim Anthony Fauci is a member of the “deep state,” and instruct followers to point blow dryers down their throats to kill the coronavirus.

Groups continue to be used for political disinformation. The “Obamagate” conspiracy theory has yet to be defined in clear terms, even by its own adherents, and yet our analysis of Facebook groups shows that the false narrative that the Obama administration illegally spied on people associated with the Trump campaign is being fueled and nurtured there. Related memes and links to fringe right-wing websites have been shared millions of times on Facebook in the past few months. These narratives are boosted by users who coordinate their activities across networks of groups and pages managed by a small handful of people. At least nine coordinated pages and two groups—with more than 3 million likes and 71,000 members, respectively—are set up to drive traffic to five “news” websites that promote right-wing clickbait and conspiracy theories. In May, those five websites published more than 50 posts promoting Obamagate, which were then shared in the linked pro-Trump groups and pages. The revolving door of disinformation continues to spin.

A recent Wall Street Journal investigation revealed that Facebook had been aware of groups’ polarizing tendencies since 2016. And despite the company’s recent efforts to crack down on misinformation related to Covid-19, the Groups feature continues to serve as a vector for lies. As we wrote this story, if you joined the Alternative Health Science News group, for example, Facebook would then recommend, based on your interests, that you join a group called Sheep No More, which uses Pepe the Frog, a white supremacist symbol, in its header, as well as Q-Anon Patriots, a forum for believers in the crackpot QAnon conspiracy theory. As protests in response to the death of George Floyd spread across the country, members of these groups claimed that Floyd and the police involved were “crisis actors” following a script. In recent days, Facebook stopped providing suggestions on the landing pages of certain groups, but they still populate the Discover tab, where Facebook recommends content to users based on their recent engagement and activity.

To mitigate these problems, Facebook should radically increase transparency around the ownership, management, and membership of groups. Yes, privacy was the point, but users need the tools to understand the provenance of the information they consume. First, Facebook needs to vet more carefully how groups and pages are categorized on the site, ensuring that their labels accurately reflect the content shared in that community. In the current system, a page owner chooses its category—Cuisine, Just for Fun, and so forth—which then shows up in that community’s search results and on its front page. Most groups, meanwhile, are categorized as General, which assists neither users nor Facebook’s threat investigation teams in understanding each one’s purpose. In both cases, owners can be misleading: A large page that shares exclusively divisive or political content might be categorized as a Personal Blog, so as to escape the added scrutiny that might come with a more explicitly political tag. Such descriptors should be more specific and be applied more consistently. That’s especially important for groups or pages with tens of thousands of members or followers. Facebook should also make it easier to spot when multiple groups and pages are managed by the same accounts. That way the average user can easily identify concerted efforts to flood the platform with particular content.

As The Wall Street Journal found, Facebook’s own research showed that algorithmic group recommendations and Related Pages suggestions lead users further into conspiracy-land. Those features should be eliminated entirely. If users had to search out groups for themselves, they might be a bit more thoughtful about which they joined. Finally, very large groups should not be afforded the same level of privacy as family groups where Grandma shares recipes and cousin Sally posts baby pics. If a group exceeds a certain membership threshold—say, 5,000 people—it should be automatically set to public, so that any Facebook user can participate. That way, these groups can be observed by the researchers and journalists on whom Facebook now relies to police its platform.

A few months ago, during the 2020 Super Bowl, Facebook ran an ad lauding the power of groups to bring people together. The 60-second spot was called “Ready to Rock?” But unless Facebook stops bad actors from taking advantage of the community that groups provide, perhaps we should be ready for an earthquake.


WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here. Submit an op-ed at opinion@wired.com.