Facebook users in Ukraine can now post ‘Death to Putin’ after Meta relaxes its rules on hate speech against Russia

On Thursday, Meta relaxed its hate speech policies to permit Facebook and Instagram users in certain countries to call for violence against Russia and its military, as President Vladimir Putin continues the country's war against Ukraine.

In a memo sent to employees and seen by Reuters, Meta said it would also permit some posts calling for the death of Putin or Belarusian President Alexander Lukashenko, one of Putin's closest foreign allies, who has aided Russia's war in Ukraine.

“As a result of the Russian invasion of Ukraine we have temporarily made allowances for forms of political expression that would normally violate our rules like violent speech such as ‘death to the Russian invaders.’ We still won’t allow credible calls for violence against Russian civilians,” a Meta spokesperson told Reuters in a statement.

Reuters reports Meta will still block posts calling for the death of Putin or Lukashenko if the messages include two indicators of credibility, such as details on how or where the killing would take place. Meta didn't respond to Fortune's request for comment.

The countries where Meta now allows users to call for Putin's death are mostly Russia's neighbors: Armenia, Azerbaijan, Estonia, Georgia, Hungary, Latvia, Lithuania, Poland, Romania, Russia, Slovakia, and Ukraine.

According to The Intercept, Meta is also temporarily permitting users to post messages in support of the Azov Battalion, a Ukrainian neo-Nazi paramilitary group, so long as the posts explicitly praise the far-right militia for resisting Russia's invasion.

Meta's decision to loosen its rules on hate speech, which comes after Russia blocked access to Facebook in retaliation for the platform's alleged censorship of Russian state media, may be a first for the company. Facebook has more often been accused of enforcing its policies too rigidly against marginalized groups.

During the Black Lives Matter protests that swept the U.S. in 2020, for instance, activists claimed Facebook’s policies censored posts calling out racism and white supremacy. Facebook said any instances of such censorship were “mistakes, and they were certainly not intentional.”

Conversely, Facebook has at other times failed to protect marginalized groups by not doing enough to curb hate speech.

Last year, a trove of internal documents leaked by a whistleblower and dubbed the Facebook Papers showed how hate speech ran rampant on Facebook in India, particularly when it targeted the nation's Muslim minority. According to the documents, some Facebook staff were concerned that the company wasn't doing enough to curb calls for violence against Indian Muslims.

In 2018, after the Burmese military led a genocide against the country's Rohingya Muslim minority, Facebook admitted it had failed to prevent hate speech from circulating in Myanmar. That failure allowed the platform to be used to "foment division and incite offline violence" against the Rohingya, the company said.
