
Facebook to reexamine how livestream videos are flagged after Christchurch shooting

The company says it may expand the categories that get an ‘accelerated review’

March 21, 2019 at 9:11 a.m. EDT
People attend a vigil held at Forsyth Barr Stadium on March 21 in Dunedin, New Zealand. Fifty people were killed and dozens were injured in Christchurch on March 15 when a gunman opened fire at the Al Noor and Linwood mosques. (Dianne Manson/Getty Images)

The first user report alerting Facebook to the grisly video of the New Zealand terrorist attack came in 29 minutes after the broadcast began and 12 minutes after it ended. Had it been flagged while the feed was live, Facebook said Thursday, the social network might have moved faster to remove it.

Now Facebook says it will reexamine how it reacts to live and recently aired videos.

To alert first responders to an emergency as quickly as possible, the company says it prioritizes user reports about live streams for “accelerated review.”

“We do this because when a video is still live, if there is real-world harm we have a better chance to alert first responders and try to get help on the ground,” the company said in an update of its response to the Christchurch attack.


Facebook said that last year it applied this expedited review process to recently ended live broadcasts as well. That meant users who saw a potentially violent or abusive live stream after it aired could quickly alert Facebook moderators. But Facebook said the process covered only videos flagged for suicide. Other dangerous events, including the March 15 attack on two mosques that left 50 people dead, were not covered under the expedited review process.

Facebook said this may change.

“We are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review,” the company said.


Facebook and other social media platforms have come under criticism for their roles in enabling the viral spread of the mosque gunman’s video. The grisly 17-minute broadcast was viewed 4,000 times before the company took it down.

Facebook later added the video to an internal list of banned material and said it began blocking cloned versions almost immediately. The company said it removed 1.5 million videos of the shooting within the first 24 hours.

The company said the vast majority of posts, pictures and videos it removes from its network are proactively detected by its artificial intelligence systems. But officials acknowledged that “this particular video did not trigger our automatic detection systems.”

Specific types of content, such as nudity, terrorist propaganda and graphic violence, have been successfully limited on the social network through automated filters, Facebook said. But the effectiveness of those filters depends on volume and repeated exposure to such content.

Facebook said its systems would need to be exposed to large volumes of similar data to automatically detect the horrific imagery seen in the New Zealand video, but such events are “thankfully rare.”

Facebook also pointed to the additional challenge of potentially flagging innocuous content that resembles offending video. For instance, “if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.”

Facebook said it is reviewing whether it can deploy its AI systems to detect such videos and how to process reports from human users faster.