Christchurch shootings: Social media’s role

Image caption: At least 49 people were killed in the shootings at two mosques in Christchurch (image: Getty Images)

A video apparently filmed by the man charged with murder after at least 49 people were killed and at least 20 wounded in shootings at two mosques in Christchurch has been widely shared on social media.

The incident has once again put a spotlight on how social media platforms deal with such content.

While Facebook, Twitter, Reddit and YouTube raced to remove it, they failed to stop it being shared.

It raises questions about who is sharing the footage and why, but also, perhaps more importantly, about how the platforms are dealing with the threat of far-right extremism.

Many members of the public have taken to Twitter to express shock and anger that the video is still circulating on many platforms, while others have pleaded for people to stop sharing it.

One pointed out: “That is what the terrorist wanted.”

What was shared?

The video, which shows a first-person view of the killings, has been widely circulated.

  • About 10 to 20 minutes before the attack in New Zealand, someone posted on the /pol/ section of 8chan, an anarchic alt-right message board. The post included links to the suspect’s Facebook page, where he stated he would be live-streaming and had published a rambling, hate-filled manifesto
  • Before opening fire, the suspect urged viewers to subscribe to PewDiePie’s YouTube channel. PewDiePie later said on Twitter he was “absolutely sickened having my name uttered by this person”
  • The attacks were live-streamed on Facebook and shared widely on other social media platforms, such as YouTube and Twitter
  • People continue to report seeing the video; despite the firms acting swiftly to remove the original and its copies, new copies are still being uploaded to YouTube faster than the platform can remove them
  • Several Australian media outlets broadcast some of the footage, as did other news organisations around the world
  • Ryan Mac, a BuzzFeed technology reporter, compiled a timeline of where he saw the video, including it being shared from a verified Twitter account with 694,000 followers; he said it had been up for two hours

How have the social media companies responded?

All of the social media firms expressed heartfelt sympathy for the victims of the mass shootings and reiterated that they act quickly to remove inappropriate content.

Facebook said: “New Zealand Police alerted us to a video on Facebook shortly after the live-stream commenced and we removed both the shooter’s Facebook account and the video.

“We’re also removing any praise or support for the crime and the shooter or shooters as soon as we’re aware. We will continue working directly with New Zealand Police as their response and investigation continues.”

And in a tweet, YouTube said “our hearts are broken”, adding it was “working vigilantly” to remove any violent footage.


Historically, however, their approach to combating the threat of far-right extremism has been more chequered.

Twitter removed a number of alt-right accounts in December 2017. Previously, it had removed and then reinstated the account of Richard Spencer, an American white nationalist who popularised the term “alternative right”.

Facebook, which suspended Mr Spencer’s account in April 2018, admitted at the time that it was difficult to distinguish between hate speech and legitimate political speech.

This month, YouTube was accused of being either incompetent or irresponsible in its handling of a video promoting the banned neo-Nazi group National Action.

British MP Yvette Cooper said the video-streaming platform had repeatedly promised to block it, only for it to reappear on the service.

What needs to happen next?

Dr Ciaran Gillespie, a political scientist at Surrey University, thinks the problem goes far deeper than a single video, shocking as that content is.

“It is not just a question about broadcasting a massacre live. The social media platforms raced to close that down and there is not much they can do about it being shared because of the nature of the platform, but the bigger question is the stuff that goes before it,” he said.

As a political researcher, he uses YouTube “a lot” and says it often recommends far-right content to him.

“There is oceans of this content on YouTube and there is no way of estimating how much. YouTube has dealt well with the threat posed by Islamic radicalisation, because this is seen as clearly not legitimate, but the same pressure does not exist to remove far-right content, even though it poses a similar threat.

“There will be more calls for YouTube to stop promoting racist and far-right channels and content.”

‘Legitimate controversy’

His views are echoed by Dr Bharath Ganesh, a researcher at the Oxford Internet Institute.

“Taking down the video is obviously the right thing to do, but social media sites have allowed far-right organisations a place for discussion and there has been no consistent or integrated approach to dealing with it.

“There has been a tendency to err on the side of freedom of speech, even when it is obvious that some people are spreading toxic and violent ideologies.”

Now social media companies need to “take the threat posed by these ideologies much more seriously”, he added.

“It may mean creating a special category for right-wing extremism, recognising that it has global reach and global networks.”

Neither underestimates the enormity of the task, especially as many exponents of far-right views are adept at what Dr Gillespie calls “legitimate controversy”.

“People will discuss the threat posed by Islam and acknowledge it is contentious but point out that it is legitimate to discuss,” he said.

These grey areas will be extremely difficult for the social media firms to tackle, both researchers say, but after the tragedy that unfolded in New Zealand, many believe they must try harder.