Facebook has announced changes to the rules governing its live-streaming feature, following the New Zealand terror attacks. The announcement comes after the shootings at two Christchurch mosques in March, which were broadcast live on the platform by the lone gunman.
Facebook said it was introducing a “one-strike” policy for use of its live-streaming feature, restricting access for people who have faced disciplinary action for breaching its most serious rules anywhere on the site.
It added that first-time offenders will be suspended from using Facebook Live for set periods of time, and that it is widening the range of offences that will qualify for one-strike suspensions.
Political leaders from Europe, Canada and the Middle East will meet senior representatives from firms such as Facebook, Twitter and Google, where they will issue a joint “call to action” to collaborate on “transparent, specific measures” to eradicate terrorist material.
The pledge reads: “The dissemination of such content online has adverse impacts on the human rights of the victims, on our collective security and on people all over the world.”
UK Prime Minister Theresa May will call for governments and technology companies to work together to prevent terrorist content from being shared online.
Mrs May said that Facebook had to remove 1.5 million copies of the video from its platform, describing it as a “stark reminder that we need to do more”.
Speaking ahead of the summit, Mrs May said that the tactic of live-streaming attacks “exposed gaps in our response and the need to keep pace with rapidly changing technological developments”.
“My message to governments and internet companies in Paris will be that we must work together and harness our combined technical abilities to stop any sharing of hateful content of this kind,” she said.
Facebook stated that anyone sharing “violating content,” such as a statement from a terrorist group, without context would be blocked from using Facebook Live for a set period, such as 30 days.
It has also pledged $7.5 million (£5.8 million) towards new research partnerships to automatically detect banned users, after some users managed to sidestep existing detection systems by uploading edited footage of the Christchurch attacks.
“Our goal is to minimize risk of abuse of Live while enabling people to use Live in a positive way every day,” the statement said.
Mrs May is expected to raise concerns about the threat of far-right political groups online, as well as call for an international approach to regulation. She is also due to say that technology companies responded effectively to her call to fight propaganda from the Islamic State group, following the attacks at Westminster, Manchester Arena and London Bridge in 2017.
Last year, IS propaganda was at its lowest level online since 2015 as a result of the coordinated worldwide response.
“That shows us what is possible,” Mrs May is expected to say. “Our work here must continue in order to keep pace with the threat. But we also need to confront the rise of the far right online.”
The UK has also recently published its own plans to impose a legal duty of care on online companies, which would be enforced by a new independent regulator.
Story by Emily Clark
Featured Photo Credit: CustomerMagnetism