New Zealand's biggest telcos have penned an open letter to the leaders of social media and digital platform giants Google, Facebook and Twitter, calling for content regulation in the wake of the Christchurch attack.
The letter addresses the tragedy last Friday which left 50 dead, with the gunman live streaming the attack using Facebook Live.
Vodafone NZ, Spark and 2degrees have worked together to suspend access to web sites hosting video footage taken by the gunman.
The letter, signed by Vodafone CEO Jason Paris, Spark MD Simon Moutter and 2degrees CEO Stewart Sherrif, urges the social media giants to regulate how "harmful content" is shared.
"Content sharing platforms have a duty of care to proactively monitor for harmful content, act expeditiously to remove content which is flagged to them as illegal and ensure that such material – once identified – cannot be re-uploaded," the letter says.
"Although we recognise the speed with which social network companies sought to remove Friday’s video once they were made aware of it, this was still a response to material that was rapidly spreading globally and should never have been made available online.
"We believe society has the right to expect companies such as yours to take more responsibility for the content on their platforms."
While the telcos understand this is a global issue, they say the conversation has to start somewhere.
They say social media companies and the businesses engaging with the platforms must strike the right balance between internet freedom and the need to protect New Zealanders, especially "the young and vulnerable".
"We call on Facebook, Twitter and Google, whose platforms carry so much content, to be a part of an urgent discussion at an industry and New Zealand Government level on an enduring solution to this issue," the letter says.
READ THE FULL LETTER:
You may be aware that on the afternoon of Friday 15 March, three of New Zealand’s largest broadband providers, Vodafone NZ, Spark and 2degrees, took the unprecedented step to jointly identify and suspend access to web sites that were hosting video footage taken by the gunman related to the horrific terrorism incident in Christchurch.
As key industry players, we believed this extraordinary step was the right thing to do in such extreme and tragic circumstances. Other New Zealand broadband providers have also taken steps to restrict availability of this content, although they may be taking a different approach technically.
We also accept it is impossible as internet service providers to prevent completely access to this material. But hopefully we have made it more difficult for this content to be viewed and shared - reducing the risk our customers may inadvertently be exposed to it and limiting the publicity the gunman was clearly seeking.
We acknowledge that in some circumstances access to legitimate content may have been prevented, and that this raises questions about censorship. For that we apologise to our customers. This is all the more reason why an urgent and broader discussion is required.
Internet service providers are the ambulance at the bottom of the cliff, with blunt tools involving the blocking of sites after the fact. The greatest challenge is how to prevent this sort of material being uploaded and shared on social media platforms and forums.
We call on Facebook, Twitter and Google, whose platforms carry so much content, to be a part of an urgent discussion at an industry and New Zealand Government level on an enduring solution to this issue.
We appreciate this is a global issue, however the discussion must start somewhere. We must find the right balance between internet freedom and the need to protect New Zealanders, especially the young and vulnerable, from harmful content. Social media companies and hosting platforms that enable the sharing of user generated content with the public have a legal duty of care to protect their users and wider society by preventing the uploading and sharing of content such as this video.
Although we recognise the speed with which social network companies sought to remove Friday’s video once they were made aware of it, this was still a response to material that was rapidly spreading globally and should never have been made available online. We believe society has the right to expect companies such as yours to take more responsibility for the content on their platforms.
Content sharing platforms have a duty of care to proactively monitor for harmful content, act expeditiously to remove content which is flagged to them as illegal and ensure that such material – once identified – cannot be re-uploaded.
Technology can be a powerful force for good. The very same platforms that were used to share the video were also used to mobilise outpourings of support. But more needs to be done to prevent horrific content being uploaded. Already there are AI techniques that we believe can be used to identify content such as this video, in the same way that copyright infringements can be identified. These must be prioritised as a matter of urgency.
For the most serious types of content, such as terrorist content, more onerous requirements should apply, such as proposed in Europe, including take down within a specified period, proactive measures and fines for failure to do so. Consumers have the right to be protected whether using services funded by money or data.
Now is the time for this conversation to be had, and we call on all of you to join us at the table and be part of the solution.