It’s now two months since the Christchurch terror attacks.
Social media livestreaming and distribution of footage from this event sparked rapid activity aimed at restricting the spread of hateful and violent content online.
Moving forward, it’s vital we create truly effective approaches to tackling this issue, in ways that are legally enforceable and do not unnecessarily impinge on freedom of speech.
Is New Zealand Prime Minister Jacinda Ardern’s “Christchurch Call” capable of driving such change? The meeting takes place later today as a satellite event attached to the G7 summit in Paris, and will include state leaders and representatives from digital media companies such as Facebook. (Today, in advance of the meeting, Facebook announced some new control mechanisms for online content.)
The leaders would do well to pay attention to key aspects of regulation already identified by international policy experts working to target digital operations across the world. Effective legal regulation of the internet must be clear, proportional (balanced for all involved), accountable (able to be monitored and checked) and offer procedural guarantees (open to appeals).
Here come the reactive politicians
Being seen to lead is clearly an important political aspect of managing online content.
Jacinda Ardern will run the Christchurch Call event together with French President Emmanuel Macron – who is already “leading” work on this matter.
Back in March, Australia’s Prime Minister Scott Morrison was seen to be taking the “lead” to place social media governance on the agenda for the June summit of the G20 in Japan.
With so much political capital to be gained, perhaps we will now see political action creating real change. That is of course good.
But one may wonder why these leaders did not pursue this issue at this level before the horrible attack in Christchurch.
Why does it take a tragedy like this to spark political action? Civil society groups, academics, industry, media and pretty much the rest of society have been discussing these concerns for years.
The risk of hasty, excessive and uncoordinated responses
The fact that livestreaming and video of the terrorist attack in Christchurch spread to the degree it did is obviously a problem. And it is a problem that needs to be addressed as a matter of urgency.
But as part of this we must avoid hasty “solutions” that will only mask the issues in the long term, and potentially cause other problems such as excessive blocking of internet content.
To be effective, laws must be drafted in a way that makes compliance realistic.
We must also remember this is an international problem, in the sense that most internet platforms are based outside Australia. It requires international coordination and collaboration.
Perfection is not an option
Anyone thinking of designing a framework to address the online distribution of terrorist content and other forms of hate speech must realise that perfection is not attainable. The mechanisms available to us are imperfect.
Experts such as Queensland University of Technology’s Nicolas Suzor point out that we currently do not have technologies that can reliably distinguish between illegal terrorist content such as the Christchurch livestream on the one hand, and lawful news reporting on the other.
And frankly, whatever legal formulations we adopt to delineate legal versus illegal content, we will always end up with grey zones. Legal definitions simply cannot be precise enough to avoid this.
Technology is ineffective at identifying hate, and laws are necessarily imprecise; these issues place social media platforms in an uncomfortable position. They need to devote considerable human resources to monitoring content. As only the biggest companies can afford to do so, smaller companies simply cannot compete.
Given the enormous amount of content uploaded every second, it also means these companies need to decide instantly whether content is legal or illegal. These sorts of decisions may take many months for courts to make.
We may also question the degree to which we want to entrust social media companies to determine what is accessible online.
What a regulatory framework needs to include
The leading multi-stakeholder discussion regarding online content restrictions is carried out by the Internet and Jurisdiction Policy Network based in Paris.
For the past couple of years, it has worked with industry, academia, civil society, international organisations and various countries on devising operational principles for online content restrictions.
While several countries, such as the United Kingdom, Canada, Switzerland and Germany, have been represented in the discussions, neither New Zealand nor Australia has actively participated in this work.
At the end of April 2019, the Secretariat of the Internet and Jurisdiction Policy Network released an important report. That document provides a blueprint intended to help public and private decision-makers take into account the full range of relevant issues when developing and implementing responsible frameworks, rules and practices to address abuses in full respect of international human rights principles.
Four important issues
The Internet and Jurisdiction Policy Network report emphasises the need for:
- framework clarity – clearly worded rules outlining rights and responsibilities, understood in the same way by all concerned parties
- proportionality – decisions must take into account and aim to reconcile, or at least balance, the potentially competing rights of all relevant people or groups
- procedural guarantees – the need for accessible, speedy, clearly documented and publicly available appeal mechanisms
- accountability – the need for ongoing monitoring enabling appropriate oversight of content restrictions to increase trust in the process.
It remains to be seen how well Ardern’s Christchurch Call incorporates these important considerations.
If it does successfully navigate the difficulties involved, Ardern and Macron’s meeting has the potential to spark further international collaborative initiatives helping ensure a better online environment for us all.
Dan Jerker B. Svantesson was an ARC Future Fellow (project number FT120100583) during 2012-2016. During this period he received funding from the Australian Research Council for a project dealing with the topic of this piece. Professor Svantesson is currently commissioned to write a Global Status Report – dealing with, amongst other things, the issue of this piece – on behalf of the Internet & Jurisdiction Policy Network. The views expressed herein are those of the author alone.