On 4 April 2019, the Australian Parliament passed the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill, which came into effect two days later on 6 April. The law was a response to the livestreaming on Facebook of the 15 March Christchurch mosque shootings by one of the perpetrators, a white nationalist. It requires technology platforms to remove any such material from their servers quickly or risk large fines and jail time for executives.

THE PROCESS

The core intent of this legislation has broad support from the tech industry. No one in the technology sector wants to be complicit in terrorism, and there is strong recognition that this sort of content should not be on the internet. Nevertheless, there are well-founded concerns about the way in which the legislation was created. The Abhorrent Violent Material (AVM) legislation was conceived, drafted and rushed through the Senate with bipartisan support in less than three weeks. Some Senators had not seen the legislation until the day they were asked to vote on it.

In addition to coming in the aftermath of an international tragedy, the vote was held roughly six weeks ahead of a closely fought federal election. This amplified themes of national security, protection from terrorism and a backlash against global technology companies. The politically charged atmosphere led to an odd process of scrutiny for the bill, most clearly encapsulated by Labor’s Shadow Attorney-General Mark Dreyfus saying the bill was “poorly drafted and will not achieve its intended purpose”, yet also that “Labor will not stand in the way of this bill, despite our concerns”.

As the digital age provides newer and richer products for distributing and sharing information, there will no doubt be malicious users who look to capitalise on those tools. Combating these challenges requires an open dialogue between technology platforms and the government, and any attempt to tackle thorny regulatory questions must be thoughtful and deliberate. A three-week process from inception to law for legislation like this is manifestly inadequate. Parliamentarians thought so too: Greens Leader Richard Di Natale said, “We're being asked to ram through this legislation at a rate of knots”.

THE LEGISLATION

Fundamentally, the legislation requires that an online platform remove “abhorrent violent material” “expeditiously”. Penalties are severe: fines of up to 10% of company revenue and up to three years in jail for executives who don’t act fast enough. ‘Abhorrent violent material’ covers material depicting acts of terrorism, murder, attempted murder, rape and kidnapping. However, the word ‘expeditiously’ is not defined in the Act, leaving a court to decide in individual cases based on the context of the event.

Attorney-General Christian Porter: “What is expeditious will always depend on the circumstances. But using the Christchurch example, I can’t precisely say what would have been the point of time at which it would have been reasonable for them to understand that this was livestreaming on their site or playable on their site and they should have removed it, but what I can say and I think every Australian would agree, it was totally unreasonable that it should exist on their site for well over an hour without them taking any action whatsoever.”

It is worth noting that no one had complained about the Christchurch video until 29 minutes after the stream had started, and that Facebook went on to remove 1.5 million copies of it from its platform within 24 hours.

The key issue here is how much technology platforms know about the content hosted on their servers, and how much they should know.

The scale at which these businesses operate is hard to get one’s head around. Facebook sees roughly 350 million photos uploaded to its platform per day. YouTube receives the equivalent of roughly 2.5 million 10-minute videos per day. In 2019, Twitch.tv averaged 55,000 channels livestreaming at any given time, with 4.5 million streams per month.

Did the legislators who passed this law understand the difficulty of finding such a rare needle in such a large haystack? Will a court? Will judges and jurors understand the technical difficulty of distinguishing, at that scale, between real livestreamed violence and staged violence from a movie or a video game?

How should Australian startups attempt to build products that rely on user-submitted content in light of this legislation, given they have limited resources to review and react to unexpected abhorrent material?

In an era where there are already issues with how technology companies handle user data, how comfortable are we in now requiring them to unilaterally review and police content created or shared by their users?

The law may also have unintended consequences for the public interest. While there are protections for ‘professional journalists’ in broadcasting offending material, there isn’t a broader public interest defence for most citizens. It might, for example, be legal for a journalist to broadcast some or all of an offending video, but illegal for anyone else to then forward, share or host their piece on social media.

THE OCTOBER 9 HALLE ATTACK

On 9 October 2019, a second high-profile attack was livestreamed on the internet, this time on Amazon’s game-streaming platform, Twitch.tv. The attack was viewed by fewer than five live viewers until a link to the automatically saved recording was shared on third-party message boards. The video was removed from the site within hours of its creation. In total, 2,533 people viewed the video, with the vast majority (2,200) accessing it in the 30 minutes before it was taken down.

Key to its lack of dissemination was Twitch’s action in quickly hashing the video, creating a digital ‘fingerprint’ of the content, and sharing it with the Global Internet Forum to Counter Terrorism (GIFCT). This allowed members of the Forum, including Facebook, YouTube and Twitter, to block the video from their platforms, removing any chance for it to spread virally.
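To illustrate the idea, the sketch below shows a highly simplified version of hash-sharing in Python. It is a minimal sketch under stated assumptions: the blocklist, function names and the use of an exact SHA-256 hash are illustrative only, and in practice GIFCT members reportedly exchange perceptual hashes that can also match re-encoded or lightly edited copies, which an exact hash cannot.

```python
import hashlib

# Hypothetical shared blocklist of fingerprints exchanged between platforms.
# Real hash-sharing schemes use perceptual hashes that survive re-encoding;
# SHA-256 is used here only to keep the sketch simple and self-contained.
shared_blocklist = set()


def fingerprint(video_bytes: bytes) -> str:
    """Compute a digital 'fingerprint' of an uploaded video."""
    return hashlib.sha256(video_bytes).hexdigest()


def register_abhorrent_video(video_bytes: bytes) -> None:
    """Called by the platform that first identifies the offending material."""
    shared_blocklist.add(fingerprint(video_bytes))


def should_block_upload(video_bytes: bytes) -> bool:
    """Called by every participating platform before accepting an upload."""
    return fingerprint(video_bytes) in shared_blocklist


# Example: the originating platform flags the recording...
original = b"<original recording bytes>"
register_abhorrent_video(original)

# ...and an attempted re-upload of the identical file is rejected elsewhere.
assert should_block_upload(original)

# An exact hash misses even trivially altered copies, which is why
# perceptual hashing is preferred in real deployments.
assert not should_block_upload(original + b"\x00")
```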

The incident made it clear that technology platforms are actively improving their ability to counter terrorist abuse of their products, and that they have achieved considerable success in a short amount of time. Yet it also showed that the blunt instrument of the AVM legislation may have a disproportionate impact on smaller players that lack the knowledge or resources to participate in such initiatives.

Director of Tech Against Terrorism Adam Hadley said, “The Big Tech companies have a close relationship with one another... What is more difficult is coordinating activity across hundreds of smaller platforms.”

This incident also highlighted the problematic subjectivity of this legislation. Australia’s eSafety Commissioner, Julie Inman-Grant, said “the protocols taken by social media companies to thwart the spread of this material appear to be working effectively” but this was at odds with the Communications Minister, Paul Fletcher, whose office said the government expected Amazon “to provide answers about what happened, and solutions as to how they will prevent their technology from being exploited in this way”.

For this particular piece of legislation, a dialogue between the technology sector and government has been ongoing for some time. The best outcome would be for this dialogue to result in a formal review. Eliminating the technological means of spreading abhorrent material is certainly a shared goal, and an informed collection of stakeholders should be able to achieve it while also minimising the issues with the current Act.