New Zealand’s Attempt to Censor the Internet as a Means to Limit Hate Crimes Could Become a Global Trend
Censorship is not an idea that citizens of the free world often concern themselves with, but the Internet’s ability to spread information at a rapid pace is causing the New Zealand government to consider censoring the Internet within its own borders.
The proposal, which is still under scrutiny, came after the Christchurch mosque massacre that occurred in New Zealand in mid-March. The terrorist who attacked the mosque filmed the entire event on Facebook Live and shared a link to his 73-page manifesto, filled with white supremacist beliefs, with the knowledge and intent that it would spread. To combat the severity of the situation, New Zealand is looking to limit what its citizens can share via the Internet.
Independently owned broadcasting company Sky News, which has operations in both Australia and New Zealand, broadcast live coverage of the massacre. Afterwards, the 17-minute footage continued to circulate via the Internet, causing outrage, despite New Zealand police’s best efforts to take it down. The problem is that once a video is online, recopied, and reposted several times, it is nearly impossible to remove it from the Internet entirely.
New Zealand’s Prime Minister Jacinda Ardern fully supports censoring extremist content, saying, “We want to maintain the principles of free, open and secure internet, but this isn’t about freedom of expression. This is about preventing violent extremism and terrorism online.”
Ardern intends to be a leader in this movement, and she, along with French President Emmanuel Macron, is hosting a summit in Paris on May 15. France and New Zealand are partnering because New Zealand needs more powerful and influential allies to make a global impact. The proposed legislation would place full responsibility for monitoring content on tech companies and social media sites such as Facebook. The punishment for posting extremist content would fall on both the person who posted it and the hosting site.
The first step of Facebook’s crisis-management protocol is to understand the situation and gather information before any content is removed. Both human moderators and automated systems sift through thousands of reported posts at offices worldwide to maintain a steady, 24-hour watch.
However, the Facebook Live feature, which only launched in 2016, does not have as developed a system and works most efficiently when viewers flag and report inappropriate streams. About 4,000 people watched the live stream of the Christchurch mosque massacre before it was flagged. Facebook’s global escalations team in Singapore worked to remove the videos and links related to the attack, but the process took longer because protocol had to be followed.
The idea of Internet censorship is unpopular because the Internet lets people freely connect with others and share opinions, yet it cannot be denied that it is often misused. Ardern declares, “it’s critical that technology platforms like Facebook are not perverted as a tool for terrorism, and instead become part of a global solution to countering extremism.”