New Zealand advertisers demand better from social media companies

Last week’s horrific mass shooting in Christchurch, New Zealand, has once again raised the issue of advertising-supported social media platforms being used to distribute harmful content.

The shooter livestreamed his murderous rampage on Facebook, and the video was widely shared despite the platforms’ attempts to prevent it.

On Tuesday, the Association of New Zealand Advertisers (ANZA) and the Commercial Communications Council (Comms Council) issued a joint statement calling on the social media companies to do more.

“Advertising funds social media,” reads the statement. “Businesses are already asking if they wish to be associated with social media platforms unable or unwilling to take responsibility for content on those sites. The events in Christchurch raise the question, if the site owners can target consumers with advertising in microseconds, why can’t the same technology be applied to prevent this kind of content being streamed live?”

The statement suggested that advertisers reconsider their social spending, although it stopped short of calling for a boycott.

“ANZA and the Comms Council encourage all advertisers to recognize they have choice where their advertising dollars are spent, and carefully consider, with their agency partners, where their ads appear,” it read.

Comms Council chief executive Paul Head told New Zealand news site Newshub that he hopes the call for action garners international support. “It’s time that this issue was fixed for once and for all, because the concern that we have is that this becomes the new normal,” he said. “What we need to do is get global alignment around this issue, and we are prepared to start those conversations globally.”

Also on Tuesday, the heads of three New Zealand internet service providers sent a letter to the heads of Twitter, Facebook and Google calling for a coordinated response to stop the spread of harmful content.

“We appreciate this is a global issue, however the discussion must start somewhere,” stated the letter. “We must find the right balance between internet freedom and the need to protect New Zealanders, especially the young and vulnerable, from harmful content. Social media companies and hosting platforms that enable the sharing of user generated content with the public have a legal duty of care to protect their users and wider society by preventing the uploading and sharing of content such as this video.”

In an article posted late Monday, the Washington Post explained the challenge that social media companies face when trying to block and remove hateful content. “This failure has highlighted Silicon Valley’s struggles to police platforms that are massively lucrative yet also persistently vulnerable to outside manipulation despite years of promises to do better,” it stated.

Fundamentally, the problem is that the platforms rely on AI to identify copies of harmful content as it is shared, and even small alterations to a video can defeat that matching.

“Those pushing videos of Friday’s attack made small alterations—such as changing the colour tones or length—of the shooting video originally live-streamed by the alleged killer himself through his Facebook page,” the article stated. “Such tricks often were enough to evade detection by artificial-intelligence systems designed by some of the world’s most technologically advanced companies to block such content.”
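The Post’s description points at a well-known weakness of exact content matching. As a rough illustration only (not a description of Facebook’s actual systems), the minimal Python sketch below shows that changing even a single byte of a file produces a completely different cryptographic hash. This is why platforms fall back on fuzzier perceptual fingerprints and AI classifiers, which tolerate some changes but, as the article notes, can still be fooled by small alterations.

```python
import hashlib

def exact_fingerprint(data: bytes) -> str:
    """Exact-match fingerprint: identical bytes in, identical digest out."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-in for a video file's raw bytes.
original = bytes(range(256)) * 1000

# A re-encode, trim, or colour shift changes at least some bytes;
# flipping a single byte simulates the smallest possible edit.
altered = bytearray(original)
altered[0] ^= 0x01

print(exact_fingerprint(original))         # one digest
print(exact_fingerprint(bytes(altered)))   # a completely different digest

# A blocklist keyed on exact hashes misses the altered copy entirely.
# Platforms therefore use perceptual fingerprints and machine-learning
# classifiers instead, which tolerate small changes but can in turn
# be probed and evaded by determined uploaders.
```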

In a report issued Monday, Facebook said that the video was viewed about 4,000 times in total before it was removed, and that the live broadcast itself was viewed fewer than 200 times. It said that it removed approximately 1.5 million videos of the attack in the first 24 hours, more than 1.2 million of which were blocked at upload. It also said that the first user report on the original video came 29 minutes after the video started, and 12 minutes after the live broadcast ended.

“We continue to work around the clock to prevent this content from appearing on our site, using a combination of technology and people,” the company said in its statement.

David Brown