The World Economic Forum in Davos is a meeting of the ultra-elite from the worlds of business and politics. Over the week, attendees talk about threats to—and opportunities for—the world order as they see it.
The world’s biggest marketers and media platforms chose Davos this week to reveal a new action plan to stop advertising dollars from flowing into the pockets of “bad actors” posting harmful content online.
The plan, from the Global Alliance for Responsible Media (GARM), is not splashy or sensational, but it does contain practical next steps to address a fundamental flaw of the digital media ecosystem—and therefore a flaw at the core of the modern advertising economy.
While global advertisers are supporting many legitimate online content providers, they are also (sometimes inadvertently) funding people and organizations that post hateful and dangerous content.
The new GARM strategy includes three elements:
- The first step is establishing and using common definitions for 11 types of harmful content—from explicit material to drug use and terrorism. Shared definitions among platforms, agencies and advertisers should bring a shared understanding of what must be blocked.
- Second is developing tools to “create better links” across advertisers, media agencies and platforms. Those tools across the key players will “improve transparency and accuracy,” helping ensure that ad dollars are directed towards non-toxic content.
- Finally, the Alliance will establish shared measurement standards and independent verification processes to evaluate progress.
Formed by the World Federation of Advertisers (WFA) in June, GARM comprises marketers representing $97 billion in global advertising spend.
In making the announcement, the Alliance, which includes Google and Facebook, said that YouTube, Facebook and Instagram removed 620 million pieces of harmful content between July and September alone.
“Because of the platforms’ investments in teams and tools, the majority of this content was removed before consumers actually saw [it],” said GARM. However, another 9.2 million pieces of harmful content were published during those three months—roughly one piece per second.
Aside from the business case to ensure marketing budgets aren’t supporting bad actors, the WFA has also taken the position that advertisers and media platforms have a moral obligation to reduce the amount of toxic content being posted and shared online.
Just days after a mass shooting in New Zealand was live-streamed and then shared on social channels last spring, the WFA said: “All these platforms are funded by advertisers, and as such, those that make them profitable have a moral responsibility to consider more than just the effectiveness and efficiency they provide for brand messages.”
That moral imperative seems present, though slightly less overt, in the new GARM announcement.
“Given that brands fund many of the platforms and content providers, we can ensure society gets the benefits of connectivity without the downsides that have sadly also emerged,” said Stephan Loerke, WFA CEO. “These first steps by the GARM are a significant move in the right direction, which will benefit consumers, society and brands.”
“It’s time to create a responsible media supply chain that is built for the year 2030—one that operates in a way that is safe, efficient, transparent, accountable, and properly moderated for everyone involved, especially for the consumers we serve,” said Marc Pritchard, chief brand officer at P&G.
Both Google and Facebook signed off on the new plan. “Responsibility is our number one priority, and we’ve made significant progress towards building a more sustainable and healthy digital ecosystem for everyone,” said YouTube CEO Susan Wojcicki.
“Safety for people and businesses is the top priority for Facebook, and we’re encouraged by the progress the Alliance and industry have made in these areas,” said Carolyn Everson, VP global marketing solutions at Facebook.
However, a stark reminder of the challenges of cleaning up social media also came Friday with a story about content moderators and post-traumatic stress disorder (PTSD). The Verge’s Casey Newton has reported at length on the gruesome task of reviewing content posted to Facebook and YouTube, and the toll it takes on the mental and physical health of those moderators.
His latest story revealed that staff working for Accenture to review YouTube content were asked to sign a document acknowledging that the job could give them PTSD.
“Working for $18.50 an hour, or about $37,000 a year, employees said they struggled to afford rent and were dealing with severe mental health struggles,” wrote Newton. “The moment they quit Accenture or get fired, they lose access to all mental health services. One former moderator for Google said she was still experiencing symptoms of PTSD two years after leaving.”
Meanwhile, one of advertising’s most famous figures was in Davos this week talking about the pressure on businesses to make big changes to tackle another existential threat: climate change.
Speaking with MSNBC, Publicis Groupe chairman Maurice Levy said businesses today are taking sustainable development very seriously, and that there has been a fundamental shift in how they view the problem. “I feel that everyone is conscious that the old idea of … maximizing profits, maximizing shareholder value, the old Milton Friedman concept is now part of the past,” he said. The problem today, he said, is that investors continue to exert pressure on CEOs to maximize profits rather than make the changes they want to make.
“How to fix the issues of gender equality, diversity, equal pay — there are so many issues facing corporations that it’s a humongous task,” Levy said. “CEOs are perfectly conscious; what they need is to have a little bit of leeway and the support of the investors. I’m absolutely convinced that, at a point in time, they will be joining.”