Facebook is dealing with a widespread advertising boycott this month, organized by a handful of groups who say the world’s biggest social media company is making money by spreading hate content. As part of its response to criticism, Facebook’s senior executives Mark Zuckerberg and Sheryl Sandberg met this week with some of the boycott’s organizers, and the company also published findings from a civil rights audit. In this column—which originally appeared in AdAge and has been updated for The Message’s (mostly) Canadian audience—Nick Clegg, Facebook’s VP, global affairs and communications, responds directly to some of the accusations at the core of the #StopHateForProfit boycott.
When society is divided and tensions run high, those divisions play out on social media. Platforms like Facebook hold up a mirror to society—with more than three billion people using Facebook’s apps every month, everything that is good, bad and ugly in our societies will find expression on our platform.
That puts a big responsibility on Facebook and other social media companies to decide where to draw the line over what content is acceptable.
Facebook has faced intense criticism in recent weeks, both for its decision to leave up controversial posts by U.S. President Trump and because many people, including companies that advertise on our platform, have misgivings about our approach to tackling hate speech. I want to be unambiguous: Facebook does not profit from hate.
Billions of people use Facebook and Instagram because they have good experiences—they don’t want to see hateful content, nor do advertisers. There is no incentive for us to do anything but remove it.
More than 100 billion messages are sent on our services every day. Of those billions of interactions, only a tiny fraction are hateful. When we find hateful posts on Facebook and Instagram, we take a zero-tolerance approach and remove them. When content falls short of being classified as hate speech—or of violating our other policies aimed at preventing harm or voter suppression—we err on the side of free expression.
Unfortunately, zero tolerance doesn’t mean zero incidents. We invest billions of dollars each year in the people and technology that keep our platform safe. We have tripled—to more than 35,000—the number of people working on safety and security; we continually refine our policies based on direct feedback from experts and the civil rights community to address new risks as they emerge; and we have pioneered artificial intelligence technology to remove hateful content at scale.
A recent European Commission report found that Facebook assessed 95.7% of hate speech reports in less than 24 hours, faster than YouTube and Twitter. Last month, we reported that we find nearly 90% of the hate speech we remove before someone reports it—up from 24% a little more than two years ago. We took action against 9.6 million pieces of content in the first quarter of 2020—up from 5.7 million in the previous quarter. This comes in addition to our ongoing work against organized hate. In 2019 we worked with Canadian experts to remove certain Canadian hate figures and organizations from our platforms and we continue to routinely assess and remove entities that spread hate.
Focusing on hate speech and other types of harmful content on social media is necessary and understandable, but it is worth remembering that the vast majority of those billions of conversations are positive.
Look at what happened when the coronavirus pandemic took hold. Billions of people used Facebook to stay connected with friends and family when they were physically apart.
Millions of people came together, forming local groups to help the most vulnerable in their communities. Canadian performing artists, who saw their livelihoods erased almost overnight as performance venues closed, found new audiences online through #CanadaPerforms, bringing joy to Canadians in a dark time. And when businesses had to close their doors to the public, for many Facebook was their lifeline. More than 160 million businesses use Facebook’s free tools to reach customers, and many used these tools to keep their businesses afloat—saving people’s jobs.
Importantly, Facebook helped people to get accurate, authoritative health information. We directed more than two billion people on Facebook and Instagram to information from the World Health Organization and other public health authorities, including the Public Health Agency of Canada, with more than 350 million people clicking through.
It is worth remembering that when the darkest things are happening in our society, social media gives people a means to shine a light on important issues and show the world what is happening. We’ve seen this around the world on countless occasions and we’re seeing it right now with the racial justice movement.
We may never be able to prevent hate from appearing on Facebook entirely, but we are getting better at stopping it all the time.
Nick Clegg is the vice-president, global affairs and communications, Facebook.