In recent years, there have been growing concerns about how the spread of lies and disinformation online is breathing new life into a wide range of regressive and bizarre ideas.
There are the laughable flat Earthers, and the so-funny-it's-sad proponents of urine therapy to cure Covid. But there has also been a resurgence of the far more sinister forces of white nationalism and fascism that threaten democratic society itself.
These are societal issues, but they are also very much advertising issues. For too long, ad dollars have flowed into the pockets of people publishing disinformation and hate content online because of the nature of programmatic ad buying, in which ad tech places ads in front of eyeballs rather than through direct buys from a publisher.
Claire Atkin and Nandini Jammi are saying enough is enough. Last fall, they launched the Check My Ads Institute as a watchdog exposing some of the many real examples of advertisers inadvertently supporting disinformation thanks to advertising executed through programmatic buys and delivered through the incomprehensibly complex ad tech system. “We’re ready to rip out the beating heart of the disinformation economy,” was the headline on their announcement in late October.
Both Atkin (left in top photo), a Canadian based in Vancouver, and Jammi worked in marketing before this. Jammi was one of the co-founders of the Sleeping Giants social media campaign, which grew famous in 2016 for calling out advertisers running ads on the right-wing website Breitbart (consequently, she is well known, and fairly unpopular, in some of the internet's less progressive circles; see below). Atkin had been studying international election observation at the Global Campus of Human Rights in Italy when the pair connected on Twitter in 2019.
“[Nandini] was frustrated by the fact that advertising was still funding racism and bigotry, and was confused by that, as was I,” said Atkin. “When we met it was like, ‘Oh my God, you’re the only other person in the industry who’s obsessed with this problem.'”
Together, they dove deep into how ad technology was allowing the problem to continue, launching the "Branded" newsletter in early 2020 to share stories exposing clear examples of ad tech serving ads to "hate sites, disinformation sites, and worse." That led to the for-profit consultancy Check My Ads, helping brands and media buyers make sense of ad tech. And now comes Check My Ads Institute, described as "the non-profit watchdog here to cut disinformation off from its lifeline: ads."
We asked Atkin to talk us through why she and Jammi felt they had to do this now, and about the underlying problem of ad-supported hate content online.
Why take this step to launch Check My Ads Institute?
“The fact is, there are strong ties between disinformation networks and ad tech. We know that propaganda and disinformation campaigns operate through hidden networks and middlemen within the ad tech industry. So they’re collecting ad dollars from unsuspecting advertisers. And we know now that advertisers don’t want to be there. We know that advertisers don’t want to be funding disinformation, or bigotry or racism.”
Marketers control the budgets, so why can’t they stop it?
“It’s hard to check your ads; it’s quite obfuscated. And we see ad tech companies saying over and over, ‘Well, we’ll take care of this. We’ve got this.’ And then they provide these controls, but actually they’re just useless dials on a dashboard. They’re not actually telling you what’s going on inside your campaigns.
“So I think there are a lot of empty promises made within the ad tech industry, and we are uncovering them. One at a time.”
You’ve pointed to the infamous Lumascape (right) as illustrative of the complexity of the ad-tech industry. Why is that complexity a problem?
“The ad-tech supply chain is a magpie’s nest… There are so many decisions that we made in the last 20 years that got us to a place where we can’t track our data and our dollars. And I think that’s a mistake. I think we have to consider marketing in a much more pragmatic way. I think it’s been intentionally made abstract.”
And what’s the impact of that?
“The fact is that news has been systematically defunded for the last 20 years by ad tech, and we know that it’s having a devastating effect on this pillar of democracy.
“In the last 10 years we’ve lost 30,000 journalism jobs [in the U.S.]. In 2019, brand safety technology blocked $3.2 billion [in ad revenue] from news outlets in just four countries. The whole thing is illogical to me.”
So it not only defunds professional journalism, but also funds bad information.
“Correct. Disinformation outlets are thriving. We know that up to $2.6 billion flows towards disinformation outlets every year. And we know from our own research that conspiracy theories and hate speech are blocked by brand safety companies at lower rates than the news media.”
Why is that?
“Because of the way brand safety technology is constructed—keyword block lists, semantic analysis. The fact is, it just doesn’t work. It blocks the news, and it doesn’t block the brand unsafe things.”
Do some marketers look at this as an unfortunate collateral cost of the system?
“We’ve spoken to over 200 companies in the last year, and not a single person said ‘It’s worth it to us to get the reach to be on [sites promoting] racism, bigotry, xenophobia, and disinformation.’ There are other places to reach people.”
But why haven’t they done more about it before now?
“I don’t think they’re doing nothing. I think they’re trying, and ad tech is making it hard for them to make a difference. Ad tech is not transparent about where our campaigns go. That’s why we started Check My Ads. We believe that we have to sunlight what is happening within the digital supply chain.”
What do you think about the role of media agencies?
“I understand that the margins for media agencies are already tight, and so they’re in a tough spot. But they have to do more to check the ads, and they have to do more to enforce transparency on behalf of advertisers. And lots of media agencies come to us and run training with us.
“We think they should have brand safety officers who understand the problem, who are not trained by ad tech. They have to understand how to identify hate speech and disinformation, and how to stop funding it. It’s a question of taking this problem seriously.”
What about big associations?
“Well, they represent the interests of the ad tech industry. While they can set certain collective agreements to self-regulate, they don’t have the architecture to be a watchdog. That’s not in their DNA. It’s not what they’re there for. They’ve told us again and again that they don’t do enforcement.
“They have a role, but their role is not this. I think there’s a gaping hole within the industry for a watchdog, and that’s why we started.”
What are the tangible ways you can effect change?
“We do it in all the ways we’ve been doing it. Every time we release a Branded newsletter that calls out companies for having nefarious connections, or for saying one thing (that they’ll keep advertisers’ ads brand safe) and then doing the opposite, we see change within 24 hours most of the time.
“And the fact is that we’ve taken millions out of the disinformation economy already. We just know that there are billions to take out, so that’s why we want this watchdog.”
Check My Ads Institute is asking for donations from ‘CheckMates.’ What can you do with that funding?
“We already run a lot of research, and what we’re calling sunlight campaigns, and we expect to do a lot more of that with more people.
“Any supporting funds that we get from CheckMates, foundations or donations will all go towards the sunlight campaigns, so that we can get bigger, broader, more powerful investigations out into the world, and hold companies accountable for their relationships with disinformation.”
***
We spoke with Atkin late last year. Since then, Check My Ads Institute has begun a new campaign to defund some of the loudest voices behind last year’s insurrection in Washington. And Atkin herself became a target of the right-wing Canadian website Post Millennial. It ran an article in late December under the headline: “Nandini Jammi’s ‘Check my Ads’ partner offers to buy sexual material for minors.”
What Atkin had actually done was tweet out that she would be willing to buy books that have been banned by school boards for content featuring “sexual or gender identity as a prominent subject.”
Teenagers! If you want to read a book that is banned in your school library because it features "sexual or gender identity as a prominent subject," DM me and I will buy and order it to a safe location for you. https://t.co/6F8rdS6S2D
— Claire Atkin (@catthekin) December 29, 2021
I said books featuring "sexual identity or gender identity as a prominent subject" not sexual material, but the Post Millennial still wrote a hit piece implying I'm some sort of pedophile.
I stand with LGTBQ teens. https://t.co/I74eE8JflL pic.twitter.com/sWaRuaW0sw
— Claire Atkin (@catthekin) December 31, 2021
We asked Atkin to comment:
“Post Millennial is an outlet that publishes in bad faith for the purpose of, what seems to me, political chaos. They have incited harassment and abuse against me personally. And that alone should make it brand unsafe.”
When we visited the website this morning, Google-served ads appeared for Kia and Trent University (see below):
“They’ve lost ad exchanges for this, and if Google is still there, like, what the fuck? They should not be there. It is clearly brand unsafe… so why are they monetizing things that lead to real world harassment and abuse? I cannot tell you how much this has affected my life. I had to talk to my landlord about security. I had to talk to law enforcement about the kinds of emails I get. I am talking to lawyers… it makes me consider my personal safety. That is not brand safe.”
(Asked specifically about Post Millennial, Google provided the following statement: “We have strict monetization policies for publishers, prohibiting content that promotes demonstrably false information. When we find content that violates these policies we take swift action and our enforcement can be as targeted as removing ads from individual pages with violating content.”)
The Post Millennial article speaks to the impact you’re having:
“Yes, I think it has. I think that we really shake them up. And I think that they are threatened by our work and our campaigns.”