In case it wasn’t clear, you won’t find Nazi content here

Thoughts on the Substack controversy and brand safety

Angelica Schwartz
5 min read · Jan 30, 2024

I didn’t think I’d ever have to say this explicitly — but just to be clear, you won’t find Nazi content on this blog.

Why are we talking about Nazis?

Back in November, The Atlantic reported that Substack has a “Nazi problem,” based on the presence of “scores” of newsletters with Nazi content on the platform, including at least 16 with “overt Nazi symbols, including the swastika.” In the aftermath of the article, 247 Substack publications signed an open letter challenging the Substack founders on their approach, which the signatories argue monetizes and amplifies Nazi content. The founders have historically championed open discourse and light content moderation policies, with just a few narrow proscriptions (such as inciting violence), a stance they publicly reconfirmed in a statement in December. This has, ironically, generated a lot of discourse of its own, as current Substack newsletters weigh their futures with the platform.

One of the most prominent exits was the popular technology newsletter Platformer, run by Casey Newton, which announced last week that it was leaving Substack. I listened to the latest episode of Casey’s tech podcast, Hard Fork, where he discussed his decision to leave the platform.

While Newton acknowledges that there may be gray areas about where to draw the line on content moderation, a key sticking point for him was Substack's monetization and amplification of this content. For Newton, Substack fails the screenshot test (similar to the front-page test): imagine a reader taking a screenshot of Nazi-related recommendations immediately following a Platformer article. Not a great look for Platformer.

Maintaining brand safety is a crucial priority for businesses. I experienced this firsthand during my time on YouTube ads, when the team faced a brand-safety crisis: major advertisers did not want their ads shown alongside controversial, racy, or adult organic video content. Family-friendly brands like Huggies or Disney are much more sensitive to contentious content, whereas more audacious brands like Red Bull may be more tolerant of risqué contexts. But in both cases, brands want the flexibility to control their own destiny and make that decision for themselves. YouTube advertising and Substack face similar challenges in their tension with brand safety:

(1) Organic recommendations + feeds are at odds with brand control

Recommendations and infinite-scrolling feeds are common techniques for amplifying organic content and keeping users on the platform. Recommendations are a new(-ish) feature for Substack, arriving both through its organic recommendation engine and through new author-driven recommendation tools. While recommendations may help users discover content relevant to them, their unpredictability may be at odds with brand safety.

To help mitigate this at YouTube, we offered advertisers new brand safety controls to prevent their ad impressions from showing alongside undesirable content. Similarly, certain organic videos were demonetized completely, ineligible for any revenue from YouTube ads. I wonder if similar protections could be enabled for paid users of Substack to offer more individual brand control, or to demonetize particularly hateful content on the platform.
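To make this concrete, here's a minimal sketch of what per-brand placement controls could look like. Everything in it is hypothetical: the category names, score ranges, and data shapes are my own invention, not YouTube's or Substack's actual systems. It just illustrates the idea of cataloging content once and letting each brand apply its own exclusions and risk tolerance.

```python
from dataclasses import dataclass, field

# Hypothetical content-safety categories a platform might track.
UNSAFE_CATEGORIES = {"hate", "violence", "adult", "profanity"}

@dataclass
class BrandSafetyProfile:
    """Per-advertiser (or per-author) brand safety settings."""
    excluded_categories: set = field(default_factory=set)
    max_risk_score: float = 0.5  # tolerance for borderline content, 0.0-1.0

@dataclass
class Placement:
    """A slot where an ad (or a recommendation) could appear."""
    content_id: str
    category_scores: dict  # category -> model-predicted risk, 0.0-1.0

def is_safe_placement(profile: BrandSafetyProfile, placement: Placement) -> bool:
    """Allow the placement only if the content clears the brand's thresholds."""
    for category, score in placement.category_scores.items():
        if category in profile.excluded_categories and score > 0.0:
            return False  # the brand opted out of this category entirely
        if score > profile.max_risk_score:
            return False  # too risky for this brand, regardless of category
    return True

# A family-friendly brand excludes every sensitive category and tolerates
# little risk; a bolder brand might exclude only hateful content.
family_brand = BrandSafetyProfile(excluded_categories=set(UNSAFE_CATEGORIES), max_risk_score=0.1)
bold_brand = BrandSafetyProfile(excluded_categories={"hate"}, max_risk_score=0.8)
```

The key design choice here is that the platform catalogs content once, while each brand applies its own thresholds, so a Huggies and a Red Bull can make different calls about the very same video.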

But cataloging content is no easy task either. At YouTube, we built complex machine-learning models, scored and evaluated by human reviewers, to predict content “bad-ness” across a variety of categories (aside: boy, it was a wild time when I had to watch violent or racy content with my coworkers to test our various models… XD). Substack would have to invest in similar cataloging in order to offer any controls over that content.
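As a toy illustration of how those catalog scores might then feed a monetization decision, here's one possible shape for the logic. The thresholds, labels, and reviewer-override rule are invented for this sketch; they're not YouTube's actual pipeline.

```python
from enum import Enum
from typing import Dict, Optional

class MonetizationStatus(Enum):
    FULL = "full"                # eligible for all ad revenue
    LIMITED = "limited"          # shown only to risk-tolerant advertisers
    DEMONETIZED = "demonetized"  # ineligible for ad revenue

# Invented thresholds; a real system would tune these per category against
# human-reviewer ratings rather than use one global cutoff.
LIMITED_THRESHOLD = 0.4
DEMONETIZE_THRESHOLD = 0.8

def monetization_status(category_scores: Dict[str, float],
                        reviewer_verdict: Optional[str] = None) -> MonetizationStatus:
    """Map model risk scores (plus an optional human verdict) to a status."""
    if reviewer_verdict == "policy_violation":
        return MonetizationStatus.DEMONETIZED  # human reviewers always win
    worst_score = max(category_scores.values(), default=0.0)
    if worst_score >= DEMONETIZE_THRESHOLD:
        return MonetizationStatus.DEMONETIZED
    if worst_score >= LIMITED_THRESHOLD:
        return MonetizationStatus.LIMITED
    return MonetizationStatus.FULL

# Example: a video the model flags as moderately violent gets limited ads.
print(monetization_status({"violence": 0.55, "hate": 0.02}))  # MonetizationStatus.LIMITED
```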

(2) Once you have paid users, they have a louder say in content moderation

There may be some amount of leeway that a platform can “get away with” when it is free for all users. However, both YouTube advertisers and Substack authors now pay to have their content distributed on the platform. Once a platform has paid users, those users can leverage the relationship, as Substack authors are doing now, to demand different content moderation guidelines than were previously in place.

Additionally, like the advertisers on YouTube, many of the newsletters on Substack are businesses in and of themselves, with their own audiences to appease. Newton specifically asked his readers to chime in before making his decision. A newsletter making brand-safety decisions has to balance its own opinions with those of its readers, who are also its paying customers.

Where will content moderation trend?

After the 2016 presidential election, information and social platforms such as Meta invested heavily in content moderation teams and infrastructure to combat misinformation, hate speech, and misleading ads. However, going into this next election year, folks are worried that policy rollbacks and layoffs at top social and information companies have drastically impacted content moderation efforts (especially at Meta, where layoffs hit many of the business operations folks working on content moderation). Additionally, as generative AI makes it even easier to produce content at scale, experts worry that misinformation and conspiracy theories will become even more prolific and easier to spread. It will be interesting, to say the least, to see which direction our information platforms trend in the coming years: toward more content moderation, or less.

What responsibility do you feel platforms have to moderate content? Is there a distinction between allowing content on the platform and amplifying it via recommendation engines? Do you think platforms will trend toward more or less content moderation? Would love to hear your thoughts!
