Report says shadowbanning is real—and it’s suppressing sex workers

Opinion

I’m shadowbanned from Twitter. According to shadowban.eu, my Twitter account @acvalens is banned from search and search suggestions. My sex work account is similarly impacted. Whenever I share content related to my sex work on either username, my tweets advertising my content are far less likely to be seen by my followers. This harms my ability to pay the bills, which is a minor nuisance at best and a financial crisis at worst, depending on the month. This is called shadowbanning, a form of content moderation in which users’ visibility is sharply limited without warning or explanation.

I’ve dealt with shadowbanning for a few years now, but my sex work content has been significantly suppressed since June. My engagement has decreased because, I suspect, Twitter is less likely to show my sex work content in my followers’ feeds. My story mirrors other sex workers’ experiences. It’s also the subject of “Posting Into the Void,” a new peer-led research report by the sex worker-centered tech collective Hacking//Hustling. The report, penned by researchers Danielle Blunt, Emily Coombes, Shanelle Mullin, and Ariel Wolf, compares and contrasts the ways social media platforms target sex workers and activists, organizers, and protesters (AOPs) on services like Twitter, Facebook, and Instagram. The report’s information comes from a survey Hacking//Hustling sent out in June, and the fast turnaround is not a coincidence. In an email interview with the Daily Dot, Blunt warned that the U.S. government is still on the offensive against Section 230 of the Communications Decency Act, which SESTA-FOSTA infamously watered down in 2018.

“It was important for us to get this information out as quickly as possible, before future amendments to CDA 230 were signed into law and content moderation becomes more extreme,” Blunt said. “I hope that civilians [non-sex workers] and AOPs understand how the repression and deplatforming of sex workers impacts them too and that this research reaches outside of the sex working community.”

Shadowbans are complicated, in part because they don’t just impact marginalized users. Conservatives commonly claim they’re the biggest targets of the practice; the term itself gained mainstream prominence after President Donald Trump tweeted about it. Social media platforms also aren’t transparent about their shadowbanning processes, making it difficult to verify when a user is shadowbanned and when they aren’t. So what are sex workers up against, and how are shadowbans impacting the American public at large? The answer is as urgent as it is complex.

What is shadowbanning?

According to “Posting Into the Void,” shadowbanning is a tool used by online platforms to “reduce the prevalence of content that the platform deems ‘high-risk’ and that should not be easily discoverable.” Shadowbanning is an umbrella term and describes many different practices, from hiding users’ accounts on sitewide searches to preventing users’ posts from being seen by others.

“Because of these types of reduced visibility and discoverability, an account might show up less in other users’ feeds, unable to connect with new followers,” the report notes. “At times, shadowbanning can make social media platforms unusable, for example, when you are unable to connect with or find community and clients.”

Shadowbanning has roots in corporate advertising on mainstream social media platforms. Before shadowbans, alerts informed banned users that they had been removed from a site, which lowered banned users’ exposure to paid ads. Shadowbans allow platforms to control impacted users’ speech while continuing to monetize their time on the platform. Hacking//Hustling describes this as a core component of “surveillance capitalism,” a market structure in which private human data is “computed and packaged as prediction products and sold into behavioral futures markets” for “knowing what we will do now, soon, and later,” as the term’s inventor Shoshana Zuboff said in 2019.

“Deplatforming an individual means that the platform is no longer able to generate ad revenue, sell data to data brokerage firms, or provide data to Social Media Intelligence companies,” Hacking//Hustling’s report notes. “Shadowbanning becomes a very powerful tool for platforms to silence dissent while still turning a profit and collaborating with the state to surveil and police communities.”

Why is shadowbanning so controversial?

Shadowbanning is manipulative and opaque. It’s impossible to know if you’ve been shadowbanned unless you use a shadowban test, such as shadowban.eu for Twitter or Triberr for Instagram. These tests use Twitter and Instagram’s content visibility features, such as searching a user’s account, in order to declare whether a user is or isn’t shadowbanned, and they’re partially based on guesswork. As Triberr puts it, its test “is largely based on several assumptions related to Instagram and its algorithms, as well as insights from the Instagram user community.” Without confirmation from social media platforms, it’s hard to know exactly how shadowbans work. This sows doubt in shadowbanned users and may make them feel confused, self-conscious, or even ashamed. These responses are symptoms of gaslighting, which social media platforms engage in by design, Hacking//Hustling argues.
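The guesswork these tests rely on can be illustrated with a toy sketch. Tools like shadowban.eu don’t publish their exact methods, so the function below (a hypothetical `search_ban_test`, not the real tool’s code) only simulates the core heuristic: query a platform’s search for a handle and check whether the account surfaces at all.

```python
# Illustrative sketch only: shadowban.eu's actual checks aren't public.
# This simulates the basic "search ban" heuristic: if a handle never
# appears in search results for its own name, the account may be banned
# from search -- but absence is only circumstantial evidence.

def search_ban_test(handle: str, search_results: list[str]) -> str:
    """Classify a handle from simulated search results.

    `search_results` stands in for whatever handles a platform's
    search endpoint returns for a query targeting `handle`.
    """
    # Compare case-insensitively, since platform handles usually are.
    if any(result.lower() == handle.lower() for result in search_results):
        return "visible"         # account surfaces in search
    return "possibly banned"     # missing from its own search results

print(search_ban_test("acvalens", ["someoneelse"]))  # possibly banned
print(search_ban_test("acvalens", ["AcValens"]))     # visible
```

Note what the sketch can’t do: it has no way to distinguish a deliberate shadowban from a caching delay or an API quirk, which is exactly why these tests remain “partially based on guesswork” without confirmation from the platforms themselves.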

The term “structural gaslighting” was coined by the 12 doctors behind Scientific American’s “George Floyd’s Autopsy and the Structural Gaslighting of America” and describes “when the state, structures, or institutions deny a set of practices which certain users or communities know to be true,” Hacking//Hustling notes. While “structural gaslighting” originated as a way to describe Black Americans’ experiences with state and institutional structures that gaslight them into thinking their experiences with oppression are not real, its use across state and corporate institutions is not coincidental. “Posting Into the Void” used this definition as the foundation for a new term, “platform gaslighting,” which is “structural gaslighting that occurs when platforms deny a set of practices which certain users know to be true.” Twitter, for example, has regularly engaged in platform gaslighting regarding its moderation features, up to and including gaslighting me as a reporter and shadowbanning victim.

“Our position on shadowbanning hasn’t changed,” a Twitter spokesperson told me in December 2019. “We don’t do it.”

I contributed to Hacking//Hustling’s report by providing my previous correspondence with Facebook and Twitter. I also reached out to Twitter for comment, which was later quoted in the full report. Twitter says its policy has not changed.

“Everyone can express themselves on Twitter as long as they don’t break the Twitter Rules,” a spokesperson said. “We don’t block, limit, or remove content based on an individual’s views or opinions. In some situations, a Tweet may not be seen by everyone, as outlined here.” According to the linked guide, tweets “may be limited” if they are considered “abusive” or “spammy.” Twitter also curates content on users’ timelines by deciding what users “are most interested in” or what “contributes to the conversation” in safe and healthy ways. Additionally, Twitter says it uses “behavior-based signals” that rank content based on user interactions, blocks, and mutes.

Shadowbanning is a fundamentally political concept, and “Posting Into the Void” reveals shadowbanning tends to primarily target sex workers, not civilians. This implies social media platforms like Twitter, Facebook, Instagram, and Google don’t just rely on shadowbanning; they need bans to remain undetectable in order to deceitfully curate their platforms’ public image. It’s digital gentrification, or the process by which platforms remove marginalized users to replace them with a more market-friendly consumer base. Like police officers admitting that the American justice system doesn’t actually bring about justice, acknowledging shadowbanning’s existence essentially damns its creators.

“The fact that users don’t know much about the process of shadowbanning,” the report warns, “is by design.”

Do we have data on shadowbanning’s existence?

Thanks to Blunt, Coombes, Mullin, and Wolf’s peer-led research, there’s now a comprehensive dataset on how shadowbanning takes place and what it entails. Hacking//Hustling’s findings come from 262 participants split between sex workers, AOPs, sex workers who are AOPs, and any miscellaneous respondents (approximately 7%). While Hacking//Hustling warns the sample “cannot be generalized” to AOPs or sex workers as a whole, the report offers a data-backed glimpse into shadowbanning’s long-term effects on marginalized users.

Hacking//Hustling also found that shadowbanning is innately discriminatory: Among sex workers, 69.57% experienced shadowbanning compared to just 34.88% of civilians. Over half of sex workers reported their usernames were filtered out of platform searches compared to just 22% of non-sex workers, and 41.01% of sex workers reported deplatforming compared to just 21.57% of civilians.

“45.45% of those who have not done sex work were able to get their accounts back after being deplatformed from social media while only 7.27% of those who have done sex work said the same,” the report found. “Of those who identify as both a sex worker and an AOP, an incredible 51.28% report they have been shadowbanned.”

Most of these revelations aren’t news to sex workers. But the data itself shows just how invasive shadowbanning is across the sex-working community. Sex working AOPs in particular experienced “nearly double” the amount of shadowbanning, deplatforming, and online suppression across survey questions, Blunt told the Daily Dot. Because civilians were far less likely to report similar experiences, sex workers’ struggles with social media are both discriminatory and rendered invisible. Only those connected to sex workers know what is going on.

“Again, this is all compared to respondents who identified exclusively as activists, organizers, and protestors,” Mullin told the Daily Dot. “It’s very likely that this gap would be even wider if compared to the general population.”

Sex workers who engaged in more online work during the coronavirus pandemic were more likely to face censorship from social media platforms, too, Hacking//Hustling found. Over half of sex workers refused to use certain words to avoid platform censorship, and sex workers were nearly three times more likely to receive an official message stating their account could face deletion than non-sex workers. 

“It seems that the more active you are as a sex worker on social media, the more likely you are to have your content repressed,” Blunt said. “When people are relying on online work more due to COVID-19, the violent impact of the repression of sex workers content is highlighted—it reduces their ability to earn an income and pushes them into increased financial insecurity.”

How does shadowbanning harm a democratic society?

[Photo: President Donald Trump at SESTA-FOSTA’s signing into law. The White House (Public Domain)]

By nature, shadowbanning silently curtails free speech by valuing certain voices over others. A civilian political analyst with a Substack newsletter, for example, is far less likely to be shadowbanned by Twitter than a sex worker. This implies that a civilian blogger’s voice is more important to a free and open democracy than a sex worker’s. It doesn’t matter whether the latter is far more politically versed in issues like fascism, sexual politics, or online censorship—or is just a better citizen overall: In Twitter’s eyes, the sex worker must be suppressed because they engage in sexual labor.

Shadowbanning is a political act. Shadowbanning does not happen in a vacuum but in a country fraught with government surveillance, police violence, AOP suppression, open-source technology defunding, and the treacherous circumstances of the U.S. presidential election, Blunt told the Daily Dot. The effects of shadowbanning aren’t just undemocratic; they run in conjunction with offline forms of oppression. Together, these acts risk unraveling democracy and creating authoritarian infrastructure that suppresses and erases marginalized voices.

“We were seeing a lot of posts about platform repression of protest content, stories of financial payments being blocked that said #BLM, and financial technologies disrupting mutual aid efforts and wanted to document sex worker and activists’ experiences online during the protests and COVID-19 pandemic,” Blunt told the Daily Dot. “With ‘Posting Into the Void,’ we were interested in collecting data about the intersection of the digital suppression of sex workers and activists during the 2020 uprisings.”

Digital suppression is growing, too. After Kentucky Attorney General Daniel Cameron announced that no police officers would be charged with murdering Breonna Taylor, Coombes noted that Twitter “abruptly began shadowbanning and locking accounts that posted tweets containing Breonna Taylor’s name.” In some cases, Twitter accounts tweeting about her death were suspended. Twitter claims the suspensions were a technical issue and weren’t related to users’ posts. Twitter has given the same excuses to sex workers for years.

“Surveillance technologies are being used more and more to hinder protest and demobilize movements for racial, gender, and economic justice. With sex workers being canaries in the coal mine for much of these state and corporate efforts, we are seeing in real time a mass cleansing of the internet where digital and sexual citizenship online are defined by sex worker exclusion,” Coombes told the Daily Dot. “For sex workers, the internet has never been free or open or safe, but with FOSTA/SESTA and now EARN IT, that exclusion from digital space is now quite literally written into federal law.”

Shadowbans fundamentally target “high-risk” material. In a society antagonistic toward dissent, AOPs working with Black Lives Matter are becoming bigger targets for content suppression. “Posting Into the Void” isn’t just a report revealing shadowbanning’s existence; it’s a forecast for a future that will soon impact everyone, sex workers and civilians alike, unless our political institutions are torn down and replaced with something more democratic.

“We already have models of what happens when governments repress the internet and open discourse,” Blunt said. “Fascism.”
