Claiming that deplatforming racists violates First Amendment rights shows a distorted understanding of how speech, race, and power work online.
Founding Director and Senior Fellow Malkia Devich-Cyril’s op-ed in WIRED
EARLIER THIS MONTH, in the wake of the fatal incursion of an angry, mostly white and male mob into the Capitol Building in Washington, DC, Facebook and Twitter blocked Donald Trump’s accounts. YouTube followed with a temporary ban, which it has continued to extend in the weeks since. According to these platforms, Trump’s dangerous pattern of behavior violated their content management rules. Shortly after, Amazon Web Services ended its hosting support for the neo-Nazi online haven Parler. Parler countered with a lawsuit alleging that Amazon’s decision was an antitrust violation motivated by political animus, which the courts readily rejected. In the coming days, Facebook’s Oversight Board is expected to issue a final decision on whether to allow the former president back on its platform.
The collective sigh of relief that rippled through the digital spaces occupied by Black, indigenous and other people of color following the wave of deplatformings was visceral, and the impact was almost immediate. A study conducted by research firm Zignal Labs found that online disinformation, particularly about election fraud, fell by an incredible 73 percent in the week after Twitter suspended Trump's account. Online forums for Trump supporters are now fractured and weakened.
But many reacted to the social media bans with outrage. First Amendment fundamentalists across the political spectrum raised “free speech” concerns, claiming that the social media bans were a slippery slope. Though they’re being used to hold the powerful to account today, the argument goes, they could be used to repress minority groups in the future. Others worried that a digital oligarchy of big tech companies like Facebook, Twitter, Google, Apple, and Amazon with the unchecked power to silence individuals represents a threat to democracy.
I share the concern about the outsize influence of big tech on governance and the economy. But, as a Black activist who’s been fighting for digital rights and justice and against digital disparities, surveillance, and hate for more than a decade, the reaction that most resonated was relief and a sense of collective triumph. Finally, after years of organizing, movements for racial justice and human rights were able to hold these companies accountable to the demand that they give no platform or profit to white supremacy—at least momentarily.
That it took so long and such extreme circumstances for the platforms to take action, despite years of warnings and complaints, is nothing short of enraging. But it's also not terribly surprising, especially when you consider the unequal distribution of First Amendment rights on the internet. For the past decade, we have witnessed a resurgence of white supremacy in mainstream political and public debate, enabled by media platforms like Facebook, Twitter, and YouTube. While those already in power may rely on the Constitution and the democratizing promise of the open internet, Black people and other marginalized groups need more than the intent of the law to enjoy its equal protection. This is not about the inherent objective of the First Amendment as law, or even about its shifting interpretations by courts over time—it is about the impact that white supremacy, anti-Black violence, and other forms of racial terrorism abetted by so-called free speech have on the speech and freedoms of Black and brown Americans.
When Black activists and other protected classes are silenced on Facebook, Twitter, and other social platforms for talking about or organizing against racism, that’s censorship. But when an oppressed minority seeks equality and justice, and freedom from the harm and violence brought on by the systematically privileged speech of others, that’s not censorship, that’s accountability. Claims stating otherwise are misguided at best, and at worst represent a very distorted understanding of the way speech, race, and power work online.
WHEN I BEGAN organizing in earnest to defend the internet in 2009, my efforts were driven by the great promise that an open internet without corporate gatekeepers would, in time, level the playing field for all speech. My hope was further inspired by the role social media platforms such as Twitter and Facebook played in aiding and giving international voice to the Arab Spring movement. Just a few years later, Occupy Wall Street also used social media as a means to bypass an exclusive and elitist mainstream media to amplify stories of economic inequity, branding the phrase “We are the 99 percent.” Then, in 2013, the hashtag #BlackLivesMatter emerged on Twitter, giving national and international voice to a growing movement for Black lives and against unchecked, systemic police violence.
By allowing ordinary people to share ideas, pressure targets directly, and catalyze and coordinate broader social movements across geographies, social media has played an important role in defending human rights. But, as I quickly learned, without adequate mechanisms to protect the speech of those historically discriminated against and excluded by all vehicles of modern voice—from schools and universities to the ballot box, to media publishers and platforms—the marketplace of ideas ends up just like the actual marketplace, rigged to protect the speech of those already in power.
For instance, both the presidential elections of 2016 and 2020 were flooded with disinformation aimed explicitly at limiting the voting rights and political power of Black and Latino voters. The differing levels of police aggression against the seditious mob that recently attacked the Capitol versus the largely peaceful anti-racist protesters in almost every US city demonstrate a racialized double standard in freedom of assembly. Black communities don't enjoy a free and fair press either: Eighty-three percent of newsroom staff are white. Racial disparities in media publishing have left the internet as a singular alternative for Black voices. But when the internet is riddled with racism, Black speech becomes a canary in a digital coal mine.
Meanwhile, white supremacists of all kinds have historically enjoyed unfettered access to the means and mechanisms of speech. This is as true in a digital age as it has ever been. A 2017 Pew study found that one in four Black Americans have been threatened or harassed online because of their race or ethnicity. With Black and indigenous women in America killed at higher rates than women of any other race, the confluence of digital and real-world racial and gendered violence is undeniable, at least by those who directly experience it.
As an early member of the Black Lives Matter Global Network in the Bay Area, I was among the leaders responsible for managing several BLM Facebook pages, and I witnessed the inequity first hand. I spent hours each day from 2014 until 2017 removing violent racial and gendered harassment, explicitly racist anti-Black language, and even threats to maim and murder Black activists. At that time, getting these posts removed was extremely difficult. There were no feedback mechanisms outside of users flagging posts themselves. And if the content management system, algorithmic or human, didn’t agree with your interpretation, the post stayed. As a result, Black activists like me managing Facebook pages were left with only one option: combing through each and every comment to remove the thousands that threatened Black people, at great personal detriment.
In a digital age where much mobilization happens online, the constant drumbeat of racist harassment and threats, of doxxing and ridicule, is reminiscent of the earlier days of civil rights organizing. My body remains intact, but my spirit is scarred.
In this context, an absolutist interpretation of the First Amendment—that all speech is equal, that the internet is a sufficiently democratizing force, and that the remedy for harmful speech is more speech—willfully and callously ignores that all speech is not treated equally. A digital divide and algorithmic injustice have fractured the internet, and, together with the racial exclusion of mainstream media, have turned the remedy of more speech into a false solution. Ultimately, this harms Black communities, leaders, organizations, and movements. In a digital age, we need to deploy real mechanisms that protect the First Amendment rights of Black and brown people.
To expand digital free speech protections for these communities, the new Biden administration must address the threat of consolidated tech power and pass laws that hold big tech companies accountable for algorithmic discrimination, data privacy, and antitrust violations. The largest online civil rights organization, Color of Change, advocates meaningful reform of Section 230, a law which currently protects platforms from the kinds of legal liability media publishers face for what their users say and do. Section 230 must not enable white supremacy and disinformation; it must preserve democracy and protect the civil and human rights of minority users and other protected classes. The organization Public Knowledge agrees, and suggests that while Section 230 is an invaluable tool for preserving platforms' ability to moderate harmful disinformation and constitutionally protected speech, it's also been used by social media companies to evade accountability for civil rights violations. Without algorithmic transparency, it's impossible to know whether these companies are compliant with civil rights recommendations and laws. Any reforms to Section 230 must be thoughtful and limited; the law must not, under any circumstances, be repealed.
Social media companies can also play a role in distributing First Amendment rights and protections more equally. To do this successfully, they must first disavow the myth of race neutrality and instead develop content policies that support racial equity. If content management mechanisms continue to claim color-blindness, they will continue to allow neo-Nazi, white supremacist, and white nationalist speech and organizing to spread unfettered.
Even hate speech bans that protect targeted groups aren't enough. To advance equal representation and application of the First Amendment, tech companies should turn to the brilliance of civil rights and liberties advocates. They have plenty of ideas. The Electronic Frontier Foundation recommends that companies adopt the Santa Clara Principles on Transparency and Accountability in Content Moderation as a baseline starting point, to provide "meaningful due process to impacted speakers and better ensure that the enforcement of their content guidelines is fair, unbiased, proportional, and respectful of users' rights." Steven Renderos, the executive director at MediaJustice, and Brandi Collins-Dexter, a visiting fellow at the Harvard Kennedy School's Shorenstein Center on Media, Politics and Public Policy and senior fellow at Color of Change, both recommend that tech companies overhaul their algorithms to reward those that fight hate rather than those that promote it. These and other groups in the Change the Terms Coalition have worked tirelessly alongside Black and Latino activists to curb the use of social media, payment processors, event-scheduling pages, chat rooms, and other applications for hateful activities. Deplatforming white supremacy, chauvinism, and fascism is not antithetical to this battle for free speech, but a continuation of it.
When big tech allows white hate speech to go unfettered, it not only bolsters white supremacist violence but echoes real-world racial inequities that privilege white communities and depress Black wealth, life expectancy, and quality of life. As we've seen, when white nationalist speech and racist conditions commingle, they operate together and become part of the status quo, adopted by some government officials, law enforcement, members of the military and more. In a nation fractured by white supremacy and other forms of inequality, democracy is called to double duty—it must distribute political freedoms like those offered in the First Amendment while simultaneously ensuring civil rights which extend equal protection to all. This is no easy task, especially in the age of algorithmic decisionmaking, digital economies, and loosely regulated and vastly profitable media platforms.
For too long, the debate about free speech rights has been co-opted by right-wing racial extremism and white liberal elitism. But, for Black, indigenous and other communities of color, power is as consequential as rights. If social movements for racial justice, technology companies, and elected officials were able to carve out a sweet spot where Black activists had the power to use the internet to speak freely about anti-racism without having our speech suppressed by both algorithmic bias and organized hate; if we could assemble to oppose police violence without the threat of violent reprisal at every turn; if we could employ a press to contest for power without being criminalized and excluded—then we as a nation could claim the First Amendment is inherent to the democracy we want, and the future of freedom we demand.