Trump v. Tech: What is censorship and who gets to do it?

Dr. Courtney C. Radsch
Jan 10, 2021


Most of the major social media platforms have kicked President Trump off, citing his incitement of an insurrection that left five people dead, and the risk that such incitement could continue, as justification for closing his accounts. Twitter held out longer than Facebook, Google, or a host of other services in deplatforming the president, further amplifying Parler and Gab as alternative platforms for Trumpians. Trump has used Twitter as his bully pulpit, enabling him to forgo press conferences that would put him in front of pesky journalists who might ask him questions or push back on his exaggerations and outright lies. Although Twitter is often lumped in with the big five tech companies, it is far smaller and less profitable; it is highly influential because of the people who use it: politicians, journalists, cultural figures, businesses, influencers, and the bots.

Photo of the U.S. Capitol, flag at half-staff, with the National Guard standing watch behind a newly erected fence adorned with red roses (Photo credit: Courtney C. Radsch)

Rightwing challengers to the social media behemoths, like Parler or Gab, are examples of what it looks like to have competition in the social media platform space. They are alternatives to the monopolistic platforms (just as the Daily Stormer and Infowars were alternative information sources to mainstream media). Mainstream platforms have extensive content moderation, terms of service, and community standards that govern what is acceptable and permissible. The alternative platforms were started because far right extremists, white supremacists, and conspiracy theorists were getting kicked off the popular ones. Remember when Facebook, PayPal, and Cloudflare (a cybersecurity infrastructure company that protects about 10% of the world’s internet traffic) kicked the neo-Nazi site Daily Stormer off in 2017? Cloudflare’s CEO wrote a prescient blog post asking why he, a guy who woke up in a bad mood one day, could decide to kick someone off his platform.

Similarly, the final decision about whether Facebook would ban Trump was up to Mark Zuckerberg, a single bro whose project to build an app to help him meet girls has morphed into the global public sphere of the 21st century, credited with enabling everything from the Arab Spring to genocide in Myanmar. Facebook’s guiding ideology is to connect the world while making as much money as possible by microtargeting individuals with the soma of engagement, affirmation, and certitude, the better to know and predict (and some would say compel) what you will feel and do. (Sidenote: move fast and break things is another of its guiding principles, though I doubt Zuckerberg and the Silicon Valley crowd meant to apply it to American democracy.)

The biggest, most popular companies didn’t effectively restrict the use of their platforms to organize the anti-democratic, extremist, conspiracy-based groups that raided the Capitol. These people didn’t just talk about overthrowing the status quo, or what they termed the “deep state”; they put it into action. Then on Friday the Apple and Google Play app stores joined in, demanding content moderation from Parler, the alternative platform to which many Trumpians had migrated, in an effort to deny the anti-democratic destructiveness that characterized so much of the discussion there easy access to a broader customer base. But the site remained available online until Amazon Web Services announced on Sunday that it would suspend Parler’s account, forcing the platform to find an alternative hosting service or get booted off the web. This is all reminiscent of 2017, which also saw the domain registrar GoDaddy refuse to provide services to the Daily Stormer.

Should the Google and Apple app stores be equated with physical retailers that have stopped carrying problematic products or pariah brands? With grocery stores that decide to stop carrying sugary drinks or inhumane meat because of their broader detrimental impacts on society? Or should they be considered public utilities that must be content agnostic? Kicking Parler out for failing to have what the platforms consider an adequate content moderation policy, one that would prevent the advocacy of insurrection, seems more akin to Target refusing to carry Nike unless it stopped using sweatshop labor than to AT&T refusing to carry the phone calls of white supremacists.

Which raises the question: should infrastructure providers and public utilities be required to carry communications traffic regardless of content? After all, many mobsters and terrorists have used telephones to organize illegal activity. Should telecommunications companies be required to proactively prevent terrorists or neo-Nazis from using their phone lines? That is what tech platforms are being asked to do. (The EU, for example, wants pre-upload filters to prevent terrorists from using the internet, and companies are coordinating their removal and prevention efforts.) Many of those companies had already denied the Daily Stormer, Infowars, and other right-wing extremist accounts access to their services. What about Verizon or Comcast, which provide the pipes? Should they be co-opted into content moderation?

The precedent for extending censorial sensibility from the application layer of the stack — where user interfaces and sites like Facebook, Google, and Twitter live — down to the infrastructure layer — where Cloudflare, AWS, or GoDaddy live — was already set in 2017. It remains contested, but the precedent was set.

So what happens when the dominant public sphere — the most popular social media platforms, the app stores where alternative forums ply their services, and the web hosting companies — kicks off apps or websites because it dislikes or disagrees with their ideology or that of their users? In the U.S. we like to talk about the marketplace of ideas. But the concept of a market has certain inherent values and a logic baked in: it’s capitalistic, consumerist, and patriarchal. Critical and feminist theory would interrogate how conceptualizing the public sphere as a marketplace of ideas excludes and devalues the perspectives of those who do not benefit from the status quo or who advocate an alternative logic. I’m not going to dive into that here, but I do want to acknowledge it.

However, we could also talk about a marketplace and focus on the second half of that compound word, which implies that we are not talking about the concept of an economic market but about an actual place, like the farmers’ markets and marchés at the local, community level. There, those who hawk their wares and haggle for the best price are in an iterative relationship, and there must be some level of trust and accountability for the community to sustain it. If we think about the marketplace of ideas in this way, then we could perhaps rethink the role of community norms and local gatekeepers.

Many have argued, and will argue, that private companies like Twitter, Amazon, and Facebook, which govern vast swathes of our social, economic, and digital lives, shouldn’t have this level of control over who can access their platforms. They point to the First Amendment’s prohibition on government making any law that abridges freedom of speech, assembly, religion, or the press, and argue that these powerful platforms are akin to government because they govern so much of our daily lives. But while this is true, and should thus require greater transparency and accountability (an issue for another piece), is it not also preferable for non-governmental actors to enforce norms about what speech and ideology are acceptable in the public sphere?

One of the reasons that white supremacy and extremist conspiracy theories have become so prevalent, to the point where they literally threaten democracy and representative government, is that the norms that once relegated these ideas to the fringes have unraveled. Amid the algorithmic intermediation of the public sphere by platforms designed for engagement, polarization, and connectivity, and the demagogic leadership of an anti-democratic narcissist, the normative framework that kept U.S. democracy functioning (however imperfectly for some groups of people) is in tatters.

Striking the right balance between government regulation, self-regulation by tech companies, and the role of the public will define the future of information in the digital age. At the core of this balancing act is the question of who should decide what content is permissible and what is not. A look at the abuse of legal frameworks by authoritarians around the world is a clear warning against government regulation of information. At the same time, relying on internet platforms to filter or verify information could result in the privatization of censorship and the further exclusion of marginalized, minority voices, particularly as online activity and information are consolidated across a few hegemonic platforms, nearly all of them based on the West Coast of the United States. All this is to say that we need to think carefully and deliberately about how we respond to the techlash that these recent events will accelerate.

--

Dr. Courtney C. Radsch

Postdoctoral fellow at the UCLA Institute for Technology, Law & Policy and Director of the Center for Journalism and Liberty at the Open Markets Institute