In the aftermath of the January 6th siege on the United States Capitol building, social media platforms have scrambled to avoid further blame for their role in instigating the mob.
Twitter’s choice to act only after widespread public condemnation has shifted the conversation from the dangers of inflammatory rhetoric to the question of free speech online and, for policymakers, to their reliance upon big tech CEOs to regulate content. This public discourse, together with the recent publication of the EU Commission’s proposal for a Digital Services Act, may present the perfect opportunity to open a dialogue on best practices in international social media platform regulation.
Two days after the January 6th attack on the US Capitol building, Twitter issued a statement declaring Donald Trump’s Twitter account to be "in violation of the Glorification of Violence Policy" and announcing that his account would be "immediately permanently suspended from the service." Other platforms, such as Facebook, Snapchat, TikTok and Twitch, quickly followed suit.
Twitter’s CEO has faced criticism from both sides of the political spectrum for this move: Trump’s supporters call it an attack on freedom of speech, while his opponents insist the ban came far too late to be effective. Why wasn’t Trump deplatformed after his tweet “when the looting starts, the shooting starts” in May? Furthermore, if Trump’s tweets are dangerous enough to justify banning a world leader from the platform, why not ban other lawmakers known for their inflammatory rhetoric (like Björn Höcke of the AfD) or institutional bodies spreading bigoted misinformation online (the Chinese Embassy in the US, to name a recent example)?
Twitter’s choice to act only now in effect shifted the conversation from debating Trump’s culpability to protecting the right of freedom of speech online, a much more comfortable talking point for his supporters. Moreover, banning Trump may actually exacerbate the situation, as his supporters flock to platforms like Gab and Telegram where communication is both less constrained by moderation and less transparent to law enforcement.
As a private company, Twitter is not bound by First Amendment protections. However, social media platforms have 3.6 billion users worldwide and are where nearly one in five American adults primarily gets their political news. The removal (or censorship) of politically relevant individuals is therefore concerning.
The crux of the issue lies in social media platforms’ nearly total decision-making power over what content is hosted and disseminated online. In a Politico op-ed, Thierry Breton, the EU Commissioner for the Internal Market, said it best: “what happens online doesn’t just stay online: It has – and even exacerbates – consequences ‘in real life’ too.” If this is true, the rules that govern “real life” must in turn have online counterparts. Crafting these online rules remains challenging, however, as policymakers struggle to understand how platforms function and how to implement rules in online spaces.
That the CEO of a private company can cut off a key communication channel between a government official and citizens is dramatic proof of the power of tech companies, and holding those companies accountable for how they remove potentially dangerous content has proven imperative. Without a clear legislative line, however, policymakers have no authority to enforce a boundary between curbing hate speech and censorship.
The EU is taking action to address this challenge – in mid-December 2020, the Commission officially published the first draft of the Digital Services Act (DSA), marking an attempt at legislation that would clarify and enforce moderation obligations for online platforms. According to the proposal, platforms with more than 45 million users within the EU (e.g. Facebook, Google and Twitter) must submit yearly risk assessment reports outlining how they police illegal content. The definition of “illegal content” in this context is left to existing EU and member state laws. While the proposal does not address misinformation specifically, it nevertheless compels platforms to actively counter “coordinated disinformation campaigns” and to publicize their results.
The DSA also emphasizes transparency, granting lawmakers the right to inspect confidential algorithms and advertising policies. Failure to comply with these rules could result in fines of up to six percent of annual revenue, which would have amounted to roughly $180 million for Twitter in 2018. Most notably, the Commission’s proposal creates an independent body to ensure compliance with the law. This independent auditor would arbitrate cases brought by member states and would be empowered to seek fines from a platform if a member state is unwilling or unable to file a case itself.
With the DSA, the EU has taken a meaningful step in the fight to regulate platforms that dominate communication channels and ecommerce, at a time when no other democratic governing body seems willing to step in. If anything, the events of January 6th clearly demonstrate the urgent need for similar legislation in the United States. While cooperative transatlantic dialogue during the Trump presidency was minimal, Joe Biden’s inauguration is sure to bring elements of multilateralism back to the transatlantic relationship. Biden is no progressive, and his enthusiastic support for some form of international platform regulation is unlikely. However, given the Biden administration’s openness towards transnational engagement, this may be a perfect opportunity to open an international dialogue on best practices of online moderation. Specifically, the US could use the European DSA as a template for its own legislative reform.
If the storming of the Capitol has shown us anything, it is that governments can no longer rely solely on platforms to decide and moderate what content could have dangerous consequences offline. The Biden administration offers hope for an era of renewed transnational cooperation, and the United States would be wise to pay close attention to and learn from digital legislative efforts such as the EU Digital Services Act.
For more resources on the Digital Services Act:
- Event: How to build a European regulator to govern social media platforms, 18 February, 6-7pm