The Rise of a New Freedom of Expression Paradigm in the Algorithmic Society? Comparing the US and the EU Approaches

Tzu-Chiang (Leo) Huang[1]

1. Introduction

On 31 May 2022, in NetChoice, LLC v Paxton, the Supreme Court of the United States (SCOTUS) reinstated a district court ruling blocking a Texas social media law (HB20) from taking effect. The contentious Texas law, prompted by conservative complaints seeking to free social media users from “Silicon Valley censorship”, prohibited platforms from censoring users based on their viewpoints. The district court explicitly recognised that “[s]ocial media platforms have a First Amendment right to moderate content disseminated on their platforms”, and accordingly concluded that HB20 was unconstitutional because it interfered with the platforms’ editorial discretion protected by the First Amendment.

The decision in NetChoice, which emphasises social media platforms’ right to free expression rather than users’ right to free expression, demonstrates how the US has taken a fundamentally different stance from the EU in regulating online platforms in a democracy. Recognising that threats to fundamental rights also come from transnational private platforms “whose freedoms are increasingly turning into forms of unaccountable power”,[2] the EU has rejected the laissez-faire approach to online platform regulation and introduced regulations prioritising users’ right to free expression.

This article analyses the approaches taken by the US and the EU and argues that a different approach should have been taken in NetChoice. Part 2 analyses in more depth how the US and the EU have responded differently to the rise of private censorship. Part 3 discusses how the US should reconstruct its freedom of expression jurisprudence, given that the contemporary online speech environment is characterised by a limited number of privately owned platforms exercising unchecked power over content moderation. More specifically, this article argues that the emergence of an algorithmic society[3] has challenged the laissez-faire paradigm long favoured by the SCOTUS. This argument is supported by normative considerations advocating the EU’s approach, which interprets freedom of expression as a ‘positive right’, thereby enabling the state to introduce the procedural requirements necessary to address the power asymmetry between private platforms and their users.

2. The Rise of Private Censorship and the Legal Response Across the Atlantic

Transnational online platforms, such as Facebook and Twitter, play an increasingly critical role in facilitating the flow of information online and have revolutionised the way people communicate. Prior to NetChoice, the SCOTUS recognised in Packingham v North Carolina that social media platforms are “one of the most important places to exchange views”, as they offer “relatively unlimited, low-cost capacity for communication of all kinds”. In a similar vein, the ECJ acknowledged in Glawischnig-Piesczek v Facebook Ireland Limited the vital role social media platforms play in disseminating information. More recently, the COVID-19 pandemic has also revealed the importance of access to online information in overcoming social distancing.[4] In today’s algorithmic society, accessing online platforms has become a prerequisite for individuals to participate in public discourse.[5]

Despite this shared recognition of online platforms as primary venues for the public to engage in democratic deliberation, the legal responses of the US and the EU differ. In the US, the traditional laissez-faire First Amendment jurisprudence, incorporating the ‘more speech’ doctrine, has discouraged the government from introducing regulations.[6] After all, online platforms are viewed as facilitators of democracy rather than a threat to public discourse.[7] The Communications Decency Act (CDA) Section 230 regime, which is interpreted broadly to immunise social media platforms from liability for third-party content,[8] epitomises the laissez-faire approach long favoured by the First Amendment tradition.[9] Furthermore, the ‘state action doctrine’ limits the constitutional obligations imposed by the First Amendment to public actors, thereby enabling social media platforms to exclusively control the rules for content moderation without any First Amendment concerns.[10]

In contrast, the EU has responded to the challenges of an algorithmic society with the rise of ‘digital constitutionalism’.[11] Alarmed by social media platforms performing ‘quasi-public functions’, such as balancing individual fundamental rights through content moderation,[12] on a global scale without any constitutional constraints, European digital constitutionalism takes as its primary goal the articulation of limits to the exercise of private power.[13] The Digital Services Act, which limits platform power by prescribing substantive obligations and procedural safeguards, is a paradigmatic example of the application of European digital constitutionalism. Without introducing content-based regulation, it requires online platforms to abide by procedural safeguards, such as providing their users with “a clear and specific statement of reasons” for content moderation decisions[14] and establishing a ‘user friendly’ complaint procedure.[15] Furthermore, to incentivise compliance, the Digital Services Act imposes fines on platforms that violate these procedural requirements[16] while maintaining the exemption from liability for online platforms that moderate content responsibly.[17]


3. Reconstructing the Freedom of Expression Jurisprudence in an Algorithmic Society

A) Freedom of Expression Should Be Interpreted as a Positive Right

The discrepancy between the US and EU approaches to regulating online platforms can be attributed to how each interprets ‘freedom of expression’ as a ‘right’. In the US, First Amendment jurisprudence overwhelmingly considers free expression to be a ‘negative right’, holding that the purpose of free expression is best served when it is free from government interference.[18] Hence, despite being the ‘new governors’ dictating what users can say online,[19] social media platforms are protected by the First Amendment rather than restricted by it. By contrast, under the framework of European digital constitutionalism, freedom of expression is no longer considered merely a negative right but also a ‘positive right’. A positive right stresses that the realisation of freedom of expression requires ‘support and enablement’[20] from the state to facilitate robust public debate where “diversity of expression flourish[es]”.[21] That is, the state has a positive obligation to ensure that everyone is guaranteed the resources and opportunities to effectively exercise their rights without being unduly impeded by other private parties.[22]

The positive-right approach adopted by European digital constitutionalism, it is submitted, is more effective in addressing the challenges to freedom of expression in an algorithmic society. On the one hand, the negative-right approach is counterproductive, as it provides platforms with little legal incentive to combat online disinformation, leading to the proliferation of “cheap speech” that undermines democratic values.[23] On the other hand, it is established that grave threats to fundamental rights, such as “radically unequal economic power”[24] between private parties, can trigger a positive obligation on states to regulate private activities in order to protect fundamental rights.[25] In an algorithmic society, the dominant role of social media platforms in shaping public opinion has made privatised content moderation both necessary and problematic. A limited number of private platforms, whose owners have no constitutional obligation to respect fundamental rights, possess the absolute power to set the boundaries of freedom of expression on a global scale.[26] Furthermore, the high degree of opacity and unpredictability in content moderation threatens rather than safeguards users’ right to freedom of expression. As De Gregorio aptly noted, the differences between publicly available community guidelines and privately hidden internal policies render the moderation process “more as an authoritarian determination than a democratic expression”.[27] Therefore, the power asymmetry between private platforms and their users can serve as the justification for invoking the ‘positive’ dimension of freedom of expression.

B) How NetChoice Should Have Been Decided

Returning to NetChoice, how would the SCOTUS have examined the constitutionality of HB20 had it adopted European digital constitutionalism and interpreted the First Amendment as a positive right? Two regulatory regimes were contested in NetChoice: (1) prohibiting platforms from censoring users based on viewpoint; and (2) requiring platforms to establish procedures by which users can appeal a platform’s decision to remove content.

Regulatory regime (1) would still be unconstitutional, as it is a content-based regulation compelling private platforms to carry speech. It is important to note that arguing against the laissez-faire approach to online platform regulation is not synonymous with advocating more rigorous content-based regulation. The purpose of the positive-right approach is not to categorically deny platforms their First Amendment rights, but to properly balance the First Amendment rights of platforms and their users. On the one hand, the positive dimension of freedom of expression imposes obligations on the state to introduce regulation promoting a pluralistic speech environment. On the other hand, the negative dimension of freedom of expression restricts the state from introducing regulation that is unnecessarily intrusive.[28] Given that the right not to speak is a fundamental aspect of the First Amendment’s protections,[29] platforms should not be compelled to carry speech. Furthermore, as a policy matter, it is desirable that platforms enforce content moderation to address online disinformation, which distorts democratic deliberation.[30] Therefore, the underlying principle is to design a regulatory regime that protects users from content moderation based on unaccountable decision-making and deliberately opaque procedures, rather than one that compels platforms to carry or remove certain content.

Regulatory regime (2) is constitutional, as it introduces a due process requirement that fosters accountability and transparency in online content moderation. As discussed above, the core idea of European digital constitutionalism is to limit the abuse of private power by establishing procedural safeguards.[31] Similar to Article 14 of the Digital Services Act, regulatory regime (2) in HB20 provides users with tools to check private interferences by requiring platforms to establish a complaint procedure, thereby mitigating the power asymmetry between platforms and their users. This procedure-centric approach, which makes platforms more accountable for their decision-making while still allowing them to remove content they deem detrimental to the community, strikes a proper balance between platforms’ and users’ First Amendment rights.


4. Conclusion

NetChoice is a paradigmatic illustration of the challenges to freedom of expression in an algorithmic society. Although the state continues to pose a threat to freedom of expression, it is also crucial that the state “serve as a necessary counterweight to developing technologies of private control and surveillance”.[32] While freedom of expression has traditionally been viewed only as a negative right, arguments for an interpretation that incorporates its positive dimension are anything but new.[33] The power asymmetry between private platforms and their users that characterises an algorithmic society may provide the most compelling case yet for the SCOTUS to reconstruct its freedom of expression jurisprudence.[34] As private censorship is inevitable in the era of social media, the solution is not to prohibit private platforms from enforcing content moderation, but to determine how private platforms should enforce content moderation. That is, the underlying principle of the positive approach is to design a regulatory regime that introduces procedural safeguards, thereby promoting a pluralistic speech environment.


[1] Recent LLM graduate, University of Michigan Law School

[2] Giovanni De Gregorio, Digital Constitutionalism in Europe: Reframing Rights and Powers in the Algorithmic Society (Cambridge University Press 2022) 65.

[3] An ‘algorithmic society’ is defined as “a society organized around social and economic decision-making by algorithms, robots, and AI agents, who not only make the decisions but also, in some cases, carry them out”, 1219.

[4] Kate Klonick, ‘The Facebook Oversight Board: Creating an Independent Institution to Adjudicate Online Free Expression’ (2020) 129 Yale L J 2418, 2497.

[5] Jack M. Balkin, ‘Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation’ (2018) 51 UCD L Rev 1149, 1153.

[6] “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the process of education, the remedy to be applied is more speech, not enforced silence. Only an emergency can justify repression”, Whitney v California 274 US 357, 377 (1927) (Brandeis J, concurring).

[7] Jack M. Balkin, ‘How Rights Change: Freedom of Speech in the Digital Era’ (2004) 26 Sydney L Rev 5, 7.

[8] See generally Danielle Keats Citron & Mary Anne Franks, ‘The Internet as a Speech Machine and Other Myths Confounding Section 230 Reform’ (2020) 2020 U Chi Legal F 45, 50.

[9] Philip M. Napoli, ‘What If More Speech Is No Longer the Solution: First Amendment Theory Meets Fake News and the Filter Bubble’ (2018) 70 Fed Comm LJ 55, 58.

[10] See generally Jonathan Peters, ‘The Sovereigns of Cyberspace and State Action: The First Amendment’s Application – Or Lack Thereof – To Third-Party Platforms’ (2017) 32 Berkeley Tech. L.J. 989, 1022-23.

[11] De Gregorio (n 2) 78.

[12] ibid 118.

[13] ibid 4.

[14] Digital Services Act, Art 15.

[15] ibid Art 14.

[16] ibid Art 42.

[17] ibid Art 5.

[18] Andrew T. Kenyon, Democracy of Expression: Positive Free Speech and Law (Cambridge University Press 2021) 118.

[19] Kate Klonick, ‘The New Governors: The People, Rules, and Processes Governing Online Speech’ (2018) 131 Harvard Law Review 1598, 1663.

[20] Andrew T. Kenyon, ‘Positive Free Speech: A Democratic Freedom’ in Adrienne Stone and Frederick Schauer (eds), The Oxford Handbook of Freedom of Speech (Oxford University Press 2021) 4.

[21] ibid 5.

[22] Joel Bakan, Just Words: Constitutional Rights and Social Wrongs (University of Toronto Press 1997) 10.

[23] Richard L. Hasen, ‘Cheap Speech and What It Has Done (to American Democracy)’ (2017) 16 First Amend L Rev 200.

[24] Jack M Balkin, ‘Some Realism about Pluralism: Legal Realist Approaches to the First Amendment’ (1990) Duke LJ 375, 379.

[25] De Gregorio (n 2) 203.

[26] Ashutosh Bhagwat, ‘Free Speech Categories in the Digital Age’ in Susan J. Brison and Katharine Gelber (eds), Free Speech in the Digital Age (Oxford University Press 2019) 93.

[27] De Gregorio (n 2) 185.

[28] Kenyon, ‘Positive Free Speech’ (n 20).

[29] See, eg, W. Va. State Bd. of Educ. v Barnette, 319 U.S. 624, 641 (1943) (“If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can . . . force citizens to confess by word or act their faith therein.”)

[30] Hunt Allcott & Matthew Gentzkow, ‘Social Media and Fake News in the 2016 Election’ (2017) 31 J. Econ. Perspect. 211.

[31] See Part 2 of this article.

[32] Balkin, ‘Free Speech in the Algorithmic Society’ (n 5) 1152.

[33] See, eg, Thomas Emerson, The System of Freedom of Expression (Random House 1970).

[34] Philip M. Napoli, Social Media and the Public Interest: Media Regulation in the Disinformation Age (Columbia University Press 2019) 192.