[PUBLISH]
United States Court of Appeals for the Eleventh Circuit
No. 21-12355
May 23, 2022
NETCHOICE, LLC, d.b.a. NetChoice, COMPUTER & COMMUNICATIONS INDUSTRY ASSOCIATION, d.b.a. CCIA, Plaintiffs-Appellees,
versus
ATTORNEY GENERAL, STATE OF FLORIDA, in their official capacity, JONI ALEXIS POITIER, in her official capacity as a member of the Florida Elections Commission, et al., Defendants-Appellants.
Appeal from the United States District Court for the Northern District of Florida
D.C. Docket No. 4:21-cv-00220-RH-MAF
Before NEWSOM, TJOFLAT, and ED CARNES, Circuit Judges.
NEWSOM, Circuit Judge:
Not in their wildest dreams could anyone in the Founding generation have imagined Facebook, Twitter, YouTube, or TikTok. But “whatever the challenges of applying the Constitution to ever-advancing technology, the basic principles of freedom of speech and the press, like the First Amendment‘s command, do not vary when a new and different medium for communication appears.” Brown v. Ent. Merchs. Ass‘n, 564 U.S. 786, 790 (2011) (quotation marks omitted). One of those “basic principles“—indeed, the most basic of the basic—is that “[t]he Free Speech Clause of the First Amendment constrains governmental actors and protects private actors.” Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1926 (2019). Put simply, with minor exceptions, the government can‘t tell a private person or entity what to say or how to say it.
The question at the core of this appeal is whether the Facebooks and Twitters of the world—indisputably "private actors" with First Amendment rights—are engaged in constitutionally protected expressive activity when they moderate and curate the content that they disseminate on their platforms. The State of Florida insists that they aren't, and it has enacted a first-of-its-kind law, S.B. 7072, to combat what some of its proponents perceived to be "big tech" bias against conservative speech.
We hold that it is substantially likely that social-media companies—even the biggest ones—are “private actors” whose rights the First Amendment protects, Manhattan Cmty., 139 S. Ct. at 1926, that their so-called “content-moderation” decisions constitute protected exercises of editorial judgment, and that the provisions of the new Florida law that restrict large platforms’ ability to engage in content moderation unconstitutionally burden that prerogative. We further conclude that it is substantially likely that one of the law‘s particularly onerous disclosure provisions—which would require covered platforms to provide a “thorough rationale” for each and every content-moderation decision they make—violates the First Amendment. Accordingly, we hold that the companies are entitled to a preliminary injunction prohibiting enforcement of those provisions. Because we think it unlikely that the law‘s remaining (and far less burdensome) disclosure provisions violate the First Amendment, we hold that the companies are not entitled to preliminary injunctive relief with respect to them.
I
A
We begin with a primer: This is a case about social-media platforms. (If you‘re one of the millions of Americans who regularly use social media or can‘t remember a time before social media existed, feel free to skip ahead.)
At their core, social-media platforms collect speech created by third parties—typically in the form of written text, photos, and videos, which we'll collectively call "posts"—and then make that speech available to other users.
Three important points about social-media platforms: First—and this would be too obvious to mention if it weren‘t so often lost or obscured in political rhetoric—platforms are private enterprises, not governmental (or even quasi-governmental) entities. No one has an obligation to contribute to or consume the content that the platforms make available. And correlatively, while the Constitution protects citizens from governmental efforts to restrict their access to social media, see Packingham v. North Carolina, 137 S. Ct. 1730, 1737 (2017), no one has a vested right to force a platform to allow her to contribute to or consume social-media content.
Second, a social-media platform is different from traditional media outlets in that it doesn‘t create most of the original content on its site; the vast majority of “tweets” on Twitter and videos on YouTube, for instance, are created by individual users, not the companies that own and operate Twitter and YouTube. Even so, platforms do engage in some speech of their own: A platform, for example, might publish terms of service or community standards specifying the type of content that it will (and won‘t) allow on its site, add addenda or disclaimers to certain posts (say, warning of misinformation or mature content), or publish its own posts.
Third, and relatedly, social-media platforms aren‘t “dumb pipes“: They‘re not just servers and hard drives storing information or hosting blogs that anyone can access, and they‘re not internet service providers reflexively transmitting data from point A to point B. Rather, when a user visits Facebook or Twitter, for instance, she sees a curated and edited compilation of content from the people and organizations that she follows. If she follows 1,000 people and 100 organizations on a particular platform, for instance, her “feed“—for better or worse—won‘t just consist of every single post created by every single one of those people and organizations arranged in reverse-chronological order. Rather, the platform will have exercised editorial judgment in two key ways: First, the platform will have removed posts that violate its terms of service or community standards—for instance, those containing hate speech, pornography, or violent content. See, e.g., Doc. 26-1 at 3-6; Facebook Community Standards, Meta, https://transparency.fb.com/policies/community-standards (last accessed May 15, 2022). Second, it will have arranged available content by choosing how to prioritize and display posts—effectively selecting which users’ speech the viewer will see, and in what order, during any given visit to the site. See Doc. 26-1 at 3.
Accordingly, a social-media platform serves as an intermediary between users who have chosen to partake of the service the platform provides and thereby participate in the community it has created. In that way, the platform creates a virtual space in which every user—private individuals, politicians, news organizations, corporations, and advocacy groups—can be both speaker and listener. In playing this role, the platforms invest significant time and resources into editing and organizing—the best word, we think, is curating—users' posts into the tailored feeds and compilations of content that they then disseminate to other users.
B
The State of Florida enacted the law at issue here, S.B. 7072, in 2021.
To these ends, S.B. 7072 defines the "social media platforms" that it covers as:
[A]ny information service, system, Internet search engine, or access software provider that:
- Provides or enables computer access by multiple users to a computer server, including an Internet platform or a social media site;
- Operates as a sole proprietorship, partnership, limited liability company, corporation, association, or other legal entity;
- Does business in the state; and
- Satisfies at least one of the following thresholds:
- Has annual gross revenues in excess of $100 million...
- Has at least 100 million monthly individual platform participants globally.
The relevant provisions of S.B. 7072 fall into three categories: content-moderation restrictions, disclosure obligations, and a user-data requirement.
Content-Moderation Restrictions
- Candidate deplatforming: A social-media platform "may not willfully deplatform a candidate for office." Fla. Stat. § 106.072(2). The term "deplatform" is defined to mean "the action or practice by a social media platform to permanently delete or ban a user or to temporarily delete or ban a user from the social media platform for more than 14 days." Id. § 501.2041(1)(c).
- Posts by or about candidates: "A social media platform may not apply or use post-prioritization or shadow banning algorithms for content and material posted by or about . . . a candidate." Id. § 501.2041(2)(h). "Post prioritization" refers to the practice of arranging certain content in a more or less prominent position in a user's feed or search results. Id. § 501.2041(1)(e). "Shadow banning" refers to any action to "limit or eliminate the exposure of a user or content or material posted by a user to other users of [a] . . . platform." Id. § 501.2041(1)(f).
- "Journalistic enterprises": A social-media platform may not "censor, deplatform, or shadow ban a journalistic enterprise based on the content of its publication or broadcast." Id. § 501.2041(2)(j). The term "journalistic enterprise" is defined broadly to include any entity doing business in Florida that either (1) publishes in excess of 100,000 words online and has at least 50,000 paid subscribers or 100,000 monthly users, (2) publishes 100 hours of audio or video online and has at least 100 million annual viewers, (3) operates a cable channel that provides more than 40 hours of content per week to more than 100,000 cable subscribers, or (4) operates under an FCC broadcast license. Id. § 501.2041(1)(d).
- Consistency: A social-media platform must "apply censorship, deplatforming, and shadow banning standards in a consistent manner among its users on the platform." Id. § 501.2041(2)(b). The Act does not define the term "consistent."
- 30-day restriction: A platform may not make changes to its "user rules, terms, and agreements . . . more than once every 30 days." Id. § 501.2041(2)(c).
- User opt-out: A platform must "categorize" its post-prioritization and shadow-banning algorithms and allow users to opt out of them; for users who opt out, the platform must display material in "sequential or chronological" order. Id. § 501.2041(2)(f), (g).
Disclosure Obligations
- Standards: A social-media platform must "publish the standards, including detailed definitions, it uses or has used for determining how to censor, deplatform, and shadow ban." Id. § 501.2041(2)(a).
- Rule changes: A platform must inform its users "about any changes to" its "rules, terms, and agreements before implementing the changes." Id. § 501.2041(2)(c).
- View counts: Upon request, a platform must provide a user with the number of others who viewed that user's content or posts. Id. § 501.2041(2)(e).
- Candidate free advertising: Platforms that "willfully provide[] free advertising for a candidate must inform the candidate of such in-kind contribution." Id. § 106.072(4).
- Explanations: Before a social-media platform deplatforms, censors, or shadow-bans any user, it must provide the user with a detailed notice. Id. § 501.2041(2)(d). In particular, the notice must be in writing and be delivered within 7 days, and must include both a "thorough rationale explaining the reason" for the "censor[ship]" and a "precise and thorough explanation of how the social media platform became aware" of the content that triggered its decision. Id. § 501.2041(3). (The notice requirement doesn't apply "if the censored content or material is obscene." Id. § 501.2041(4).)
User-Data Requirement
- Data access: A social-media platform must allow a deplatformed user to "access or retrieve all of the user's information, content, material, and data for at least 60 days" after the user receives notice of deplatforming. Id. § 501.2041(2)(i).
Enforcement of S.B. 7072 is multi-pronged: the Florida Elections Commission may fine platforms that deplatform political candidates, and the Act separately makes violations of § 501.2041 enforceable through private suits and as unfair or deceptive trade practices.
C
The plaintiffs here—NetChoice and the Computer & Communications Industry Association (together, "NetChoice")—are trade associations that represent internet and social-media companies like Facebook, Twitter, Google (which owns YouTube), and TikTok. They sued the Florida officials charged with enforcing S.B. 7072, contending (among other things) that the Act violates the First Amendment and is preempted by federal law, and they moved for a preliminary injunction.
The district court granted NetChoice's motion and preliminarily enjoined enforcement of the Act's challenged provisions.
On NetChoice‘s free-speech challenge, the district court held that the Act‘s provisions implicated the First Amendment because they restrict platforms’ constitutionally protected exercise of “editorial judgment.” The court then applied strict First Amendment scrutiny because it concluded that some of the Act‘s provisions were content-based and, more broadly, because it found that the entire bill was motivated by the state‘s viewpoint-based purpose to defend conservatives’ speech from perceived liberal “big tech” bias: “This viewpoint-based motivation, without more, subjects the legislation to strict scrutiny, root and branch.” Doc. 113 at 23-26. The court held that the Act‘s provisions “come nowhere close” to surviving strict scrutiny because, it said, “leveling the playing field” for speech is not a legitimate state interest, the provisions aren‘t narrowly tailored, and the State hadn‘t even argued that the provisions could survive such scrutiny. Id. at 27. The court further noted that even if more permissive intermediate scrutiny applied, the provisions wouldn‘t survive because they don‘t meet the narrow-tailoring requirement and instead “seem designed not to achieve any governmental interest but to impose the maximum available burden on the social media platforms.” Id. at 28. The court concluded that the plaintiffs easily met the remaining requirements for a preliminary injunction.
The State appealed. Before us, the State first argues that the plaintiffs are unlikely to succeed on their preemption challenge because some applications of the Act are consistent with 47 U.S.C. § 230. On the First Amendment question, the State's principal contention is that platforms' content moderation isn't constitutionally protected speech at all because the platforms merely host others' expression and are, or at least may be treated as, common carriers.
NetChoice responds that platforms' content-moderation decisions—i.e., their decisions to remove or deprioritize posts or deplatform users, and thereby curate the material they disseminate—are "editorial judgments" that are protected by the First Amendment under longstanding Supreme Court precedent, including Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241 (1974), Pacific Gas & Electric Co. v. Public Utilities Commission of California, 475 U.S. 1 (1986), Turner Broadcasting Systems, Inc. v. FCC, 512 U.S. 622 (1994), and Hurley v. Irish-American Gay, Lesbian & Bisexual Group of Boston, 515 U.S. 557 (1995). According to NetChoice, strict scrutiny applies to the entire law "several times over" because it is speaker-, content-, and viewpoint-based. Moreover, and in any event, NetChoice says, the law fails any form of heightened scrutiny because there is no legitimate state interest in equalizing speech and because the law isn't narrowly tailored. NetChoice briefly defends the district court's preemption holding, but focuses on the First Amendment issues because they fully dispose of the case and because, it contends, a First Amendment ruling would afford broader relief.
D
“We review the grant of a preliminary injunction for abuse of discretion, reviewing any underlying legal conclusions de novo and any findings of fact for clear error.” Gonzalez v. Governor of Ga., 978 F.3d 1266, 1270 (11th Cir. 2020). Ordinarily, “[a] district court may grant injunctive relief only if the moving party shows that: (1) it has a substantial likelihood of success on the merits; (2) irreparable injury will be suffered unless the injunction issues; (3) the threatened injury to the movant outweighs whatever damage the proposed injunction may cause the opposing party; and (4) if issued, the injunction would not be adverse to the public interest.” Siegel v. LePore, 234 F.3d 1163, 1176 (11th Cir. 2000) (en banc). Likelihood of success on the merits “is generally the most important” factor. Gonzalez, 978 F.3d at 1271 n.12 (quotation marks omitted).
* * *
We will train our attention on the question whether NetChoice has shown a substantial likelihood of success on the merits of its First Amendment challenge to S.B. 7072, which suffices to resolve this appeal.
In assessing whether the Act likely violates the First Amendment, we must initially consider whether it triggers First Amendment scrutiny in the first place—i.e., whether it regulates "speech" within the meaning of the Amendment at all. See Coral Ridge Ministries Media, Inc. v. Amazon.com, Inc., 6 F.4th 1247, 1254 (11th Cir. 2021). In other words, we must determine whether social-media platforms engage in First-Amendment-protected activity. If they do, we must then proceed to determine what level of scrutiny applies and whether the Act's provisions survive that scrutiny. See Fort Lauderdale Food Not Bombs v. City of Fort Lauderdale, 11 F.4th 1266, 1291 (11th Cir. 2021) ("FLFNB II").
For reasons we will explain in the balance of the opinion, we hold as follows: (1) it is substantially likely that S.B. 7072's content-moderation restrictions unconstitutionally burden platforms' protected exercise of editorial judgment; (2) it is substantially likely that the Act's "thorough rationale" disclosure requirement unduly burdens that judgment and is likewise unconstitutional; and (3) it is not substantially likely that the Act's remaining disclosure provisions or its user-data-access requirement violate the First Amendment.
II
A
Social-media platforms like Facebook, Twitter, YouTube, and TikTok are private companies with First Amendment rights, see First Nat'l Bank of Bos. v. Bellotti, 435 U.S. 765, 781-84 (1978), and when they (like other entities) "disclos[e]," "publish[]," or "disseminat[e]" information, they engage in "speech within the meaning of the First Amendment." Sorrell v. IMS Health Inc., 564 U.S. 552, 570 (2011) (quotation marks omitted). More particularly, when a platform removes or deprioritizes a user or post, it makes a judgment about whether and to what extent it will publish information to its users—a judgment rooted in the platform's own views about the sorts of content and viewpoints that are valuable and appropriate for dissemination on its site. As the officials who sponsored and signed S.B. 7072 into law evidently recognized, those judgments convey messages of the platforms' own.
Laws that restrict platforms’ ability to speak through content moderation therefore trigger First Amendment scrutiny. Two lines of precedent independently confirm this commonsense conclusion: first, and most obviously, decisions protecting exercises of “editorial judgment“; and second, and separately, those protecting inherently expressive conduct.
1
We‘ll begin with the editorial-judgment cases. The Supreme Court has repeatedly held that a private entity‘s choices about whether, to what extent, and in what manner it will disseminate speech—even speech created by others—constitute “editorial judgments” protected by the First Amendment.
Miami Herald Publishing Co. v. Tornillo is the pathmarking case. There, the Court held that a newspaper's decisions about what content to publish and its "treatment of public issues and public officials—whether fair or unfair—constitute the exercise of editorial control and judgment" that the First Amendment was designed to safeguard. 418 U.S. at 258. Florida had passed a statute requiring any paper that ran a piece critical of a political candidate to give the candidate equal space in its pages to reply. Id. at 243. Despite the contentions (1) that economic conditions had created "vast accumulations of unreviewable power in the modern media empires" and (2) that those conditions had resulted in "bias and manipulative reportage" and massive barriers to entry, the Court concluded that the state's attempt to compel the paper's editors to publish "that which reason tells them should not be published" was unconstitutional. Id. at 250-51, 256 (quotation marks omitted). Florida's "intrusion into the function of editors," the Court held, was barred by the First Amendment. Id. at 258.
The Court subsequently extended Miami Herald's protection of editorial judgment beyond newspapers. In Pacific Gas & Electric Co. v. Public Utilities Commission of California, 475 U.S. 1 (1986), the Court invalidated a state agency's order that would have required a utility company to include in its billing envelopes the speech of a third party with which the company disagreed. 475 U.S. at 4, 20 (plurality op.). A plurality of the Court reasoned that the concerns underlying Miami Herald applied to a utility company in the same way that they did to the institutional press. Id. at 11-12. The challenged order required the company "to use its property as a vehicle for spreading a message with which it disagree[d]" and therefore was subject to (and failed) strict scrutiny.
So too, in Turner Broadcasting Systems, Inc. v. FCC, 512 U.S. 622 (1994), the Court held that cable operators—companies that own cable lines and choose which stations to offer their customers—"engage in and transmit speech." 512 U.S. at 636. "[B]y exercising editorial discretion over which stations or programs to include in [their] repertoire," the Court said, they "seek to communicate messages on a wide variety of topics and in a wide variety of formats." Id. (quotation marks omitted); see also Ark. Educ. TV Comm'n v. Forbes, 523 U.S. 666, 674 (1998) ("Although programming decisions often involve the compilation of the speech of third parties, the decisions nonetheless constitute communicative acts."). Because cable operators' decisions about which channels to transmit were protected speech, the challenged regulation requiring operators to carry broadcast-TV channels triggered First Amendment scrutiny, albeit intermediate rather than strict scrutiny, because the must-carry rules were content-neutral. Id. at 662.
Most recently, the Court applied the editorial-judgment principle to a parade organizer in Hurley v. Irish-American Gay, Lesbian & Bisexual Group of Boston, 515 U.S. 557 (1995), explaining that parades (like newspapers and cable-TV packages) constitute protected expression. 515 U.S. at 568. The Supreme Judicial Court of Massachusetts had attempted to apply the state's public-accommodations law to require the organizers of a privately run parade to allow a gay-pride group to march. Id. at 564. Citing Miami Herald, and using words equally applicable here, the Court observed that "the presentation of an edited compilation of speech generated by other persons . . . fall[s] squarely within the core of First Amendment security." Id. at 570.
Together, Miami Herald, Pacific Gas, and particularly Turner and Hurley establish that a private entity's decisions about whether, to what extent, and in what manner to disseminate third-party-created content to the public are editorial judgments protected by the First Amendment.
2
Separately, we might also assess social-media platforms' content-moderation practices against our general standard for what constitutes inherently expressive conduct protected by the First Amendment. We summarized that standard in Coral Ridge:
In determining whether conduct is expressive, we ask whether the reasonable person would interpret it as some sort of message, not whether an observer would necessarily infer a specific message. If we find that the conduct in question is expressive, any law regulating that conduct is subject to the First Amendment.

6 F.4th at 1254 (cleaned up).
In Coral Ridge, a Christian ministry and media organization sued Amazon.com, alleging that Amazon's decision to exclude the organization from the company's "AmazonSmile" charitable-giving program—based on the Southern Poverty Law Center's designation of the organization as a "hate group"—constituted religious discrimination in violation of Title II of the Civil Rights Act of 1964. We held that Amazon's choices about which charities to include in the program were inherently expressive and that applying Title II to override those choices would violate Amazon's First Amendment rights.
The Coral Ridge case built on our earlier decision in Fort Lauderdale Food Not Bombs v. City of Fort Lauderdale ("FLFNB I"), 901 F.3d 1235 (11th Cir. 2018). That case concerned a non-profit organization that distributed free food in a city park to communicate its view that society should end hunger and poverty by redirecting resources away from the military. 901 F.3d at 1238-39. When the city enacted an ordinance that would have prohibited distributing food in parks without prior authorization, the organization sued, arguing that its food-sharing events constituted inherently expressive conduct protected by the First Amendment. We agreed, holding that, given the surrounding context, a reasonable observer would interpret the events as conveying "some sort of message." Id. at 1245.
3
Whether we assess social-media platforms' content-moderation activities against the Miami Herald line of cases or against our own decisions explaining what constitutes expressive conduct, the result is the same: Social-media platforms exercise editorial judgment that is inherently expressive. When platforms choose to remove users or posts, deprioritize content in viewers' feeds or search results, or sanction breaches of their community standards, they engage in First-Amendment-protected expressive activity. The platforms' own policies illustrate the point:
- YouTube seeks to create a “welcoming community for viewers” and, to that end, prohibits a wide range of content, including spam, pornography, terrorist incitement, election and public-health misinformation, and hate speech.6
- Facebook engages in content moderation to foster "authenticity," "safety," "privacy," and "dignity," and accordingly, removes or adds warnings to a wide range of content—for example, posts that include what it considers to be hate speech, fraud or deception, nudity or sexual activity, and public-health misinformation.7
- Twitter aims “to ensure all people can participate in the public conversation freely and safely” by removing content, among other categories, that it views as embodying hate, glorifying violence, promoting suicide, or containing election misinformation.8
- Roblox, a gaming social network primarily for children, prohibits “[s]ingling out a user or group for ridicule or abuse,” any sort of sexual content, depictions of and support for war or violence, and any discussion of political parties or candidates.9
- Vegan Forum allows non-vegans but “will not tolerate members who promote contrary agendas.”10
All such decisions about what speech to permit, disseminate, prohibit, and deprioritize—decisions based on platforms’ own particular values and views—fit comfortably within the Supreme Court‘s editorial-judgment precedents.
Separately, but similarly, platforms' content-moderation activities qualify as inherently expressive conduct: in context, a reasonable observer would understand a platform's decision to remove, flag, or deprioritize content as conveying some sort of message about that content.
In an effort to rebut this point, the State responds that because the vast majority of content that makes it onto social-media platforms is never reviewed—let alone removed or deprioritized—platforms aren't engaged in conduct of sufficiently expressive quality to merit First Amendment protection. We are unpersuaded: what matters is the expressive character of the content-moderation decisions that platforms do make, not the fraction of posts that those decisions reach.
B
In the face of the editorial-judgment and expressive-conduct cases, the State insists that S.B. 7072 doesn't even implicate, let alone violate, the First Amendment. The State leans principally on two Supreme Court decisions in which private entities were required to "host" speech belonging to others: PruneYard Shopping Center v. Robins, 447 U.S. 74 (1980), and Rumsfeld v. Forum for Academic & Institutional Rights, Inc., 547 U.S. 47 (2006) ("FAIR").
1
We begin with the "hosting" cases. The first decision to which the State points, PruneYard, is readily distinguishable. There, the Supreme Court affirmed a state court's decision requiring a privately owned shopping mall to allow members of the public to circulate petitions on its property. Id. at 76-77, 88. In that case, though, the only First Amendment interest that the property owner asserted was the right "not to be forced by the State to use his property as a forum for the speech of others." Id. at 85. The owner never claimed that the petitioners' presence interfered with any expression or editorial judgment of his own, and the Court observed that the views expressed on his property would "not likely be identified with those of the owner." Id. at 87.
FAIR may be a bit closer, but it, too, is distinguishable. In that case, the Supreme Court upheld a federal statute—the Solomon Amendment—that required law schools, as a condition of their universities' receipt of federal funds, to give military recruiters the same access to campuses and students that they afforded other employers. 547 U.S. at 51. The schools argued that the requirement both (1) unconstitutionally compelled them to host and accommodate the military's speech and (2) restricted their own inherently expressive conduct.
With respect to the first argument, the Court distinguished Miami Herald, Pacific Gas, and Hurley on the ground that, in those cases, "the complaining speaker's own message was affected by the speech it was forced to accommodate." Id. at 63. The Solomon Amendment's requirement that schools host military recruiters did "not affect the law schools' speech," the Court said, "because the schools [were] not speaking when they host[ed] interviews and recruiting receptions": Recruiting activities, the Court reasoned, simply aren't "inherently expressive"—they're not speech in the way that editorial pages, newsletters, and parades are. Id. at 64. Therefore, the Court concluded, "accommodation of a military recruiter's message is not compelled speech because the accommodation does not sufficiently interfere with any message of the school." Id. Nor did the Solomon Amendment's requirement that schools send notices on behalf of military recruiters unconstitutionally compel speech, the Court held, as it was merely incidental to the law's regulation of conduct. Id. at 62.
The FAIR Court also rejected the law schools’ second argument—namely, that the Solomon Amendment restricted their inherently expressive conduct. The schools’ refusal to allow military recruiters on campus was expressive, the Court emphasized, “only because [they] accompanied their conduct with speech explaining it.” Id. at 66. In the normal course, the Court said, an observer “who s[aw] military recruiters interviewing away from the law school [would have] no way of knowing” whether the school was expressing a message or, instead, the school‘s rooms just happened to be full or the recruiters just preferred to interview elsewhere. Id. Because “explanatory speech” was necessary to understand the message conveyed by the law schools’ conduct, the Court concluded, that conduct wasn‘t “inherently expressive.” Id.
FAIR isn't controlling here because social-media platforms warrant First Amendment protection in two respects that the law schools' hosting of military recruiters did not.
First, S.B. 7072 interferes with social-media platforms' own "speech" within the meaning of the First Amendment. Unlike the law schools' recruiting services, a platform's core product is the curated compilation of speech that it disseminates; forcing it to carry content that its standards would exclude alters that compilation and, with it, the platform's own expression.
Second, social-media platforms are engaged in inherently expressive conduct of the sort that the Court found lacking in FAIR. As we were careful to explain in FLFNB I, FAIR “does not mean that conduct loses its expressive nature just because it is also accompanied by other speech.” 901 F.3d at 1243-44. Rather, “[t]he critical question is whether the explanatory speech is necessary for the reasonable observer to perceive a message from the conduct.” Id. at 1244. And we held that an advocacy organization‘s food-sharing events constituted expressive conduct from which, “due to the context surrounding them, the reasonable observer would infer some sort of message“—even without reference to the words “Food Not Bombs” on the organization‘s banners. Id. at 1245. Context, we held, is what differentiates “activity that is sufficiently expressive [from] similar activity that is not“—e.g., “the act of sitting down” from “the sit-in by African Americans at a Louisiana library” protesting segregation. Id. at 1241 (citing Brown v. Louisiana, 383 U.S. 131, 141-42 (1966)).
Unlike the law schools in FAIR, social-media platforms’ content-moderation decisions communicate messages when they remove or “shadow-ban” users or content. Explanatory speech isn‘t ”necessary for the reasonable observer to perceive a message from,” for instance, a platform‘s decision to ban a politician or remove what it perceives to be misinformation. Id. at 1244. Such conduct—the targeted removal of users’ speech from websites whose primary function is to serve as speech platforms—conveys a message to the reasonable observer “due to the context surrounding” it. Id. at 1245; see also Coral Ridge, 6 F.4th at 1254. Given the context, a reasonable observer witnessing a platform remove a user or item of content would infer, at a minimum, a message of disapproval.15
The State asserts that PruneYard and FAIR—and, for that matter, the Supreme Court's editorial-judgment decisions—establish three "guiding principles" that should lead us to conclude that S.B. 7072 doesn't implicate the First Amendment. We consider, and reject, each in turn.
The first principle—that a regulation must interfere with the host's ability to speak in order to implicate the First Amendment—is satisfied here: as we've explained, S.B. 7072's content-moderation restrictions interfere with platforms' own expressive activity, namely their exercise of editorial judgment over the compilations of content they disseminate.
The State's second principle—that in order to trigger First Amendment scrutiny, a law must compel the host itself to speak rather than merely require it to carry others' speech—fares no better. Content moderation is the platforms' own protected expression, so S.B. 7072 does more than mandate carriage: it restricts what the platforms themselves communicate through their curated feeds.
The State's final principle—that in order to receive First Amendment protection, the regulated conduct must be inherently expressive—doesn't help the State either. For the reasons we've explained, platforms' content-moderation conduct is inherently expressive: in context, a reasonable observer would infer from it some sort of message.
In short, the State‘s reliance on PruneYard and FAIR and its attempts to distinguish the editorial-judgment line of cases are unavailing.
2
The State separately seeks to evade (or at least minimize) First Amendment scrutiny by characterizing social-media platforms as "common carriers." Common carriers, like the railroads, telegraphs, and telephone companies before them, have historically been required to serve all comers without discrimination, and the State insists that this is "true of social media in the 21st century." Oral Arg. at 18:37 et seq. For reasons we explain, we disagree.
At the outset, we confess some uncertainty whether the State means to argue (a) that platforms are already common carriers, and so possess no (or only minimal) First Amendment rights, or (b) that the State can, by dint of ordinary legislation, make them common carriers, thereby abrogating any First Amendment rights that they currently possess. Whatever the State‘s position, we are unpersuaded.
a
The first version of the argument fails because, in point of fact, social-media platforms are not—in the nature of things, so to speak—common carriers. That is so for at least three reasons.
First, social-media platforms have never acted like common carriers. "[I]n the communications context," common carriers are entities that "make a public offering to provide communications facilities whereby all members of the public who choose to employ such facilities may communicate or transmit intelligence of their own design and choosing"—they don't "make individualized decisions, in particular cases, whether and on what terms to deal." FCC v. Midwest Video Corp., 440 U.S. 689, 701 (1979) (cleaned up). While it's true that social-media platforms generally hold themselves open to all members of the public, they require users, as preconditions of access, to accept their terms of service and abide by their community standards. In other words, Facebook is open to every individual if, but only if, she agrees not to transmit content that violates the company's rules. Social-media users, accordingly, are not freely able to transmit messages "of their own design and choosing" because platforms make—and have always made—"individualized" content- and viewpoint-based decisions about whether to publish particular messages or users.
Second, Supreme Court precedent strongly suggests that internet companies like social-media platforms aren‘t common carriers. While the Court has applied less stringent First Amendment scrutiny to television and radio broadcasters, the Turner Court cabined that approach to “broadcast” media because of its “unique physical limitations“—chiefly, the scarcity of broadcast frequencies. 512 U.S. at 637-39. Instead of “comparing cable operators to electricity providers, trucking companies, and railroads—all entities subject to traditional economic regulation“—the Turner Court “analogized the cable operators [in that case] to the publishers, pamphleteers, and bookstore owners traditionally protected by the First Amendment.” U.S. Telecom Ass‘n v. FCC, 855 F.3d 381, 428 (D.C. Cir. 2017) (Kavanaugh, J., dissental); see Turner, 512 U.S. at 639. And indeed, the Court explicitly distinguished online from broadcast media in Reno v. American Civil Liberties Union, emphasizing that the “vast democratic forums of the Internet” have never been “subject to the type of government supervision and regulation that has attended the broadcast industry.” 521 U.S. 844, 868-69 (1997). These precedents demonstrate that social-media platforms should be treated more like cable operators, which retain their First Amendment right to exercise editorial discretion, than traditional common carriers.
Finally, Congress has distinguished internet companies from common carriers. The Telecommunications Act of 1996 explicitly differentiates "interactive computer services"—like social-media platforms—from "common carriers or telecommunications carriers." See 47 U.S.C. § 223(e)(6) ("Nothing in this section shall be construed to treat interactive computer services as common carriers or telecommunications carriers.").
b
If social-media platforms are not common carriers either in fact or by law, the State is left to argue that it can force them to become common carriers, abrogating or diminishing the First Amendment rights that they currently possess and exercise. Neither law nor logic recognizes government authority to strip an entity of its First Amendment rights merely by labeling it a common carrier. Quite the contrary, if social-media platforms currently possess the First Amendment right to exercise editorial judgment, as we hold it is substantially likely they do, then any law infringing that right—even one bearing the terminology of “common carri[age]“—should be assessed under the same standards that apply to other laws burdening First-Amendment-protected activity. See Denver Area Educ. Telecomm. Consortium, Inc. v. FCC, 518 U.S. 727, 825 (1996) (Thomas, J., concurring in the judgment in part and dissenting in part) (“Labeling leased access a common carrier scheme has no real First Amendment consequences.“); Cablevision Sys. Corp. v. FCC, 597 F.3d 1306, 1321-22 (D.C. Cir. 2010) (Kavanaugh, J., dissenting) (explaining that because video programmers have a constitutional right to exercise editorial discretion, “the Government cannot compel [them] to operate like ‘dumb pipes’ or ‘common carriers’ that exercise no editorial control“); U.S. Telecom Ass‘n, 855 F.3d at 434 (Kavanaugh, J., dissental) (“Can the Government really force Facebook and Google . . . to operate as common carriers?“).
* * *
The State‘s best rejoinder is that because large social-media platforms are clothed with a “public trust” and have “substantial market power,” they are (or should be treated like) common carriers. Br. of Appellants at 35-37; see Biden v. Knight First Amend. Inst., 141 S. Ct. 1220, 1226 (2021) (Thomas, J., concurring). These premises aren‘t uncontroversial, but even if they‘re true, they wouldn‘t change our conclusion. The State doesn‘t argue that market power and public importance are alone sufficient reasons to recharacterize a private company as a common carrier; rather, it acknowledges that the “basic characteristic of common carriage is the requirement to hold oneself out to serve the public indiscriminately.” Br. of Appellants at 35 (quoting U.S. Telecom. Ass‘n v. FCC, 825 F.3d 674, 740 (D.C. Cir. 2016)); see Knight, 141 S. Ct. at 1223 (Thomas, J., concurring). The problem, as we‘ve explained, is that social-media platforms don‘t serve the public indiscriminately but, rather, exercise editorial judgment to curate the content that they display and disseminate.
The State seems to argue that even if platforms aren't currently common carriers, their market power and public importance permit the legislature to impose common-carrier obligations on them prospectively. That argument runs headlong into the problem we've already identified: because content moderation is constitutionally protected editorial judgment, the State can't strip it of protection by relabeling the platforms, and any law burdening that judgment must still satisfy First Amendment scrutiny.
In short, because social-media platforms exercise—and have historically exercised—inherently expressive editorial judgment, they aren‘t common carriers, and a state law can‘t force them to act as such unless it survives First Amendment scrutiny.
C
With one exception, we hold that the challenged provisions of S.B. 7072 trigger First Amendment scrutiny either (1) by restricting social-media platforms’ ability to exercise editorial judgment or (2) by imposing disclosure requirements. Here‘s a brief rundown.
S.B. 7072's content-moderation restrictions all limit platforms' ability to exercise editorial judgment and thus trigger First Amendment scrutiny. The provisions that prohibit deplatforming candidates (§ 106.072(2)), that bar post-prioritization and shadow banning of content by or about candidates (§ 501.2041(2)(h)), and that forbid censoring, deplatforming, or shadow banning journalistic enterprises (§ 501.2041(2)(j)) all restrict platforms' judgments about whether, to what extent, and in what manner to disseminate particular speech and speakers.
The consistency requirement (§ 501.2041(2)(b)) and the 30-day restriction (§ 501.2041(2)(c)) do so as well: the former forces platforms to apply their standards uniformly even where editorial judgment would counsel different treatment, and the latter freezes, 30 days at a time, the rules by which platforms curate content.
The user-opt-out requirement (§ 501.2041(2)(f), (g)) likewise burdens editorial judgment by requiring platforms, at a user's election, to set aside their curation algorithms and display content in "sequential or chronological" order.
S.B. 7072's disclosure provisions implicate the First Amendment, but for a different reason. These provisions don't directly restrict editorial judgment or expressive conduct, but rather indirectly burden platforms' editorial judgment by compelling them to disclose certain information. Laws that compel commercial disclosures and thereby indirectly burden protected speech trigger relatively permissive First Amendment scrutiny, as we explain below. See Zauderer v. Office of Disciplinary Counsel, 471 U.S. 626, 651 (1985); Nat'l Inst. of Fam. & Life Advocs. v. Becerra, 138 S. Ct. 2361, 2378 (2018) ("NIFLA").
Finally, the exception: We hold that S.B. 7072's user-data-access requirement (§ 501.2041(2)(i)) likely doesn't implicate the First Amendment. Requiring a platform to let a deplatformed user access or retrieve his own information, content, and data neither restricts the platform's editorial judgment nor compels it to speak, so the provision is likely constitutional.
* * *
Taking stock: We conclude that social-media platforms’ content-moderation activities—permitting, removing, prioritizing, and deprioritizing users and posts—constitute “speech” within the meaning of the First Amendment. All but one of S.B. 7072‘s operative provisions implicate platforms’ First Amendment rights and are therefore subject to First Amendment scrutiny.
III
A
Having determined that it is substantially likely that S.B. 7072 triggers First Amendment scrutiny, we must now determine the level of scrutiny to apply—and to which provisions.
We begin with the basics. “[A] content-neutral regulation of expressive conduct is subject to intermediate scrutiny, while a regulation based on the content of the expression must withstand the additional rigors of strict scrutiny.” FLFNB II, 11 F.4th at 1291; see also Turner, 512 U.S. at 643-44, 662 (noting that although the challenged provisions “interfere[d] with cable operators’ editorial discretion,” they were content-neutral and so would be subject only to intermediate scrutiny). A law is content-based if it “suppress[es], disadvantage[s], or impose[s] differential burdens upon speech because of its content,” Turner, 512 U.S. at 642—i.e., if it “applies to particular speech because of the topic discussed or the idea or message expressed,” Reed v. Town of Gilbert, 576 U.S. 155, 163 (2015). A law can be content-based either because it draws “facial distinctions . . . defining regulated speech by particular subject matter” or because, though facially neutral, it “cannot be justified without reference to the content of the regulated speech.” Id. at 163-64 (quotation marks omitted).
Viewpoint-based laws—"[w]hen the government targets not subject matter, but particular views taken by speakers on a subject"—constitute "an egregious form of content discrimination" and are presumptively unconstitutional. Rosenberger v. Rector & Visitors of Univ. of Va., 515 U.S. 819, 829-30 (1995).
1
NetChoice asks us to affirm the district court's conclusion that S.B. 7072's "viewpoint-based motivation" subjects the entire Act—every provision—"to strict scrutiny, root and branch." Doc. 113 at 25 (emphasis added). It's certainly true—as already explained—that at least a handful of S.B. 7072's key proponents candidly acknowledged their desire to combat what they perceived to be the "leftist" bias of the "big tech oligarchs" against "conservative" ideas. Id. It's also true that the Act applies only to a subset of speakers consisting of the largest social-media platforms and that the law's enacted findings refer to the platforms' allegedly "unfair" censorship. See S.B. 7072 § 1(9), (10).
We have held—"many times"—that "when a statute is facially constitutional, a plaintiff cannot bring a free-speech challenge by claiming that the lawmakers who passed it acted with a constitutionally impermissible purpose." In re Hubbard, 803 F.3d 1298, 1312 (11th Cir. 2015). In Hubbard, we cited (among other decisions) United States v. O'Brien for the proposition that courts shouldn't look to a law's legislative history to find an illegitimate motivation for an otherwise constitutional statute. Id. (citing United States v. O'Brien, 391 U.S. 367, 383 (1968)). The defendant in O'Brien had challenged a law prohibiting the destruction of draft cards on the ground that Congress's "purpose"—as evidenced in the statements of several legislators—was "to suppress freedom of speech." 391 U.S. at 382-83. The Supreme Court refused to void the statute "on the basis of what fewer than a handful of Congressmen said about it" given that Congress "had the undoubted power to enact" it if legislators had only made "'wiser' speech[es] about it." Id. at 384; see also Arizona v. California, 283 U.S. 423, 455 (1931) ("Into the motives which induced members of Congress to enact the [statute], this court may not inquire."). Even though the statute in O'Brien regulated expressive conduct and its legislative history suggested a viewpoint-based motivation, the O'Brien Court declined to invalidate the statute as a per se matter, or even apply strict scrutiny, but rather upheld the law under what we have come to call intermediate scrutiny. 391 U.S. at 382.
To be fair, there is some support for NetChoice's motivation-based argument for invalidating S.B. 7072 in toto, but not enough to overcome the clear statements in Hubbard and O'Brien. It's true that the Supreme Court said in Turner that "even a regulation neutral on its face may be content based if its manifest purpose is to regulate speech because of the message it conveys." Turner, 512 U.S. at 645-46 (emphasis added). And Turner cited, with a hazy "cf." signal, Church of the Lukumi Babalu Aye, Inc. v. City of Hialeah, 508 U.S. 520, 534-35 (1993), which held that in the free-exercise context, it was appropriate to look beyond "the text of the laws at issue" to identify discriminatory animus against a minority religion. But NetChoice hasn't cited—and we're not aware of—any Supreme Court or Eleventh Circuit decision that relied on legislative history or statements by proponents to characterize as viewpoint-based a law challenged on free-speech grounds.19 The closest the Supreme Court seems to have come is in Sorrell v. IMS Health, Inc., in which it looked to a statute's "formal legislative findings" to dispel "any doubt" that the challenged statute was content-based. 564 U.S. at 564-65. But the only evidence of viewpoint-based motivation in S.B. 7072's enacted findings is its references to "unfair[ness]." Those, we think, are far less damning than the findings in Sorrell, which expressly—and startlingly—stated that the regulated speakers conveyed messages that were "often in conflict with the goals of the state." 564 U.S. at 565 (quotation marks omitted).
Finally, the fact that S.B. 7072 targets only a subset of social-media platforms isn‘t enough to subject the entire law to strict scrutiny or per se invalidation. It‘s true that the Supreme Court‘s “precedents are deeply skeptical of laws that distinguish among different speakers, allowing speech by some but not others” because they “run the risk that the State has left unburdened those speakers whose messages are in accord with its own views.” NIFLA, 138 S. Ct. at 2378 (quotation marks omitted); cf. Minneapolis Star & Tribune Co. v. Minn. Comm‘r of Revenue, 460 U.S. 575, 592 (1983) (noting that the power to “single[] out a few members of the press presents such a potential for abuse that no interest suggested by [the State] can justify the scheme“). But “[i]t would be error to conclude . . . that the First Amendment mandates strict scrutiny for any speech regulation that applies to one medium (or a subset thereof) but not others“: “[H]eightened scrutiny is unwarranted when the differential treatment is ‘justified by some special characteristic of the particular medium being regulated.‘”20 Turner, 512 U.S. at 660-61 (quoting Minneapolis Star, 460 U.S. at 585). S.B. 7072‘s application to only the largest social-media platforms might be viewpoint-motivated, or it might be based on some other “special characteristic” of large platforms—for instance, their market power. See Appellant‘s App‘x at 237-46. Given Hubbard and O‘Brien—and in the absence of clear precedent enabling us to find a viewpoint-discriminatory purpose based on legislative history—we conclude that NetChoice hasn‘t shown a substantial likelihood of success on the merits of its argument that S.B. 7072 should be stricken, or subject to strict scrutiny, in its entirety.21
2
Having determined that we cannot use the Act‘s chief proponents’ statements as a basis to invalidate S.B. 7072 “root and branch,” we must proceed on a more nuanced basis to determine what sort of scrutiny each provision—or category of provisions—triggers.
To start, we hold that it is substantially likely that what we have called the Act's content-moderation restrictions are subject to either strict or intermediate First Amendment scrutiny, depending on whether they are content-based or content-neutral. See FLFNB II, 11 F.4th at 1291-92. Some of these provisions are self-evidently content-based and thus subject to strict scrutiny. The journalistic-enterprises provision, for instance, prohibits a platform from making content-moderation decisions concerning any "journalistic enterprise based on the content of its posts," § 501.2041(2)(j), a distinction that turns explicitly on content. And the candidate-related provisions (§§ 106.072(2), 501.2041(2)(h)) single out speech by and about political candidates.
Some of the provisions—for instance, the consistency and 30-day restrictions (§ 501.2041(2)(b), (c))—appear content-neutral and so would trigger only intermediate scrutiny. In the end, though, we needn't definitively classify each provision, because it is substantially likely that none of the content-moderation restrictions survive even intermediate scrutiny.
A different standard applies to S.B. 7072's disclosure provisions, which compel speech rather than restrict it: commercial-disclosure requirements of that sort are assessed under Zauderer's more permissive framework, which we apply below.
B
At last, it is time to apply the requisite First Amendment scrutiny. We hold that it is substantially likely that none of S.B. 7072's content-moderation restrictions survive intermediate—let alone strict—scrutiny. We further hold that there is a substantial likelihood that the "thorough explanation" disclosure requirement (§ 501.2041(2)(d)) is unconstitutional, but that the Act's remaining disclosure provisions likely are not.
1
We‘ll start with S.B. 7072‘s content-moderation restrictions. While some of these provisions are likely subject to strict scrutiny, it is substantially likely that none survive even intermediate scrutiny. When a law is subject to intermediate scrutiny, the government must show that it “is narrowly drawn to further a substantial governmental interest . . . unrelated to the suppression of free speech.” FLFNB II, 11 F.4th at 1291. Narrow tailoring in this context means that the regulation must be “no greater than is essential to the furtherance of [the government‘s] interest.” O‘Brien, 391 U.S. at 377.
We think it substantially likely that S.B. 7072‘s content-moderation restrictions do not further any substantial governmental interest—much less any compelling one. Indeed, the State‘s briefing doesn‘t even argue that these provisions can survive heightened scrutiny. (The State seems to have wagered pretty much everything on the argument that S.B. 7072‘s provisions don‘t trigger First Amendment scrutiny at all.) Nor can we discern any substantial or compelling interest that would justify the Act‘s significant restrictions on platforms’ editorial judgment. We‘ll briefly explain and reject two possibilities that the State might offer.
The State might theoretically assert some interest in counteracting "unfair" private "censorship" that privileges some viewpoints over others on social-media platforms. See S.B. 7072 § 1(9). But a state "may not burden the speech of others in order to tilt public debate in a preferred direction," Sorrell, 564 U.S. at 578-79, or "advance some points of view," Pacific Gas, 475 U.S. at 20 (plurality op.). Put simply, there's no legitimate—let alone substantial—governmental interest in leveling the expressive playing field. Nor is there a substantial governmental interest in enabling users—who, remember, have no vested right to a social-media account—to say whatever they want on privately owned platforms that would prefer to remove their posts: By preventing platforms from conducting content moderation—which, we've explained, is itself expressive First-Amendment-protected activity—S.B. 7072 "restrict[s] the speech of some elements of our society in order to enhance the relative voice of others"—a concept "wholly foreign to the First Amendment." Buckley v. Valeo, 424 U.S. 1, 48-49 (1976). At the end of the day, preventing "unfair[ness]" to certain users or points of view isn't a substantial governmental interest; rather, private actors have a First Amendment right to be "unfair"—which is to say, a right to have and express their own points of view. Miami Herald, 418 U.S. at 258.
The State might also assert an interest in "promoting the widespread dissemination of information from a multiplicity of sources." Turner, 512 U.S. at 662. Just as the Turner Court held that the must-carry provisions served the government's substantial interest in ensuring that American citizens were able to access their "local broadcasting outlets," id. at 663-64, the State could argue that S.B. 7072 ensures that political candidates and journalistic enterprises are able to communicate with the public, see §§ 106.072(2), 501.2041(2)(h), (j). But Turner's rationale doesn't translate: the must-carry rules responded to cable operators' "bottleneck, or gatekeeper, control" over the programming entering subscribers' homes, 512 U.S. at 656, and social-media platforms exercise no comparable chokehold; speakers denied access to one platform remain free to reach audiences elsewhere on the internet.
There is also a substantial likelihood that the consistency, 30-day, and user-opt-out provisions (§ 501.2041(2)(b), (c), (f), (g)) serve no substantial governmental interest at all.
Moreover, and in any event, even if the State could establish that its content-moderation restrictions serve a substantial governmental interest, it hasn't even attempted to—and we don't think it could—show that the burden that those provisions impose is "no greater than is essential to the furtherance of that interest." O'Brien, 391 U.S. at 377. For instance, the candidate-deplatforming provision (§ 106.072(2)) forbids a platform to remove any candidate's account for any reason, no matter how often or how flagrantly the candidate violates the platform's standards, a prohibition that sweeps far more broadly than any interest in fostering political dialogue could require.
We conclude that NetChoice has shown a substantial likelihood of success on the merits of its claim that S.B. 7072's content-moderation restrictions violate the First Amendment.
2
We assess S.B. 7072's disclosure requirements—in contrast to its content-moderation restrictions—under Zauderer's more lenient standard: a commercial-disclosure requirement passes muster if it mandates the disclosure of purely factual and uncontroversial information, is reasonably related to a legitimate governmental interest, and is neither unjustified nor unduly burdensome. See Zauderer, 471 U.S. at 651; NIFLA, 138 S. Ct. at 2377.
With one notable exception, it is not substantially likely that the disclosure provisions are unconstitutional. The State's interest here is in ensuring that users—consumers who engage in commercial transactions with platforms by providing their attention and data, which the platforms monetize through advertising, in exchange for access to a forum—are fully informed about the terms of that transaction and aren't misled about platforms' content-moderation policies.24 This interest is likely legitimate. On the ensuing burden question, NetChoice hasn't established a substantial likelihood that the provisions requiring platforms to publish their standards (§ 501.2041(2)(a)), to notify users of rule changes (§ 501.2041(2)(c)), to provide view counts on request (§ 501.2041(2)(e)), or to inform candidates of free advertising (§ 106.072(4)) are unduly burdensome; those provisions therefore likely survive Zauderer scrutiny.
But NetChoice does argue that § 501.2041(2)(d)'s requirement that platforms provide a "thorough rationale" for each and every content-moderation decision is unduly burdensome, and we think it substantially likely that it is. Given the enormous volume of moderation decisions that covered platforms make every day, a per-decision written-justification mandate would impose massive compliance costs and would chill platforms from exercising their protected editorial judgment in the first place.
* * *
It is substantially likely that S.B. 7072's content-moderation restrictions (§§ 106.072(2); 501.2041(2)(b), (c), (f), (g), (h), (j)) and its per-decision explanation requirement (§ 501.2041(2)(d)) violate the First Amendment. It is not substantially likely that the Act's remaining disclosure provisions or its user-data-access requirement do.
IV
Finally, we turn to the remaining preliminary-injunction factors. Our conclusions about which provisions of S.B. 7072 are substantially likely to violate the First Amendment effectively determine the result of this appeal because likelihood of success on the merits “is generally the most important of the four factors.” Gonzalez, 978 F.3d at 1271 n.12 (quotation marks omitted). With respect to the second factor, we have held that “an ongoing violation of the First Amendment“—as the platforms here would suffer in the absence of an injunction—“constitutes an irreparable injury.” FF Cosms. FL, Inc. v. City of Miami Beach, 866 F.3d 1290, 1298 (11th Cir. 2017); see also Otto v. City of Boca Raton, 981 F.3d 854, 870 (11th Cir. 2020). The third and fourth factors—“damage to the opposing party” and the “public interest“—“can be consolidated” because “[t]he nonmovant is the government.” Otto, 981 F.3d at 870. And “neither the government nor the public has any legitimate interest in enforcing an unconstitutional ordinance.” Id. Therefore, the preliminary-injunction factors weigh in favor of enjoining the likely unconstitutional provisions of the Act.
* * *
We hold that the district court did not abuse its discretion when it preliminarily enjoined those provisions of S.B. 7072 that are substantially likely to violate the First Amendment. But the district court did abuse its discretion when it enjoined provisions of S.B. 7072 that aren‘t likely unconstitutional. Accordingly, we AFFIRM the preliminary injunction in part, and VACATE and REMAND in part, as follows:
| Provision | Fla. Stat. § | Likely Constitutionality | Disposition |
|---|---|---|---|
| Candidate deplatforming | 106.072(2) | Unconstitutional | Affirm |
| Posts by/about candidates | 501.2041(2)(h) | Unconstitutional | Affirm |
| “Journalistic enterprises” | 501.2041(2)(j) | Unconstitutional | Affirm |
| Consistency | 501.2041(2)(b) | Unconstitutional | Affirm |
| 30-day restriction | 501.2041(2)(c) | Unconstitutional | Affirm |
| User opt-out | 501.2041(2)(f),(g) | Unconstitutional | Affirm |
| Explanations (per decision) | 501.2041(2)(d) | Unconstitutional | Affirm |
| Standards | 501.2041(2)(a) | Constitutional | Vacate |
| Rule changes | 501.2041(2)(c) | Constitutional | Vacate |
| User view counts | 501.2041(2)(e) | Constitutional | Vacate |
| Candidate “free advertising” | 106.072(4) | Constitutional | Vacate |
| User-data access | 501.2041(2)(i) | Constitutional | Vacate |
Notes
To the extent that the State argues that social-media platforms lack the requisite "intent" to convey a message, we find it implausible that platforms would engage in the laborious process of defining detailed community standards, identifying offending content, and removing or deprioritizing that content if they didn't intend to convey "some sort of message." Unsurprisingly, the record in this case confirms platforms' intent to communicate messages through their content-moderation decisions—including that certain material is harmful or unwelcome on their sites. See, e.g., Doc. 25-1 at 2 (declaration of YouTube executive explaining that its approach to content moderation "is to remove content that violates [its] policies (developed with outside experts to prevent real-world harms), reduce the spread of harmful misinformation ... and raise authoritative and trusted content"); Facebook Community Standards, supra (noting that Facebook moderates content "in service of its 'values' of 'authenticity,' 'safety,' 'privacy,' and 'dignity'").
It might be, we suppose, that some content-moderation decisions—for instance, to prioritize or deprioritize individual posts—are so subtle that users wouldn't notice them but for the platforms' speech explaining their actions. But even if some subset of content-moderation activities wouldn't count as inherently expressive conduct under FAIR and FLFNB I, many are sufficiently transparent that users would likely notice them and, in context, infer from them "some sort of message"—even in the absence of explanatory speech. Specifically, it's likely clear to viewers that platforms take down individual posts, remove entire categories of content, and deplatform other users—and that such actions express messages. "Shadow-banning" would also likely be apparent and communicate a message to a reasonable user who knows that she follows a particular poster but doesn't see that poster's content, for instance, in her feed or search results. Thus, even if some content moderation isn't inherently expressive, much of it is. See United States v. Stevens, 559 U.S. 460, 473 (2010) (noting that a statute is facially invalid under the First Amendment only if "a substantial number of its applications are unconstitutional, judged in relation to the statute's plainly legitimate sweep" (quotation marks omitted)).
