MOODY, ATTORNEY GENERAL OF FLORIDA, ET AL. v. NETCHOICE, LLC, DBA NETCHOICE, ET AL.
No. 22-277
SUPREME COURT OF THE UNITED STATES
July 1, 2024
OCTOBER TERM, 2023
Syllabus
NOTE: Where it is feasible, a syllabus (headnote) will be released, as is being done in connection with this case, at the time the opinion is issued. The syllabus constitutes no part of the opinion of the Court but has been prepared by the Reporter of Decisions for the convenience of the reader. See United States v. Detroit Timber & Lumber Co., 200 U. S. 321, 337.
CERTIORARI TO THE UNITED STATES COURT OF APPEALS FOR THE ELEVENTH CIRCUIT
No. 22-277. Argued February 26, 2024—Decided July 1, 2024*
*Together with No. 22-555, NetChoice, LLC, dba NetChoice, et al. v. Paxton, Attorney General of Texas, on certiorari to the United States Court of Appeals for the Fifth Circuit.
In 2021, Florida and Texas enacted statutes regulating large social-media companies and other internet platforms. The States’ laws differ in the entities they cover and the activities they limit. But both curtail the platforms’ capacity to engage in content moderation—to filter, prioritize, and label the varied third-party messages, videos, and other content their users wish to post. Both laws also include individualized-explanation provisions, requiring a platform to give reasons to a user if it removes or alters her posts.
NetChoice LLC and the Computer & Communications Industry Association (collectively, NetChoice)—trade associations whose members include Facebook and YouTube—brought facial First Amendment challenges against the two laws. District courts in both States entered preliminary injunctions.
The Eleventh Circuit upheld the injunction of Florida's law, as to all provisions relevant here. The court held that the State's restrictions on content moderation trigger First Amendment scrutiny under this Court's cases protecting "editorial discretion." 34 F. 4th 1196, 1209, 1216. The court then concluded that the content-moderation provisions are unlikely to survive heightened scrutiny. Id., at 1227-1228. Similarly, the Eleventh Circuit thought the statute's individualized-explanation requirements likely to fall. Relying on Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U. S. 626, the court concluded that those requirements are likely to unduly burden the platforms' own expression.
The Fifth Circuit disagreed across the board, and so reversed the preliminary injunction of the Texas law. In that court‘s view, the platforms’ content-moderation activities are “not speech” at all, and so do not implicate the First Amendment. 49 F. 4th 439, 466, 494. But even if those activities were expressive, the court determined the State could regulate them to advance its interest in “protecting a diversity of ideas.” Id., at 482. The court further held that the statute‘s individualized-explanation provisions would likely survive, even assuming the platforms were engaged in speech. It found no undue burden under Zauderer because the platforms needed only to “scale up” a “complaint-and-appeal process” they already used. 49 F. 4th, at 487.
Held: The judgments are vacated, and the cases are remanded, because neither the Eleventh Circuit nor the Fifth Circuit conducted a proper analysis of the facial First Amendment challenges to Florida and Texas laws regulating large internet platforms. Pp. 9–31.
(a) NetChoice‘s decision to litigate these cases as facial challenges comes at a cost. The Court has made facial challenges hard to win. In the First Amendment context, a plaintiff must show that “a substantial number of [the law‘s] applications are unconstitutional, judged in relation to the statute‘s plainly legitimate sweep.” Americans for Prosperity Foundation v. Bonta, 594 U. S. 595, 615.
So far in these cases, no one has paid much attention to that issue. Analysis and arguments below focused mainly on how the laws applied to the content-moderation practices that giant social-media platforms use on their best-known services to filter, alter, or label their users' posts, i.e., on how the laws applied to the likes of Facebook's News Feed and YouTube's homepage. Neither the parties nor the courts below addressed the full range of activities the laws cover, or measured the constitutional against the unconstitutional applications.
The proper analysis begins with an assessment of the state laws’ scope. The laws appear to apply beyond Facebook‘s News Feed and its ilk. But it‘s not clear to what extent, if at all, they affect social-media giants’ other services, like direct messaging, or what they have to say about other platforms and functions. And before a court can do anything else with these facial challenges, it must “determine what [the law] covers.” United States v. Hansen, 599 U. S. 762, 770.
The next order of business is to decide which of the laws' applications violate the First Amendment, and to measure them against the rest. For the content-moderation provisions, that means asking, as to every covered platform or function, whether there is an intrusion on protected editorial discretion. And for the individualized-explanation provisions, it means asking whether the required disclosures unduly burden expression.
Because this is “a court of review, not of first view,” Cutter v. Wilkinson, 544 U. S. 709, 718, n. 7, this Court cannot undertake the needed inquiries. And because neither the Eleventh nor the Fifth Circuit performed the facial analysis in the way described above, their decisions must be vacated and the cases remanded. Pp. 9–12.
(b) It is necessary to say more about how the First Amendment relates to the laws’ content-moderation provisions, to ensure that the facial analysis proceeds on the right path in the courts below. That need is especially stark for the Fifth Circuit, whose decision rested on a serious misunderstanding of First Amendment precedent and principle. Pp. 12–29.
(1) The Court has repeatedly held that ordering a party to provide a forum for someone else's views implicates the First Amendment if, though only if, the regulated party is engaged in its own expressive activity, which the mandated access would alter or disrupt. First, in Miami Herald Publishing Co. v. Tornillo, 418 U. S. 241, the Court held that a Florida law requiring a newspaper to give a political candidate a right to reply to critical coverage interfered with the newspaper's "exercise of editorial control and judgment." Id., at 243, 258. Florida could not, the Court explained, override the newspaper's decisions about the "content of the paper" and "[t]he choice of material to go into" it, because that would substitute "governmental regulation" for the "crucial process" of editorial choice. Id., at 258. The next case, Pacific Gas & Elec. Co. v. Public Util. Comm'n of Cal., 475 U. S. 1, involved California's attempt to force a private utility to include material from a certain consumer-advocacy group in its regular newsletter to consumers. The Court held that an interest in "offer[ing] the public a greater variety of views" could not justify compelling the utility "to carry speech with which it disagreed" and thus to "alter its own message." Id., at 11, n. 7, 12, 16. Then in Turner Broadcasting System, Inc. v. FCC, 512 U. S. 622, the Court considered federal "must-carry" rules, which required cable operators to allocate certain channels to local broadcast stations. The Court had no doubt the First Amendment was implicated, because the rules "interfere[d]" with the cable operators' "editorial discretion over which stations or programs to include in [their] repertoire." Id., at 636, 643-644. The capstone of this line of precedents, Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U. S. 557, held that the First Amendment prevented Massachusetts from compelling parade organizers to admit as a participant a gay and lesbian group seeking to convey a message of "pride." Id., at 561.
From that slew of individual cases, three general points emerge. First, the First Amendment offers protection when an entity engaged in compiling and curating others’ speech into an expressive product of its own is directed to accommodate messages it would prefer to exclude. Second, none of that changes just because a compiler includes most items and excludes just a few. It “is enough” for the compiler to exclude the handful of messages it most “disfavor[s].” Hurley, 515 U. S., at 574. Third, the government cannot get its way just by asserting an interest in better balancing the marketplace of ideas. In case after case, the Court has barred the government from forcing a private speaker to present views it wished to spurn in order to rejigger the expressive realm. Pp. 13–19.
(2) “[W]hatever the challenges of applying the Constitution to ever-advancing technology, the basic principles” of the First Amendment “do not vary.” Brown v. Entertainment Merchants Assn., 564 U. S. 786, 790. And the principles elaborated in the above-summarized decisions establish that Texas is not likely to succeed in enforcing its law against the platforms’ application of their content-moderation policies to their main feeds.
Facebook‘s News Feed and YouTube‘s homepage present users with a continually updating, personalized stream of other users’ posts. The key to the scheme is prioritization of content, achieved through algorithms. The selection and ranking is most often based on a user‘s expressed interests and past activities, but it may also be based on other factors, including the platform‘s preferences. Facebook‘s Community Standards and YouTube‘s Community Guidelines detail the messages and videos that the platforms disfavor. The platforms write algorithms to implement those standards—for example, to prefer content deemed particularly trustworthy or to suppress content viewed as deceptive. Beyond ranking content, platforms may add labels, to give users additional context. And they also remove posts entirely that contain prohibited subjects or messages, such as pornography, hate speech, and misinformation on certain topics. The platforms thus unabashedly control the content that will appear to users.
Texas‘s law, though, limits their power to do so. Its central provision prohibits covered platforms from “censor[ing]” a “user‘s expression” based on the “viewpoint” it contains.
The Court has repeatedly held that type of regulation to interfere with protected speech. Like the editors, cable operators, and parade organizers this Court has previously considered, the major social-media platforms curate their feeds by combining “multifarious voices” to create a distinctive expressive offering. Hurley, 515 U. S., at 569. Their choices about which messages are appropriate give the feed a particular expressive quality and “constitute the exercise” of protected “editorial control.” Tornillo, 418 U. S., at 258. And the Texas law targets those expressive choices by forcing the platforms to present and promote content on their feeds that they regard as objectionable.
That those platforms happily convey the lion‘s share of posts submitted to them makes no significant First Amendment difference. In Hurley, the Court held that the parade organizers’ “lenient” admissions policy did “not forfeit” their right to reject the few messages they found harmful or offensive. 515 U. S., at 569. Similarly here, that Facebook and YouTube convey a mass of messages does not license Texas to prohibit them from deleting posts they disfavor. Pp. 19–26.
(3) The interest Texas relies on cannot sustain its law. In the usual First Amendment case, the Court must decide whether to apply strict or intermediate scrutiny. But here, Texas‘s law does not pass even the less stringent form of review. Under that standard, a law must further a “substantial governmental interest” that is “unrelated to the suppression of free expression.” United States v. O‘Brien, 391 U. S. 367, 377. Many possible interests relating to social media can meet that test. But Texas‘s asserted interest relates to the suppression of free expression, and it is not valid, let alone substantial.
Texas has never been shy, and always been consistent, about its interest: The objective is to correct the mix of viewpoints that major platforms present. But a State may not interfere with private actors’ speech to advance its own vision of ideological balance. States (and their citizens) are of course right to want an expressive realm in which the public has access to a wide range of views. But the way the First Amendment achieves that goal is by preventing the government from “tilt[ing] public debate in a preferred direction,” Sorrell v. IMS Health Inc., 564 U. S. 552, 578-579, not by licensing the government to stop private actors from speaking as they wish and preferring some views over others. A State cannot prohibit speech to rebalance the speech market. That unadorned interest is not “unrelated to the suppression of free expression.” And Texas may not pursue it consistent with the First Amendment. Pp. 26–29.
No. 22-277, 34 F. 4th 1196; No. 22-555, 49 F. 4th 439; vacated and remanded.
NOTICE: This opinion is subject to formal revision before publication in the United States Reports. Readers are requested to notify the Reporter of Decisions, Supreme Court of the United States, Washington, D. C. 20543, pio@supremecourt.gov, of any typographical or other formal errors.
SUPREME COURT OF THE UNITED STATES
Nos. 22-277 and 22-555
ASHLEY MOODY, ATTORNEY GENERAL OF FLORIDA, ET AL., PETITIONERS
22-277 v.
NETCHOICE, LLC, DBA NETCHOICE, ET AL.
ON WRIT OF CERTIORARI TO THE UNITED STATES COURT OF APPEALS FOR THE ELEVENTH CIRCUIT
NETCHOICE, LLC, DBA NETCHOICE, ET AL., PETITIONERS
22-555 v.
KEN PAXTON, ATTORNEY GENERAL OF TEXAS
ON WRIT OF CERTIORARI TO THE UNITED STATES COURT OF APPEALS FOR THE FIFTH CIRCUIT
[July 1, 2024]
JUSTICE KAGAN delivered the opinion of the Court.*
Not even thirty years ago, this Court felt the need to explain to the opinion-reading public that the “Internet is an international network of interconnected computers.” Reno v. American Civil Liberties Union, 521 U. S. 844, 849 (1997). Things have changed since then. At the time, only 40 million people used the internet. See id., at 850. Today, Facebook and YouTube alone have over two billion users each. See App. in No. 22–555, p. 67a. And the public likely no longer needs this Court to define the internet.
But courts still have a necessary role in protecting those entities' rights of speech, as courts have historically protected traditional media's rights. To the extent that social-media platforms create expressive products, they receive the First Amendment's protection. And although these cases are here in a preliminary posture, the current record suggests that some platforms, in at least some functions, are indeed engaged in expression. In constructing certain feeds, those platforms make choices about what third-party speech to display and how to display it. They include and exclude, organize and prioritize—and in making millions of those decisions each day, produce their own distinctive compilations of expression. And while much about social media is new, the essence of that project is something this Court has seen before. Traditional publishers and editors also select and shape other parties' expression into their own curated speech products. And we have repeatedly held that laws curtailing their editorial choices must meet the First Amendment's requirements. The principle does not change because the curated compilation has gone from the physical to the virtual world. In the latter, as in the former, government efforts to alter an edited compilation of third-party speech are subject to judicial review for compliance with the First Amendment.
Today, we consider whether two state laws regulating social-media platforms and other websites facially violate the First Amendment. The laws, from Florida and Texas, restrict the ability of social-media platforms to control whether and how third-party posts are presented to other users. Or otherwise put, the laws limit the platforms’ capacity to engage in content moderation—to filter, prioritize, and label the varied messages, videos, and other content their users wish to post. In addition, though far less addressed in this Court, the laws require a platform to provide an individualized explanation to a user if it removes or alters her posts. NetChoice, an internet trade association, challenged both laws on their face—as a whole, rather than as to particular applications. The cases come to us at an early stage, on review of preliminary injunctions. The Court of Appeals for the Eleventh Circuit upheld such an injunction, finding that the Florida law was not likely to survive First Amendment review. The Court of Appeals for the Fifth Circuit reversed a similar injunction, primarily reasoning that the Texas law does not regulate any speech and so does not implicate the First Amendment.
Today, we vacate both decisions for reasons separate from the First Amendment merits, because neither Court of Appeals properly considered the facial nature of NetChoice's challenge. The courts mainly addressed what the parties had focused on. And the parties mainly argued these cases as if the laws applied only to the curated feeds offered by the largest and most paradigmatic social-media platforms—as if, say, each case presented an as-applied challenge brought by Facebook protesting its loss of control over the content of its News Feed. But argument in this Court revealed that the laws might apply to, and differently affect, other kinds of websites and apps. In a facial challenge, that could well matter, even when the challenge is brought under the First Amendment.
To do that right, of course, a court must understand what kind of government actions the First Amendment prohibits. We therefore set out the relevant constitutional principles, and explain how one of the Courts of Appeals failed to follow them. Contrary to what the Fifth Circuit thought, the current record indicates that the Texas law does regulate speech when applied in the way the parties focused on below—when applied, that is, to prevent Facebook (or YouTube) from using its content-moderation standards to remove, alter, organize, prioritize, or disclaim posts in its News Feed (or homepage). The law then prevents exactly the kind of editorial judgments this Court has previously held to receive First Amendment protection. It prevents a platform from compiling the third-party speech it wants in the way it wants, and thus from offering the expressive product that most reflects its own views and priorities. Still more, the law—again, in that specific application—is unlikely to withstand First Amendment scrutiny. Texas has thus far justified the law as necessary to balance the mix of speech on Facebook‘s News Feed and similar platforms; and the record reflects that Texas officials passed it because they thought those feeds skewed against politically conservative voices. But this Court has many times held, in many contexts, that it is no job for government to decide what counts as the right balance of private expression—to “un-bias” what it thinks biased, rather than to leave such judgments to speakers and their audiences. That principle works for social-media platforms as it does for others.
In sum, there is much work to do below on both these cases. But that work must be done consistent with the First Amendment, which does not go on leave when social media are involved.
I
As commonly understood, the term “social media platforms” typically refers to websites and mobile apps that allow users to upload content—messages, pictures, videos, and so on—to share with others. Those viewing the content can then react to it, comment on it, or share it themselves. The biggest social-media companies—entities like Facebook and YouTube—host a staggering amount of content. Facebook users, for example, share more than 100 billion messages every day. See App. in No. 22–555, at 67a. And YouTube sees more than 500 hours of video uploaded every minute. See ibid.
In the face of that deluge, the major platforms cull and organize uploaded posts in a variety of ways. A user does not see everything—even everything from the people she follows—in reverse-chronological order. The platforms will have removed some content entirely; ranked or otherwise prioritized what remains; and sometimes added warnings or labels. Of particular relevance here, Facebook and YouTube make some of those decisions in conformity with content-moderation policies they call Community Standards and Community Guidelines. Those rules list the subjects or messages the platform prohibits or discourages—say, pornography, hate speech, or misinformation on select topics. The rules thus lead Facebook and YouTube to remove, disfavor, or label various posts based on their content.
In 2021, Florida and Texas enacted statutes regulating internet platforms, including the large social-media companies just mentioned. The States' laws differ in the entities they cover and the activities they limit. But both contain content-moderation provisions, which restrict the platforms' choices about whether and how to display user posts, and individualized-explanation provisions, which require the platforms to give users reasons for removing or altering their posts.
Florida‘s law regulates “social media platforms,” as defined expansively, that have annual gross revenue of over $100 million or more than 100 million monthly active users.
In addition, the Florida law mandates that a platform provide an explanation to a user any time it removes or alters any of her posts. See
The Texas law regulates any social-media platform, having over 50 million monthly active users, that allows its users “to communicate with other users for the primary purpose of posting information, comments, messages, or images.”
Soon after Florida and Texas enacted those statutes, NetChoice LLC and the Computer & Communications Industry Association (collectively, NetChoice)—trade associations whose members include Facebook and YouTube—brought facial First Amendment challenges against the two laws. District courts in both States entered preliminary injunctions, halting the laws’ enforcement. See 546 F. Supp. 3d 1082, 1096 (ND Fla. 2021); 573 F. Supp. 3d 1092, 1117 (WD Tex. 2021). Each court held that the suit before it is likely to succeed because the statute infringes on the constitutionally protected “editorial judgment” of NetChoice‘s members about what material they will display. See 546 F. Supp. 3d, at 1090; 573 F. Supp. 3d, at 1107.
The Eleventh Circuit upheld the injunction of Florida's law, as to all provisions relevant here. The court held that the State's restrictions on content moderation trigger First Amendment scrutiny under this Court's cases protecting "editorial discretion." 34 F. 4th 1196, 1209, 1216 (2022). The court then concluded that the content-moderation provisions are unlikely to survive heightened scrutiny, and that the individualized-explanation requirements are likely to fall under Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U. S. 626 (1985).
The Fifth Circuit disagreed across the board, and so reversed the preliminary injunction before it. In that court‘s view, the platforms’ content-moderation activities are “not speech” at all, and so do not implicate the First Amendment. 49 F. 4th 439, 466, 494 (2022). But even if those activities were expressive, the court continued, the State could regulate them to advance its interest in “protecting a diversity of ideas.” Id., at 482 (emphasis deleted). The court further held that the statute‘s individualized-explanation provisions would likely survive, again even assuming that the platforms were engaged in speech. Those requirements, the court maintained, are not unduly burdensome under Zauderer because the platforms needed only to “scale up” a “complaint-and-appeal process” they already used. 49 F. 4th, at 487.
We granted certiorari to resolve the split between the Fifth and Eleventh Circuits. 600 U. S. ____ (2023).
II
NetChoice chose to litigate these cases as facial challenges, and that decision comes at a cost. For a host of good reasons, courts usually handle constitutional claims case by case, not en masse. See Washington State Grange v. Washington State Republican Party, 552 U. S. 442, 450-451 (2008). “Claims of facial invalidity often rest on speculation” about the law‘s coverage and its future enforcement. Id., at 450. And “facial challenges threaten to short circuit the democratic process” by preventing duly enacted laws from being implemented in constitutional ways. Id., at 451. This Court has therefore made facial challenges hard to win.
That is true even when a facial suit is based on the First Amendment, although then a different standard applies. In other cases, a plaintiff cannot succeed on a facial challenge unless he “establish[es] that no set of circumstances exists under which the [law] would be valid,” or he shows that the law lacks a “plainly legitimate sweep.” United States v. Salerno, 481 U. S. 739, 745 (1987); Washington State Grange, 552 U. S., at 449. In First Amendment cases, however, this Court has lowered that very high bar. To “provide[] breathing room for free expression,” we have substituted a less demanding though still rigorous standard. United States v. Hansen, 599 U. S. 762, 769 (2023). The question is whether “a substantial number of [the law‘s] applications are unconstitutional, judged in relation to the statute‘s plainly legitimate sweep.” Americans for Prosperity Foundation v. Bonta, 594 U. S. 595, 615 (2021); see Hansen, 599 U. S., at 770 (likewise asking whether the law “prohibits a substantial amount of protected speech relative to its plainly legitimate sweep“). So in this singular context, even a law with “a plainly legitimate sweep” may be struck down in its entirety. But that is so only if the law‘s unconstitutional applications substantially outweigh its constitutional ones.
So far in these cases, no one has paid much attention to that issue.
The first step in the proper facial analysis is to assess the state laws' scope. What activities, by what actors, do the laws prohibit or otherwise regulate? The laws of course differ one from the other. But both, at least on their face, appear to apply beyond Facebook's News Feed and its ilk. Members of this Court asked some of the relevant questions at oral argument. Starting with Facebook and the other giants: To what extent, if at all, do the laws affect their other services, like direct messaging or events management? See Tr. of Oral Arg. in No. 22–555, pp. 62–63; Tr. of Oral Arg. in No. 22–277, pp. 24-25; App. in No. 22–277, pp. 129, 159. And beyond those social-media entities, what other platforms and functions do the laws reach?
The next order of business is to decide which of the laws’ applications violate the First Amendment, and to measure them against the rest. For the content-moderation provisions, that means asking, as to every covered platform or function, whether there is an intrusion on protected editorial discretion. See infra, at 13-19. And for the individualized-explanation provisions, it means asking, again as to each thing covered, whether the required disclosures unduly burden expression. See Zauderer, 471 U. S., at 651. Even on a preliminary record, it is not hard to see how the answers might differ as between regulation of Facebook‘s News Feed (considered in the courts below) and, say, its direct messaging service (not so considered). Curating a feed and transmitting direct messages, one might think, involve different levels of editorial choice, so that the one creates an expressive product and the other does not.
The problem for this Court is that it cannot undertake the needed inquiries. “[W]e are a court of review, not of first view.” Cutter v. Wilkinson, 544 U. S. 709, 718, n. 7 (2005). Neither the Eleventh Circuit nor the Fifth Circuit performed the facial analysis in the way just described. And even were we to ignore the value of other courts going first, we could not proceed very far. The parties have not briefed the critical issues here, and the record is underdeveloped. So we vacate the decisions below and remand these cases. That will enable the lower courts to consider the scope of the laws’ applications, and weigh the unconstitutional as against the constitutional ones.
III
But it is necessary to say more about how the First Amendment relates to the laws' content-moderation provisions, to ensure that the facial analysis proceeds on the right path in the courts below. That need is especially stark for the Fifth Circuit. Recall that it held that the content choices the major platforms make for their main feeds are "not speech" at all, so States may regulate them free of the First Amendment's restraints. 49 F. 4th, at 494; see supra, at 8. And even if those activities were expressive, the court continued, Texas could regulate them to advance its interest in protecting "a diversity of ideas." Id., at 482.
A
Despite the relative novelty of the technology before us, the main problem in this case—and the inquiry it calls for—is not new. At bottom, Texas's law requires the platforms to carry and promote user speech that they would rather exclude.
The seminal case is Miami Herald Publishing Co. v. Tornillo, 418 U. S. 241 (1974). There, a Florida law required a newspaper to give a political candidate a right to reply when it published “criticism and attacks on his record.” Id., at 243. The Court held the law to violate the First Amendment because it interfered with the newspaper‘s “exercise of editorial control and judgment.” Id., at 258. Forcing the paper to print what “it would not otherwise print,” the Court explained, “intru[ded] into the function of editors.” Id., at 256, 258. For that function was, first and foremost, to make decisions about the “content of the paper” and “[t]he choice of material to go into” it. Id., at 258. In protecting that right of editorial control, the Court recognized a possible downside. It noted the access advocates’ view (similar to the States’ view here) that “modern media empires” had gained ever greater capacity to “shape” and even “manipulate popular opinion.” Id., at 249–250. And the Court expressed some sympathy with that diagnosis. See id., at 254. But the cure proposed, it concluded, collided with the First Amendment‘s antipathy to state manipulation of the speech market. Florida, the Court explained,
could not substitute "governmental regulation" for the "crucial process" of editorial choice. Id., at 258.
Next up was Pacific Gas & Elec. Co. v. Public Util. Comm'n of Cal., 475 U.S. 1 (1986) (PG&E), which the Court thought to follow naturally from Tornillo. See 475 U.S., at 9-12 (plurality opinion); id., at 21 (Burger, C. J., concurring). A private utility in California regularly put a newsletter in its billing envelopes expressing its views of energy policy. The State directed it to include as well material from a consumer-advocacy group giving a different perspective. The utility objected, and the Court held again that the interest in "offer[ing] the public a greater variety of views" could not justify the regulation. Id., at 12. California was compelling the utility (as Florida had compelled a newspaper) "to carry speech with which it disagreed" and thus to "alter its own message." Id., at 11, n. 7, 16.
In Turner Broadcasting System, Inc. v. FCC, 512 U.S. 622 (1994) (Turner I), the Court further underscored the constitutional protection given to editorial choice. At issue were federal "must-carry" rules, requiring cable operators to allocate some of their channels to local broadcast stations. The Court had no doubt that the First Amendment was implicated, because the rules "interfere[d]" with the cable operators' "editorial discretion over which stations or programs to include in [their] repertoire." Id., at 636, 643-644.
The capstone of those precedents came in Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U.S. 557 (1995), when the Court considered (of all things) a parade. The question was whether Massachusetts could require the organizers of a St. Patrick's Day parade to admit as a participant a gay and lesbian group seeking to convey a message of "pride." Id., at 561. The Court held unanimously that the First Amendment prevented Massachusetts from compelling the organizers to include the group's message in their parade.
On two other occasions, the Court distinguished Tornillo and its progeny for the flip-side reason—because in those cases the compelled access did not affect the complaining party's own expression. First, in PruneYard Shopping Center v. Robins, 447 U.S. 74 (1980), the Court rejected a shopping mall's First Amendment challenge to a state-law requirement that it allow members of the public to engage in expressive activity on its premises, because the mandated access did not affect the mall's own expression.
That is a slew of individual cases, so consider three general points to wrap up. Not coincidentally, they will figure in the upcoming discussion of the Texas law.
First, the First Amendment offers protection when an entity engaged in compiling and curating others' speech into an expressive product of its own is directed to accommodate messages it would prefer to exclude.
Second, none of that changes just because a compiler includes most items and excludes just a few. That was the situation in Hurley. The St. Patrick‘s Day parade at issue there was “eclectic“: It included a “wide variety of patriotic, commercial, political, moral, artistic, religious, athletic, public service, trade union, and eleemosynary themes, as well as conflicting messages.” 515 U.S., at 562. Or otherwise said, the organizers were “rather lenient in admitting participants.” Id., at 569. No matter. A “narrow, succinctly articulable message is not a condition of constitutional protection.” Ibid. It “is enough” for a compiler to exclude the handful of messages it most “disfavor[s].” Id., at 574. Suppose, for example, that the newspaper in Tornillo had granted a right of reply to all but one candidate. It would have made no difference; the Florida statute still could not have altered the paper‘s policy. Indeed, that kind of focused editorial choice packs a peculiarly powerful expressive punch.
Third, the government cannot get its way just by asserting an interest in improving, or better balancing, the marketplace of ideas. Of course, it is critically important to have a well-functioning sphere of expression, in which citizens have access to information from many sources. That is a fundamental aim of the First Amendment. But the government cannot pursue that aim by forcing a private speaker to present views it wishes to spurn.
B
"[W]hatever the challenges of applying the Constitution to ever-advancing technology, the basic principles" of the First Amendment "do not vary." Brown v. Entertainment Merchants Assn., 564 U. S. 786, 790 (2011).
Most readers are likely familiar with Facebook‘s News Feed or YouTube‘s homepage; assuming so, feel free to skip this paragraph (and maybe a couple more). For the uninitiated, though, each of those feeds presents a user with a continually updating stream of other users’ posts. For Facebook‘s News Feed, any user may upload a message, whether verbal or visual, with content running the gamut from “vacation pictures from friends” to “articles from local or national news outlets.” App. in No. 22-555, at 139a. And whenever a user signs on, Facebook delivers a personalized collection of those stories. Similarly for YouTube. Its users upload all manner of videos. And any person opening the website or mobile app receives an individualized list of video recommendations.
The key to the scheme is prioritization of content, achieved through the use of algorithms. Of the billions of posts or videos (plus advertisements) that could wind up on a user's customized feed or recommendations list, only the tiniest fraction do. The selection and ranking is most often based on a user's expressed interests and past activities, but it may also be based on other factors, including the platform's own content preferences.
Beyond rankings lie labels. The platforms may attach “warning[s], disclaimers, or general commentary“—for example, informing users that certain content has “not been verified by official sources.” Id., at 75a. Likewise, they may use “information panels” to give users “context on content relating to topics and news prone to misinformation, as well as context about who submitted the content.” Id., at 114a. So, for example, YouTube identifies content submitted by state-supported media channels, including those funded by the Russian Government. See id., at 76a.
But sometimes, the platforms decide, providing more information is not enough; instead, removing a post is the right course. The platforms' content-moderation policies also say when that is so. Facebook's Standards, for example, proscribe posts—with exceptions for "newsworth[iness]" and other "public interest value"—in categories and subcategories including: Violence and Criminal Behavior (e.g., violence and incitement, coordinating harm and publicizing crime, fraud and deception); Safety (e.g., suicide and self-injury, sexual exploitation, bullying and harassment); Objectionable Content (e.g., hate speech, violent and graphic content); Integrity and Authenticity (e.g., false news, manipulated media). Id., at 412a-415a, 441a-442a. YouTube's Guidelines similarly target videos falling within categories like: hate speech, violent or graphic content, child safety, and misinformation (including about elections and vaccines). See id., at 430a-432a. The platforms thus unabashedly control the content that will appear to users.
Except that Texas's law limits their power to do so. As noted earlier, the law's central provision prohibits the large social-media platforms (and maybe other entities) from "censor[ing]" a "user's expression" based on its "viewpoint." If the law is enforced, the platforms could not, as they now do, disfavor posts that, for example:
- support Nazi ideology;
- advocate for terrorism;
- espouse racism, Islamophobia, or anti-Semitism;
- glorify rape or other gender-based violence;
- encourage teenage suicide and self-injury;
- discourage the use of vaccines;
- advise phony treatments for diseases;
- advance false claims of election fraud.
The list could continue for a while.9 The point of it is not that the speech environment created by Texas‘s law is worse than the ones to which the major platforms aspire on their main feeds. The point is just that Texas‘s law profoundly alters the platforms’ choices about the views they will, and will not, convey.
And we have time and again held that type of regulation to interfere with protected speech. Like the editors, cable operators, and parade organizers this Court has previously considered, the major social-media platforms curate their feeds by combining "multifarious voices" to create a distinctive expressive offering. Hurley, 515 U. S., at 569.
That those platforms happily convey the lion's share of posts submitted to them makes no significant First Amendment difference.
Similarly, the major social-media platforms do not lose their
C
And once that much is decided, the interest Texas relies on cannot sustain its law. In the usual First Amendment case, the Court must decide whether to apply strict or intermediate scrutiny. But here, that choice makes no difference, because Texas's law does not pass even the less stringent form of review. Under that standard, a law must further a "substantial governmental interest" that is "unrelated to the suppression of free expression." United States v. O'Brien, 391 U. S. 367, 377 (1968).
Texas has never been shy, and always been consistent, about its interest: The objective is to correct the mix of speech that the major social-media platforms present. In this Court, Texas described its law as “respond[ing]” to the
But a State may not interfere with private actors' speech to advance its own vision of ideological balance. States (and their citizens) are of course right to want an expressive realm in which the public has access to a wide range of views. That is, indeed, a fundamental aim of the First Amendment. But the way the First Amendment achieves that goal is by preventing the government from "tilt[ing] public debate in a preferred direction," Sorrell v. IMS Health Inc., 564 U. S. 552, 578-579 (2011), not by licensing the government to stop private actors from speaking as they wish and preferring some views over others. A State cannot prohibit speech to rebalance the speech market.
The Court's decisions about editorial control, as discussed earlier, make that point repeatedly. See supra, at 18-19. Again, the question those cases had in common was whether the government could force a private speaker, including a compiler and curator of third-party speech, to convey views it disapproved. And in most of those cases, the government defended its regulation as yielding greater balance in the marketplace of ideas. But the Court—in Tornillo, in PG&E, and again in Hurley—held that such an interest could not support the government's effort to alter the speaker's own expression. "Our cases establish," the PG&E Court wrote, "that the State cannot advance some points of view by burdening the expression of others." 475 U.S., at 20. So the newspaper, the public utility, the parade organizer—whether acting "fair[ly] or unfair[ly]"—could exclude the unwanted message, free from government interference. Tornillo, 418 U.S., at 258; see United States Telecom Assn. v. FCC, 855 F. 3d 381, 432 (CADC 2017) (Kavanaugh, J., dissenting from denial of rehearing en banc) ("[E]xcept in rare circumstances, the
IV
These are facial challenges, and that matters. To succeed on its facial First Amendment claims, NetChoice must show that a substantial number of each law's applications are unconstitutional, judged in relation to the statute's plainly legitimate sweep.
But there has been enough litigation already to know that the Fifth Circuit, if it stayed the course, would get wrong at least one significant input into the facial analysis. The parties treated Facebook‘s News Feed and YouTube‘s homepage as the heartland applications of the Texas law. At least on the current record, the editorial judgments influencing the content of those feeds are, contrary to the Fifth Circuit‘s view, protected expressive activity. And Texas may not interfere with those judgments simply because it would prefer a different mix of messages. How that matters for the requisite facial analysis is for the Fifth Circuit to decide. But it should conduct that analysis in keeping with two
We accordingly vacate the judgments of the Courts of Appeals for the Fifth and Eleventh Circuits and remand the cases for further proceedings consistent with this opinion.
It is so ordered.
JUSTICE BARRETT, concurring.
I join the Court's opinion, which correctly articulates and applies our First Amendment precedents.
But for the reasons the Court gives, these cases illustrate the dangers of bringing a facial challenge. If NetChoice's members are concerned about preserving their editorial discretion with respect to the services on which they have focused throughout this litigation—e.g., Facebook's News Feed and YouTube's homepage—they would be better served by bringing a First Amendment challenge as applied to those services.
Consider, for instance, how platforms use algorithms to prioritize and remove content on their feeds. Assume that human beings decide to remove posts promoting a particular political candidate or advocating some position on a public-health issue. If they create an algorithm to help them identify and delete that content, the algorithm simply implements those inherently expressive human choices.
But what if a platform's algorithm just presents automatically to each user whatever the algorithm thinks the user will like—e.g., content similar to posts with which the user previously engaged? See ante, at 22, n. 5. The First Amendment implications of such automated choices might be different.
There can be other complexities too. For example, the corporate structure and ownership of some platforms may be relevant to the constitutional analysis. A speaker's right to "decide 'what not to say'" is "enjoyed by business corporations generally." Hurley, 515 U.S., at 573-574 (quoting Pacific Gas & Elec. Co. v. Public Util. Comm'n of Cal., 475 U.S. 1, 16 (1986)). Corporations, which are composed of human beings with First Amendment rights, possess First Amendment rights themselves.
These are just a few examples of questions that might arise as courts apply these laws to particular platforms and functions.
A facial challenge to either of these laws likely forces a court to bite off more than it can chew. An as-applied challenge, by contrast, would enable courts to home in on whether and how specific functions—like feeds versus direct messaging—are inherently expressive and answer platform- and function-specific questions that might bear on the First Amendment analysis. While the governing constitutional principles are straightforward, applying them in one fell swoop to the entire social-media universe is not.
JUSTICE JACKSON, concurring in part and concurring in the judgment.
These cases present a complex clash between two novel state laws and the alleged First Amendment rights of several of the largest social media platforms. Some things are already clear. Not every potential action taken by a social media company will qualify as expression protected under the First Amendment.
In doing so, the lower courts must address these cases at the right level of specificity. The question is not whether an entire category of corporations (like social media companies) or a particular entity (like Facebook) is generally engaged in expression. Nor is it enough to say that a given activity (say, content moderation) for a particular service (the News Feed, for example) seems roughly analogous to a more familiar example from our precedent. Cf. Red Lion Broadcasting Co. v. FCC, 395 U. S. 367, 386 (1969) (positing that "differences in the characteristics of new media justify differences in the First Amendment standards applied to them").
In light of the high bar for facial challenges and the state of these cases as they come to us, I would not go on to treat either like an as-applied challenge and preview our potential ruling on the merits. Faced with difficult constitutional questions, courts should decide no more than what the case before them requires.
JUSTICE THOMAS, concurring in the judgment.
I agree with the Court's decision to vacate and remand because NetChoice and the Computer and Communications Industry Association (together, the trade associations) have not established that Texas's and Florida's laws are facially unconstitutional.
I cannot agree, however, with the Court's decision to opine on certain applications of those statutes. The Court's discussion is unnecessary to its holding. See Jama v. Immigration and Customs Enforcement, 543 U. S. 335, 351, n. 12 (2005) ("Dictum settles nothing, even in the court that utters it"). Moreover, the Court engages in the exact type of analysis that it chastises the Courts of Appeals for performing. It faults the Courts of Appeals for focusing on only one subset of applications, rather than determining the laws' full range of applications, yet it then opines on only that same subset.
I agree with JUSTICE ALITO‘s analysis and join his opinion in full. I write separately to add two observations on the merits and to highlight a more fundamental jurisdictional problem. The trade associations have brought facial challenges alleging that
I
As JUSTICE ALITO explains, the trade associations have failed to provide many of the basic facts necessary to evaluate their challenges to the Florida and Texas laws.
First, with respect to certain provisions of
Second, the common-carrier doctrine should continue to guide the lower courts' examination of the trade associations' claims on remand. See post, at 18, and n. 17, 30 (opinion of ALITO, J.). "[O]ur legal system and its British predecessor have long subjected certain businesses, known as common carriers, to special regulations, including a general requirement to serve all comers." Biden v. Knight First Amendment Institute at Columbia Univ., 593 U. S. ___ (2021) (THOMAS, J., concurring in grant of certiorari) (slip op., at 3). Moreover, "there is clear historical precedent for regulating transportation and communications networks in a similar manner as traditional common carriers" given their many similarities. Id., at ___ (slip op., at 5). Though they reached different conclusions, both the Fifth Circuit and the Eleventh Circuit appropriately strove to apply the common-carrier doctrine in assessing the constitutionality of the laws before them.
II
The opinions in these cases detail many of the considerable hurdles that currently preclude resolution of the trade associations’ claims. See ante, at 9-10; ante, at 1-4 (BARRETT, J., concurring); post, at 22–32 (opinion of ALITO, J.). The most significant problem of all, however, has yet to be addressed: Federal courts lack authority to adjudicate the trade associations’ facial challenges.
Rather than allege that the statutes impermissibly regulate them, the trade associations assert that
Facial challenges are fundamentally at odds with Article III's case-or-controversy requirement.
A
1
These limitations on the power of judicial review play an essential role in preserving our constitutional structure. Our Constitution sets forth a “tripartite allocation of power,” separating different types of powers across three co-equal branches. DaimlerChrysler Corp. v. Cuno, 547 U. S. 332, 341 (2006) (internal quotation marks omitted). “[E]ach
2
Facial challenges conflict with these limits on the judicial power.
To bring a facial challenge under our precedents, a plaintiff must ordinarily "establish that no set of circumstances exists under which the Act would be valid." United States v. Salerno, 481 U. S. 739, 745 (1987). In the First Amendment context, however, the Court has relaxed that requirement.
Proceeding to decide the merits of possible constitutional challenges that could be brought by other plaintiffs is not necessary to resolve that case. Instead, any holding with respect to potential future plaintiffs would be “no more than an advisory opinion—which a federal court should never issue at all, and especially should not issue with regard to a constitutional question, as to which we seek to avoid even nonadvisory opinions.” Chicago v. Morales, 527 U. S. 41, 77 (1999) (Scalia, J., dissenting) (citation omitted).
Unsurprisingly, facial challenges are at odds with doctrines enforcing the case-or-controversy requirement. Pursuant to standing doctrine, for example, a plaintiff can maintain a suit in a federal court—and thus invoke judicial power—only if he has suffered an "injury" with a "traceable connection" to the "complained-of conduct of the defendant." Steel Co., 523 U. S., at 103. Facial challenges significantly relax those rules. Start with the injury requirement. Facial challenges allow a plaintiff to challenge applications of a statute that have not injured him. But see Acheson Hotels, LLC v. Laufer, 601 U. S. 1 (2023).
Facial challenges also distort standing doctrine's redressability requirement. The Court has held that a plaintiff has standing to sue only when his "requested relief will redress the alleged injury." Steel Co., 523 U. S., at 103. With a facial challenge, however, a plaintiff seeks to enjoin every application of a statute—including ones that have nothing to do with his injury. A plaintiff can ask, "Do [I] just want [the court] to say that this statute cannot constitutionally be applied to [me] in this case, or do [I] want to go for broke and try to get the statute pronounced void in all its applications?" Morales, 527 U. S., at 77 (opinion of Scalia, J.). In this sense, the remedy sought by a facial challenge is akin to a universal injunction—a practice that is itself "inconsistent with longstanding limits on equitable relief and the power of Article III courts." Trump v. Hawaii, 585 U. S. 667 (2018) (THOMAS, J., concurring).
Because deciding the constitutionality of a statute as applied to nonparties is not necessary to resolve a case or controversy, it is beyond a federal court's constitutional authority. Federal courts have "no power per se to review and annul acts of Congress on the ground that they are unconstitutional. That question may be considered only when the justification for some direct injury suffered or threatened, presenting a justiciable issue, is made to rest upon such an act." Massachusetts v. Mellon, 262 U. S. 447, 488 (1923).
3
Adjudicating facial challenges also intrudes upon powers reserved to the Legislative and Executive Branches and the States. When a federal court decides an issue unnecessary for resolving a case or controversy, the Judiciary assumes authority beyond what the Constitution granted. Supra, at 5-6. That necessarily alters the balance of powers: When one branch exceeds its vested power, it becomes stronger relative to the other branches. See Free Enterprise Fund v. Public Company Accounting Oversight Bd., 561 U. S. 477, 500 (2010).
Moreover, by exceeding their constitutionally assigned role, federal courts necessarily encroach on powers reserved to the political branches and the States.
Comparing the effects of as-applied challenges and facial challenges illustrates the point.
Facial challenges, however, force the Judiciary to take a maximalist approach. A single plaintiff can immediately call upon a federal court to declare an entire statute unconstitutional, even before it has been applied to him. The political branches have no opportunity to correct course, making legislation an all-or-nothing proposition. The end result is that “the democratic process” is “short circuit[ed]” and “laws embodying the will of the people [are prevented] from being implemented in a manner consistent with the Constitution.” Ibid.
In a similar vein, facial challenges distort the relationship between the Federal Government and the States. The Constitution “establishes a system of dual sovereignty between the States and the Federal Government.” Gregory v. Ashcroft, 501 U. S. 452, 457 (1991). The States retain all powers “not delegated” to the Federal Government and not “prohibited by [the Constitution] to the States.”
B
In addition to their constitutional infirmities, facial challenges also create practical problems. The case-or-controversy requirement serves as the foundation of our adversarial system. Rather than "sit[ting] as self-directed boards of legal inquiry and research," federal courts serve as "arbiters of legal questions presented and argued by the parties before them." NASA v. Nelson, 562 U. S. 134, 147, n. 10 (2011) (quoting Carducci v. Regan, 714 F. 2d 171, 177 (CADC 1983) (opinion for the court by Scalia, J.)). This system "assure[s] that the legal questions presented to the court will be resolved . . . in a concrete factual context conducive to a realistic appreciation of the consequences of judicial action." Valley Forge Christian College v. Americans United for Separation of Church and State, Inc., 454 U. S. 464, 472 (1982).
Facial challenges disrupt the adversarial system and increase the risk of judicial error as a result. A plaintiff raising a facial challenge need not have any direct knowledge of how the statute applies to others. In fact, since a facial challenge may be brought before a statute has been enforced against anyone, a plaintiff often can only guess how the statute operates—even in his own case. For this reason, "[c]laims of facial invalidity often rest on speculation," Washington State Grange, 552 U. S., at 450, and "factually barebones records," Sabri v. United States, 541 U. S. 600, 609 (2004). Federal courts are often called to give "premature interpretations of statutes" on the basis of such barebones records.
C
The problems with facial challenges are particularly evident in the two cases before us. Even though the trade associations challenge two state laws, the state actors have been left out of the picture. State officials had no opportunity to tailor the laws’ enforcement. Nor could the legislatures amend the statutes before they were preliminarily enjoined. In addition, neither set of state courts had a chance to interpret their own State‘s law or “accord [that] law a limiting construction to avoid constitutional questions.” Washington State Grange, 552 U. S., at 450. Instead, federal courts construed these novel state laws in the first instance. And, they did so with little factual record to assist them. The trade associations’ reliance on our questionable associational-standing doctrine is partially to blame.2 But, the fact that the trade associations raise facial challenges has undeniably played a significant role. With
D
Facial challenges are particularly suspect given their origins. They appear to be the product of two doctrines that are themselves constitutionally questionable, vagueness and overbreadth.
At the time of the founding, it was well understood that federal courts could hold a statute unconstitutional only insofar as necessary to resolve a particular case or controversy. See supra, at 5–6. The Founders were certainly familiar with alternative systems that provided for the free-floating review of duly enacted statutes. For example, the New York Constitution of 1777 created a Council of Revision, composed of the Governor, Chancellor, and New York Supreme Court. See Hansen, 599 U. S., at 786 (THOMAS, J., concurring). The Council of Revision could object to “any measure of a [prospective] bill” based on “not only [its] constitutionality... but also [its] policy.” Id., at 787. If the Council lodged an objection, the Legislature‘s only options were to “conform to [the Council‘s] objections, override them by a two-thirds vote of both Houses, or simply let the bill die.” Ibid. (internal quotation marks omitted).
In our Constitution, the Founders refused to create a council of revision or involve the Federal Judiciary in the business of reviewing statutes in the abstract. "Despite the support of respected delegates the Convention voted against creating a federal council of revision on four different occasions. No other proposal was considered and rejected so many times." Id., at 789 (citation omitted). Instead, the Founders created a Judiciary with "only the judicial Power," which they understood to extend no further than resolving actual cases and controversies.
For more than a century following the founding, the Court generally adhered to the original understanding of the narrow scope of judicial review. When the Court first discussed the concept of judicial review in Marbury v. Madison, it made clear that such review is limited to what is necessary for resolving “a particular cas[e]” before a court. 1 Cranch, at 177; see also supra, at 5–6. And, in case after case that followed Marbury, the Court reiterated that federal courts have no authority to reach beyond the parties before them to facially invalidate a statute.4
The vagueness doctrine was the direct ancestor of one subset of modern facial challenges, the overbreadth doctrine. See United States v. Sineneng-Smith, 590 U. S. 371, 385 (2020) (THOMAS, J., concurring) (noting that the overbreadth doctrine "developed as a result of the vagueness doctrine's application in the First Amendment context").
Thornhill's approach quickly gained traction in the Court's First Amendment cases.
The overbreadth and vagueness doctrines’ method of facial invalidation eventually spread to other areas of law, setting in motion our modern facial challenge doctrine. For several decades after Thornhill, the Court continued to resist the broad use of facial challenges. For example, in Broadrick v. Oklahoma, 413 U. S. 601 (1973), the Court emphasized that “[c]onstitutional judgments, as Mr. Chief Justice Marshall recognized, are justified only out of the necessity of adjudicating rights in particular cases between the litigants brought before the Court.” Id., at 611. In that vein, the Court characterized “facial overbreadth adjudication [as] an exception to our traditional rules of practice.” Id., at 615. But, the Court eventually entertained facial challenges more broadly where a plaintiff established that “no set of circumstances exists under which the Act would be valid.”6 Salerno, 481 U. S., at 745. Just as with the overbreadth doctrine, the Court has yet to explain how facial challenges are consistent with the Constitution‘s text or history.
Given how our facial challenge doctrine seems to have developed—with one doctrinal mistake leading to another—it is no wonder that facial challenges create a host of constitutional and practical issues. See supra, at 6–13. Rather than perpetuate our mistakes, the Court should end them. "No principle is more fundamental to the judiciary's proper role in our system of government than the constitutional limitation of federal-court jurisdiction to actual cases or controversies." DaimlerChrysler, 547 U. S., at 341 (internal quotation marks omitted).
*  *  *
The Court has recognized the problems that facial challenges pose, emphasizing that they are “disfavored,” Washington State Grange, 552 U. S., at 450, and “best when infrequent,” Sabri, 541 U. S., at 608. The Court reiterates those sentiments today. Ante, at 9, 30. But, while sidelining facial challenges provides some measure of relief, it ignores the real problem. Because federal courts are bound by Article III‘s case-or-controversy requirement, holding a statute unconstitutional as applied to nonparties is not simply disfavored—it exceeds the authority granted to federal courts. It is high time the Court reconsiders its facial challenge doctrine.
JUSTICE ALITO, concurring in the judgment.
The holding in these cases is narrow: NetChoice failed to prove that the Florida and Texas laws they challenged are facially unconstitutional. Everything else in the opinion of the Court is nonbinding dicta.
I agree with the bottom line of the majority‘s central holding. But its description of the Florida and Texas laws, as well as the litigation that shaped the question before us, leaves much to be desired. Its summary of our legal precedents is incomplete. And its broader ambition of providing guidance on whether one part of the Texas law is unconstitutional as applied to two features of two of the many platforms that it reaches—namely, Facebook‘s News Feed and YouTube‘s homepage—is unnecessary and unjustified.
But given the incompleteness of this record, there is no
These as-applied issues are important, and we may have to decide them before too long. But these cases do not provide the proper occasion to do so. For these reasons, I am compelled to provide a more complete discussion of those matters than is customary in an opinion that concurs only in the judgment.
I
As the Court has recognized, social-media platforms have become the “modern public square.” Packingham v. North Carolina, 582 U. S. 98, 107 (2017). In just a few years, they have transformed the way in which millions of Americans communicate with family and friends, perform daily chores, conduct business, and learn about and comment on current events. The vast majority of Americans use social media,1 and the average person spends more than two hours a day on various platforms.2 Young people now turn primarily to
Notes
As I have recently explained, “[a]ssociational standing raises constitutional concerns.” See FDA v. Alliance for Hippocratic Medicine, 602 U. S. 367, 399 (2024) (concurring opinion). Associational standing appears to conflict with
In light of these trends, platforms and governments have implemented measures to minimize the harms unique to the social-media context. Social-media companies have created user guidelines establishing the kinds of content that users may post and the consequences of violating those guidelines, which often include removing nonconforming posts or restricting noncompliant users’ access to a platform.
Such enforcement decisions can sometimes have serious consequences. Restricting access to social media can impair users’ ability to speak to, learn from, and do business with others. Deleting the account of an elected official or candidate for public office may seriously impair that individual‘s efforts to reach constituents or voters, as well as the ability of voters to make a fully informed electoral choice. And what platforms call “content moderation” of the news or user comments on public affairs can have a substantial effect on popular views.
A
1
I start with Florida‘s law, S. B. 7072, which regulates any internet platform that does “business in the state” and has either “annual gross revenues in excess of $100 million” or “at least 100 million monthly individual platform participants globally.”
To prevent covered platforms from unfairly treating Floridians, S. B. 7072 imposes the following “content-moderation” and disclosure requirements:
To prevent platforms from attempting to evade this restriction by regularly modifying their practices, the law prohibits platforms from changing their censorship “rules, terms, and agreements . . . more than once every 30 days.”
Although some platforms still have employees who monitor and organize social-media feeds, for most platforms, “the incredible volume of content shared each day makes human review of each new post impossible.” Brief for Developers Alliance et al. as Amici Curiae 4. Consequently, platforms rely heavily on algorithms to organize and censor content. Ibid. And it is likely that they will increasingly
In addition to barring censorship, the Florida law attempts to prevent platforms from unfairly influencing elections or distorting public discourse. To do this, it requires platforms to host candidates for public office and journalistic enterprises.7
Disclosure provisions. S. B. 7072 requires platforms to make both general and individual disclosures about how and when they censor the speech of Floridians. The law requires platforms to publish their content-moderation standards and to inform users of any changes.
To ensure compliance with these provisions, S. B. 7072 authorizes the Florida attorney general to bring civil and administrative actions against noncomplying platforms.
To protect platforms, the law provides that it “may only be enforced to the extent not inconsistent with federal law,” including §230 of the Communications Decency Act of 1996.
2
Days after S. B. 7072‘s enactment, NetChoice filed suit in federal court, alleging that the new law violates the First Amendment in all its applications.8 As a result, NetChoice asked the District Court to enter a preliminary injunction against any enforcement of any of its provisions before the law took effect.
Florida defended the constitutionality of S. B. 7072. It argued that the law‘s prohibition of censorship does not violate the freedom of speech because the First Amendment permits the regulation of the conduct of entities that do not express their own views but simply provide the means for others to communicate. See Record in No. 4:21–CV–00220
Despite these arguments, the District Court enjoined S. B. 7072 in its entirety before the law could go into effect. Florida appealed, maintaining, among other things, that NetChoice was “unlikely to prevail on the merits of [its] facial First Amendment challenge.” Brief for Appellants in No. 21-12355 (CA11), p. 20; Reply Brief in No. 21-12355 (CA11), p. 15.
With just one exception, the Eleventh Circuit affirmed. It first held that all the regulated platforms’ decisions about “whether, to what extent, and in what manner to disseminate third-party created content to the public” were constitutionally protected expression. NetChoice v. Attorney Gen., Fla., 34 F. 4th 1196, 1212 (2022). Under that framing, the court found that the moderation and individual-disclosure provisions likely failed intermediate scrutiny, obviating the need to determine whether strict scrutiny applied. Id., at 1227.9 But the court held that the general-
B
1
Around the same time as the enactment of the Florida law, Texas adopted a similar measure, H. B. 20, which covers “social media platform[s]” with more than 50 million monthly users in the United States.
To ensure “the free exchange of ideas and information,” H. B. 20 requires regulated platforms to abide by the following content-moderation and disclosure requirements. Act of Sept. 2, 2021, 87th Leg., 2d Called Sess., ch. 3.
Content-moderation provisions. H. B. 20 prevents social-media companies from “censoring” users—that is, acting to “block, ban, remove, deplatform, demonetize, de-boost, restrict, deny equal access or visibility to, or otherwise discriminate against”—based on their viewpoint or geographic
Disclosure provisions. Like the Florida law, H. B. 20 also requires platforms to make general and individual disclosures about their censorship practices. Specifically, the law obligates each platform to tell the public how it “targets,” “promotes,” and “moderates” content.
Users may sue any platform that violates these provisions, as may the Texas attorney general.
2
As it did in the Florida case, NetChoice sought a preliminary injunction in federal court, claiming that H. B. 20 violates the First Amendment in its entirety. In response, Texas argued that because H. B. 20 regulates NetChoice’s members “in their operation as publicly accessible conduits for the speech of others” rather than “as authors or editors” of their own speech, NetChoice could not prevail. Record in No. 1:21–CV–00840 (WD Tex.), Doc. 39, p. 23. But even if the platforms might have the right to use algorithms to censor their users’ speech, the State argued, the question of “what these algorithms are doing is a critical, and so far, unexplained, aspect of this case.” Id., at 24. This deficiency mattered, Texas contended, because the platforms could succeed on their facial challenge only by showing that “all algorithms used by the Platforms are for the purposes of expressing viewpoints of those Platforms.” Id., at 27. And because NetChoice had not even explained what its members’ algorithms did, much less whether they did so in an expressive way, Texas argued that NetChoice had not shown that “all applications of H.B. 20 are unconstitutional.” Ibid.; see also id., Doc. 53, at 13 (arguing that
To clarify these and other “threshold issues,” Texas moved for expedited discovery. Id., Doc. 20, at 1. The District Court granted Texas‘s motion in part, but after one month of discovery, it sided with NetChoice and enjoined H. B. 20 in its entirety before it could go into effect. Texas appealed, arguing that despite the District Court‘s judgment to the contrary, “[l]aws requiring commercial entities to neutrally host speakers generally do not even implicate the First Amendment because they do not regulate the host‘s speech at all—they regulate its conduct.” Brief for Appellant in No. 21–51178 (CA5), p. 16. The State also emphasized NetChoice‘s alleged failure to show that H. B. 20 was unconstitutional in even a “substantial number of its applications,” the “bare minimum” showing that NetChoice needed to make to prevail on its facial challenge. E.g., Reply Brief in No. 21–51178 (CA5), p. 8 (quoting Americans for Prosperity Foundation v. Bonta, 594 U. S. 595, 615 (2021)).
A divided Fifth Circuit panel reversed, focusing primarily on NetChoice‘s failure to “even try to show that HB 20 is ‘unconstitutional in all its applications.‘” NetChoice, LLC v. Paxton, 49 F. 4th 439, 449 (2022) (quoting Washington State Grange v. Washington State Republican Party, 552 U. S. 442, 449 (2008)). The court also accepted Texas‘s argument that H. B. 20 “does not regulate the Platforms’ speech at all” because “the Platforms are not ‘speaking’ when they host other people‘s speech.” 49 F. 4th, at 448. Finally, the court upheld the law‘s disclosure requirements on the ground that they involve the disclosure of the type of purely factual and uncontroversial information that may be compelled under Zauderer. 49 F. 4th, at 485.
II
NetChoice contends that the Florida and Texas statutes facially violate the First Amendment, meaning that they cannot be applied to anyone at any time under any circumstances without violating the Constitution. Such challenges are strongly disfavored. See Washington State Grange, 552 U. S., at 452. They often raise the risk of “‘premature interpretatio[n] of statutes’ on the basis of factually barebones records.” Sabri v. United States, 541 U. S. 600, 609 (2004). They clash with the principle that courts should neither “‘anticipate a question of constitutional law in advance of the necessity of deciding it’” nor “‘formulate a rule of constitutional law broader than is required by the precise facts to which it is to be applied.’” Ashwander v. TVA, 297 U. S. 288, 346–347 (1936) (Brandeis, J., concurring). And they “threaten to short circuit the democratic process by preventing laws embodying the will of the people from being implemented in a manner consistent with the Constitution.” Washington State Grange, 552 U. S., at 451.
Facial challenges also strain the limits of the federal courts’ constitutional authority to decide only actual “Cases” and “Controversies.”
For these reasons, we have insisted that parties mounting facial attacks satisfy demanding requirements. In United States v. Salerno, 481 U. S. 739, 745 (1987), we held that a facial challenger must “establish that no set of circumstances exists under which the [law] would be valid.” “While some Members of the Court have criticized the Salerno formulation,” all have agreed “that a facial challenge must fail where the statute has a ‘plainly legitimate
NetChoice and the Federal Government urge us not to apply any of these demanding tests because, they say, the States disputed only the “threshold question” whether their laws “cover expressive activity at all.” Tr. of Oral Arg. in No. 22–277, at 76; see also id., at 84, 125; Tr. of Oral Arg. in No. 22–555, at 92. The Court unanimously rejects that argument—and for good reason.
First, the States did not “put all their eggs in [one] basket.” Tr. of Oral Arg. in No. 22–277, at 76. To be sure, they argued that their newly enacted laws were valid in all their applications. Ibid. Both the Federal Government and the States almost always defend the constitutionality of all provisions of their laws. But Florida and Texas did not stop there. Rather, as noted above, they went on to argue that NetChoice had failed to make the showing required for a facial challenge.15 Therefore, the record does not support
Second, even if the States had not asked the lower courts to reject NetChoice‘s request for blanket relief, it would have been improper for those courts to enjoin all applications of the challenged laws unless that test was met. “It is one thing to allow parties to forfeit claims, defenses, or lines of argument; it would be quite another to allow parties to stipulate or bind [a court] to the application of an incorrect legal standard.” Gardner v. Galetka, 568 F. 3d 862, 879 (CA10 2009); see also Kairys v. Southern Pines Trucking, Inc., 75 F. 4th 153, 160 (CA3 2023) (“But parties cannot forfeit the application of ‘controlling law’”); United States v. Escobar, 866 F. 3d 333, 339, n. 13 (CA5 2017) (per curiam) (“‘A party cannot waive, concede, or abandon the applicable standard of review’” (quoting Ward v. Stephens, 777 F. 3d 250, 257, n. 3 (CA5 2015))).
Represented by sophisticated counsel, NetChoice made the deliberate choice to mount a facial challenge to both laws, and in doing so, it obviously knew what it would have to show in order to prevail. NetChoice decided to fight these laws on these terms, and the Court properly holds it to that decision.
III
I therefore turn to the question whether NetChoice established facial unconstitutionality, and I begin with the States’ content-moderation requirements. To show that these provisions are facially invalid, NetChoice had to demonstrate that they lack a plainly legitimate sweep under the First Amendment. Our precedents interpreting that Amendment provide the numerator (the number of unconstitutional applications) and denominator (the total number of possible applications) that NetChoice was required to identify in order to make that showing. Estimating the numerator requires an understanding of the First Amendment principles that must be applied here, and I therefore provide a brief review of those principles.
A
The First Amendment protects “the freedom of speech,” and most of our cases interpreting this right have involved government efforts to forbid, restrict, or compel a party‘s own oral or written expression. Agency for Int‘l Development v. Alliance for Open Society Int‘l, Inc., 570 U. S. 205, 213 (2013); Wooley v. Maynard, 430 U. S. 705, 714 (1977); West Virginia Bd. of Ed. v. Barnette, 319 U. S. 624, 642 (1943). Some cases, however, have involved another aspect of the free speech right, namely, the right to “presen[t] . . . an edited compilation of speech generated by other persons” for the purpose of expressing a particular message. See Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U. S. 557, 570 (1995). As used in this context, the term “compilation” means any effort to present the expression of others in some sort of organized package. See ibid.
The famous Oxford Book of English Verse illustrates why a compilation may constitute expression on the part of the compiler. The editors’ selection of the poems included in that volume expresses their view
Not all compilations, however, have this expressive characteristic. Suppose that the head of a neighborhood group prepares a directory consisting of contact information submitted by all the residents who want to be listed. This directory would not include any meaningful expression on the part of the compiler.
Because not all compilers express a message of their own, not all compilations are protected by the First Amendment. Instead, the First Amendment protects only those compilations that are “inherently expressive” in their own right, meaning that they select and present speech created by other persons in order “to spread [the compiler‘s] own message.” FAIR, 547 U. S., at 66; Pacific Gas & Elec. Co. v. Public Util. Comm‘n of Cal., 475 U. S. 1, 10 (1986) (PG&E) (plurality opinion). If a compilation is inherently expressive, then the compiler may have the right to refuse to accommodate a particular speaker or message. See Hurley, 515 U. S., at 573. But if a compilation is not inherently expressive, then the government can require the compiler to host a message or speaker because the accommodation does not amount to compelled speech. Id., at 578–581.
To show that a hosting requirement would compel speech and thereby trigger First Amendment scrutiny, a claimant must generally show three things.
1
First, a claimant must establish that its practice is to exercise “editorial discretion in the selection and presentation” of the content it hosts. Arkansas Ed. Television Comm‘n v. Forbes, 523 U. S. 666, 674 (1998); Hurley, 515 U. S., at 574; ante, at 14. NetChoice describes this process
Determining whether an entity should be viewed as a “curator” or a “dumb pipe” may not always be easy because different aspects of an entity‘s operations may take different approaches with respect to hosting third-party speech. The typical newspaper regulates the content and presentation of articles authored by its employees or others, PG&E, 475 U. S., at 8, but that same paper might also run nearly all the classified advertisements it receives, regardless of their content and without adding any expression of its own. Compare Tornillo, 418 U. S. 241, with Pittsburgh Press Co. v. Pittsburgh Comm‘n on Human Relations, 413 U. S. 376 (1973). These differences may be significant for First Amendment purposes.
The same may be true for a parade organizer, whose practice may be to select the groups that are admitted, but not the individuals who are allowed to march as members of admitted groups. Hurley, 515 U. S., at 572-574. In such a case, each of these practices would have to be analyzed separately.
2
Second, the host must use the compilation of speech to express “some sort of collective point”—even if only at a fairly abstract level. Id., at 568. Thus, a parade organizer who claims a First Amendment right to exclude certain groups or individuals would need to show at least that the messages conveyed by the groups or individuals who are allowed to march comport with the parade‘s theme. Id., at 560, 574. A parade comprising “unrelated segments” that lumber along together willy-nilly would likely not express anything at all. Id., at 576. And although “a narrow, succinctly articulable message is not a condition of constitutional protection,” compilations that organize the speech of others in a non-expressive way (e.g., chronologically) fall “beyond the realm of expressi[on].” Id., at 569; contra, ante, at 17-18.
Our decision in PruneYard illustrates this point. In that case, the Court held that a mall could be required to host third-party speech (i.e., to admit individuals who wanted to distribute handbills or solicit signatures on petitions) because the mall‘s admission policy did not express any message, and because the mall was “open to the public at large.” PruneYard Shopping Center v. Robins, 447 U. S. 74, 83, 87-88 (1980); 303 Creative LLC v. Elenis, 600 U. S. 570, 590 (2023). In such circumstances, we held that the First Amendment is not implicated merely because a host objects to a particular message or viewpoint. See PG&E, 475 U. S., at 12.
3
Finally, a compiler must show that its “own message [is]
Two precedents that the majority tries to downplay, if not forget, are illustrative. The first is PruneYard, which I have already discussed. The PruneYard Court rejected the mall‘s First Amendment claim because “[t]he views expressed by members of the public in passing out pamphlets or seeking signatures for a petition [were] not likely [to] be identified with those of the owner.” 447 U. S., at 87. And if those who perused the handbills or petitions were not likely to make that connection, any message that the mall owner intended to convey would not be affected.
The decision in FAIR rested on similar reasoning. In that case, the Court did not dispute the proposition that the law schools’ refusal to host military recruiters expressed the message that the military should admit and retain gays and lesbians. But the Court found no First Amendment violation because, as in PruneYard, it was unlikely that the views of the military recruiters “would be identified with” those of the schools themselves, and consequently, hosting the military recruiters did not “sufficiently interfere with any message of the school.” 547 U. S., at 64-65; contra, ante, at 25 (“[T]his Court has never hinged a compiler‘s First Amendment protection on the risk of misattribution.”).18
B
A party that challenges government interference with its curation of content cannot win without making the three-part showing just outlined, but such a showing does not guarantee victory. To prevail, the party must go on and show that the challenged regulation of its curation practices violates the applicable level of First Amendment scrutiny.
Our decision in Turner makes that clear. Although the cable operators in that case made the showing needed to trigger First Amendment scrutiny, they did not ultimately prevail on their facial challenge to the Cable Act. After a remand and more than 18 months of additional factual development, the Court held that the law was adequately tailored to serve legitimate and important government interests, including “promoting the widespread dissemination of information from a multiplicity of sources.” Turner Broadcasting System, Inc. v. FCC, 520 U. S. 180, 189 (1997). Here, the States assert a similar interest in fostering a free and open marketplace of ideas.19
C
With these standards in mind, I proceed to the question
1
First, NetChoice did not establish which entities the statutes cover. This failure is critical because it is “impossible to determine whether a statute reaches too far without first knowing what the statute covers.” Williams, 553 U. S., at 293. When it sued Florida, NetChoice was reluctant to disclose which of its members were covered by S. B. 7072. Instead, it filed declarations revealing only that the law reached “Etsy, Facebook, and YouTube.” Tr. of Oral Arg. in No. 22-277, at 32. In this Court, NetChoice was a bit more forthcoming, representing that S. B. 7072 also covers Instagram, X, Pinterest, Reddit, Gmail, Uber, and other e-commerce websites. Id., at 69, 76; Brief for Respondents in No. 22-277, at 7, 38, 49.20 But NetChoice has still not provided a complete list.
NetChoice was similarly reluctant to identify its affected members in the Texas case. At first, NetChoice “represented ... that only Facebook, YouTube, and [X] are affected by the Texas law.” Brief for Appellant in No. 21-51178 (CA5), at 1, n. 1. But in its brief in this Court, NetChoice told us that H. B. 20 also regulates “some of the Internet‘s most popular websites, including Facebook, Instagram, Pinterest, TikTok, Vimeo, X (formerly known as Twitter), and YouTube.” Brief for Petitioners in No. 22-
It is a mystery how NetChoice could expect to prevail on a facial challenge without candidly disclosing the platforms that it thinks the challenged laws reach or the nature of the content moderation they practice. Without such information, we have no way of knowing whether the laws at issue here “cover websites that engage in primarily non-expressive conduct.” Tr. of Oral Arg. in No. 22-277, at 34; see also id., at 126. For example, among other things, NetChoice has not stated whether the challenged laws reach websites like WhatsApp25 and Gmail,26 which carry messages instead of curating them to create an independent speech product. Both laws also appear to cover Reddit27
In First Amendment terms, this means that these laws—in at least some of their applications—appear to regulate the kind of “passive receptacle[s]” of third-party speech that receive no First Amendment protection. Tornillo, 418 U. S., at 258. Given such uncertainty, it is impossible for us to determine whether these laws have a “plainly legitimate sweep.” Williams, 553 U. S., at 292; Washington State Grange, 552 U. S., at 449.
2
Second, NetChoice has not established what kinds of content appear on all the regulated platforms, and we cannot determine whether these platforms create an “inherently expressive” compilation of third-party speech until we know what is being compiled.
We know that social-media platforms generally allow their users to create accounts; send direct messages
For one thing, the ways in which users post, send direct messages, or interact with content may differ meaningfully from platform to platform. And NetChoice‘s failure to account for these differences may be decisive. To see how, consider X and Yelp. Both platforms allow users to post comments and photos, but they differ in other respects.31 X permits users to post (or “Tweet”) on a broad range of topics because its “purpose is to serve the public conversation,”32 and as a result, many elected officials use X to communicate with constituents. Yelp, by contrast, allows users to post comments and pictures only for the purpose of advertising local businesses or providing “firsthand accounts” that reflect their “consumer experience” with businesses.33 It does not permit “rants about political ideologies, a business‘s employment practices, extraordinary circumstances, or other matters that don‘t address the core of the consumer experience.”34
As this example shows, X‘s content is more political than Yelp‘s, and Yelp‘s content is more commercial than X‘s. That difference may be significant for First Amendment purposes. See Pittsburgh Press, 413 U. S. 376. But NetChoice has not developed the record on that front. Nor
Social-media platforms are diverse, and each may be unique in potentially significant ways. On the present record, we are ill-equipped to account for the many platform-specific features that allow users to do things like sell or purchase goods,35 live-stream events,36 request a ride,37 arrange a date,38 create a discussion forum,39 wire money to friends,40 play a video game,41 hire an employee,42 log a run,43 or agree to watch a dog.44 The challenged laws may apply differently to these different functions, which may present different First Amendment issues. A court cannot invalidate the challenged laws if it has to speculate about their applications.
3
Third, NetChoice has not established how websites moderate content. NetChoice alleges that “[c]overed websites” generally use algorithms to organize and censor content appearing in “search results, comments, or in feeds.” Brief for Petitioners in No. 22-555, at 4, 6. But at this stage and on this record, we have no way of confirming whether all of the regulated platforms use algorithms to organize all of their content, much less whether these algorithms are expressive. See Hurley, 515 U. S., at 568. Facebook and Reddit, for instance, both allow their users to post about a wide
Perhaps recognizing this, NetChoice argues in passing that it cannot tell us how its members moderate content because doing so would embolden “malicious actors” and divulge “proprietary and closely held” information. E.g., Brief for Petitioners in No. 22-555, at 11. But these harms are far from inevitable. Various platforms already make similar disclosures—both voluntarily and to comply with the European Union‘s Digital Services Act46—yet the sky has not fallen. And on remand, NetChoice will have the opportunity to contest whether particular disclosures are necessary and whether any relevant materials should be filed under seal.
Various NetChoice members already disclose in broad strokes how they use algorithms to curate content. Many platforms claim to use algorithms to identify and remove
Some platforms have also disclosed that they use algorithms to help their users find relevant content. The e-commerce platform Etsy, for instance, uses an algorithm that matches a user‘s search terms to the “attributes” that a seller ascribes to its wares.51 Etsy‘s algorithm also accounts for things like the date of the seller‘s listing, the proximity of the seller and buyer, and the quality of the seller‘s customer-service ratings. Ibid.
YouTube says it answers search queries based on “relevance, engagement and quality”—taking into account how well a search query matches a video title, the kinds of videos a particular user viewed in the past, and each creator‘s “expertise, authoritativeness, and trustworthiness on a given topic.”52
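To make the kind of ranking logic described in these disclosures more concrete, the sketch below shows one generic way a search-ranking function might combine term matching, recency, proximity, and seller ratings. It is purely illustrative: the field names, weights, and scoring formula are my own assumptions and do not reflect Etsy‘s, YouTube‘s, or any other platform‘s actual system.

from dataclasses import dataclass
from datetime import date

@dataclass
class Listing:
    title: str
    attributes: set        # seller-supplied tags describing the item (hypothetical field)
    listed_on: date        # date the listing was created
    seller_rating: float   # customer-service rating on a 0-to-5 scale
    distance_km: float     # rough buyer-to-seller distance

def score(listing, query, today):
    # Combine term match, recency, proximity, and rating into a single rank score.
    terms = set(query.lower().split())
    match = len(terms & {a.lower() for a in listing.attributes})   # query terms that match the seller's tags
    age_days = (today - listing.listed_on).days
    recency = 1.0 / (1.0 + age_days / 30.0)                        # newer listings decay less
    proximity = 1.0 / (1.0 + listing.distance_km / 100.0)          # nearby sellers get a modest boost
    rating = listing.seller_rating / 5.0
    return 3.0 * match + recency + proximity + 2.0 * rating        # weights are arbitrary illustrations

def rank(listings, query, today):
    # Order listings from highest to lowest score for the given search query.
    return sorted(listings, key=lambda l: score(l, query, today), reverse=True)

A real system would weigh far more signals than this toy version, but even the sketch shows why a record would need to reveal which signals a platform‘s algorithm actually uses before a court could assess whether the resulting ranking is expressive.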
For all these reasons, NetChoice failed to establish whether the content-moderation provisions violate the First Amendment on their face.
D
Although the only question the Court must decide today is whether NetChoice showed that the Florida and Texas laws are facially unconstitutional, much of the majority opinion addresses a different question: whether the Texas law‘s content-moderation provisions are constitutional as applied to two features of two platforms—Facebook‘s News Feed and YouTube‘s homepage. The opinion justifies this discussion on the ground that the Fifth Circuit cannot apply the facial constitutionality test without resolving that question, see, e.g., ante, at 13, 30, but that is not necessarily true. Especially in light of the wide reach of the Texas law, NetChoice may still fall far short of establishing facial unconstitutionality—even if it is assumed for the sake of argument that the Texas law is unconstitutional as applied to Facebook‘s News Feed and YouTube‘s homepage.53
For this reason, the majority‘s “guidance” on this issue may well be superfluous. Yet superfluity is not its most egregious flaw. The majority‘s discussion also rests on wholly conclusory assumptions that lack record support.
Instead of seriously engaging with this and other arguments, the majority rests on NetChoice‘s dubious assertion that there is no constitutionally significant difference between what newspaper editors did more than a half-century ago at the time of Tornillo and what Facebook and YouTube do today.
Maybe that is right—but maybe it is not. Before mechanically accepting this analogy, perhaps we should take a closer look.
Let‘s start with size. Currently, Facebook and YouTube each produce—on a daily basis—more than four petabytes (4,000,000,000,000,000 bytes) of data.54 By my calculation, that is roughly 1.3 billion times as many bytes as there are in an issue of the New York Times.55
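As a rough check of that figure, and assuming purely for illustration that the text of a single issue of the New York Times occupies on the order of three megabytes (my assumption, not a figure drawn from the opinion or its footnotes), the arithmetic works out to roughly the stated ratio:

\[ \frac{4 \times 10^{15}\ \text{bytes per day}}{3 \times 10^{6}\ \text{bytes per issue}} \approx 1.3 \times 10^{9} \]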
Now consider how newspapers and social-media platforms edit content. Newspaper editors are real human beings, and when the Court decided Tornillo (the case that the majority finds most instructive), editors assigned articles to particular reporters, and copyeditors went over typescript with a blue pencil. The platforms, by contrast, play no role in selecting the billions of texts and videos that users try to convey to each other. And the vast bulk of the “curation” and “content moderation” carried out by platforms is not done by human beings. Instead, algorithms remove a small fraction of nonconforming posts post hoc and prioritize content based on factors that the platforms have not revealed and may not even know. After all, many of the biggest platforms are beginning to use AI algorithms to help them moderate content. And when AI algorithms make a decision, “even the researchers and programmers creating them don‘t really understand why the models they have built make the decisions they make.”56 Are such decisions as expressive as the decisions made by humans? Should we at least think about this?
Other questions abound. Maybe we should think about the enormous power exercised by platforms like Facebook and YouTube as a result of “network effects.” Cf. Ohio v. American Express Co., 585 U. S. 529 (2018). And maybe we
Instead, when confronted with the application of a constitutional requirement to new technology, we should proceed with caution. While the meaning of the Constitution remains constant, the application of enduring principles to new technology requires an understanding of that technology and its effects. Premature resolution of such questions creates the risk of decisions that will quickly turn into embarrassments.
IV
Just as NetChoice failed to make the showing necessary to demonstrate that the States’ content-moderation provisions are facially unconstitutional, NetChoice‘s facial attacks on the individual-disclosure provisions also fell short. Those provisions require platforms to explain to affected users the basis of each content-censorship decision. Because these regulations provide for the disclosure of “purely factual and uncontroversial information,” they must be reviewed under Zauderer‘s framework, which requires only that such laws be “reasonably related to the State‘s interest in preventing deception of consumers” and not “unduly burde[n]” speech. 471 U. S., at 651.57
Our unanimous agreement regarding NetChoice‘s failure to show that a sufficient number of its members engage in constitutionally protected expression prevents us from accepting NetChoice‘s argument regarding these provisions. In the lower courts, NetChoice did not even try to show how these disclosure provisions chill each platform‘s speech. Instead, NetChoice merely identified one subset of one platform‘s content that would be affected by these laws: billions of nonconforming comments that YouTube removes each year. 49 F. 4th, at 487; see also Brief for Appellees in No. 21-12355 (CA11), p. 13. But if YouTube uses automated processes to flag and remove these comments, it is not clear why having to disclose the bases of those processes would chill YouTube‘s speech. And even if having to explain each removal decision would unduly burden YouTube‘s First Amendment rights, the same does not necessarily follow with regard to all of NetChoice‘s members.
NetChoice‘s failure to make this broader showing is especially problematic since NetChoice does not dispute the States’ assertion that many platforms already provide a notice-and-appeal process for their removal decisions. In fact, some have even advocated for such disclosure requirements. Before its change in ownership, the previous Chief Executive Officer of the platform now known as X went as
* * *
The only binding holding in these decisions is that NetChoice has yet to prove that the Florida and Texas laws it challenged are facially unconstitutional. Because the majority opinion ventures far beyond the question we must decide, I concur only in the judgment.
