Patterson v Meta Platforms, Inc.
Decided on July 25, 2025
Appellate Division, Fourth Department
Published by New York State Law Reporting Bureau pursuant to Judiciary Law § 431.
This opinion is uncorrected and subject to revision before publication in the Official Reports.

SUPREME COURT OF THE STATE OF NEW YORK
Appellate Division, Fourth Judicial Department
PRESENT: LINDLEY, J.P., CURRAN, BANNISTER, SMITH, AND NOWAK, JJ.
535 CA 24-00513
v
META PLATFORMS, INC., FORMERLY KNOWN AS FACEBOOK, INC., SNAP, INC., ALPHABET, INC., GOOGLE, LLC, YOUTUBE, LLC, DISCORD, INC., AMAZON.COM, INC., 4CHAN COMMUNITY SUPPORT, LLC, REDDIT, INC., DEFENDANTS-APPELLANTS, ET AL., DEFENDANTS. (APPEAL NO. 1.)
ORRICK, HERRINGTON & SUTCLIFFE LLP, WASHINGTON, D.C. (ERIC A. SHUMSKY, ADMITTED PRO HAC VICE, OF COUNSEL), WILSON SONSINI GOODRICH & ROSATI, P.C., NEW YORK CITY, WEBSTER SZANYI LLP, BUFFALO, AND PERKINS COIE LLP, NEW YORK CITY, FOR DEFENDANTS-APPELLANTS META PLATFORMS, INC., FORMERLY KNOWN AS FACEBOOK, INC., ALPHABET, INC., GOOGLE, LLC, YOUTUBE, LLC, AND REDDIT, INC.
MORRISON & FOERSTER LLP, NEW YORK CITY (JOSEPH R. PALMORE OF COUNSEL), FOR DEFENDANT-APPELLANT DISCORD, INC.
HUESTON HENNIGAN LLP, NEW YORK CITY (MOEZ M. KABA OF COUNSEL), AND GIBSON, MCASKILL & CROSBY, LLP, BUFFALO, FOR DEFENDANT-APPELLANT AMAZON.COM, INC.
HARRIS BEACH MURTHA CULLINA PLLC, NEW YORK CITY (LISA ANNE LECOURS OF COUNSEL), FOR DEFENDANT-APPELLANT 4CHAN COMMUNITY SUPPORT, LLC.
O'MELVENY & MYERS LLP, NEW YORK CITY (JONATHAN P. SCHNELLER OF COUNSEL), AND HAGERTY & BRADY, BUFFALO, FOR DEFENDANT-APPELLANT SNAP, INC.
THE LAW OFFICE OF JOHN V. ELMORE, P.C., BUFFALO (JOHN V. ELMORE OF COUNSEL), AND SOCIAL MEDIA VICTIMS LAW CENTER PLLC, SEATTLE, WASHINGTON, FOR PLAINTIFFS-RESPONDENTS.
HOGAN LOVELLS US LLP, NEW YORK CITY (JASMEET K. AHUJA OF COUNSEL), FOR CHAMBER OF PROGRESS, ENGINE ADVOCACY, AND WIKIMEDIA FOUNDATION, AMICI CURIAE.
HOLWELL SHUSTER & GOLDBERG LLP, NEW YORK CITY (DANIEL M. SULLIVAN OF COUNSEL), FOR PRODUCTS LIABILITY ADVISORY COUNCIL, AMICUS CURIAE.
Appeals from an order of the Supreme Court, Erie County (Paula L. Feroleto, J.), entered March 18, 2024. The order denied the motions of defendants-appellants to dismiss the complaint against them.
It is hereby ORDERED that the order so appealed from is reversed on the law without costs, the motions are granted and the complaint is dismissed against defendants-appellants.
Opinion by Lindley, J.P.:
These consolidated appeals arise from four separate actions commenced in response to the mass shooting on May 14, 2022 at a grocery store in a predominantly Black neighborhood in Buffalo. The shooter, a teenager from the Southern Tier of New York, spent months planning the attack and was motivated by the Great Replacement Theory, which posits that white populations in Western countries are being deliberately replaced by non-white immigrants and people of color. After driving more than 200 miles from his home to Buffalo, the shooter arrived at the store and opened fire on Black individuals in the parking lot and inside the store with a Bushmaster XM-15 semiautomatic rifle, killing 10 people and wounding three others.
The shooter fired approximately 60 rounds from high-capacity magazines attached to his rifle, upon which he had written several racist messages, including "Here's your reparations!" and "Buck status: Broken." Apprehended at the scene, the shooter was charged with multiple felonies in both state court and federal court, where prosecutors are seeking the death penalty. The shooter pleaded guilty in state court to 10 counts of intentional murder and has been sentenced to life in prison without the possibility of parole. As of this writing, the federal charges are still pending.
Plaintiffs in these civil actions are survivors of the attack and family members of the victims, while defendants include the shooter's parents and numerous other parties whose actions or inactions allegedly played a role in the shooting. We are concerned in these appeals only with plaintiffs' causes of action against the so-called "social media defendants," i.e., Meta Platforms, Inc., formerly known as Facebook (Facebook); Instagram LLC (Instagram); Snap, Inc. (Snap); Alphabet, Inc.; Google, LLC (Google); YouTube, LLC (YouTube); Discord, Inc. (Discord); Reddit, Inc.; Twitch Interactive, Inc. (Twitch); Amazon.com, Inc. (Amazon); and 4chan Community Support, LLC (4chan), all of which operate social media platforms that were used by the shooter at some point before or during the attack.
The complaints, amended complaint and second amended complaint (hereafter complaints) in these actions assert various tort causes of action against the social media defendants, including negligence, unjust enrichment and strict products liability based on defective design and failure to warn. According to plaintiffs, the social media platforms in question are defectively designed to include content-recommendation algorithms that fed a steady stream of racist and violent content to the shooter, who over time became motivated to kill Black people. Plaintiffs further allege that the content-recommendation algorithms addicted the shooter to the social media defendants' platforms, resulting in his isolation and radicalization, and that the platforms were designed to stimulate engagement by exploiting the neurological vulnerabilities of users like the shooter and thereby maximize profits.
Although plaintiffs recognize that some of the social media defendants—e.g., 4chan, Discord, Twitch and Snap—do not use content-recommendation algorithms, they nevertheless allege that the platforms of those defendants are designed with the same core defect contained in the platforms of the social media defendants that use such algorithms: namely, they are designed to be addictive. According to plaintiffs, the addictive features of the social media platforms include "badges," "streaks," "trophies," and "emojis" given to frequent users, thereby fueling engagement. The shooter's addiction to those platforms, the theory goes, ultimately caused him to commit mass murder.
The social media defendants moved to dismiss the complaints against them for failure to state a cause of action (see CPLR 3211 [a] [7]), contending, inter alia, that they are immune from liability under section 230 of the Communications Decency Act (section 230) (see 47 USC § 230 [c] [1], [2]) and the First Amendment of the Federal Constitution, applicable to the states through the Fourteenth Amendment. Supreme Court denied the relevant motions, leading to these appeals. We conclude that the complaints should be dismissed against the social media defendants.
Plaintiffs concede that, despite its abhorrent nature, the racist content consumed by the shooter on the Internet is constitutionally protected speech under the First Amendment, and that the social media defendants cannot be held liable for publishing such content. Plaintiffs further concede that, pursuant to section 230, the social media defendants cannot be held liable merely because the shooter was motivated by racist and violent third-party content published on their platforms. According to plaintiffs, however, the social media defendants are not entitled to protection under section 230 because the complaints seek to hold them liable as product designers, not as publishers of third-party content.
The motion court agreed with plaintiffs, but we do not. Accepting as true all of the facts alleged in the operative complaints, and according plaintiffs the benefit of every possible favorable inference (see Williams v Beemiller, Inc.), we conclude that the complaints fail to state a viable cause of action against the social media defendants.
As the United States Supreme Court has observed, the Internet is the most important place in society for the exchange of diverse viewpoints (see Packingham v North Carolina).
Section 230 provides, in pertinent part, that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 USC § 230 [c] [1]). It further provides that "[n]o provider or user of an interactive computer service shall be held liable on account of[:] (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)" (§ 230 [c] [2]).
"The primary purpose of the proposed legislation that ultimately resulted in the Communications Decency Act ('CDA') 'was to protect children from sexually explicit Internet content' . . . Section 230, though—added as an amendment to the CDA bill . . . —was enacted 'to maintain the robust nature of Internet communication and, accordingly, to keep government interference in the medium to a minimum' . . . Indeed, Congress stated in [s]ection 230 that '[i]t is the policy of the United States . . . (1) to promote the continued development of the Internet and other interactive computer services and other interactive media; [and] (2) to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation' " (Force v Facebook, Inc.).
"By its plain language, [section 230] creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service" (Zeran v America Online, Inc.).
With respect to state law claims, section 230 "protects from liability (1) a provider or user of an interactive computer service (2) whom a plaintiff seeks to treat, under a state law cause of action, as a publisher or speaker (3) of information provided by another information content provider" (Barnes v Yahoo!, Inc.).
Here, it is undisputed that the social media defendants qualify as providers of interactive computer services. The dispositive question is whether plaintiffs seek to hold the social media defendants liable as publishers or speakers of information provided by other content providers. Based on our reading of the complaints, we conclude that plaintiffs seek to hold the social media defendants liable as publishers of third-party content. We further conclude that the content-recommendation algorithms used by some of the social media defendants do not deprive those defendants of their status as publishers of third-party content. It follows that plaintiffs' tort causes of action against the social media defendants are barred by section 230.
Even assuming, arguendo, that the social media defendants' platforms are products (as opposed to services), and further assuming that they are inherently dangerous, which is a rather large assumption indeed, we conclude that plaintiffs' strict products liability causes of action against the social media defendants fail because they are based on the nature of content posted by third parties on the social media platforms. The immunity test established by Barnes focuses not on the name given to a cause of action but instead on "whether a plaintiff's 'theory of liability would treat a defendant as a publisher or speaker of third-party content' " (Calise v Meta Platforms, Inc.).
We are not persuaded by plaintiffs' assertion that the social media defendants' algorithms render their products defective, thus depriving them of section 230 protection. Our determination in that regard is consistent with Force, wherein the Second Circuit found no basis in law or logic for "concluding that an interactive computer service is not the 'publisher' of third-party information [within the meaning of section 230] when it uses tools such as algorithms that are designed to match that information with a consumer's interests."
The appeals at hand are on all fours with the Fourth Circuit's decision in M.P., which arose from the killing of nine Black people by a white supremacist at Mother Emanuel AME Church in Charleston, South Carolina. The plaintiff in that action sued Facebook, among other parties, alleging that it was civilly liable for the shooter's crimes. As here, the complaint asserted a cause of action for strict products liability and alleged that the shooter was "radicalized online by white supremacist propaganda that was directed to him" by Facebook (M.P.).
Citing Force, the Fourth Circuit in M.P. affirmed the dismissal of the complaint based on section 230, reasoning that "[d]ecisions about whether and how to display certain information provided by third parties are traditional editorial functions of publishers, notwithstanding the various methods they use in performing that task" (id. at 526). The court likened "Facebook's use of its algorithm to arrange and sort racist and hate-driven content" to newspaper editors deciding which articles to place on front pages and which opinion pieces to place opposite the editorial page, all of which "are integral to the function of publishing" (id. at 525).
Recognizing that the rationale of M.P. compels dismissal of their strict products liability causes of action against the social media defendants, plaintiffs ask us instead to follow Anderson v TikTok, Inc., wherein the Third Circuit held that TikTok's algorithmic recommendation of third-party videos constituted TikTok's own first-party speech falling outside the protection of section 230.
We do not find Anderson to be persuasive authority. If content-recommendation algorithms transform third-party content into first-party content, as the Anderson court determined, then Internet service providers using content-recommendation algorithms (including Facebook, Instagram, YouTube, TikTok, Google, and X) would be subject to liability for every defamatory statement made by third parties on their platforms. That would be contrary to the express purpose of section 230, which was to legislatively overrule Stratton Oakmont, Inc. v Prodigy Servs. Co., a New York trial court decision that treated an online service provider as the publisher of defamatory third-party postings because the provider exercised editorial control over its message boards.
Although Anderson was not a defamation case, its reasoning applies with equal force to all tort causes of action, including defamation. One cannot plausibly conclude that section 230 provides immunity for some tort claims but not others based on the same underlying factual allegations. There is no strict products liability exception to section 230.
In any event, even if we were to follow Anderson and conclude that the social media defendants engaged in first-party speech by recommending to the shooter racist content posted by third parties, it stands to reason that such speech ("expressive activity" as described by the Third Circuit) is protected by the First Amendment under Moody v NetChoice, LLC (Moody). While TikTok did not seek protection under the First Amendment in Anderson, the social media defendants here raise the First Amendment as a defense in addition to section 230.
In Moody, the Supreme Court determined that content-moderation algorithms result in expressive activity protected by the First Amendment.
Although it is true, as plaintiffs point out, that the First Amendment views expressed in Moody are nonbinding dicta, it is recent dicta from a supermajority of Justices of the United States Supreme Court, which has final say on how the First Amendment is interpreted. That is not the type of dicta we are inclined to ignore even if we were to disagree with its reasoning, which we do not.
As the Center for Democracy and Technology explains in its amicus brief, content-recommendation algorithms are simply tools used by social media companies "to accomplish a traditional publishing function, made necessary by the scale at which providers operate." Every method of displaying content involves editorial judgments regarding which content to display and where on the platforms. Given the immense volume of content on the Internet, it is virtually impossible to display content without ranking it in some fashion, and the ranking represents an editorial judgment of which content a user may wish to see first. All of this editorial activity, accomplished by the social media defendants' algorithms, is constitutionally protected speech.
Thus, the interplay between section 230 and the First Amendment gives rise to a "Heads I Win, Tails You Lose" proposition in favor of the social media defendants. Either the social media defendants are immune from civil liability under section 230 on the theory that their content-recommendation algorithms do not deprive them of their status as publishers of third-party content, per Force and M.P., or they are protected by the First Amendment on the theory that the algorithms create first-party content, per Anderson. Of course, section 230 immunity and First Amendment protection are not mutually exclusive, and in our view the social media defendants are protected by both. Under no circumstances are they protected by neither.
Plaintiffs' reliance on Lemmon v Snap, Inc. is misplaced. There, the Ninth Circuit noted that the plaintiffs' negligent design claim did not implicate section 230 (c) (1) because it did not seek "to fault Snap for publishing other Snapchat-user content (e.g., snaps of friends speeding dangerously) that may have incentivized . . . dangerous behavior" (id. at 1093 n 4). Here, in contrast, plaintiffs seek to do just that, i.e., to hold the social media defendants liable for content posted by other people that allegedly incentivized dangerous behavior by the shooter.
With respect to the applicability of section 230, our dissenting colleagues agree with Chief Judge Katzmann's dissent in Force, which focuses primarily on Facebook's algorithm that suggests friends, groups and events to users, i.e., a "friend- and content-suggestion" algorithm.
To the extent that Chief Judge Katzmann concluded that Facebook's content-recommendation algorithms similarly deprived Facebook of its status as a publisher of third-party content within the meaning of section 230, we believe that his analysis, if applied here, would ipso facto expose most social media companies to unlimited liability in defamation cases. That is the same problem inherent in the Third Circuit's first-party/third-party speech analysis in Anderson. Again, a social media company using content-recommendation algorithms cannot be deemed a publisher of third-party content for purposes of libel and slander claims (thus triggering section 230 immunity) and not at the same time a publisher of third-party content for strict products liability claims.
In the broader context, the dissenters accept plaintiffs' assertion that these actions are about the shooter's "addiction" to social media platforms, wholly unrelated to third-party speech or content. We come to a different conclusion. As we read them, the complaints, from beginning to end, explicitly seek to hold the social media defendants liable for the racist and violent content displayed to the shooter on the various social media platforms. Plaintiffs do not allege, and could not plausibly allege, that the shooter would have murdered Black people had he become addicted to anodyne content, such as cooking tutorials or cat videos.[FN1]
Instead, plaintiffs' theory of harm rests on the premise that the platforms of the social media defendants were defectively designed because they failed to filter, prioritize, or label content in a manner that would have prevented the shooter's radicalization. Given that plaintiffs' allegations depend on the content of the material the shooter consumed on the Internet, their tort causes of action against the social media defendants are "inextricably intertwined" with the social media defendants' role as publishers of third-party content (M.P.).
If plaintiffs' causes of action were based merely on the shooter's addiction to social media, which they are not, they would fail on causation grounds. It cannot reasonably be concluded that the allegedly addictive features of the social media platforms (regardless of content) caused the shooter to commit mass murder, especially considering the intervening criminal acts by the shooter, which were "not foreseeable in the normal course of events" and therefore broke the causal chain (Tennant v Lascelle).
At stake in these appeals is the scope of protection afforded by section 230, which Congress enacted to combat "the threat that tort-based lawsuits pose to freedom of speech [on the] Internet" (Shiamili).
We believe that the motion court's ruling, if allowed to stand, would gut the immunity provisions of section 230 and result in the end of the Internet as we know it. This is so because Internet service providers who use algorithms on their platforms would be subject to liability for all tort causes of action, including defamation. Because social media companies that sort and display content would be subject to liability for every untruthful statement made on their platforms, the Internet would over time devolve into mere message boards.
Although the motion court stated that the social media defendants' section 230 arguments "may ultimately prove true," dismissal at the pleading stage is essential to protect the free expression that section 230 was enacted to safeguard (see Nemet Chevrolet, Ltd.).
While everyone of goodwill condemns the shooter's actions and the vile content that motivated him to murder Black people simply because of the color of their skin, there is in our view no reasonable interpretation of section 230 that allows plaintiffs' tort causes of action to survive as against the social media defendants, who are entitled to immunity under the statute as the publishers of third-party content on their platforms.
We therefore reverse the orders in appeal Nos. 1, 3, 5, and 6. Inasmuch as the complaint in appeal No. 2 and the amended complaint in appeal No. 4 were superseded by an amended complaint and a second amended complaint, respectively, the appeals in appeal Nos. 2 and 4 must be dismissed (see Carcone v Noon [appeal No. 1]).
All concur except Bannister and Nowak, JJ., who dissent and vote to affirm in the following dissenting opinion: "[W]hy do I always have trouble putting my phone down at night? . . . It's 2 in the morning . . . I should be sleeping . . . I'm a literal addict to my phone[.] I can't stop cons[u]ming." These are the words of a teenager who, on May 14, 2022, drove more than 200 miles to Buffalo to shoot and kill 10 people and injure three more at a grocery store in the heart of a predominantly Black community.
Plaintiffs in these consolidated appeals allege that the shooter did so only after years of exposure to the online platforms of the so-called "social media defendants"—Meta Platforms, Inc., formerly known as Facebook, Inc.; Instagram LLC; Snap, Inc.; Alphabet, Inc.; Google, LLC; YouTube, LLC; Discord, Inc.; Reddit, Inc.; Twitch Interactive, Inc.; Amazon.com, Inc.; and 4chan Community Support, LLC (collectively, defendants)—platforms that, according to plaintiffs, were defectively designed. Plaintiffs allege that defendants intentionally designed their platforms to be addictive, failed to provide basic safeguards for those most susceptible to addiction—minors—and failed to warn the public of the risk of addiction. According to plaintiffs, defendants' platforms did precisely what they were designed to do—they targeted and addicted minor users to maximize their engagement. Plaintiffs allege that the shooter became more isolated and reclusive as a result of his social media use and addiction, and that his addiction, combined with his age and gender, left him particularly susceptible to radicalization and violence—culminating in the tragedy in Buffalo. For the purposes of defendants' CPLR 3211 (a) (7) motions to dismiss, we must "accept the facts as alleged in the [operative] complaint[s] as true, accord plaintiffs the benefit of every possible favorable inference, and determine only whether the facts as alleged fit within any cognizable legal theory" (Leon v Martinez).
Little assumption is required in this case, however. The shooter all but admitted those facts to be true.
Inasmuch as plaintiffs' collective strict products liability causes of action predicate liability on the allegedly defective design of the platforms themselves—and the concomitant failure to warn of the risks of addiction in young people—plaintiffs do not seek to hold defendants liable for any third-party content; thus, we conclude that those causes of action do not implicate section 230 of the Communications Decency Act (section 230) or the First Amendment. Even if section 230 were implicated, however, we conclude that the use of an algorithm to push disparate content to individual end users constitutes the "creation or development of information," which could subject defendants to liability (47 USC § 230 [f] [3]), and is not the type of editorial or publishing decision that would fall within the ambit of section 230 (see Shiamili v Real Estate Group of N.Y., Inc.).
At the outset, we reject the foundation upon which the majority's opinion is built—that plaintiffs' causes of action necessarily seek to hold defendants responsible for radicalizing the shooter given their status "as the publisher[s] or speaker[s] of any information provided by another information content provider" (47 USC § 230 [c] [1]), i.e., that plaintiffs only seek to hold defendants liable for the third-party content the shooter viewed. If that were the only allegation raised by plaintiffs, we would agree with the majority. But it is not.
The operative complaints, when read as a whole, as they must be,[FN3] also allege that defendants' platforms are "products" subject to strict products liability that are addictive—not based upon the third-party content they show but because of the inherent nature of their design. Specifically, plaintiffs allege that defendants' platforms: "prey upon young users' desire for validation and need for social comparison," "lack effective mechanisms . . . to restrict minors' usage of the product," have "inadequate parental controls" and age verification tools that facilitate unfettered usage of the products, and "intentionally place[ ] obstacles to discourage cessation" of the applications. Plaintiffs allege that the various platforms "send push notifications and messages throughout the night, prompting children to re-engage with the apps when they should be sleeping." They further allege that certain products "autoplay" video without requiring the user to affirmatively click on the next video, while others permit the user to "infinite[ly]" scroll, creating a constant stream of media that is difficult to close or leave.
Plaintiffs assert that defendants had a duty to warn the public at large and, in particular, minor users of their platforms and their parents, of the addictive nature of the platforms. They thus claim that defendants could have utilized reasonable alternate designs, including: eliminating "autoplay" features or creating a "beginning and end to a user's '[f]eed' " to prevent a user from being able to "infinite[ly]" scroll; providing options for users to self-limit time used on a platform; providing effective parental controls; utilizing session time notifications or otherwise removing push notifications that lure the user to re-engage with the application; and "[r]emoving barriers to the deactivation and deletion of accounts." These allegations do not seek to hold defendants liable for any third-party content (see 47 USC § 230 [c] [1]); rather, they seek to hold defendants liable for failing to provide basic safeguards to reasonably limit the addictive features of their social media platforms, particularly with respect to minor users. Indeed, other products liability actions similarly premised upon defective designs or failures to warn have been permitted to proceed past the motion to dismiss phase. To that end, the attorneys general of more than 30 states—including the New York Attorney General—are currently embroiled in ongoing, multi-district litigation against several of the same defendants here for virtually identical strict products liability claims under substantive New York law (see In re Social Media Adolescent Addiction/Personal Injury Prods. Liab. Litig., 702 F Supp 3d 809, 817, 836-854 [ND Cal 2023])—claims that the District Court concluded were not precluded by section 230 or the First Amendment.
For instance, in Lemmon v Snap, Inc., the Ninth Circuit held that section 230 did not bar the plaintiffs' negligent design claim arising from Snapchat's "speed filter" feature, because that claim faulted the design of the platform itself rather than any third-party content.
Similarly, in A.M. v Omegle.com, LLC (614 F Supp 3d 814 [D Or 2022]), the District Court held that section 230 did not preclude the plaintiff's products liability action alleging that the defendant's design choices were defective because they permitted minor users to match with adults, noting that the plaintiff was "not claiming that [the defendant] needed to review, edit, or withdraw any third-party content" to remediate its defective design (id. at 819). Just as the defendant in A.M. "could have satisfied its alleged obligation to [p]laintiff by designing its product differently" (id.), plaintiffs here allege that defendants could have designed their platforms to prevent addiction in any number of ways that do not implicate third-party content, such as: restricting minors' access to the platforms through age verification tools; instituting more robust parental controls; removing push notifications; utilizing session time notifications; and otherwise removing barriers to the deactivation and deletion of accounts.
In our view, the majority's reliance on M.P. By & Through Pinckney v Meta Platforms, Inc. is misplaced, inasmuch as the plaintiff in that case sought to hold Facebook liable for the white supremacist content directed to the shooter, not for content-neutral design features of the platform itself.
In short, we agree with the reasoning set forth in Lemmon, A.M., and In re Social Media Adolescent Addiction/Personal Injury Prods. Liab. Litig. and conclude that social media platforms are "products" subject to strict products liability in New York. "[W]hen considering whether strict products liability attaches, the question of whether something is a product is often assumed; none of our strict products liability case law provides a clear definition of a 'product' " (Matter of Eighth Jud. Dist. Asbestos Litig. [Terwilliger]).
In general, the Third Restatement defines a product as "tangible personal property distributed commercially for use or consumption" (Restatement [Third] of Torts: Products Liability § 19 [a]). Here, defendants largely urge that their social media platforms are not products because they are not tangible goods. We disagree. The Third Restatement explains that even intangible items "are products when the context of their distribution and use is sufficiently analogous to the distribution and use of tangible personal property that it is appropriate to apply the rules stated in th[e] Restatement" (id.).
That common-sense approach has been echoed by the Court of Appeals, which has recognized that the analysis of whether something is a product is inextricably "[i]ntertwined with . . . the more central question of whether the defendant manufacturer owes a duty to warn" (Terwilliger).
We recognize that tort liability is not open-ended (see generally Espinal v Melville Snow Contrs.).
Though we conclude that plaintiffs' products liability allegations sounding in design defect and failure to warn generally do not implicate section 230, to the extent that plaintiffs claim that defendants' products were defectively designed because they do not create a beginning and end to a user's "feed" or "autoplay" videos, those allegations at least tangentially involve third-party content, and thus a discussion of section 230 is required.
As the majority notes, section 230 (c) (1) states that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 USC § 230 [c] [1]). An "information content provider" is defined by the statute to mean "any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service" (§ 230 [f] [3] [emphasis added]).
As the Court of Appeals recognized in Shiamili, "[s]ervice providers are only entitled to . . . immunity . . . where the content at issue is provided by 'another information content provider' . . . It follows that if a defendant service provider is itself the 'content provider,' it is not shielded from liability."
To that end, we conclude that, as Chief Judge Katzmann stated in his dissent in Force v Facebook, Inc., it "strains the English language to say that" defendants here, in targeting and recommending particular content to particular users, are merely acting as "the publisher[s] of . . . information provided by another information content provider" (id. at 76-77 [Katzmann, Ch. J., concurring in part and dissenting in part], quoting 47 USC § 230 [c] [1]), and not developing first-party content in their own right. Relatedly, we conclude that the targeted dissemination of particular information to individual end users does not amount to a traditional editorial or publishing decision that would fall within the ambit of section 230. In that regard, the Court of Appeals' analysis in Shiamili addressed the publication of a publicly available blog post. Clearly, that is a traditional editorial or publication decision—no different than the New York Times choosing which editorials should appear in the most recent edition of its publication.
The conduct at issue in this case is far from any editorial or publishing decision; defendants utilize functions, such as machine learning algorithms, to push specific content on specific individuals based upon what is most apt to keep those specific users on the platform. Some receive cooking videos or videos of puppies, while others receive white nationalist vitriol, each group entirely ignorant of the content foisted upon the other. Such conduct does not "maintain the robust nature of Internet communication" or "preserve the vibrant and competitive free market that presently exists for the Internet" contemplated by the protections of immunity (Force).
The majority concludes, based upon Moody v NetChoice, LLC, that defendants' use of content-recommendation algorithms constitutes expressive activity protected by the First Amendment.
The effect of such a vast expansion of First Amendment jurisprudence cannot be overstated. Taken to its furthest extent, the majority essentially concludes that every defendant would be immune from all state law tort claims involving speech or expressive activity. If the majority is correct, there could never be state tort liability for failing to warn of the potential risks associated with a product, for insisting upon a warning would be state-compelled speech in violation of the First Amendment. Nor could there ever be liability for failing to obtain a patient's informed consent in a medical malpractice action, for the defendant physician's explanation of the procedure, its alternatives, and the reasonably foreseeable risks and benefits of each proposed course of action necessarily implicates the defendant physician's First Amendment rights. That simply cannot be the case.
Finally, inasmuch as proximate causation "is for the finder of fact to determine" (Derdiarian v Felix Contr. Corp.), it cannot be said as a matter of law that the shooter's intervening criminal acts severed the causal chain, and we would therefore affirm the order.
Entered: July 25, 2025
Ann Dillon Flynn
Clerk of the Court
Footnote 1: We note that plaintiffs' addiction-only theory, even if valid, would not apply to all social media defendants. For instance, plaintiffs do not allege that the shooter was addicted to the livestreaming services of Twitch and Amazon, which he used during the shooting and only a few times before.
Footnote 2: The social media addiction cases cited by plaintiffs involve psychological harm allegedly caused to users, not, as here, harm caused by addicted users to third parties (see e.g. In re Social Media Adolescent Addiction/Personal Injury Prods. Liab. Litig., 702 F Supp 3d 809, 836-854 [ND Cal 2023]).
Footnote 3: To the extent that any one operative complaint does not set forth the entirety of the factual allegations listed herein, that is not dispositive—it is axiomatic that, in the context of a CPLR 3211 (a) (7) motion to dismiss, "the criterion is whether the proponent of the pleading has a cause of action, not whether [the proponent] has stated one" (Guggenheimer v Ginzburg).
