
Patterson v Meta Platforms, Inc.
2025 NY Slip Op 04438
Decided on July 25, 2025
Appellate Division, Fourth Department
Lindley
Published by New York State Law Reporting Bureau pursuant to Judiciary Law § 431.
This opinion is uncorrected and subject to revision before publication in the Official Reports.


Decided on July 25, 2025
SUPREME COURT OF THE STATE OF NEW YORK
Appellate Division, Fourth Judicial Department
PRESENT: LINDLEY, J.P., CURRAN, BANNISTER, SMITH, AND NOWAK, JJ.

535 CA 24-00513

[*1]DIONA PATTERSON, INDIVIDUALLY, AND AS ADMINISTRATOR OF THE ESTATE OF HEYWARD PATTERSON, ET AL., PLAINTIFFS-RESPONDENTS,

v

META PLATFORMS, INC., FORMERLY KNOWN AS FACEBOOK, INC., SNAP, INC., ALPHABET, INC., GOOGLE, LLC, YOUTUBE, LLC, DISCORD, INC., AMAZON.COM, INC., 4CHAN COMMUNITY SUPPORT, LLC, REDDIT, INC., DEFENDANTS-APPELLANTS, ET AL., DEFENDANTS. (APPEAL NO. 1.)




ORRICK, HERRINGTON & SUTCLIFFE LLP, WASHINGTON D.C. (ERIC A. SHUMSKY, ADMITTED PRO HAC VICE, OF COUNSEL), WILSON SONSINI GOODRICH & ROSATI, P.C., NEW YORK CITY, WEBSTER SZANYI LLP, BUFFALO, AND PERKINS COIE LLP, NEW YORK CITY, FOR DEFENDANTS-APPELLANTS META PLATFORMS, INC., FORMERLY KNOWN AS FACEBOOK, INC., ALPHABET, INC., GOOGLE, LLC, YOUTUBE, LLC, AND REDDIT, INC.

MORRISON & FOERSTER LLP, NEW YORK CITY (JOSEPH R. PALMORE OF COUNSEL), FOR DEFENDANT-APPELLANT DISCORD, INC.

HUESTON HENNIGAN LLP, NEW YORK CITY (MOEZ M. KABA OF COUNSEL), AND GIBSON, MCASKILL & CROSBY, LLP, BUFFALO, FOR DEFENDANT-APPELLANT AMAZON.COM, INC.

HARRIS BEACH MURTHA CULLINA PLLC, NEW YORK CITY (LISA ANNE LECOURS OF COUNSEL), FOR DEFENDANT-APPELLANT 4CHAN COMMUNITY SUPPORT, LLC.

O'MELVENY & MYERS LLP, NEW YORK CITY (JONATHAN P. SCHNELLER OF COUNSEL), AND HAGERTY & BRADY, BUFFALO, FOR DEFENDANT-APPELLANT SNAP, INC.

THE LAW OFFICE OF JOHN V. ELMORE, P.C., BUFFALO (JOHN V. ELMORE OF COUNSEL), AND SOCIAL MEDIA VICTIMS LAW CENTER PLLC, SEATTLE, WASHINGTON, FOR PLAINTIFFS-RESPONDENTS.

HOGAN LOVELLS US LLP, NEW YORK CITY (JASMEET K. AHUJA OF COUNSEL), FOR CHAMBER OF PROGRESS, ENGINE ADVOCACY, AND WIKIMEDIA FOUNDATION, AMICUS CURIAE.

HOLWELL SHUSTER & GOLDBERG LLP, NEW YORK CITY (DANIEL M. SULLIVAN OF COUNSEL), FOR PRODUCTS LIABILITY ADVISORY COUNCIL, AMICUS CURIAE.




Lindley

Appeals from an order of the Supreme Court, Erie County (Paula L. Feroleto, J.), entered March 18, 2024. The order denied the motions of defendants-appellants to [*2]dismiss the complaint against them.

It is hereby ORDERED that the order so appealed from is reversed on the law without costs, the motions are granted and the complaint is dismissed against defendants-appellants.

Opinion by Lindley, J.P.:

These consolidated appeals arise from four separate actions commenced in response to the mass shooting on May 14, 2022 at a grocery store in a predominately Black neighborhood in Buffalo. The shooter, a teenager from the Southern Tier of New York, spent months planning the attack and was motivated by the Great Replacement Theory, which posits that white populations in Western countries are being deliberately replaced by non-white immigrants and people of color. After driving more than 200 miles from his home to Buffalo, the shooter arrived at the store and opened fire on Black individuals in the parking lot and inside the store with a Bushmaster XM-15 semiautomatic rifle, killing 10 people and wounding three others.

The shooter fired approximately 60 rounds from high-capacity magazines attached to his rifle, upon which he had written several racist messages, including "Here's your reparations!" and "Buck status: Broken." Apprehended at the scene, the shooter was charged with multiple felonies in both state court and federal court, where prosecutors are seeking the death penalty. The shooter pleaded guilty in state court to 10 counts of intentional murder and has been sentenced to life in prison without the possibility of parole. As of this writing, the federal charges are still pending.

Plaintiffs in these civil actions are survivors of the attack and family members of the victims, while defendants include the shooter's parents and numerous other parties whose actions or inactions allegedly played a role in the shooting. We are concerned in these appeals only with plaintiffs' causes of action against the so-called "social media defendants," i.e., Meta Platforms, Inc., formerly known as Facebook (Facebook); Instagram LLC (Instagram); Snap, Inc. (Snap); Alphabet, Inc.; Google, LLC (Google); YouTube, LLC (YouTube); Discord, Inc. (Discord); Reddit, Inc.; Twitch Interactive, Inc. (Twitch); Amazon.com, Inc. (Amazon); and 4chan Community Support, LLC (4chan), all of whom have social media platforms that were used by the shooter at some point before or during the attack.

The complaints, amended complaint and second amended complaint (hereafter complaints) in these actions assert various tort causes of action against the social media defendants, including negligence, unjust enrichment and strict products liability based on defective design and failure to warn. According to plaintiffs, the social media platforms in question are defectively designed to include content-recommendation algorithms that fed a steady stream of racist and violent content to the shooter, who over time became motivated to kill Black people. Plaintiffs further allege that the content-recommendation algorithms addicted the shooter to the social media defendants' platforms, resulting in his isolation and radicalization, and that the platforms were designed to stimulate engagement by exploiting the neurological vulnerabilities of users like the shooter and thereby maximize profits.

Although plaintiffs recognize that some of the social media defendants—e.g., 4chan, Discord, Twitch and Snap—do not use content-recommendation algorithms, they nevertheless allege that the platforms of those defendants are designed with the same core defect contained in the platforms of the social media defendants that use such algorithms: namely, they are designed to be addictive. According to plaintiffs, the addictive features of the social media platforms include "badges," "streaks," "trophies," and "emojis" given to frequent users, thereby fueling engagement. The shooter's addiction to those platforms, the theory goes, ultimately caused him to commit mass murder.

The social media defendants moved to dismiss the complaints against them for failure to state a cause of action (see CPLR 3211 [a] [7]), contending, inter alia, that they are immune from liability under section 230 of the Communications Decency Act (section 230) (see 47 USC § 230 [c] [1], [2]) and the First Amendment of the Federal Constitution, applicable to the states through the Fourteenth Amendment. Supreme Court denied the relevant motions, leading to these appeals. We conclude that the complaints should be dismissed against the social media defendants.

Plaintiffs concede that, despite its abhorrent nature, the racist content consumed by the shooter on the Internet is constitutionally protected speech under the First Amendment, and that the social media defendants cannot be held liable for publishing such content. Plaintiffs further concede that, pursuant to section 230, the social media defendants cannot be held liable merely because the shooter was motivated by racist and violent third-party content published on their platforms. According to plaintiffs, however, the social media defendants are not entitled to protection under section 230 because the complaints seek to hold them liable as product designers, not as publishers of third-party content.

The motion court agreed with plaintiffs, but we do not. Accepting as true all of the facts alleged in the operative complaints, and according plaintiffs the benefit of every possible favorable inference (see Williams v Beemiller, Inc., 100 AD3d 143, 148 [4th Dept 2012], amended on rearg 103 AD3d 1191 [4th Dept 2013]; see generally Leon v Martinez, 84 NY2d 83, 87-88 [1994]), we conclude that plaintiffs do not have a valid cause of action against the social media defendants (see CPLR 3211 [a] [7]). More specifically, we hold that section 230 affords immunity to the social media defendants from plaintiffs' tort causes of action against them. In our view, a contrary ruling would be inconsistent with the language of section 230 and eviscerate the expressed purpose of the statute.

As the United States Supreme Court has observed, the Internet is the most important place in society for the exchange of diverse viewpoints (see Packingham v North Carolina, 582 US 98, 104 [2017]). The Internet is the modern public square, containing content "as diverse as human thought" (Reno v American Civ. Liberties Union, 521 US 844, 852 [1997] [internal quotation marks omitted]), and section 230 is the scaffolding upon which the Internet is built. Enacted by Congress in 1996 to "preserve the vibrant and competitive free market" for the Internet (47 USC § 230 [b] [2]), the statute immunizes providers of interactive computer services from civil lawsuits arising from user-generated content published on their platforms.

Section 230 provides, in pertinent part, that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 USC § 230 [c] [1]). It further provides that "[n]o provider or user of an interactive computer service shall be held liable on account of[:] (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)" (§ 230 [c] [2]).

"The primary purpose of the proposed legislation that ultimately resulted in the Communications Decency Act ('CDA') 'was to protect children from sexually explicit Internet content' . . . Section 230, though—added as an amendment to the CDA bill . . . —was enacted 'to maintain the robust nature of Internet communication and, accordingly, to keep government interference in the medium to a minimum' . . . Indeed, Congress stated in [s]ection 230 that '[i]t is the policy of the United States . . . (1) to promote the continued development of the Internet and other interactive computer services and other interactive media; [and] (2) to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation' " (Force v Facebook, Inc., 934 F3d 53, 63 [2d Cir 2019], cert denied — US &mdash, 140 S Ct 2761 [2020]; see 47 USC § 230 [b] [1], [2]). "In light of Congress's objectives, the [federal circuit courts] are in general agreement that the text of [s]ection 230 (c) (1) should be construed broadly in favor of immunity" (Force, 934 F3d at 64; see Jane Doe No. 1 v Backpage.com, LLC, 817 F3d 12, 18 [1st Cir 2016], cert denied 580 US 1083 [2017]; Almeida v Amazon.com, Inc., 456 F3d 1316, 1320-1321 [11th Cir 2006]).

"By its plain language, [section 230] creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service" (Zeran v America Online, Inc., 129 F3d 327, 330 [4th Cir 1997], cert denied 524 US 937 [1998]; see Force, 934 F3d at 63-64; see also Shiamili v Real Estate Group of N.Y., Inc., 17 NY3d 281, 289 [2011]; M.P. By & Through Pinckney v Meta Platforms, Inc., 127 F4th 516, 523 [4th Cir 2025] [M.P.]). If applicable, section 230 immunity should be applied "at the earliest possible stage of the case" (Nemet Chevrolet, Ltd. v Consumeraffairs.com, Inc., 591 F3d 250, 255 [4th Cir 2009]; see Word of God Fellowship, Inc. v Vimeo, Inc., 205 AD3d 23, 29 [1st Dept [*3]2022], lv denied 38 NY3d 912 [2022], cert denied — US &mdash, 143 S Ct 746 [2023]).

With respect to state law claims, section 230 "protects from liability (1) a provider or user of an interactive computer service (2) whom a plaintiff seeks to treat, under a state law cause of action, as a publisher or speaker (3) of information provided by another information content provider" (Barnes v Yahoo!, Inc., 570 F3d 1096, 1100-1101 [9th Cir 2009]; see Shiamili, 17 NY3d at 286-287).

Here, it is undisputed that the social media defendants qualify as providers of interactive computer services. The dispositive question is whether plaintiffs seek to hold the social media defendants liable as publishers or speakers of information provided by other content providers. Based on our reading of the complaints, we conclude that plaintiffs seek to hold the social media defendants liable as publishers of third-party content. We further conclude that the content-recommendation algorithms used by some of the social media defendants do not deprive those defendants of their status as publishers of third-party content. It follows that plaintiffs' tort causes of action against the social media defendants are barred by section 230.

Even assuming, arguendo, that the social media defendants' platforms are products (as opposed to services), and further assuming that they are inherently dangerous, which is a rather large assumption indeed, we conclude that plaintiffs' strict products liability causes of action against the social media defendants fail because they are based on the nature of content posted by third parties on the social media platforms. The immunity test established by Barnes focuses not on the name given to a cause of action but instead on "whether a plaintiff's 'theory of liability would treat a defendant as a publisher or speaker of third-party content' " (Calise v Meta Platforms, Inc., 103 F4th 732, 740 [9th Cir 2024]; see Federal Trade Commn. v LeadClick Media, LLC, 838 F3d 158, 175 [2d Cir 2016]).

We are not persuaded by plaintiffs' assertion that the social media defendants' algorithms render their products defective, thus depriving them of section 230 protection. Our determination in that regard is consistent with Force, wherein the Second Circuit found no basis in law or logic for "concluding that an interactive computer service is not the 'publisher' of third-party information [within the meaning of section 230] when it uses tools such as algorithms that are designed to match that information with a consumer's interests" (934 F3d at 66). The court reasoned that "[m]erely arranging and displaying others' content to users of Facebook through such algorithms—even if the content is not actively sought by those users—is not enough to hold Facebook responsible as the 'develop[er]' or 'creat[or]' of that content" (id. at 70). "Plaintiffs' suggestion that publishers must have no role in organizing or distributing third-party content in order to avoid 'develop[ing]' that content is both ungrounded in the text of [s]ection 230 and contrary to its purpose" (id. at 70-71). Applied here, the reasoning of Force, which we find persuasive, compels dismissal of the tort causes of action premised on harm caused by the social media defendants' algorithms.

The appeals at hand are on all fours with M.P., which arose from the killing of nine Black people by a white supremacist at Mother Emanuel AME Church in Charleston, South Carolina. The plaintiff in that action sued Facebook, among other parties, alleging that it was civilly liable for the shooter's crimes. As here, the complaint asserted a cause of action for strict products liability and alleged that the shooter was "radicalized online by white supremacist propaganda that was directed to him" by Facebook (M.P., 127 F4th at 521 [internal quotation marks omitted]). The plaintiff further alleged that Facebook's content-recommendation algorithms, along with its quest for user engagement and profits, turned the shooter into a dangerous racist who committed mass murder (see id. at 521-522).

Citing Force, the Fourth Circuit in M.P. affirmed the dismissal of the complaint based on section 230, reasoning that "[d]ecisions about whether and how to display certain information provided by third parties are traditional editorial functions of publishers, notwithstanding the various methods they use in performing that task" (id. at 526). The court likened "Facebook's use of its algorithm to arrange and sort racist and hate-driven content" to newspaper editors deciding which articles to place on front pages and which opinion pieces to place opposite the editorial page, all of which "are integral to the function of publishing" (id. at 525).

Recognizing that the rationale of M.P. compels dismissal of their strict products liability [*4]causes of action against the social media defendants, plaintiffs ask us instead to follow Anderson v TikTok, Inc. (116 F4th 180 [3d Cir 2024]), which is the only appellate authority supporting their position regarding the social media defendants' use of algorithms. In Anderson, TikTok, Inc. (TikTok), via its algorithm, recommended and promoted a "Blackout Challenge" video to the "For You Page" of the plaintiff's child, who watched the video and inadvertently killed herself while accepting the challenge (id. at 181 [internal quotation marks omitted]). Based on Moody v NetChoice, LLC (603 US 707 [2024]), the Third Circuit determined that TikTok's algorithm constituted "expressive activity" within the meaning of the First Amendment, thus rendering the video first-party speech by TikTok (Anderson, 116 F4th at 184). Because section 230 protects Internet service providers (including TikTok) from publishing third-party speech or content, the court concluded that TikTok was not immune from liability for the child's death (see id.).

We do not find Anderson to be persuasive authority. If content-recommendation algorithms transform third-party content into first-party content, as the Anderson court determined, then Internet service providers using content-recommendation algorithms (including Facebook, Instagram, YouTube, TikTok, Google, and X) would be subject to liability for every defamatory statement made by third parties on their platforms. That would be contrary to the express purpose of section 230, which was to legislatively overrule Stratton Oakmont, Inc. v Prodigy Servs. Co. (1995 WL 323710, 1995 NY Misc LEXIS 229 [Sup Ct, Nassau County 1995]), where "an Internet service provider was found liable for defamatory statements posted by third parties because it had voluntarily screened and edited some offensive content, and so was considered a 'publisher' " (Shiamili, 17 NY3d at 287-288; see Free Speech Coalition, Inc. v Paxton, — US —, —, 145 S Ct 2291, 2305 n 4 [2025]).

Although Anderson was not a defamation case, its reasoning applies with equal force to all tort causes of action, including defamation. One cannot plausibly conclude that section 230 provides immunity for some tort claims but not others based on the same underlying factual allegations. There is no strict products liability exception to section 230.

In any event, even if we were to follow Anderson and conclude that the social media defendants engaged in first-party speech by recommending to the shooter racist content posted by third parties, it stands to reason that such speech ("expressive activity" as described by the Third Circuit) is protected by the First Amendment under Moody. While TikTok, due to its status as a foreign corporation operating abroad, could not seek protection under the First Amendment, our social media defendants can and do raise the First Amendment as a defense in addition to section 230.

In Moody, the Supreme Court determined that content-moderation algorithms result in expressive activity protected by the First Amendment (see 603 US at 744). Writing for the majority, Justice Kagan explained that "[d]eciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own" (id. at 731). While the Moody Court did not consider social media platforms "with feeds whose algorithms respond solely to how users act online—giving them the content they appear to want, without any regard to independent content standards" (id. at 736 n 5 [emphasis added]), our plaintiffs do not allege that the algorithms of the social media defendants are based "solely" on the shooter's online actions. To the contrary, the complaints here allege that the social media defendants served the shooter material that they chose for him for the purpose of maximizing his engagement with their platforms. Thus, per Moody, the social media defendants are entitled to First Amendment protection for third-party content recommended to the shooter by algorithms.

Although it is true, as plaintiffs point out, that the First Amendment views expressed in Moody are nonbinding dicta, it is recent dicta from a supermajority of Justices of the United States Supreme Court, which has final say on how the First Amendment is interpreted. That is not the type of dicta we are inclined to ignore even if we were to disagree with its reasoning, which we do not.

As the Center for Democracy and Technology explains in its amicus brief, content-recommendation algorithms are simply tools used by social media companies "to accomplish a traditional publishing function, made necessary by the scale at which providers operate." Every [*5]method of displaying content involves editorial judgments regarding which content to display and where on the platforms. Given the immense volume of content on the Internet, it is virtually impossible to display content without ranking it in some fashion, and the ranking represents an editorial judgment of which content a user may wish to see first. All of this editorial activity, accomplished by the social media defendants' algorithms, is constitutionally protected speech.

Thus, given the interplay between section 230 and the First Amendment, plaintiffs are on the wrong end of a "Heads I Win, Tails You Lose" proposition. Either the social media defendants are immune from civil liability under section 230 on the theory that their content-recommendation algorithms do not deprive them of their status as publishers of third-party content, per Force and M.P., or they are protected by the First Amendment on the theory that the algorithms create first-party content, as per Anderson. Of course, section 230 immunity and First Amendment protection are not mutually exclusive, and in our view the social media defendants are protected by both. Under no circumstances are they protected by neither.

Plaintiffs' reliance on Lemmon v Snap, Inc. (995 F3d 1085 [9th Cir 2021]) is misplaced. The design defect in the defendant's program in Lemmon was its "Speed Filter," which indicated how fast users were traveling when sending messages on Snapchat (id. at 1088). The filter allegedly induced users to drive recklessly while recording videos, and the plaintiffs' harm arose from reckless driving, which flowed directly from the alleged design defect. Because the plaintiffs' causes of action had nothing to do with the content of the messages sent or received by the users (Snap itself created the filter), section 230 did not apply (see id. at 1093). The Ninth Circuit made clear, however, that the plaintiffs "would not be permitted under § 230 (c) (1) to fault Snap for publishing other Snapchat-user content (e.g., snaps of friends speeding dangerously) that may have incentivized . . . dangerous behavior" (id. at 1093 n 4). Here, in contrast, plaintiffs seek to do just that, i.e., to hold the social media defendants liable for content posted by other people that allegedly incentivized dangerous behavior by the shooter.

With respect to the applicability of section 230, our dissenting colleagues agree with Chief Judge Katzmann's dissent in Force, which focuses primarily on Facebook's algorithm that suggests friends, groups and events to users, i.e., a "friend- and content-suggestion" algorithm (934 F3d at 76 [Katzmann, Ch. J., concurring in part and dissenting in part]). According to Chief Judge Katzmann, Facebook, by using that algorithm, "is doing more than just publishing content: it is proactively creating networks of people," activity that is not protected by section 230 (id. at 83 [Katzmann, Ch. J., concurring in part and dissenting in part]). Here, plaintiffs do not allege that the social media defendants use "friend- and content-suggestion" algorithms; as noted above, the strict products liability causes of action in question here are based on content-recommendation algorithms in the social media defendants' platforms.

To the extent that Chief Judge Katzmann concluded that Facebook's content-recommendation algorithms similarly deprived Facebook of its status as a publisher of third-party content within the meaning of section 230, we believe that his analysis, if applied here, would ipso facto expose most social media companies to unlimited liability in defamation cases. That is the same problem inherent in the Third Circuit's first-party/third-party speech analysis in Anderson. Again, a social media company using content-recommendation algorithms cannot be deemed a publisher of third-party content for purposes of libel and slander claims (thus triggering section 230 immunity) and not at the same time a publisher of third-party content for strict products liability claims.

In the broader context, the dissenters accept plaintiffs' assertion that these actions are about the shooter's "addiction" to social media platforms, wholly unrelated to third-party speech or content. We come to a different conclusion. As we read them, the complaints, from beginning to end, explicitly seek to hold the social media defendants liable for the racist and violent content displayed to the shooter on the various social media platforms. Plaintiffs do not allege, and could not plausibly allege, that the shooter would have murdered Black people had he become addicted to anodyne content, such as cooking tutorials or cat videos.[FN1]

Instead, plaintiffs' theory of harm rests on the premise that the platforms of the social media defendants were defectively designed because they failed to filter, prioritize, or label content in a manner that would have prevented the shooter's radicalization. Given that plaintiffs' allegations depend on the content of the material the shooter consumed on the Internet, their tort causes of action against the social media defendants are "inextricably intertwined" with the social media defendants' role as publishers of third-party content (M.P., 127 F4th at 525).

If plaintiffs' causes of action were based merely on the shooter's addiction to social media, which they are not, they would fail on causation grounds. It cannot reasonably be concluded that the allegedly addictive features of the social media platforms (regardless of content) caused the shooter to commit mass murder, especially considering the intervening criminal acts by the shooter, which were "not foreseeable in the normal course of events" and therefore broke the causal chain (Tennant v Lascelle, 161 AD3d 1565, 1566 [4th Dept 2018]; see Turturro v City of New York, 28 NY3d 469, 484 [2016]). It was the shooter's addiction to white supremacy content, not to social media in general, that allegedly caused him to become radicalized and violent.[FN2]

At stake in these appeals is the scope of protection afforded by section 230, which Congress enacted to combat "the threat that tort-based lawsuits pose to freedom of speech [on the] Internet" (Shiamili, 17 NY3d at 286-287 [internal quotation marks omitted]). As a distinguished law professor has noted, section 230's immunity "particularly benefits those voices from underserved, underrepresented, and resource-poor communities," allowing marginalized groups to speak up without fear of legal repercussion (Enrique Armijo, Section 230 as Civil Rights Statute, 92 U Cin L Rev 301, 303 [2023]). Without section 230, the diversity of information and viewpoints accessible through the Internet would be significantly limited.

We believe that the motion court's ruling, if allowed to stand, would gut the immunity provisions of section 230 and result in the end of the Internet as we know it. This is so because Internet service providers who use algorithms on their platforms would be subject to liability for all tort causes of action, including defamation. Because social media companies that sort and display content would be subject to liability for every untruthful statement made on their platforms, the Internet would over time devolve into mere message boards.

Although the motion court stated that the social media defendants' section 230 arguments "may ultimately prove true," dismissal at the pleading stage is essential to protect free expression under section 230 (see Nemet Chevrolet, Ltd., 591 F3d at 255 [the statute "protects websites not only from 'ultimate liability,' but also from 'having to fight costly and protracted legal battles' "]). Dismissal after years of discovery and litigation (with ever mounting legal fees) would thwart the purpose of section 230.

While everyone of goodwill condemns the shooter's actions and the vile content that motivated him to assassinate Black people simply because of the color of their skin, there is in our view no reasonable interpretation of section 230 that allows plaintiffs' tort causes of action to survive as against the social media defendants, who are entitled to immunity under the statute as the publishers of third-party content on their platforms.

We therefore reverse the orders in appeal Nos. 1, 3, 5, and 6. Inasmuch as the complaint in appeal No. 2 and the amended complaint in appeal No. 4 were superseded by an amended complaint and a second amended complaint, respectively, the appeals in appeal Nos. 2 and 4 must be dismissed (see Carcone v Noon [appeal No. 1], 214 AD3d 1306, 1306 [4th Dept 2023]). In light of our determination, the remaining contentions advanced by the social media defendants [*6]are rendered academic.

All concur except Bannister and Nowak, JJ., who dissent and vote to affirm in the following dissenting opinion: "[W]hy do I always have trouble putting my phone down at night? . . . It's 2 in the morning . . . I should be sleeping . . . I'm a literal addict to my phone[.] I can't stop cons[u]ming." These are the words of a teenager who, on May 14, 2022, drove more than 200 miles to Buffalo to shoot and kill 10 people and injure three more at a grocery store in the heart of a predominantly Black community.

Plaintiffs in these consolidated appeals allege that the shooter did so only after years of exposure to the online platforms of the so-called "social media defendants"—Meta Platforms, Inc., formerly known as Facebook, Inc.; Instagram LLC; Snap, Inc.; Alphabet, Inc.; Google, LLC; YouTube, LLC; Discord, Inc.; Reddit, Inc.; Twitch Interactive, Inc.; Amazon.com, Inc.; and 4chan Community Support, LLC (collectively, defendants)—platforms that, according to plaintiffs, were defectively designed. Plaintiffs allege that defendants intentionally designed their platforms to be addictive, failed to provide basic safeguards for those most susceptible to addiction—minors—and failed to warn the public of the risk of addiction. According to plaintiffs, defendants' platforms did precisely what they were designed to do—they targeted and addicted minor users to maximize their engagement. Plaintiffs allege that the shooter became more isolated and reclusive as a result of his social media use and addiction, and that his addiction, combined with his age and gender, left him particularly susceptible to radicalization and violence—culminating in the tragedy in Buffalo. For the purposes of defendants' CPLR 3211 (a) (7) motions to dismiss, we must "accept the facts as alleged in the [operative] complaint[s] as true, accord plaintiffs the benefit of every possible favorable inference, and determine only whether the facts as alleged fit within any cognizable legal theory" (Leon v Martinez, 84 NY2d 83, 87-88 [1994]).

Little assumption is required in this case, however. The shooter all but admitted those facts to be true.

Inasmuch as plaintiffs' collective strict products liability causes of action predicate liability on the allegedly defective design of the platforms themselves—and the concomitant failure to warn of the risks of addiction in young people—plaintiffs do not seek to hold defendants liable for any third-party content; thus, we conclude that those causes of action do not implicate section 230 of the Communications Decency Act (section 230) or the First Amendment. Even if section 230 were implicated, however, we conclude that the use of an algorithm to push disparate content to individual end users constitutes the "creation or development of information," which could subject defendants to liability (47 USC § 230 [f] [3]), and is not the type of editorial or publishing decision that would fall within the ambit of section 230 (see Shiamili v Real Estate Group of N.Y., Inc., 17 NY3d 281, 289 [2011]; Anderson v TikTok, Inc., 116 F4th 180, 183-184 [3d Cir 2024]). Therefore, we respectfully dissent.

At the outset, we reject the foundation upon which the majority's opinion is built—that plaintiffs' causes of action necessarily seek to hold defendants responsible for radicalizing the shooter given their status "as the publisher[s] or speaker[s] of any information provided by another information content provider" (47 USC § 230 [c] [1]), i.e., that plaintiffs only seek to hold defendants liable for the third-party content the shooter viewed. If that were the only allegation raised by plaintiffs, we would agree with the majority. But it is not.

The operative complaints, when read as a whole, as they must be,[FN3] also allege that defendants' platforms are "products" subject to strict products liability that are addictive—not based upon the third-party content they show but because of the inherent nature of their design. [*7]Specifically, plaintiffs allege that defendants' platforms: "prey upon young users' desire for validation and need for social comparison," "lack effective mechanisms . . . to restrict minors' usage of the product," have "inadequate parental controls" and age verification tools that facilitate unfettered usage of the products, and "intentionally place[ ] obstacles to discourage cessation" of the applications. Plaintiffs allege that the various platforms "send push notifications and messages throughout the night, prompting children to re-engage with the apps when they should be sleeping." They further allege that certain products "autoplay" video without requiring the user to affirmatively click on the next video, while others permit the user to "infinite[ly]" scroll, creating a constant stream of media that is difficult to close or leave.

Plaintiffs assert that defendants had a duty to warn the public at large and, in particular, minor users of their platforms and their parents, of the addictive nature of the platforms. They thus claim that defendants could have utilized reasonable alternate designs, including: eliminating "autoplay" features or creating a "beginning and end to a user's '[f]eed' " to prevent a user from being able to "infinite[ly]" scroll; providing options for users to self-limit time used on a platform; providing effective parental controls; utilizing session time notifications or otherwise removing push notifications that lure the user to re-engage with the application; and "[r]emoving barriers to the deactivation and deletion of accounts." These allegations do not seek to hold defendants liable for any third-party content (see 47 USC § 230 [c] [1]); rather, they seek to hold defendants liable for failing to provide basic safeguards to reasonably limit the addictive features of their social media platforms, particularly with respect to minor users. Indeed, other products liability actions similarly premised upon defective designs or failures to warn have been permitted to proceed past the motion to dismiss phase. To that end, the attorneys general of more than 30 states—including the New York Attorney General—are currently embroiled in ongoing, multi-district litigation against several of the same defendants here for virtually identical strict products liability claims under substantive New York law (see In re Social Media Adolescent Addiction/Personal Injury Prods. Liab. Litig., 702 F Supp 3d 809, 817, 836-854 [ND Cal 2023])—claims that the District Court concluded were not precluded by section 230 or the First Amendment.

For instance, in Lemmon v Snap, Inc. (995 F3d 1085 [9th Cir 2021]), the Ninth Circuit explained that "[t]he duty to design a reasonably safe product is fully independent of [the defendant's] role in monitoring or publishing third-party content" (id. at 1093). Contrary to the majority's conclusion, the design choices at issue in Lemmon—in particular, a "Speed Filter" (id. at 1088)—are no different from the design choices at issue here. Both seek to hold the product designer liable for choices that implicate the manner in which users engage with the platform, rather than the content contained thereon. A "Speed Filter," which encourages users to travel at high rates of speed while utilizing the application (id.), is no different from push notifications encouraging the user to re-engage with the platform at all hours of the night, or design features that "autoplay" video and permit the user to "infinite[ly]" scroll—both encourage users to engage with the application unsafely.

Similarly, in A.M. v Omegle.com, LLC (614 F Supp 3d 814 [D Or 2022]), the District Court held that section 230 did not preclude the plaintiff's products liability action alleging that the defendant's design choices were defective because they permitted minor users to match with adults, noting that the plaintiff was "not claiming that [the defendant] needed to review, edit, or withdraw any third-party content" to remediate its defective design (id. at 819). Just as the defendant in A.M. "could have satisfied its alleged obligation to [p]laintiff by designing its product differently" (id.), plaintiffs here allege that defendants could have designed their platforms to prevent addiction in any number of ways that do not implicate third-party content, such as: restricting minors' access to the platforms through age verification tools; instituting more robust parental controls; removing push notifications; utilizing session time notifications; and otherwise removing barriers to the deactivation and deletion of accounts.

In our view, the majority's reliance on M.P. By & Through Pinckney v Meta Platforms, Inc. (127 F4th 516 [4th Cir 2025] [M.P.]) is misplaced. There, a white supremacist shot and killed nine Black people at Mother Emanuel AME Church in Charleston, South Carolina, and the plaintiff—whose father was murdered inside the church—sued a number of social media platforms claiming that the shooter "was 'radicalized online by white supremacist propaganda that was directed to him by the [d]efendants' " (id. at 521 [emphasis added]). The plaintiff alleged that the shooter's "emotional desensitization" and radicalization were caused by [*8]"repeated" and "extended" exposure to "inflammatory[,] . . . extremist content," and thus sought to hold the defendants responsible for the third-party content that the shooter viewed (id. [internal quotation marks omitted]). Unlike M.P., plaintiffs here seek to hold defendants responsible for failing to provide reasonable safeguards to prevent addiction in minors using their platforms, and in failing to warn of the risks of such addiction. Those claims do not seek to hold defendants responsible for the content the shooter viewed, and indeed, plaintiffs are "not claiming that [defendants] needed to review, edit, or withdraw any third-party content" to remediate the platforms' allegedly defective design (A.M., 614 F Supp 3d at 819).

In short, we agree with the reasoning set forth in Lemmon, A.M., and In re Social Media Adolescent Addiction/Personal Injury Prods. Liab. Litig. and conclude that social media platforms are "products" subject to strict products liability in New York. "[W]hen considering whether strict products liability attaches, the question of whether something is a product is often assumed; none of our strict products liability case law provides a clear definition of a 'product' " (Matter of Eighth Jud. Dist. Asbestos Litig., 33 NY3d 488, 494 [2019] [Terwilliger]). As the Court of Appeals noted, " '[a]part from statutes that define "product" for purposes of determining products liability, in every instance it is for the court to determine as a matter of law whether something is, or is not, a product' " (id., quoting Restatement [Third] of Torts: Products Liability § 19, Comment a).

In general, the Third Restatement defines a product as "tangible personal property distributed commercially for use or consumption" (Restatement [Third] of Torts: Products Liability § 19 [a]). Here, defendants largely urge that their social media platforms are not products because they are not tangible goods. We disagree. The Third Restatement explains that even intangible items "are products when the context of their distribution and use is sufficiently analogous to the distribution and use of tangible personal property that it is appropriate to apply the rules stated in th[e] Restatement" (id.).

That common-sense approach has been echoed by the Court of Appeals, which has recognized that the analysis of whether something is a product is inextricably "[i]ntertwined with . . . the more central question of whether the defendant manufacturer owes a duty to warn" (Terwilliger, 33 NY3d at 494). Indeed, the Court of Appeals has emphasized that in determining whether a duty should attach to a seller, the "governing factors [include] a defendant's control over the design of the product, its standardization, and its superior ability to know—and warn about—the dangers inherent in the product's reasonably foreseeable uses or misuses" (id. at 496; see Matter of New York City Asbestos Litig., 27 NY3d 765, 793, 800-801 [2016] [Dummitt]). The "overarching concern . . . is to 'settle upon the most reasonable allocation of risks, burdens and costs among the parties and within society, accounting for the economic impact of a duty, pertinent scientific information, the relationship between the parties, the identity of the person or entity best positioned to avoid the harm in question, the public policy served by the presence or absence of a duty and the logical basis of a duty' " (Terwilliger, 33 NY3d at 495-496, quoting Dummitt, 27 NY3d at 788).

We recognize that tort liability is not open-ended (see generally Espinal v Melville Snow Contrs., 98 NY2d 136, 139 [2002]), nor should it be. However, in this case, logic and the law compel the conclusion that the social media platforms in question are products, and that the manufacturers of those products can be held liable in products liability (see Dummitt, 27 NY3d at 793, 800-801). Defendants are multi-billion-dollar corporations that derive their revenue from maximizing user engagement on their platforms. They alone control the manufacture and distribution of their respective social media platforms. They are uniquely positioned to know of—and prevent—the harm posed by social media addiction generally and specifically in minors. Once injected into the stream of commerce, their platforms are uniform for all users. That users exchange their data as opposed to currency to use those platforms does not, in our view, vitiate their true nature as products. Thus, and as set forth above, we agree with the courts that have concluded that social media platforms (see A.M., 614 F Supp 3d at 819; see also Lemmon, 995 F3d at 1093; In re Social Media Adolescent Addiction/Personal Injury Prods. Liab. Litig., 702 F Supp 3d at 854) and ride-sharing platforms (see Brookes v Lyft Inc., 2022 WL 19799628 at *2-3 [Fla Cir Ct 2022]) are products subject to products liability law.

Though we conclude that plaintiffs' products liability allegations sounding in design defect and failure to warn generally do not implicate section 230, to the extent that plaintiffs [*9]claim that defendants' products were defectively designed because they do not create a beginning and end to a user's "feed" or "autoplay" videos, those allegations at least tangentially involve third-party content, and thus a discussion of section 230 is required.

As the majority notes, section 230 (c) (1) states that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 USC § 230 [c] [1]). An "information content provider" is defined by the statute to mean "any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service" (§ 230 [f] [3] [emphasis added]).

As the Court of Appeals recognized in Shiamili, "[s]ervice providers are only entitled to . . . immunity . . . where the content at issue is provided by 'another information content provider' . . . It follows that if a defendant service provider is itself the 'content provider,' it is not shielded from liability" (17 NY3d at 289). Inasmuch as "any party 'responsible . . . in part' for the 'creation or development of information' " is a content provider, "any piece of content can have multiple providers" (id.). While the Court of Appeals expressly declined to decide whether to interpret the term "development" broadly or narrowly in Shiamili (see id. at 290), we conclude that the use of design functions, such as algorithmic models that "autoplay" videos or create an "infinite feed," constitutes the "creation or development of information" that would render defendants first-party content providers and, thus, not immune from liability under section 230 (Anderson, 116 F4th at 183 n 8).

To that end, we conclude that, as Chief Judge Katzmann stated in his dissent in Force v Facebook, Inc. (934 F3d 53 [2d Cir 2019], cert denied — US —, 140 S Ct 2761 [2020]), "it strains the English language to say that in targeting and recommending [certain content] to users . . . [defendants here are merely] acting as 'the publisher[s] of . . . information provided by another information content provider' " (id. at 76-77 [Katzmann, Ch. J., concurring in part and dissenting in part], quoting 47 USC § 230 [c] [1]), and not developing first-party content in their own right. Relatedly, we conclude that the targeted dissemination of particular information to individual end users does not amount to a traditional editorial or publishing decision that would fall within the ambit of section 230. In that regard, the Court of Appeals' analysis in Shiamili addressed the publication of a publicly available blog post. Clearly, that is a traditional editorial or publication decision—no different than the New York Times choosing which editorials should appear within the most recent volume of their publication.

The conduct at issue in this case is far from any editorial or publishing decision; defendants utilize functions, such as machine learning algorithms, to push specific content on specific individuals based upon what is most apt to keep those specific users on the platform. Some receive cooking videos or videos of puppies, while others receive white nationalist vitriol, each group entirely ignorant of the content foisted upon the other. Such conduct does not "maintain the robust nature of Internet communication" or "preserve the vibrant and competitive free market that presently exists for the Internet" contemplated by the protections of immunity (Force, 934 F3d at 63 [internal quotation marks omitted]) but, rather, only serves to further silo, divide and isolate end users by force-feeding them specific, curated content designed to maximize engagement.

The majority concludes, based upon Moody v NetChoice, LLC (603 US 707 [2024]), that even if plaintiffs seek to hold defendants liable for their own first-party content, such conduct is protected by the First Amendment. We disagree. First and foremost, the language defendants rely upon is dicta and has no binding force or effect upon this Court. Second, Moody involved two different state laws that curtailed social media companies' ability to engage in content moderation (see id. at 720-721); in essence, the laws compelled the social media companies to "carry and promote user speech that they would rather discard or downplay" (id. at 728). Government-imposed content moderation laws that specifically prohibit social media companies from exercising their right to engage in content moderation are a far cry from private citizens seeking to hold private actors responsible for their defective products in tort.

The significance of such a vast expansion of First Amendment jurisprudence cannot be overstated. Taken to its furthest extent, the majority essentially concludes that every defendant would be immune from all state law tort claims involving speech or expressive activity. If the majority is correct, there [*10]could never be state tort liability for failing to warn of the potential risks associated with a product, for insisting upon a warning would be state-compelled speech in violation of the First Amendment. Nor could there ever be liability for failing to obtain a patient's informed consent in a medical malpractice action, for the defendant physician's explanation of the procedure, its alternatives, and the reasonably foreseeable risks and benefits of each proposed course of action necessarily implicates the defendant physician's First Amendment rights. That simply cannot be the case.

Finally, inasmuch as proximate causation "is for the finder of fact to determine" (Derdiarian v Felix Contr. Corp., 51 NY2d 308, 315 [1980], rearg denied 52 NY2d 784 [1980]), we conclude that plaintiffs have stated sufficient facts to withstand defendants' various CPLR 3211 (a) (7) motions. "[A]ccept[ing] the facts as alleged in the [operative] complaint[s] as true [and] accord[ing] plaintiffs the benefit of every possible favorable inference" (Leon, 84 NY2d at 87-88), as we must do, we conclude that it would be premature to dismiss the relevant causes of action at this juncture. We thus would affirm the orders in appeal Nos. 1, 3, 5, and 6, and we would dismiss the appeals in appeal Nos. 2 and 4 (see Carcone v Noon [appeal No. 1], 214 AD3d 1306, 1306 [4th Dept 2023]).

Entered: July 25, 2025

Ann Dillon Flynn

Clerk of the Court

Footnotes


Footnote 1: We note that plaintiffs' addiction-only theory, even if valid, would not apply to all social media defendants. For instance, plaintiffs do not allege that the shooter was addicted to the livestream service of Twitch and Amazon, which he used during the shooting and only several times prior.

Footnote 2: The social media addiction cases cited by plaintiffs involve psychological harm allegedly caused to users, not, as here, harm caused by addicted users to third parties (see e.g. In re Social Media Adolescent Addiction/Personal Injury Prods. Liab. Litig., 702 F Supp 3d 809, 836-854 [ND Cal 2023]).

Footnote 3: To the extent that any one operative complaint does not set forth the entirety of the factual allegations listed herein, that is not dispositive—it is axiomatic that, in the context of a CPLR 3211 (a) (7) motion to dismiss, "the criterion is whether the proponent of the pleading has a cause of action, not whether [the proponent] has stated one" (Guggenheimer v Ginzburg, 43 NY2d 268, 275 [1977]).


