Patterson v Meta Platforms, Inc.
535 CA 24-00513
Appellate Division, Fourth Department
July 25, 2025
2025 NY Slip Op 04438
Lindley
Published by New York State Law Reporting Bureau pursuant to Judiciary Law § 431. This opinion is uncorrected and subject to revision before publication in the Official Reports.
MORRISON & FOERSTER LLP, NEW YORK CITY (JOSEPH R. PALMORE OF COUNSEL), FOR DEFENDANT-APPELLANT DISCORD, INC.
HUESTON HENNIGAN LLP, NEW YORK CITY (MOEZ M. KABA OF COUNSEL), AND GIBSON, MCASKILL & CROSBY, LLP, BUFFALO, FOR DEFENDANT-APPELLANT AMAZON.COM, INC.
HARRIS BEACH MURTHA CULLINA PLLC, NEW YORK CITY (LISA ANNE LECOURS OF COUNSEL), FOR DEFENDANT-APPELLANT 4CHAN COMMUNITY SUPPORT, LLC.
O’MELVENY & MYERS LLP, NEW YORK CITY (JONATHAN P. SCHNELLER OF COUNSEL), AND HAGERTY & BRADY, BUFFALO, FOR DEFENDANT-APPELLANT SNAP, INC.
THE LAW OFFICE OF JOHN V. ELMORE, P.C., BUFFALO (JOHN V. ELMORE OF COUNSEL), AND SOCIAL MEDIA VICTIMS LAW CENTER PLLC, SEATTLE, WASHINGTON, FOR PLAINTIFFS-RESPONDENTS.
HOGAN LOVELLS US LLP, NEW YORK CITY (JASMEET K. AHUJA OF COUNSEL), FOR CHAMBER OF PROGRESS, ENGINE ADVOCACY, AND WIKIMEDIA FOUNDATION, AMICI CURIAE.
HOLWELL SHUSTER & GOLDBERG LLP, NEW YORK CITY (DANIEL M. SULLIVAN OF COUNSEL), FOR PRODUCTS LIABILITY ADVISORY COUNCIL, AMICUS CURIAE.
PRESENT: LINDLEY, J.P., CURRAN, BANNISTER, SMITH, AND NOWAK, JJ.
Opinion by Lindley, J.P.:
Appeals from an order of the Supreme Court, Erie County (Paula L. Feroleto, J.), entered March 18, 2024. The order denied the motions of defendants-appellants to dismiss the complaints against them.
It is hereby ORDERED that the order so appealed from is reversed on the law without costs, the motions are granted and the complaint is dismissed against defendants-appellants.
These consolidated appeals arise from four separate actions commenced in response to the mass shooting on May 14, 2022 at a grocery store in a predominantly Black neighborhood in Buffalo. The shooter, a teenager from the Southern Tier of New York, spent months planning the attack and was motivated by the Great Replacement Theory, which posits that white populations in Western countries are being deliberately replaced by non-white immigrants and people of color. After driving more than 200 miles from his home to Buffalo, the shooter arrived at the store and opened fire on Black individuals in the parking lot and inside the store with a Bushmaster XM-15 semiautomatic rifle, killing 10 people and wounding three others.
The shooter fired approximately 60 rounds from high-capacity magazines attached to his rifle, upon which he had written several racist messages, including “Here’s your reparations!” and “Buck status: Broken.” Apprehended at the scene, the shooter was charged with multiple felonies in both state court and federal court, where prosecutors are seeking the death penalty. The shooter pleaded guilty in state court to 10 counts of intentional murder and has been sentenced to life in prison without the possibility of parole. As of this writing, the federal charges are still pending.
Plaintiffs in these civil actions are survivors of the attack and family members of the victims, while defendants include the shooter’s parents and numerous other parties whose actions or inactions allegedly played a role in the shooting. We are concerned in these appeals only with plaintiffs’ causes of action against the so-called “social media defendants,” i.e., Meta Platforms, Inc., formerly known as Facebook (Facebook); Instagram LLC (Instagram); Snap, Inc. (Snap); Alphabet, Inc.; Google, LLC (Google); YouTube, LLC (YouTube); Discord, Inc. (Discord); Reddit, Inc.; Twitch Interactive, Inc. (Twitch); Amazon.com, Inc. (Amazon); and 4chan Community Support, LLC (4chan), all of whom have social media platforms that were used by the shooter at some point before or during the attack.
The complaints, amended complaint and second amended complaint (hereafter complaints) in these actions assert various tort causes of action against the social media defendants, including negligence, unjust enrichment and strict products liability based on defective design and failure to warn. According to plaintiffs, the social media platforms in question are defectively designed to include content-recommendation algorithms that fed a steady stream of racist and violent content to the shooter, who over time became motivated to kill Black people. Plaintiffs further allege that the content-recommendation algorithms addicted the shooter to the social media defendants’ platforms, resulting in his isolation and radicalization, and that the platforms were designed to stimulate engagement by exploiting the neurological vulnerabilities of users like the shooter and thereby maximize profits.
Although plaintiffs recognize that some of the social media defendants—e.g., 4chan, Discord, Twitch and Snap—do not use content-recommendation algorithms, they nevertheless allege that the platforms of those defendants are designed with the same core defect contained in the platforms of the social media defendants that use such algorithms: namely, they are designed to be addictive. According to plaintiffs, the addictive features of the social media platforms include “badges,” “streaks,” “trophies,” and “emojis” given to frequent users, thereby fueling engagement. The shooter’s addiction to those platforms, the theory goes, ultimately caused him to commit mass murder.
The social media defendants moved to dismiss the complaints against them for failure to state a cause of action (see CPLR 3211 [a] [7]).
Plaintiffs concede that, despite its abhorrent nature, the racist content consumed by the shooter on the Internet is constitutionally protected speech under the First Amendment, and that the social media defendants cannot be held liable for publishing such content. Plaintiffs further concede that, pursuant to section 230 of the Communications Decency Act (47 USC § 230), the social media defendants generally cannot be held liable for content created and posted by third parties on their platforms.
The motion court agreed with plaintiffs, but we do not. Accepting as true all of the facts alleged in the operative complaints, and according plaintiffs the benefit of every possible favorable inference (see Williams v Beemiller, Inc., 100 AD3d 143, 148 [4th Dept 2012], amended on rearg 103 AD3d 1191 [4th Dept 2013]; see generally Leon v Martinez, 84 NY2d 83, 87-88 [1994]), we conclude that plaintiffs do not have a valid cause of action against the social media defendants.
As the United States Supreme Court has observed, the Internet is the most important place in society for the exchange of diverse viewpoints (see Packingham v North Carolina, 582 US 98, 104 [2017]). The Internet is the modern public square, containing content “as diverse as human thought” (Reno v American Civ. Liberties Union, 521 US 844, 852 [1997] [internal quotation marks omitted]).
“The primary purpose of the proposed legislation that ultimately resulted in the Communications Decency Act (‘CDA’) ‘was to protect children from sexually explicit Internet content’ . . . Section 230, though—added as an amendment to the CDA bill . . . —was enacted ‘to maintain the robust nature of Internet communication and, accordingly, to keep government interference in the medium to a minimum’ . . . Indeed, Congress stated in Section 230 that ‘[i]t is the policy of the United States . . . (1) to promote the continued development of the Internet and other interactive computer services and other interactive media; [and] (2) to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation’ ” (Force v Facebook, Inc., 934 F3d 53, 63 [2d Cir 2019], cert denied — US —, 140 S Ct 2761 [2020]).
“By its plain language, [section 230] creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service” (Zeran v America Online, Inc., 129 F3d 327, 330 [4th Cir 1997], cert denied 524 US 937 [1998]; see Force, 934 F3d at 63-64; see also Shiamili v Real Estate Group of N.Y., Inc., 17 NY3d 281, 289 [2011]; M.P. By & Through Pinckney v Meta Platforms, Inc., 127 F4th 516, 523 [4th Cir 2025] [M.P.]). If applicable, section 230 immunity bars the cause of action entirely.
With respect to state law claims, section 230 further provides that “[n]o cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section” (47 USC § 230 [e] [3]).
Here, it is undisputed that the social media defendants qualify as providers of interactive computer services. The dispositive question is whether plaintiffs seek to hold the social media defendants liable as publishers or speakers of information provided by other content providers. Based on our reading of the complaints, we conclude that plaintiffs seek to hold the social media defendants liable as publishers of third-party content. We further conclude that the content-recommendation algorithms used by some of the social media defendants do not deprive those defendants of their status as publishers of third-party content. It follows that plaintiffs’ tort causes of action against the social media defendants are barred by section 230.
Even assuming, arguendo, that the social media defendants’ platforms are products (as opposed to services), and further assuming that they are inherently dangerous, which is a rather large assumption indeed, we conclude that plaintiffs’ strict products liability causes of action against the social media defendants fail because they are based on the nature of content posted by third parties on the social media platforms. The immunity test established by Barnes v Yahoo!, Inc. (570 F3d 1096 [9th Cir 2009]) focuses not on the name given to a cause of action but instead on “whether a plaintiff’s ‘theory of liability would treat a defendant as a publisher or speaker of third-party content’ ” (Calise v Meta Platforms, Inc., 103 F4th 732, 740 [9th Cir 2024]; see Federal Trade Commn. v LeadClick Media, LLC, 838 F3d 158, 175 [2d Cir 2016]).
We are not persuaded by plaintiffs’ assertion that the social media defendants’ algorithms render their products defective, thus depriving them of section 230 immunity.
The appeals at hand are on all fours with M.P., which arose from the killing of nine Black people by a white supremacist at Mother Emanuel AME Church in Charleston, South Carolina. The plaintiff in that action sued Facebook, among other parties, alleging that it was civilly liable for the shooter’s crimes. As here, the complaint asserted a cause of action for strict products liability and alleged that the shooter was “radicalized online by white supremacist propaganda that was directed to him” by Facebook (M.P., 127 F4th at 521 [internal quotation marks omitted]). The plaintiff further alleged that Facebook’s content-recommendation algorithms, along with its quest for user engagement and profits, turned the shooter into a dangerous racist who committed mass murder (see id. at 521-522).
Citing Force, the Fourth Circuit in M.P. affirmed the dismissal of the complaint based on section 230 immunity.
Recognizing that the rationale of M.P. compels dismissal of their strict products liability causes of action, plaintiffs urge us instead to follow Anderson v TikTok, Inc. (116 F4th 180 [3d Cir 2024]), in which the Third Circuit determined that a platform’s algorithmic recommendations of third-party content constitute the platform’s own first-party speech falling outside the protection of section 230.
We do not find Anderson to be persuasive authority. If content-recommendation algorithms transform third-party content into first-party content, as the Anderson court determined, then Internet service providers using content-recommendation algorithms (including Facebook, Instagram, YouTube, TikTok, Google, and X) would be subject to liability for every defamatory statement made by third parties on their platforms. That would be contrary to the express purpose of section 230.
Although Anderson was not a defamation case, its reasoning applies with equal force to all tort causes of action, including defamation. One cannot plausibly conclude that Congress, in enacting section 230, intended such a result.
In any event, even if we were to follow Anderson and conclude that the social media defendants engaged in first-party speech by recommending to the shooter racist content posted by third parties, it stands to reason that such speech (“expressive activity” as described by the Third Circuit) is protected by the First Amendment under Moody v NetChoice, LLC (603 US 707 [2024]). While TikTok, due to its status as a foreign corporation operating abroad, could not seek protection under the First Amendment, our social media defendants can and do raise the First Amendment as a defense in addition to section 230.
In Moody, the Supreme Court determined that content-moderation algorithms result in expressive activity protected by the First Amendment (see 603 US at 744). Writing for the majority, Justice Kagan explained that “[d]eciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own” (id. at 731). While the Moody Court did not consider social media platforms “with feeds whose algorithms respond solely to how users act online—giving them the content they appear to want, without any regard to independent content standards” (id. at 736 n 5 [emphasis added]), our plaintiffs do not allege that the algorithms of the social media defendants are based “solely” on the shooter’s online actions. To the contrary, the complaints here allege that the social media defendants served the shooter material that they chose for him for the purpose of maximizing his engagement with their platforms. Thus, per Moody, the social media defendants are entitled to First Amendment protection for third-party content recommended to the shooter by algorithms.
Although it is true, as plaintiffs point out, that the First Amendment views expressed in Moody are nonbinding dicta, it is recent dicta from a supermajority of Justices of the United States Supreme Court, which has final say on how the First Amendment is interpreted. That is not the type of dicta we are inclined to ignore even if we were to disagree with its reasoning, which we do not.
As the Center for Democracy and Technology explains in its amicus brief, content-recommendation algorithms are simply tools used by social media companies “to accomplish a traditional publishing function, made necessary by the scale at which providers operate.” Every publisher must decide which content to present and how to arrange it; content-recommendation algorithms perform that same editorial function at Internet scale.
Thus, given the interplay between section 230 and the First Amendment, plaintiffs’ tort causes of action against the social media defendants cannot survive.
Plaintiffs’ reliance on Lemmon v Snap, Inc. (995 F3d 1085 [9th Cir 2021]) is misplaced. The design defect in the defendant’s program in Lemmon was its “Speed Filter,” which indicated how fast users were traveling when sending messages on Snapchat (id. at 1088). The filter allegedly induced users to drive recklessly while recording videos, and the plaintiffs’ harm arose from reckless driving, which flowed directly from the alleged design defect. Because the plaintiffs’ causes of action had nothing to do with the content of the messages sent or received by the users (Snap itself created the filter), section 230 did not bar them.
With respect to the applicability of Force, the Second Circuit there held that section 230 barred claims seeking to hold Facebook liable for algorithms that recommended third-party content to users (see generally Force, 934 F3d 53).
To the extent that Chief Judge Katzmann concluded that Facebook’s content-recommendation algorithms similarly deprived Facebook of its status as a publisher of third-party content within the meaning of section 230, we decline to follow his dissenting view.
In the broader context, the dissenters accept plaintiffs’ assertion that these actions are about the shooter’s “addiction” to social media platforms, wholly unrelated to third-party speech or content. We come to a different conclusion. As we read them, the complaints, from beginning to end, explicitly seek to hold the social media defendants liable for the racist and violent content displayed to the shooter on the various social media platforms. Plaintiffs do not allege, and could not plausibly allege, that the shooter would have murdered Black people had he become addicted to anodyne content, such as cooking tutorials or cat videos.1
Instead, plaintiffs’ theory of harm rests on the premise that the platforms of the social media defendants were defectively designed because they failed to filter, prioritize, or label content in a manner that would have prevented the shooter’s radicalization. Given that plaintiffs’ allegations depend on the content of the material the shooter consumed on the Internet, their tort causes of action against the social media defendants are “inextricably intertwined” with the social media defendants’ role as publishers of third-party content (M.P., 127 F4th at 525).
If plaintiffs’ causes of action were based merely on the shooter’s addiction to social media, which they are not, they would fail on causation grounds. It cannot reasonably be concluded that the allegedly addictive features of the social media platforms (regardless of content) caused the shooter to commit mass murder, especially considering the intervening criminal acts by the shooter, which were “not foreseeable in the normal course of events” and therefore broke the causal chain (Tennant v Lascelle, 161 AD3d 1565, 1566 [4th Dept 2018]; see Turturro v City of New York, 28 NY3d 469, 484 [2016]). It was the shooter’s addiction to white supremacy content, not to social media in general, that allegedly caused him to become radicalized and violent.2
At stake in these appeals is the scope of the protection afforded by section 230.
We believe that the motion court’s ruling, if allowed to stand, would gut the immunity provisions of section 230.
Although the motion court stated that the social media defendants’ section 230 arguments could be renewed at a later stage of the litigation, the statute “creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service” (Zeran, 129 F3d at 330), and that immunity is of little value if the actions are permitted to proceed past the pleadings.
While everyone of goodwill condemns the shooter’s actions and the vile content that motivated him to assassinate Black people simply because of the color of their skin, there is in our view no reasonable interpretation of section 230 that would permit plaintiffs’ causes of action against the social media defendants to proceed.
We therefore reverse the orders in appeal Nos. 1, 3, 5, and 6. Inasmuch as the complaint in appeal No. 2 and the amended complaint in appeal No. 4 were superseded by an amended complaint and a second amended complaint, respectively, the appeals in appeal Nos. 2 and 4 must be dismissed (see Carcone v Noon [appeal No. 1], 214 AD3d 1306, 1306 [4th Dept 2023]). In light of our determination, the remaining contentions advanced by the social media defendants need not be considered.
All concur except Bannister and Nowak, JJ., who dissent and vote to affirm in the following dissenting opinion: “[W]hy do I always have trouble putting my phone down at night? . . . It’s 2 in the morning . . . I should be sleeping . . . I’m a literal addict to my phone[.] I can’t stop cons[u]ming.” These are the words of a teenager who, on May 14, 2022, drove more than 200 miles to Buffalo to shoot and kill 10 people and injure three more at a grocery store in the heart of a predominantly Black community.
Plaintiffs in these consolidated appeals allege that the shooter did so only after years of exposure to the online platforms of the so-called “social media defendants”—Meta Platforms, Inc., formerly known as Facebook, Inc.; Instagram LLC; Snap, Inc.; Alphabet, Inc.; Google, LLC; YouTube, LLC; Discord, Inc.; Reddit, Inc.; Twitch Interactive, Inc.; Amazon.com, Inc.; and 4chan Community Support, LLC (collectively, defendants)—platforms that, according to plaintiffs, were defectively designed. Plaintiffs allege that defendants intentionally designed their platforms to be addictive, failed to provide basic safeguards for those most susceptible to addiction—minors—and failed to warn the public of the risk of addiction. According to plaintiffs, defendants’ platforms did precisely what they were designed to do—they targeted and addicted minor users to maximize their engagement. Plaintiffs allege that the shooter became more isolated and reclusive as a result of his social media use and addiction, and that his addiction, combined with his age and gender, left him particularly susceptible to radicalization and violence—culminating in the tragedy in Buffalo. For the purposes of defendants’ motions to dismiss, those allegations must be accepted as true.
Little assumption is required in this case, however. The shooter all but admitted those facts to be true.
Inasmuch as plaintiffs’ collective strict products liability causes of action predicate liability on the allegedly defective design of the platforms themselves—and the concomitant failure to warn of the risks of addiction in young people—plaintiffs do not seek to hold defendants liable for any third-party content; thus, we conclude that those causes of action do not implicate section 230.
At the outset, we reject the foundation upon which the majority’s opinion is built—that plaintiffs’ causes of action necessarily seek to hold defendants responsible for radicalizing the shooter given their status “as the publisher[s] or speaker[s] of any information provided by another information content provider” (47 USC § 230 [c] [1]).
The operative complaints, when read as a whole, as they must be,3 also allege that defendants’ platforms are “products” subject to strict products liability that are addictive—not based upon the third-party content they show but because of the inherent nature of their design. Specifically, plaintiffs allege that defendants’ platforms: “prey upon young users’ desire for validation and need for social comparison,” “lack effective mechanisms . . . to restrict minors’ usage of the product,” have “inadequate parental controls” and age verification tools that facilitate unfettered usage of the products, and “intentionally place[ ] obstacles to discourage cessation” of the applications. Plaintiffs allege that the various platforms “send push notifications and messages throughout the night, prompting children to re-engage with the apps when they should be sleeping.” They further allege that certain products “autoplay” video without requiring the user to affirmatively click on the next video, while others permit the user to “infinite[ly]” scroll, creating a constant stream of media that is difficult to close or leave.
Plaintiffs assert that defendants had a duty to warn the public at large and, in particular, minor users of their platforms and their parents, of the addictive nature of the platforms. They thus claim that defendants could have utilized reasonable alternate designs, including: eliminating “autoplay” features or creating a “beginning and end to a user’s ‘[f]eed’ ” to prevent a user from being able to “infinite[ly]” scroll; providing options for users to self-limit time used on a platform; providing effective parental controls; utilizing session time notifications or otherwise removing push notifications that lure the user to re-engage with the application; and “[r]emoving barriers to the deactivation and deletion of accounts.” These allegations do not seek to hold defendants liable for any third-party content.
For instance, in Lemmon v Snap, Inc. (995 F3d 1085 [9th Cir 2021]), the Ninth Circuit explained that “[t]he duty to design a reasonably safe product is fully independent of [the defendant’s] role in monitoring or publishing third-party content” (id. at 1093). Contrary to the majority’s conclusion, the design choices at issue in Lemmon—in particular, a “Speed Filter” (id. at 1088)—are no different from the design choices at issue here. Both seek to hold the product designer liable for choices that implicate the manner in which users engage with the platform, rather than the content contained thereon. A “Speed Filter,” which encourages users to travel at high rates of speed while utilizing the application (id.), is no different from push notifications encouraging the user to re-engage with the platform at all hours of the night, or design features that “autoplay” video and permit the user to “infinite[ly]” scroll—both encourage users to engage with the application unsafely.
Similarly, in A.M. v Omegle.com, LLC (614 F Supp 3d 814 [D Or 2022]), the District Court held that section 230 did not bar design defect and failure to warn claims arising from the platform’s random pairing of minors with adult strangers, because those claims targeted the design of the platform itself rather than any third-party content (see id. at 819).
In our view, the majority’s reliance on M.P. By & Through Pinckney v Meta Platforms, Inc. (127 F4th 516 [4th Cir 2025]) is misplaced. There, a white supremacist shot and killed nine Black people at Mother Emanuel AME Church in Charleston, South Carolina, and the plaintiff—whose father was murdered inside the church—sued a number of social media platforms claiming that the shooter “was ‘radicalized online by white supremacist propaganda that was directed to him by the [d]efendants’ ” (id. at 521 [emphasis added]). The plaintiff alleged that the shooter’s “emotional desensitization” and radicalization were caused by “repeated” and “extended” exposure to “inflammatory[,] . . . extremist content,” and thus sought to hold the defendants responsible for the third-party content that the shooter viewed (id. [internal quotation marks omitted]). Unlike the plaintiff in M.P., plaintiffs here seek to hold defendants responsible for failing to provide reasonable safeguards to prevent addiction in minors using their platforms, and in failing to warn of the risks of such addiction. Those claims do not seek to hold defendants responsible for the content the shooter viewed, and indeed, plaintiffs are “not claiming that [defendants] needed to review, edit, or withdraw any third-party content” to remediate the platforms’ allegedly defective design (A.M., 614 F Supp 3d at 819).
In short, we agree with the reasoning set forth in Lemmon, A.M., and In re Social Media Adolescent Addiction/Personal Injury Prods. Liab. Litig. (702 F Supp 3d 809 [ND Cal 2023]) and conclude that social media platforms are “products” subject to strict products liability in New York. “[W]hen considering whether strict products liability attaches, the question of whether something is a product is often assumed; none of our strict products liability case law provides a clear definition of a ‘product’ ” (Matter of Eighth Jud. Dist. Asbestos Litig., 33 NY3d 488, 494 [2019] [Terwilliger]). As the Court of Appeals noted, “ ‘[a]part from statutes that define “product” for purposes of determining products liability, in every instance it is for the court to determine as a matter of law whether something is, or is not, a product’ ” (id., quoting Restatement [Third] of Torts: Products Liability § 19, Comment a).
In general, the Third Restatement defines a product as “tangible personal property distributed commercially for use or consumption” (Restatement [Third] of Torts: Products Liability § 19 [a]). Here, defendants largely urge that their social media platforms are not products because they are not tangible goods. We disagree. The Third Restatement explains that even intangible items “are products when the context of their distribution and use is sufficiently analogous to the distribution and use of tangible personal property that it is appropriate to apply the rules stated in this Restatement” (id.).
That common-sense approach has been echoed by the Court of Appeals, which has recognized that the analysis of whether something is a product is inextricably “[i]ntertwined with . . . the more central question of whether the defendant manufacturer owes a duty to warn” (Terwilliger, 33 NY3d at 494). Indeed, the Court of Appeals has emphasized that in determining whether a duty should attach to a seller, the “governing factors [include] a defendant‘s control over the design of the product, its standardization, and its superior ability to know—and warn about—the dangers inherent in the product‘s reasonably foreseeable uses or misuses” (id. at 496; see Matter of New York City Asbestos Litig., 27 NY3d 765, 793, 800-801 [2016] [Dummitt]). The “overarching concern . . . is to ‘settle upon the most reasonable allocation of risks, burdens and costs among the parties and within society, accounting for the economic impact of a duty, pertinent scientific information, the relationship between the parties, the identity of the person or entity best positioned to avoid the harm in question, the public policy served by the presence or absence of a duty and the logical basis of a duty’ ” (Terwilliger, 33 NY3d at 495-496, quoting Dummitt, 27 NY3d at 788).
We recognize that tort liability is not open-ended (see generally Espinal v Melville Snow Contrs., 98 NY2d 136, 139 [2002]), nor should it be. However, in this case, logic and the law compel the conclusion that the social media platforms in question are products, and that the manufacturers of those products can be held liable in products liability (see Dummitt, 27 NY3d at 793, 800-801). Defendants are multi-billion-dollar corporations that derive their revenue from maximizing user engagement on their platforms. They alone control the manufacture and distribution of their respective social media platforms. They are uniquely positioned to know of—and prevent—the harm posed by social media addiction generally and specifically in minors. Once injected into the stream of commerce, their platforms are uniform for all users. That users exchange their data as opposed to currency to use those platforms does not, in our view, vitiate their true nature as products. Thus, and as set forth above, we agree with the courts that have concluded that social media platforms (see A.M., 614 F Supp 3d at 819; see also Lemmon, 995 F3d at 1093; In re Social Media Adolescent Addiction/Personal Injury Prods. Liab. Litig., 702 F Supp 3d at 854) and ride-sharing platforms (see Brookes v Lyft Inc., 2022 WL 19799628 at *2-3 [Fla Cir Ct 2022]) are products subject to products liability law.
Though we conclude that plaintiffs’ products liability allegations sounding in design defect and failure to warn generally do not implicate section 230, we nevertheless address the scope of the statute’s immunity.
As the majority notes, section 230 shields providers of interactive computer services from liability for information provided by another information content provider.
As the Court of Appeals recognized in Shiamili, “[s]ervice providers are only entitled to . . . immunity . . . where the content at issue is provided by ‘another information content provider’ . . . It follows that if a defendant service provider is itself the ‘content provider,’ it is not shielded from liability” (17 NY3d at 289). Inasmuch as “any party ‘responsible . . . in part’ for the ‘creation or development of information’ ” is a content provider, “any piece of content can have multiple providers” (id.). While the Court of Appeals expressly declined to decide whether to interpret the term “development” broadly or narrowly in Shiamili (see id. at 290), we conclude that the use of design functions, such as algorithmic models that “autoplay” videos or create an “infinite feed,” constitutes the “creation or development of information” that would render defendants first-party content providers and, thus, not immune from liability under section 230.
To that end, we conclude that, as Chief Judge Katzmann stated in his dissent in Force v Facebook, Inc. (934 F3d 53 [2d Cir 2019], cert denied — US —, 140 S Ct 2761 [2020]), “it strains the English language to say that in targeting and recommending [certain content] to users . . . [defendants here are merely] acting as ‘the publisher[s] of . . . information provided by another information content provider’ ” (id. at 76-77 [Katzmann, Ch. J., concurring in part and dissenting in part], quoting 47 USC § 230 [c] [1]).
The conduct at issue in this case is far from any editorial or publishing decision; defendants utilize functions, such as machine learning algorithms, to push specific content on specific individuals based upon what is most apt to keep those specific users on the platform. Some receive cooking videos or videos of puppies, while others receive white nationalist vitriol, each group entirely ignorant of the content foisted upon the other. Such conduct does not “maintain the robust nature of Internet communication” or “preserve the vibrant and competitive free market that presently exists for the Internet” contemplated by the protections of immunity (Force, 934 F3d at 63 [internal quotation marks omitted]) but, rather, only serves to further silo, divide and isolate end users by force-feeding them specific, curated content designed to maximize engagement.
The majority concludes, based upon Moody v NetChoice, LLC (603 US 707 [2024]), that even if plaintiffs seek to hold defendants liable for their own first-party content, such conduct is protected by the First Amendment. We disagree. First and foremost, the language defendants rely upon is dicta and has no binding force or effect upon this Court. Second, Moody involved two different state laws that curtailed social media companies’ ability to engage in content moderation (see id. at 720-721); in essence, the laws compelled the social media companies to “carry and promote user speech that they would rather discard or downplay” (id. at 728). Government-imposed content moderation laws that specifically prohibit social media companies from exercising their right to engage in content moderation are a far cry from private citizens seeking to hold private actors responsible for their defective products in tort.
The implications of such a vast expansion of First Amendment jurisprudence cannot be overstated. Taken to its furthest extent, the majority essentially concludes that every defendant would be immune from all state law tort claims involving speech or expressive activity. If the majority is correct, there could never be state tort liability for failing to warn of the potential risks associated with a product, for insisting upon a warning would be state-compelled speech in violation of the First Amendment. Nor could there ever be liability for failing to obtain a patient’s informed consent in a medical malpractice action, for the defendant physician’s explanation of the procedure, its alternatives, and the reasonably foreseeable risks and benefits of each proposed course of action necessarily implicates the defendant physician’s First Amendment rights. That simply cannot be the case.
Finally, inasmuch as proximate causation “is for the finder of fact to determine” (Derdiarian v Felix Contr. Corp., 51 NY2d 308, 315 [1980], rearg denied 52 NY2d 784 [1980]), we conclude that plaintiffs have stated sufficient facts to withstand defendants’ various motions to dismiss, and we would therefore affirm the order.
Ann Dillon Flynn
Clerk of the Court
