IN RE: SOCIAL MEDIA ADOLESCENT ADDICTION/PERSONAL INJURY PRODUCTS LIABILITY LITIGATION
702 F.Supp.3d 809
N.D. Cal. 2023

Background
- Hundreds of individual suits consolidated as MDL No. 3047, brought by children and adolescents (plus parallel school‑district and state attorney‑general actions), challenge Facebook/Instagram (Meta), YouTube (Google), TikTok (ByteDance), and Snapchat (Snap) over platform designs that allegedly cause addiction, mental‑health harms, sexual exploitation, and the spread of CSAM.
- Plaintiffs filed a nearly 300‑page Master Amended Complaint (MAC) identifying five priority claims: strict liability for design defect and for failure to warn under New York law; negligent design and negligent failure to warn under Georgia law; and negligence per se under Oregon law (premised on alleged COPPA/Protect Act violations).
- Defendants moved to dismiss on two overarching grounds, (1) Section 230 immunity and (2) First Amendment protection, and separately argued that the platforms are not "products" for purposes of products‑liability law and that plaintiffs failed to adequately plead duty and causation.
- The MAC alleges discrete, content‑agnostic platform features as defects, for example: weak age verification and parental controls; in‑app filters and unlabeled edited images; barriers to account deletion; inadequate default or opt‑in screen‑time limits; CSAM‑reporting mechanics; and the timing of notifications and recommendation algorithms. The Court analyzed each alleged defect individually.
- The Court granted the motions in part and denied them in part. It held that Section 230 and the First Amendment do not categorically bar many product‑based claims (including negligence per se), but it found immunity or First Amendment protection for claims tied directly to publishing or recommending third‑party content, or to the timing of third‑party notifications. Product claims targeting a subset of design features may proceed; claims asserting a duty to prevent all third‑party misconduct were dismissed with leave to amend.
Issues
| Issue | Plaintiff's Argument | Defendant's Argument | Held |
|---|---|---|---|
| Applicability of Section 230 to product/design claims | Claims target platform design and tools, not editorial publishing, so §230 does not bar them | §230 bars the claims because the alleged injuries flow from publication, recommendation, or algorithmic presentation of third‑party content | Conduct‑specific approach: §230 does not bar claims tied to content‑agnostic design features (age verification, parental controls, filters, account deletion, reporting mechanics, certain platform‑created notifications), but it does bar claims that would require altering the publication or recommendation of third‑party content, including algorithmic promotion, geolocation publishing, private‑message publication, and certain feed/autoplay features |
| First Amendment defense | Many alleged defects are non‑speech design choices; warnings and parental tools are not protected expression | Tort liability would chill protected publication and dissemination choices (timing, selection, recommendation) | The First Amendment bars claims to the extent they would require changing when or how defendants publish or give notice of third‑party content (notably the timing and clustering of notifications and decisions about publishing third‑party content); other non‑speech design claims survive |
| Are platforms or specific features "products" for products‑liability law? | Specific functionalities are product components (analogous to tangible controls, timers, maps, and filters) and thus subject to strict liability and negligence | Platforms are services or interactive communication tools, or intangible expression, so the products doctrine is inapplicable | Functional, feature‑by‑feature analysis: many discrete features (age verification, parental controls and notifications, screen‑time controls, account‑deletion flows, filters and labeling tools, including speed/overlay‑style filters, and CSAM‑reporting mechanics) are treated as product components for pleading purposes; features tied to editorial choices are not products |
| Duty to prevent third‑party misconduct (e.g., predators/CSAM) | Defendants knew their platforms attract predators; design choices foreseeably increase that risk, so a duty exists | No special relationship; absent affirmative risk‑creating conduct there is no duty to control third parties, and the misfeasance standard is not met | The MAC does not plausibly allege misfeasance sufficient to impose a general duty to prevent third‑party criminal acts; product‑based duties to design reasonably safe features and to warn remain, but claims premised on a broad duty to stop third‑party wrongdoing were dismissed with leave to amend |
Key Cases Cited
- Barnes v. Yahoo!, Inc., 570 F.3d 1096 (9th Cir. 2009) (three‑part §230 test: interactive computer service; claim treats defendant as publisher or speaker; information provided by another)
- Doe v. Internet Brands, Inc., 824 F.3d 846 (9th Cir. 2016) (failure‑to‑warn claim not barred where the duty arose from knowledge independent of content monitoring)
- Lemmon v. Snap, Inc., 995 F.3d 1085 (9th Cir. 2021) (liability for a non‑publishing design choice, the speed filter, is not barred by §230)
- Fair Hous. Council v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008) (en banc) (distinguishes neutral tools from content development; platform prompts that materially contribute to content can defeat §230)
- Dyroff v. Ultimate Software Grp., Inc., 934 F.3d 1093 (9th Cir. 2019) (algorithmic recommendations and notifications can be neutral tools entitled to §230 immunity)
- Carafano v. Metrosplash.com, Inc., 339 F.3d 1119 (9th Cir. 2003) (§230 aims to encourage online speech and voluntary monitoring; editorial functions are protected)
- Winter v. G.P. Putnam's Sons, 938 F.2d 1033 (9th Cir. 1991) (ideas and expression are not "products" for strict products‑liability purposes)
- Ileto v. Glock, Inc., 349 F.3d 1191 (9th Cir. 2003) (a duty may arise where a manufacturer's conduct creates or substantially contributes to an illegal secondary market and foreseeable third‑party misuse)
- Twitter, Inc. v. Taamneh, 598 U.S. 471 (2023) (limits on tort liability for platform facilitation of third‑party wrongdoing)
