IN RE: SOCIAL MEDIA ADOLESCENT ADDICTION/PERSONAL INJURY PRODUCTS LIABILITY LITIGATION
Case No. 4:22-md-03047-YGR
United States District Court, Northern District of California
November 14, 2023
Hon. Yvonne Gonzalez Rogers
MDL NO. 3047; Re: Dkt. Nos. 237 & 320; This Document Relates to: Individual Plaintiffs’ Master Amended Complaint
ORDER GRANTING IN PART AND DENYING IN PART DEFENDANTS’ MOTIONS TO DISMISS
This Order addresses the first wave of legal arguments stemming from the filing, on behalf of children and adolescents, of hundreds of individual cases across the United States against the companies operating five of the world’s most used social media platforms: Meta’s Facebook and Instagram, Google’s YouTube, ByteDance’s TikTok, and Snap’s Snapchat.1 Notably, this multi-district litigation (“MDL”) encompasses, in addition to individual suits, over 140 actions brought on behalf of school districts and actions filed jointly by over thirty state Attorneys General. While plaintiffs’ complaint asserts eighteen claims against defendants, this Order addresses only defendants’ motions to dismiss the individual plaintiffs’ five priority claims.
MDL courts frequently phase motion to dismiss briefing to determine first whether the gravamen of the complaint can proceed to discovery; legal analysis of the remaining claims then proceeds in parallel. Here, defendants were adamant that the entirety of the complaint should be dismissed under Section 230 of the Communications Decency Act and the First Amendment.
A Table of Contents outlining the organization of this Order is attached as Appendix A for the reader’s convenience.
I. BACKGROUND
A. Procedural Background
Plaintiffs’ Master Amended Complaint (hereinafter, “MAC”) is nearly 300 pages long and asserts eighteen claims brought under various state laws on behalf of hundreds of plaintiffs. For efficiency, the Court required plaintiffs to identify their five priority claims and preferred state law. (See Dkt. No. 131.) They are:
- Claim 1: Strict Products Liability Design Defect – New York
- Claim 2: Strict Products Liability Failure to Warn – New York
- Claim 3: Product-Based Negligent Design Defect – Georgia
- Claim 4: Product-Based Negligent Failure to Warn – Georgia
- Claim 5: Negligence Per Se – Oregon
Defendants filed two separate consolidated dismissal motions in response to the MAC. The first (hereinafter referred to as “MTD1”) addressed whether plaintiffs have legally stated each of the five priority claims. The second (hereinafter referred to as “MTD2”) focused on immunity and protections under Section 230 of the Communications Decency Act and the First Amendment.
B. Relevant Facts Alleged
The Court focuses on the allegations relevant to the pending motions. The Master Amended Complaint alleges as follows:
Defendants are companies that own and operate “social media” platforms.3 (MAC ¶ 1.) Each platform allows users to create profiles and to share content including messages, videos, and photos. Significantly, these platforms are more than mere message boards or search engines. In addition to enabling users to look for content, or to send content to other specific users, in many instances, the platforms determine when and to whom certain content is shown. As noted, the platforms here at issue are Facebook and Instagram, both operated by Meta; Snapchat; TikTok; and YouTube.
i. Defect Allegations
As pled, defendants target children as a core market and designed their platforms to appeal to and addict them.4 Because children still developing impulse control are uniquely susceptible to harms arising out of compulsive use of social media platforms, defendants have “created a youth mental health crisis” through the defective design of their platforms. (Id. at ¶ 96.) Further, these platforms facilitate and contribute to the sexual exploitation and sextortion of children,5 as well as the ongoing production and spread of child sex abuse materials (“CSAM”) online.
To that end, defendants know that children use their products, both from public and internal data. (See, e.g., id. at ¶ 60.) Indeed, the ability to estimate a user’s age and other characteristics increases the value of defendants’ platforms to advertisers. (Id. at ¶¶ 60–61.) Further, defendants specifically try to cultivate children as users. They believe that early adoption of their platforms will increase the likelihood a child will continue to use the platform as they age. Given their susceptibility to the addictive elements of the platforms, adolescents are more likely to use them for long periods of time, allowing defendants to sell more space to advertisers. (See, e.g., id. at ¶ 54.) Millions of children use defendants’ platforms “compulsively.” Many report that they feel they are addicted to the platforms, wish they used them less, and feel harmed by them. (Id. at ¶¶ 91–95.)
Defendants are also aware that their platforms harm child users. (Id. at ¶ 99.) Beginning in at least 2014, researchers began demonstrating that addictive and compulsive use of defendants’ platforms leads to negative mental and physical outcomes for children. (Id. at ¶ 101; see also id. at ¶¶ 96–124 (discussing the nearly a decade’s worth of “scientific and medical studies” linking compulsive use of defendants’ platforms to negative health outcomes).) At least some defendants also knew about these harms from internal data and studies. (See, e.g., id. at ¶¶ 181–85 (as to Meta).)
The MAC describes myriad ways in which the design of defendants’ platforms causes the harms described above. These aspects, or functions, include:
Endless-content: This describes the “endless feeds” of content shown to users via defendants’ platforms. (Id. at ¶ 845(i).) One example is Facebook’s “News Feed,” which presents users a continuous feed of stories, advertisements, and other content, and which never ends. (Id. at ¶ 202; see also id. at ¶¶ 494, 496 (as to analogous Snapchat features); 584–85, 591–92 (as to “continuous scrolling” via TikTok’s “For You” page).)
Lack of Screen Time Limitations: These designs concern maximizing the length of user sessions and the lack of default or user-imposed protections to limit session duration, such as by time of day or frequency of use. (Id. at ¶¶ 845(e)–(h), (j).) For instance, the TikTok app “intentionally omits the concept of time.” The app does not show users the time or date a video was uploaded, and the app “is designed to cover the clock displayed at the top of users’ iPhones, preventing them from keeping track of time spent” in the app. (Id. at ¶¶ 621–22.)6
Intermittent Variable Rewards or “IVR”: Here, defendants designed algorithms to strategically time when they show content to users in order to maximize engagement. (Id. at ¶ 845(l); see also ¶¶ 77–81 (explaining how IVR works, generally, and how defendants deploy IVR on their platforms).) For example, Instagram may wait until a piece of content receives multiple likes before notifying the user who posted it. That way, the user’s dopamine reaction is intensified after viewing the notification. (Id. at ¶ 79.) TikTok may similarly delay showing a video it knows a user will like until the moment before it anticipates the user would otherwise log out of the app. (Id.)
Ephemeral Content: To create a sense of urgency for users to engage with content, some defendants limit how long certain content is available. (Such content is sometimes referred to as “ephemeral” given its “disappearing” nature.)7 For example, the defining feature of Snapchat is the ability to send and receive “Snaps,” photo or video messages that disappear within a short period of time. (Id. at ¶ 444; see also id. at ¶¶ 294 (as to ephemeral Instagram and Facebook “Stories”); 626–27 (as to disappearing TikTok “Stories”).) Such content also makes it harder to track the spread of CSAM and enables coercive, predatory behavior toward children. (See, e.g., id. at ¶ 523 (describing how Snapchat’s ephemeral content contributes to such harms).)
Limitations on Content Length: The length of content that can be posted is limited to optimize use. (See, e.g., id. at ¶ 224 (as to Instagram videos of up to fifteen seconds long).)
Notifications: Defendants send users notifications on their phones, by text and by email, to draw them back to their respective platforms. For example, the platform may alert users when someone they follow creates new content, or when someone reacts to their content. (Id. at ¶¶ 292–93 (as to Meta).) This includes pushing notifications to users late at night, prompting them to re-engage with the app no matter the cost to their sleep schedule. (Id. at ¶¶ 103 (as to defendants, generally); 488 (as to Snap).) Some defendants also notify users of content created by defendants themselves. For example, Snap rewards continuous engagement with the app by providing “elevated status,” “trophies,” and other awards to frequent users. (Id. at ¶¶ 439, 468 (describing the range of social metrics through which Snapchat “reward[s] users when they engage with [the app]”).)
Algorithmic Prioritization of Content: Defendants use engagement-based algorithms that promote content to users based on the likelihood it will keep them engaged with and using the platform, rather than post content as specifically directed by users or in chronological order. (See, e.g., id. at ¶¶ 227 (as to Instagram); 200 (as to Facebook).) For instance, TikTok tracks user behavior, such as time spent on a given video, so that it can provide a “never-ending stream of TikToks optimized to hold [users’] attention.” (Id. at ¶ 585 (citation omitted) (alteration in original).) Plaintiffs allege this can be harmful to children not only because it promotes compulsive use, but because it may expose children to “rabbit holes” of harmful or inappropriate content. For example, a child experiencing depression may spend time on a video about suicide and then find themselves receiving an increasing number of suicide-related videos. (Id. at ¶¶ 597–601.)
Filters: Defendants provide users with tools, such as filters, so that they can edit photos and videos before posting and/or sharing them. This enables the proliferation of “idealized” content reflecting “fake appearances and experiences,” resulting in, among other things, “harmful body image comparisons.” (Id. at ¶ 88.) For example, Snapchat includes “lenses and filters” that allow users to “blur[] imperfections,” “even[] out skin tone,” and alter facial features and skin color. (Id. at ¶¶ 513–14; see also id. at ¶¶ 314–17 (explaining how Instagram filters enable users to make “improvements” to their appearance, resulting in a range of social comparison, self-hatred, and other harms).) Exposure to these filtered, sometimes unrealistic images creates body image and self-esteem issues among youth users. At present, defendants do not inform users when an image has been altered through filters or otherwise edited. As a result, young users are unable to distinguish edited from unedited content. (Id. at ¶ 845(k); see also id. at ¶ 318 (as to Meta).)
Barriers to Deletion: Each defendant makes it more challenging to delete and/or deactivate a user account than to create one in the first place, thereby creating barriers to children discontinuing use of defendants’ apps, even if they want to. (See, e.g., id. at ¶¶ 358–60 (as to Facebook); 489–90 (as to Snapchat); 638–48 (as to TikTok); 774–77 (as to YouTube).) For instance, a user seeking to delete or deactivate their Facebook or Instagram account “must locate and tap on approximately seven different buttons (through seven different pages and popups) from th[eir] main feed[s].” (Id. at ¶ 359.) Yet even after navigating that process, they are not able to immediately delete or deactivate their account; instead, Meta imposes a 30-day waiting period during which a user can reactivate their account simply by logging in. (Id. at ¶ 360.)
Connection of Child and Adult Users: Some platforms “recommend minor accounts to adult strangers.” (Id. at ¶ 845(u).) These include “quick add” functions that recommend that users “friend,” “follow,” or otherwise connect with other users. These features recommend connections between adult and child users, facilitating the exploitation of children by adult predators. (See, e.g., id. at ¶ 198 (as to Facebook); ¶¶ 509–10 (as to Snapchat).)
Private Chats: Some defendants have private chat functions, which can be harmful to children as they further enable private communication with adult predators. (See, e.g., id. at ¶¶ 197 (as to Facebook); 225 (as to Instagram).)
Geolocation: Some defendants allow children to share their location with other users, such as by geotagging posts. This too can be used by predators. (See, e.g., id.
Age-Verification: Defendants either do not require users to enter their age upon sign-up or do not have effective age-verification for users, even though such verification technology is readily available and, in some instances, used by defendants in other contexts.8 For example, Meta purports not to allow children under thirteen to access Facebook. The platform relies on a user’s self-reported age when they sign up for the platform to enforce this policy. When a user enters a birthdate showing they are under thirteen years old, they will be blocked from completing the registration process. However, immediately thereafter, the platform permits them to recomplete the sign-up form, enter an earlier birthday (even if it does not accurately reflect their age), and create an account. (Id. at ¶¶ 328–32.) Snapchat’s age verification systems are similarly defective. (Id. at ¶ 461.)
Lack of Parental Controls: Defendants offer parents limited tools for controlling their children’s access to and use of their respective platforms. Further, their apps do not require parental consent for children to create new accounts, or, where parental consent is required for child-users, children can easily circumvent the requirement by inputting a fake age, as described above. Where the platforms provide tools for parents to control or monitor their child’s use, the tools are inadequate. For example, Snapchat allows parents to “link” to a child’s account and see with whom they communicate, but the app does not enable parents to see what messages are being sent or to control access to many of the app’s features. (Id. at ¶ 522.)
The failure to warn claims are similarly based on the above-referenced alleged defects in defendants’ platforms. (See id. at ¶¶ 431–37 (Meta); 543–53 (Snap); 675–89 (TikTok); 812–19 (YouTube).)
ii. Allegations Regarding Causation and Harm
The MAC contains two categories of allegations relative to causation. As to the first, plaintiffs allege that the “defective features” of defendants’ platforms caused their negative physical, mental, and emotional health outcomes, such as anxiety, depression, and self-harm. (See generally id. at ¶ 90.) They support these allegations by making three logical moves. First, they explain, in great detail, how defendants’ platforms work. Second, they assert these platforms are designed to (and in fact do) addict minor users. Third, they show that compulsive use of such platforms results in the harms alleged. (See generally id. at ¶¶ 181–437 (Meta); 438–553 (Snap); 554–689 (TikTok); 690–819 (Google).) As to the second, the MAC is also replete with references to research studies tying use of defendants’ platforms to the types of injuries alleged by plaintiffs. (See id. at ¶ 101; see also id. at ¶¶ 96–124 (collecting studies).)9
iii. Negligence Per Se Allegations Relative to Section 230 & the First Amendment
Here, the Court notes that the MAC alleges claims for negligence per se based on defendants’ violations of two federal statutes, the
In general, plaintiffs allege defendants violate
Plaintiffs allege defendants violate the
II. LEGAL FRAMEWORK
A. Law to Apply in an MDL
In an MDL, the transferee court applies the law of its circuit to issues of federal law, but on issues of state law it applies the state law that would have been applied to the underlying case as if it had never been transferred into the MDL. In re Anthem, Inc. Data Breach Litig., 2015 WL 5286992, at *2 (N.D. Cal. Sept. 9, 2015) (collecting cases). This may require a court to apply different law to the individual cases within the MDL. See In re Dow Co. Sarabond Prods. Liab. Litig., 666 F. Supp. 1466, 1468–70 (D. Colo. 1987) (applying the law of four different circuits to different cases in the same MDL).
B. Motion to Dismiss Standard
The standard under Federal Rule of Civil Procedure 12(b)(6) is well established: to survive a motion to dismiss, a complaint must contain sufficient factual matter, accepted as true, to state a claim to relief that is plausible on its face. In conducting this analysis, the Court accepts plaintiffs’ well-pled factual allegations as true and draws all reasonable inferences in their favor.
C. Organization of Analysis
The claims at issue raise multiple broad and distinct theories of harm regarding a wide variety of alleged conduct by defendants. In the interest of efficiency and clarity, this Order is organized as follows:
The Court first addresses the extent to which plaintiffs’ priority claims are barred, if at all, by Section 230 (Section III) and the First Amendment (Section IV).
Next, the Court assesses whether plaintiffs have stated their products liability claims in terms of the existence of a product (Section V), duty (Section VI), and general causation (Section VII).
III. SECTION 230
A. Section 230(c)(1) Overview
Defendants contend that Section 230(c)(1) of the Communications Decency Act (the “CDA”) immunizes them from liability on plaintiffs’ priority claims in their entirety.
The Court begins with the statute which provides:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.11
By way of background, prior to passage of the relevant portions of the CDA, courts had held that online service providers who moderated some third-party content could be treated as publishers of, and held liable for, all of the third-party content on their services, a framework that disincentivized platforms from self-regulating.
Through the CDA, Congress sought to remove those disincentives to self-regulation and to promote the continued development of the Internet.
B. Tests to Determine Applicability of Immunity Protections
i. The Barnes Test
The Ninth Circuit has articulated a three-part test for determining if a claim is entitled to Section 230 immunity:
Section 230(c)(1) of the CDA “only protects from liability (1) a provider or user of an interactive computer service (2) whom a plaintiff seeks to treat, under a state law cause of action, as a publisher or speaker (3) of information provided by another information content provider.”
Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1100 (9th Cir. 2009), as amended (Sept. 28, 2009) (footnotes omitted).12 Hereinafter, the Court refers to this as the Barnes test.
Here, plaintiffs contend that defendants cannot satisfy the second prong. The Court thus directs the bulk of its analysis there.
a. Prong 1: Interactive Computer Services and Information Content Providers
Prong one requires that the defendant be a provider or user of an “interactive computer service.” As this is not disputed, the Court only briefly addresses this prong.
“The term ‘information content provider’ means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”
b. Prong 2: Publisher or Speaker
The second prong of the Barnes test focuses on “whether ‘the duty the plaintiff alleges’ stems ‘from the defendant’s status or conduct as a publisher or speaker.’” Lemmon v. Snap, Inc., 995 F.3d 1085, 1091 (9th Cir. 2021) (quoting Barnes, 570 F.3d at 1107). A claim meets this prong where the claim is based on “behavior that is identical to publishing or speaking.” Barnes, 570 F.3d at 1107 (emphasis supplied). Critically, the CDA does not declare “a general immunity from liability deriving from third-party content.” Id. at 1100.
Thus, Doe v. Internet Brands, Inc., 824 F.3d 846 (9th Cir. 2016), instructs. There, the plaintiff was an aspiring model who was assaulted at a fake audition third parties had posted on defendant’s website. The plaintiff alleged that defendant was aware of the third parties’ scheme to assault models using the website, but failed to provide any warning to users such as plaintiff. Importantly, defendant’s alleged awareness stemmed “from an outside source, not from monitoring postings” on their site. Id. at 849. The court held that Section 230 did not bar the failure to warn claim: providing a warning would not have required the defendant to remove or alter any third-party content posted on its website.
Further, in Lemmon v. Snap, Inc., 995 F.3d 1085, 1089 (9th Cir. 2021), the Ninth Circuit reversed a district court that found Section 230 barred a negligent design claim based on Snapchat’s “Speed Filter.” The duty underlying that claim, the court explained, arose from Snap’s conduct in designing its own product, not from its status or conduct as a publisher of third-party content.
With respect to the term “publishing” itself, courts understand it to mean “deciding whether to publish or to withdraw from publication third-party content.” Id. The most basic example of online publishing is thus a platform’s decision to display, or not to display, content submitted by a third party.
“Publishing” also includes editorial decisions and functions ancillary to the decision to make content available. Thus, publishing has been found to “involve[] reviewing [and] editing,” such as “review[ing] material submitted for publication, perhaps edit[ing] it for style or technical fluency,” Barnes, 570 F.3d at 1102, and “deciding whether to exclude material . . . .” Fair Hous. Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157, 1170–71 (9th Cir. 2008). In general, it is any conduct “rooted in the common sense and common definition of what a publisher does.” Barnes, 570 F.3d at 1102 (also “deciding whether to publish, withdraw, postpone or alter content” and other of “‘publisher’s traditional editorial functions’”) (quoting Zeran v. Am. Online, Inc., 129 F.3d 327, 331 (4th Cir. 1997)). The Ninth Circuit has also indicated that determining how to present and arrange third-party content falls within these traditional editorial functions.
c. Prong 3: “Information Provided by Another Information Content Provider”
The third prong overlaps with the prior two and concerns information provided by another information content provider. In re Apple Inc. App Store Simulated Casino-Style Games Litig., 625 F. Supp. 3d 971, 978 (N.D. Cal. 2022) (“Practically speaking, the second and third factor tend to overlap in significant ways.”) No further articulation is required.
ii. The Roommates Test
Given the complexity with which online platforms function, it is not always clear whether a platform has merely published information provided by another information content provider or has itself created or developed that information, in whole or in part, such that Section 230’s protections do not apply.
The Ninth Circuit has established a test for determining if a platform’s actions in altering or presenting content constitute development: whether the platform provides “neutral tools” for the creation or dissemination of content, which does not destroy Section 230 protection, or instead materially contributes to the alleged unlawfulness of the content, which does. See Roommates, 521 F.3d at 1167–69.
Importantly, in Roommates, the Ninth Circuit found some of the alleged conduct by defendant was neutral and protected under Section 230, while other conduct was not.
The court held that Section 230 did not protect the defendant from claims based on its requiring subscribers to disclose their sex, family status, and sexual orientation, and to state discriminatory roommate preferences, as a condition of using the service; by doing so, the defendant materially contributed to the alleged illegality of that content. By contrast, the site’s open-ended “Additional Comments” field was a neutral tool, and claims based on content supplied through it were barred.
Similarly, in Dyroff v. Ultimate Software Group, Inc., 934 F.3d 1093 (9th Cir. 2019), plaintiff brought various claims seeking to hold defendant liable for using an algorithm to recommend a message board to her son based on his past interests and for sending him notifications when others posted on the message board after he joined. Her son joined that message board and used it to purchase drugs from another user, leading to his overdose. Plaintiff alleged that the recommendation was defendant’s own content, such that defendant was not publishing another party’s content and immunity was unavailable under Barnes. The Ninth Circuit disagreed, finding that the notifications and recommendations “were content-neutral tools used to facilitate . . . user-to-user communication.”
C. Analysis
i. Parties’ “All or Nothing” Approach
As noted at the outset, defendants argue that Section 230 bars each of plaintiffs’ priority claims in its entirety; plaintiffs counter that it bars none.
Neither side persuades with its all or nothing approach. As described above, application of Section 230 turns on the specific duty underlying each theory of liability. The Court therefore analyzes the alleged defects individually.
ii. Claim 1: Negligent Design – Strict Liability and Claim 3: Negligence – Design
a. Defect Allegations Not Barred By Section 230
The Court begins with plaintiffs’ design defect products liability claims. As relevant thereto, plaintiffs make myriad allegations that do not implicate publishing or monitoring of third-party content and thus are not barred by Section 230. These include:
- Not providing effective parental controls including notification to parents that children are using the platforms (MAC ¶ 845(b)–(c));
- Not providing options to users to self-restrict time used on a platform (id. at ¶ 845(f)–(g));
- Making it challenging for users to choose to delete their account (id. at ¶ 845(m));
- Not using robust age verification (id. at ¶ 845(a));
- Making it challenging for users to report predator accounts and content to the platform (id. at ¶ 845(p));
- Offering appearance-altering filters (id. at ¶ 864(d));
- Not labelling filtered content (id. at ¶ 845(k));
- Timing and clustering notifications of defendants’ content to increase addictive use (id. at ¶ 845(l)); and
- Not implementing reporting protocols to allow users or visitors of defendants’ platforms to report CSAM and adult predator accounts specifically without the need to create or log in to the products prior to reporting (id. at ¶ 845(p)).
The Court proceeds to consider defendants’ arguments relative to the above-referenced defects, to the extent defendants addressed them specifically. For instance, defendants do not directly address application of Section 230 to the alleged failures to provide effective parental controls, user time-restriction options, or simple account-deletion processes.
Defendants’ assertion that other courts have found age-verification-targeted claims barred by Section 230 does not persuade. In the cases on which defendants rely, the alleged harms flowed from minors’ exposure to third-party content, such that age verification mattered only as a means of limiting what content was published to whom.
Here, in contrast, plaintiffs’ allegations are broader. They allege that defendants could use age-verification information to take steps that would not impact their publication of third-party content, such as by notifying parents that a child is on the site, enabling the parent to either limit the child’s access to the site or talk to them about the content they may see. Accordingly, they pose a plausible theory under which failure to validly verify user age harms users that is distinct from harm caused by consumption of third-party content on defendants’ platforms.
Again, defendants do not directly address plaintiffs’ filter-related allegations. Plaintiffs allege that defendants’ products are defective because they provide filters for children to use and because defendants do not label filtered images. Defendants ignore these allegations, arguing only that they cannot be liable for publishing content made using a filter. At the hearing, defendants did suggest that holding them liable for providing the filters is indistinguishable or inseparable from holding them liable for publishing the images created with those filters. The Court disagrees. Plaintiffs plausibly allege that the filters are harmful regardless of whether children eventually post the images that they filtered. Plaintiffs allege that children are harmed simply by creating and then seeing their own altered images. No posting or publication is necessary. There is a defect and a harm separate and apart from publication of any third-party content. Lemmon, 995 F.3d at 1092 (product-defect claim based on speed filter not barred by Section 230 where the alleged defect did not depend on publication of third-party content).
Next, Section 230 does not bar plaintiffs’ allegations concerning the timing and clustering of notifications of defendants’ own content. Where a defendant itself creates the content of a notification, it acts as an information content provider, not as the publisher of information provided by another.
Finally, defendants have not addressed how altering the ways in which they allow users and visitors to their platforms to report CSAM is barred by Section 230, and the Court discerns no basis to conclude that it is.
The motion to dismiss the product defect claims based on Section 230 is therefore DENIED to the extent those claims rest on the defects identified above.
b. Defect Allegations Barred By Section 230
By contrast, the following alleged design defects directly target defendants’ roles as publishers of third-party content and are barred by Section 230:
- Failing to put “[d]efault protective limits to the length and frequency of sessions” (MAC ¶ 845(e));
- Failing to institute “[b]locks to use during certain times of day (such as during school hours or late at night)” (id. at ¶ 845(h));
- Not providing a beginning and end to a user’s “Feed”16 (id. at ¶ 845(i));
- Publishing geolocating information for minors (id. at ¶ 845(t));
- Recommending minor accounts to adult strangers (id. at ¶ 845(u));
- Limiting content to short-form and ephemeral content, and allowing private content (id. at ¶ 864(l); briefing, passim);
- Timing and clustering of notifications of third-party content in a way that promotes addiction (id. at ¶ 845(l)); and
- Use of algorithms to promote addictive engagement (id. at ¶ 845(j)).
First, addressing the defects in paragraph 845 (e), (h), and (i) would necessarily require defendants to publish less third-party content.17 Unlike the opt-in restrictions described above, which allow users to choose to view or receive less content, but do not limit defendants’ ability to post such content on their platforms, these alleged defects would inherently limit what defendants are able to publish. Similarly, limiting the publication of geolocation data provided by users inherently targets the publishing of third-party content and would require defendants to refrain from publishing such content.18
Second, Section 230 bars the claims premised on defendants’ recommendation of minor accounts to adult strangers. User profiles are third-party content, and recommending such profiles to other users is publishing conduct.
Lemmon, supra, and A.M. v. Omegle.com, LLC, 614 F. Supp. 3d 814 (D. Or. 2022) do not compel a different result. In both, the plaintiffs alleged the defendants had violated their duty to plaintiffs through conduct other than publishing third-party content and could have met their duty without changes to publishing conduct. In Lemmon, the plaintiffs alleged the speed filter was defective irrespective of content being posted or published and that defendants could have met their duty to create a safe product by no longer providing the filter, not by changing how they publish any third-party content. Here, in contrast, plaintiffs have not alleged that the recommendation functions are themselves dangerous, they allege they are dangerous because they recommend third-party content: adult profiles. Plaintiffs do not explain how such defect could be rectified other than through limitations on defendants’ publication of third-party content.
Similarly, the plaintiff in Omegle alleged that the defendant’s product was defective because it randomly paired her to chat with an adult, who then abused her. The defendant argued that because it matched users in order for them to chat, it was acting as a publisher of third-party content (the conversation). The court disagreed. The recommendation of a chat partner was distinct from the recommendation or publication of content. Indeed, at the time the matching occurred, the “content” or conversation did not exist. Omegle, 614 F. Supp. 3d at 820–21 (“Omegle has attempted to make this a case about [abuser’s] communications to the Plaintiff, but as discussed above, Plaintiff’s case does not rest on third party content. Plaintiff’s contention is that the product is designed in a way that connects individuals who should not be connected (minor children and adult men) and that it does so before any content is exchanged between them.”). There is no such distinction between the recommendation and publication of content here. Defendants recommend existing third-party content (profiles) to users, which is publishing conduct.
Third, Section 230 bars plaintiffs’ claims premised on defendants’ limiting content to short-form and ephemeral formats and allowing private content. Decisions about the form, duration, and availability of third-party content are editorial decisions traditionally made by publishers.
Fifth, where notifications are made to alert users to third-party content, Section 230 bars plaintiffs’ claims: such notifications are simply a means of publishing that third-party content.
Sixth, to the extent plaintiffs challenge defendants’ use of algorithms to determine whether, when, and to whom to publish third-party content, Section 230 bars those claims as well.
The parties’ cited cases support this approach. Courts addressing the use of an algorithm to connect a user with certain third-party content have found such claims barred by Section 230. See, e.g., Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).
Plaintiffs argue that their claims are distinct from those at issue in cases such as Force. They focus not on defendants’ conduct as publishers of third-party content but rather on the process through which the decision to publish is made. Said differently, they argue that these algorithms are not formulated merely to connect people with content; rather, they are crafted to maximize engagement and foster compulsive use.
Plaintiffs’ framing does not change the analysis. Nothing in Section 230 conditions its protections on the motives behind a publisher’s editorial choices. However a platform decides what third-party content to display, when, and to whom, those decisions remain publishing conduct.
Accordingly, the motion to dismiss the product defect claims based on Section 230 is GRANTED to the extent those claims rest on the defects identified in this subsection.
iii. Claim 2: Strict Liability – Failure to Warn and Claim 4: Negligence – Failure to Warn
Claims 2 and 4 allege that defendants distributed defective and unreasonably dangerous products without adequately warning users of risks including risk of abuse, addiction, and compulsive use. The Court defines the risks as those created by the defects addressed in Claims 1 and 3. Defendants do not brief application of Section 230 to the failure to warn claims separately. Accordingly, the analysis above applies equally here: the failure to warn claims may proceed to the extent they rest on alleged defects not barred by Section 230.
iv. Claim 5: Negligence Per Se
Plaintiffs’ negligence per se claim is based on defendants’ alleged violations of the two federal statutes described above.
The alleged statutory violations concern defendants’ own conduct, not their publication of third-party content.
Accordingly, the Court FINDS no Section 230 bar to plaintiffs’ negligence per se claim.
IV. FIRST AMENDMENT
A. Overview and Legal Framework
Defendants broadly assert that the First Amendment bars plaintiffs’ priority claims in their entirety.
The Free Speech Clause of the First Amendment protects the freedom of speech, and its basic principles “do not vary . . . when a new and different medium for communication appears.”25 Brown v. Ent. Merchs. Ass’n, 564 U.S. 786, 790 (2011) (citation omitted). Additionally, under the First Amendment, “the creation and dissemination of information are speech . . . .” Sorrell v. IMS Health Inc., 564 U.S. 552, 570 (2011); Bartnicki v. Vopper, 532 U.S. 514, 527 (2001) (“[I]f the acts of ‘disclosing’ and ‘publishing’ information do not constitute speech, it is hard to imagine what does fall within that category.”) (citation omitted). Dissemination of speech is different from “expressive conduct,” which is conduct that has its own expressive purpose and may be entitled to First Amendment protection. Id. (stating that disclosing or publishing of information is “distinct from the category of expressive conduct”).
That said, “well-defined and narrowly limited classes of speech” provide exceptions to First Amendment protections. Brown, 564 U.S. at 791. For instance, obscenity, fighting words, and incitement may fall outside the protection of free speech. Id. Further, First Amendment rights may also be subject to reasonable time, place, and manner restrictions. As the parties have not argued any of these limitations are relevant here, the Court does not address them further.
B. Design Defect Claims (Claims 1 & 3)
Defendants argue that the First Amendment protects them from liability for the speech they publish as well as for all choices they have made in disseminating it. Even adopting this premise in full, much of the conduct alleged by plaintiffs does not constitute speech or expression, or publication of the same. Indeed, defendants’ briefing ignores these defects and does not explain how holding them liable in that context would be akin to making them liable for speech.
First, several of the defects relate to how users interact with the platforms. As the Court has already found certain defect allegations barred under
- Not providing effective parental controls including notification26 to parents that children are using the platforms (MAC at ¶¶ 845(b)–(c));
- Not providing options to users to self-restrict time used on a platform (id. at ¶¶ 845(f)–(g));
- Making it challenging for users to choose to delete their account (id. at ¶ 845(m));
- Not using robust age verification (id. at ¶ 845(a)); and
- Not implementing reporting protocols to allow users or visitors of defendants’ platforms to report CSAM and adult predator accounts specifically without the need to create or log in to the products prior to reporting (id. at ¶ 845(p)).
Addressing these defects would not require that defendants change how or what speech they disseminate. For example, parental notifications could plausibly empower parents to limit their children’s access to the platform or discuss platform use with them. Providing users with tools to limit the amount of time they spend on a platform does not alter what the platform is able to publish for those that choose to visit it. As discussed with regard to
Second, plaintiffs’ filter-related defects identified in ¶ 845(k) and ¶ 864(d) of the MAC also survive at this stage. Defendants raise no arguments regarding how the First Amendment protects them from having to label filtered content. As to the filters themselves, plaintiffs argue that they are “tools” to enable users to “modify[] their own expression,” and to “facilitate interactive speech.” Defendants disagree and submit the First Amendment protects the filters in the same way a magazine’s use of “computer technology to alter famous film stills” for fashion photography is protected. (MTD2 at 20 (citing Hoffman v. Capital Cities/ABC, Inc., 255 F.3d 1180, 1183 (9th Cir. 2001)).)
The Court is not persuaded. In Hoffman, the Ninth Circuit found that the images and the alteration of those images were protected by the First Amendment, not the computer editing technology that created the speech. It was the images, the speech, that were protected. Defendants make a distinctly different argument here. They do not allege that they created the filters with any expressive intent or that the filters are in any way their own “speech.” Based on defendants’ own description, the filters are neutral, non-expressive tools provided by defendants. They are not entitled to First Amendment protection.
Third, the timing and clustering of notifications of defendants’ content to increase addictive use (id. at ¶ 845(l)) are entitled to First Amendment protection. There is no dispute that the content of the notifications themselves, such as awards, is speech. The Court conceives of no way to interpret plaintiffs’ claim with respect to the frequency of the notifications that would not require defendants to change when and how much speech they publish. This is barred by the First Amendment. Bartnicki, 532 U.S. at 527. Accordingly, the Court finds that the First Amendment protects defendants as to the timing and clustering of notifications they publish to users regarding content created by defendants themselves. These are fundamentally choices about when and to whom to publish content notifications. The motion to dismiss the product defect claims as to this defect is GRANTED.
In summary, with respect to the identified defects, the First Amendment affords protection only with respect to the timing and clustering of notifications of defendants’ content to increase addictive use (MAC at ¶ 845(l)); otherwise, it does not.
C. Failure to Warn Claims (Claims 2 & 4)
As plaintiffs raise in their opposition, defendants’ motion did not raise any arguments specifically addressing application of the First Amendment to failure to warn claims. For the first time on reply defendants attempt to distinguish plaintiffs’ cases and cite other First Amendment cases focused on the duty to warn where the alleged danger is caused by defendant’s publication or dissemination of speech. Even if this were not procedurally improper, the cited cases are not dispositive. Finding the issue effectively waived by defendants at this stage, and not fully briefed, the Court DENIES any belated motion on First Amendment grounds as to the failure to warn claims.
///
D. Claim 5: Negligence Per Se
Defendants did not raise any arguments on First Amendment grounds with respect to the negligence per se claim. Plaintiffs identify this issue and defendants remain silent in reply. The Court deems that silence a concession that the First Amendment does not bar this claim and DENIES the motion to dismiss this claim on First Amendment grounds.
V. PRODUCTS LIABILITY: WHETHER THE DEFECTS ALLEGED CONCERN “PRODUCTS”
Having determined the applicability of
* * *
Plaintiffs state both design defect and failure to warn claims relative to defendants’ platforms. As discussed at length at the hearing, the claims are predicated upon the existence of a “product.” Thus, the Court begins there.27
A. Background
i. Overview of Parties’ Arguments
As in the
These approaches are overly simplistic and misguided. While acknowledging that these proceedings implicate novel questions of law, including the applicability of products liability torts to the digital world, the parties repeatedly downplay nuances in the caselaw and the facts. The Court declines to adopt either party’s desired approach. Cases exist on both sides of the questions posed by this litigation precisely because it is the functionalities of the alleged products that must be analyzed.28 This is borne out in the cases relied upon by all parties. The cases generally concern a specific product defect, and the determination of whether a specific technology is a product hinges on the specifics of that defect. The same applies here. The Court determines it is necessary to analyze each defect pled by plaintiffs to determine whether they have adequately alleged the existence of a product (or products).
ii. The Court’s Approach to Analyzing Such Arguments
Given the parties’ tactical choices, it is not surprising that neither provides a comprehensive legal framework through which the Court can assess what is and is not a
Thus, the Court begins by setting out a framework, then applies it to plaintiffs’ alleged defects.
As stated above, many of plaintiffs’ alleged defects are barred by
- failure to implement robust age verification processes to determine users’ ages;
- failure to implement effective parental controls;
- failure to implement effective parental notifications;
- failure to implement opt-in restrictions to the length and frequency of use sessions;
- failure to enable default protective limits to the length and frequency of use sessions;
- creating barriers that make it more difficult for users to delete and/or deactivate their accounts than to create them in the first instance;
- failure to label content that has been edited, such as by applying a filter;
- making filters available to users so they can, among other things, manipulate their appearance; and
- failure to create adequate processes for users to report suspected CSAM to defendants’ platforms.
B. Legal Framework
i. Plaintiffs’ Preferred Law
The Court begins with plaintiffs’ preferred law. Neither Georgia nor New York has codified definitions of what constitutes a “product” for the purpose of applying the doctrine of products liability.30 In the absence of such definitions, the Court looks to well-accepted persuasive authority concerning the scope of a “product.” This is especially appropriate where, as here,
The Court is satisfied, on this basis, that applying the approach taken by the Restatement is in keeping with approaches likely to be taken by the respective high courts of the states of plaintiffs’ preferred law. In particular, use of the Third Restatement of Torts is consistent with this Court’s obligation “to predict how the state high court would rule” based on the information available. In re Lithium Ion Batteries Antitrust Litig., 2014 WL 4955377 (N.D. Cal. Oct. 2, 2014) (citing Hayes v. Cnty. of San Diego, 658 F.3d 867, 871 (9th Cir. 2011)).32
ii. Restatements of Torts
The Restatements of Torts collectively reflect the evolution of the doctrine of strict products liability over time. For instance, the Second Restatement of Torts focused on individuals who “sell[] any product in a defective condition unreasonably dangerous to the user or consumer or to his property.” Restatement (Second) of Torts § 402A(1) (AM. LAW INST. 1965) (hereinafter, “Second Restatement”). Such individuals or entities were liable for physical harm caused by their products where they “engaged in the business of selling such a product” and the product was “expected to and [did] reach the user or consumer without substantial change in the condition in which it [was] sold.” Id. at § 402A(1)(a)-(b). The Second Restatement did not, however, define what constituted a “product,” nor did the accompanying commentaries.33
The Third Restatement, published in 1998, did include a definition of “products” for purposes of products liability actions. This definition reads:
(a) A product is tangible personal property distributed commercially for use or consumption. Other items, such as real property and electricity, are products when the context of their distribution and use is sufficiently analogous to the distribution and use of tangible personal property that it is appropriate to apply the rules stated in th[e] Restatement.
(b) Services, even when provided commercially, are not products.34
(c) Human blood and human tissue, even when provided commercially, are not subject to [the] Restatement.
Restatement (Third) of Torts § 19(a) (AM. LAW INST. 1998) (hereinafter, “Third Restatement”). This definition, as well as the Restatement’s explanatory notes, identifies three circumstances in which intangible things may be deemed “products.” To summarize:
First, intangible things can be products when analogized to “tangible personal property” based on “the context of [its] distribution and use.” Id.
Second, strict products liability has been imposed in unique circumstances where harm is caused by (i) the distribution of objectively false information or (ii) electricity. Courts have attached products liability to “maps and navigational charts” containing “false information.”35 Id. & cmt. d. They have also done so with respect to certain intangible forces, such as electricity. Specifically, “a majority of courts have held that electricity becomes a product when it passes through the customer’s meter and enters the customer’s premises.” Id. (cleaned up).
Third, the Restatement clarifies that ideas, content, and free expression have consistently been held not to support a products liability claim. The seminal case of Winter v. G.P. Putnam’s Sons, 938 F.2d 1033 (9th Cir. 1991) illustrates the point. There, plaintiffs were mushroom enthusiasts who became severely ill after eating wild mushrooms they identified as non-dangerous based on a reference book. Id. at 1033. They subsequently sued the book’s publisher under a strict products liability theory, alleging the book was defectively designed in that it contained erroneous and misleading information about the identification of deadly mushrooms. Id.
The Ninth Circuit’s opinion in Winter is routinely cited for the proposition that ideas, thoughts, and free expression cannot form a product upon which a products liability claim can be based. The Ninth Circuit begins with a statement of first principles:
A book containing Shakespeare’s sonnets consists of two parts, the material and print therein, and the ideas and expression thereof. The first may be a product, but the second is not. The latter, were Shakespeare alive, would be governed by copyright laws[, among others.] These doctrines applicable to the second part are aimed at the delicate issues that arise with respect to intangibles such as ideas and expression. Products liability law is geared to the tangible world.
Id. at 1034 (emphasis supplied). The court applied this framework to the mushroom reference book. Id. at 1034-36. Following the logic excerpted above, the Ninth Circuit allowed that the mushroom reference book itself could be a product, although its contents, as “pure thought and expression,” were not. Id. at 1035-36.
The logic of Winter has been repeatedly reaffirmed in the case law. For instance, roughly ten years after Winter was decided, the Sixth Circuit relied upon its framework in a case involving claims more akin to those presently before the Court, i.e., one involving the digital world. See James v. Meow Media, Inc., 300 F.3d 683 (6th Cir. 2002) (affirming James v. Meow Media, Inc., 90 F.Supp.2d 798, 800 (W.D. Ky. 2000)). There, a 14-year-old student shot, wounded, and killed his high school classmates. Parents brought suit against entities that developed and distributed violent online content which the assailant consumed through video games and movies. 90 F.Supp.2d at 809. They alleged that the “inherent dangerousness” of the content rendered it a “product” for which its developers should be held strictly liable. Id. The trial court disagreed, dismissing plaintiffs’ products liability claims on the grounds that “intangible thoughts, ideas, and expressive content are not ‘products’ within the realm of the strict liability doctrine.” Id. at 810-11. The Sixth Circuit ultimately affirmed. See James, 300 F.3d at 701 (quoting Watters v. TSR, 904 F.2d 378 (6th Cir. 1990)) (“The video game cartridges, movie cassette, and internet transmissions are not sufficiently ‘tangible’ to constitute products in the sense of their communicative content.”) (cleaned up).
C. Analysis
The Court uses the legal framework outlined above to analyze plaintiffs’ products liability claims. First, the Court uses the framework to address the parties’ “all or nothing” approach to whether defendants’ platforms are products. In this regard, the Court considers whether defendants’ platforms are: (1) services; (2) tangible; (3) analogous to tangible personal property; (4) akin to ideas, content, and free expression upon which products liability claims cannot be based; and/or (5) akin to “software,” and should, on that basis, be treated as products. Second, the Court conducts an analysis of the functionalities of defendants’ platforms challenged by plaintiffs.
///
///
i. Parties’ “All or Nothing” Approach
a. Whether Defendants’ Platforms are Services
The parties dispute whether defendants’ platforms should be classified globally as services, not products. On the one hand, defendants maintain that the platforms are simply “interactive communication services.” (MTD1 at 3:3.) Because they merely
These arguments are wanting. As to defendants, a review of the cases reveals that, where courts actually considered whether web-based platforms such as defendants’ are services, they offered minimal, if any, rationale for such classifications.36 This is presumably because the issue appeared either obvious or was not contested. Plaintiffs meanwhile fail to persuade that defendants transfigure their platforms into “products” simply by using the word “product” in internal and external communications.37 (See generally MAC ¶¶ 171–80). Hiring “Product Managers” to work on a platform does not render that platform a product. See, e.g., Jacobs, 2023 WL 2655586, at *3 & n.1 (“[T]he use by Facebook of the term ‘product’ does not resolve the question of whether Facebook represents a ‘product’ for the purposes of [the at-issue products liability claims].”) In myriad circumstances, courts look to the substance of an issue, not merely the label. A label can be a factor, but without more, is hollow. The Court will not rest its analysis
purely on a label. Accordingly, parties’ global arguments as to whether defendants’ platforms are services do not resolve this dispute.
b. Whether Defendants’ Platforms are Tangible
Second, plaintiffs argue that defendants’ platforms are in fact tangible in the sense that they “have very tangible manifestations to their users.” (Pls’ Opp’n at 25:22–23.) They contend that defendants “design their apps to be visually stimulating, to make noises and vibrate, and to prompt Plaintiffs and other users to pick up their devices to swipe, click, and flick the user interface . . . .” (Id. at 25:23–24.) Further, defendants “track their users’ physical interactions with the apps,” such as measuring “how long a child ‘hovers’ on
While creative, the Court disagrees. It is the phones that vibrate, make sounds, or otherwise manifest, physically, defendants’ design choices. Any connection between defendants and such haptics is therefore too attenuated for the Court to find that defendants’ platforms are in fact tangible products. Doing so would erode any distinction between phone manufacturers (for example, when they calibrate how a phone vibrates in a user’s hands) and platform operators like defendants.
Accordingly, the Court determines defendants’ platforms are not tangible.38
c. Whether Defendants’ Platforms are Analogous to Tangible Personal Property
Third, plaintiffs barely argue that defendants’ platforms are sufficiently analogous to tangible personal property to be products.39 See Third Restatement § 19(a) (defining products to encompass tangible things that are analogous to tangible personal property in “the context of their distribution and use”). The Court addresses each of their two arguments.
One, plaintiffs contend that the “distribution” of defendants’ platforms is akin to that of tangible goods because the platforms are made by “product designers,” overseen by “product managers,” and then packaged and shipped to the public via stores. The Court has already addressed, and incorporates here, the flaws in relying solely on generic labels. As for the analogy that the platforms, like tangible goods, are purchased in a store, i.e., an online app store, it fails to persuade because it is not sufficiently direct. For example, defendants’ platforms are not exclusively accessed by downloading an app from an online “store.” They can be accessed via their webpages and can even come pre-loaded on certain connected devices. (See, e.g., MAC ¶¶ 186–89 (implicitly acknowledging that Facebook was initially developed as a website but later was configured into a mobile phone app); 578 (noting that TikTok can be accessed via web browser); 692 (explaining that YouTube “comes pre-installed on many Smart-TV’s.”).)40
Two, plaintiffs submit, without analysis, as follows:
Consumers store Defendants’ apps on their personal electronic devices and use them for personal purposes. There is no functional difference between downloading an app from the App Store and using it on your phone, and buying a container from the Container Store and using it on your countertop.
(Pls’ Opp’n at 25:11-14.) Treating as self-evident the similarities between defendants’ platforms and a physical container purchased from a storage solutions company fails. The Court cannot discern what plaintiffs seek to demonstrate through this analogy. A social media platform is not like a container. To that end, plaintiffs have not established as a global matter that defendants’ platforms are akin to tangible personal property.
d. Whether Defendants’ Platforms are Akin to Ideas, Content, or Free Expression
Fourth, the Court considers whether defendants’ platforms are akin to ideas, content, and free expression upon which products liability claims cannot be based. Defendants emphatically argue that plaintiffs’ claims rise and fall with the content-based allegations made in the MAC. Plaintiffs again emphasize that they do not challenge any content hosted on defendants’ platforms, but challenge defendants’ design choices in how to structure and operate their platforms, which subsequently caused them harm.
In light of the above and for efficiency, the Court analyzes the parties’ key cases focusing on the distinction between design- and content-focused claims to determine whether resolution of the pending motions on their “all-or-nothing” arguments is possible. To this end, the Court begins with defendants’ cases. In addition to Winter and its progeny, such as James, defendants also rely on cases like Estate of B.H. v. Netflix, Inc., 2022 WL 551701, at *1 (N.D. Cal. Jan. 12, 2022), appeal docketed, No. 22-15260 (9th Cir. Feb. 23, 2022) and Rodgers v. Christie, 795 F. App’x 878 (3d Cir. 2020) to assert that products liability claims relating to content-delivery systems are not cognizable.
In Netflix, decided by this Court in 2022, plaintiffs brought a range of claims, including for products liability, against Netflix in connection with Netflix’s production, dissemination, and recommendation of a television show involving suicide to a young girl who subsequently committed suicide. Netflix, 2022 WL 551701, at *3; see also Am. Compl. ¶ 6, Netflix, No. 4:21-CV-06561-YGR (N.D. Cal. Sept. 22, 2021), ECF No. 22 (alleging Netflix “used its sophisticated, targeted recommendation systems to push the [at issue s]how” on children). This Court determined that plaintiffs failed to state a strict products liability claim and granted defendant Netflix’s motion to dismiss because plaintiffs “premised” the operative complaint “on the content and dissemination of the show.” Netflix, 2022 WL 551701, at *3. Relying on Winter, the Court likened plaintiffs’ claims to those against “books, movies, or other forms of media.” Id.
This litigation is distinguishable from Netflix, however. There, plaintiffs’ injuries were inseparable from a specific show. The Court was therefore left with no option but to conclude that, “[w]ithout the content,” meaning the show in question, “there would be no claim.” Id. Not so here. As pled, plaintiffs allege harms stemming from the design of defendants’ platforms.
Rodgers similarly does not persuade, but on different grounds. There, plaintiff brought suit against the entity responsible for designing “a multifactor risk estimation model” used by the New Jersey state court to make pretrial release determinations, and which played a role in the decision to release a man who killed plaintiff’s son. Rodgers, 795 F. App’x at 878-79.41 The district court dismissed the complaint, finding that the model was not a product under New Jersey law, and the Third Circuit affirmed. Id. at 879. In doing so, the Third Circuit emphasized two things: (i) the model at issue was not distributed commercially; and (ii) the model was not “remotely analogous” to tangible personal property because “information, guidance, ideas, and recommendations are not products under the Third Restatement.” Id. at 879-80 (cleaned up). Rodgers is therefore distinguishable on two grounds. First, defendants’
Viewed in context, therefore, Netflix and Rodgers do not demonstrate why dismissal of all plaintiffs’ products liability claims is necessary. By contrast, Brookes, Lemmon, and Omegle are examples of cases in which courts took plaintiffs’ preferred approach of distinguishing between products liability claims that are focused primarily on content (and thus are not cognizable) and those focused primarily on design (which are cognizable).42 The Court analyzes these cases next.
The Court begins with Brookes as it is most analogous to this litigation. There, a Florida trial court held, with the benefit of a full record and on summary judgment, that the ridesharing company Lyft’s mobile app was a product. Brookes v. Lyft, Inc., 2022 WL 19799628, at *3 (Fla. Cir. Ct. Sept. 30, 2022). The court clarified, first, that Lyft’s app was not a service, writing that “Lyft’s connection to the application is not simply the use of it to provide a service.” Id. at *3. Instead, “Lyft [was] the designer and distributor of the application,” which was “defective because of the way it habituat[ed] and distract[ed] Lyft drivers to constantly monitor the application,” including while driving. Id. at *1, *3. The court concluded that the “design choices” gave rise to the harms alleged and Lyft could be held accountable under Florida’s products liability framework. Id. at *2. The logic of Brookes therefore follows plaintiffs’ products liability theory in this MDL. Here, plaintiffs allege harm arising out of defendants’ choices about how to design their platforms, including the at-issue defects.43
Like Brookes, Lemmon also supports the use of products liability in this litigation. While that case is discussed in more detail later in this Order, it suffices at this point to note that the Ninth Circuit there assumed (perhaps because it found it obvious) that plaintiffs adequately alleged a product-based negligence claim against Snap. Lemmon, 995 F.3d at 1093. Plaintiffs there challenged Snap’s Speed Filter functionality, a tool that enabled users to overlay
Finally, Omegle is also instructive and invokes Lemmon. That case involved products liability claims against Omegle.com, a chat platform that randomly connects users for video calls. The court found its design defective insofar as it randomly connected minor and adult users before any contact. Omegle, 614 F.Supp.3d at 817. Relying in part on Lemmon, the court determined that plaintiff adequately pled her products claims. See id. at 819 (noting that defendant Omegle “could have satisfied its alleged obligation to Plaintiff by designing its product differently—for example, by designing a product so that it did not match minors and adults.”). In doing so, the court rejected the notion that defendant would have “needed to review, edit, or withdraw any third-party content” in response to plaintiff’s claims. Id. at 820. The court reiterated that plaintiff’s “case [did] not rest on third party content” because she contended “that the product [was] designed [in] a way that connects individuals who should not be connected (minor children and adult men).” Id. at 820-21. Thus, Omegle, like Brookes, stands for the proposition that products claims focused on the design of digital platforms, as opposed to their content, may be cognizable.
Accordingly, the Court determines defendants’ global arguments that plaintiffs’ allegations concern only third-party content and should be dismissed on that basis fail to persuade. Defendants similarly argued that unless the allegations assert the “software made the operation of a vehicle more dangerous,” the action should be dismissed. Defs’ Reply at 7:16–19 (citing Jane Doe No. 1 v. Uber Techs., Inc., 79 Cal.App.5th 410, 419 (2022)). This is misguided. First, Jane Doe did not conduct a product analysis. Second, the trial court below had found that the Uber ridesharing app was not a product in part because it was used to provide a service to the plaintiff (i.e., obtaining a ride). See Doe v. Uber Tech., Inc., 2020 WL 13801354, at *7 (Cal. Super. Ct. Nov. 30, 2020) (“By plaintiffs’ own allegations, the Uber App was used to gain a service: a ride.”) (citation omitted). No such services are implicated here.
However, as discussed, infra, this does not end the analysis. Instead, a more detailed and searching analysis of the specific defects alleged is required.
e. Whether Defendants’ Platforms, as Software, are Products
Fifth, the Court examines whether defendants’ platforms are akin to “software” and on that basis are products globally speaking. The Third Restatement anticipated that courts might, at some future date, be asked to determine whether software is a product.44 It did not express a view on the matter and instead simply made two notes. First, academics have long urged such an extension of tort doctrine. Second, courts could turn to the
Relying on those notes, plaintiffs argue that, under the above-referenced framework, defendants’ platforms are “software” and should be treated as a product. They rely on three cases to support this argument: Communications Groups, RRX Industries, and Neilson Business Equipment Center.46 These cases are not products liability cases, however; they are cases in which courts determined that contracts for software implicate “goods” and are therefore governed by the
Defendants submit that plaintiffs’ preferred approach would extend the limits of products liability too far by finding, in effect, that any software can be a product, even software that operates as a service or deals primarily with ideas, content, and free expression that cannot typically form the basis of a products liability claim. That said, neither plaintiffs nor defendants analyze in detail plaintiffs’ cases, nor do they apply the facts of such cases to this litigation. The Court nonetheless addresses each.
The Court determines at the outset that Communications Groups is irrelevant to this litigation because it deals with custom software, which is a service.47 Communications Groups arose from a dispute over a contract for “the installation” of “specifically designed software equipment for defendant’s particular telephone and computer system, needs, and purposes.” 138 Misc.2d at 83. The transaction at issue involved multiple pieces of “identifiable and movable equipment such as recording, accounting and traffic analysis and optimizations, modules, buffer, directories, and an operational
user guide and other items.” Id. RRX Industries is similar to Communications Groups but does not appear to have involved bespoke software. Rather, it arose from a “computer software contract” dispute involving “a software system for use in [] medical laboratories.” RRX Industries, 772 F.2d at 545. The Ninth Circuit, in finding the software constituted a “good,” applied a California law defining as “goods” “all things . . . which are movable at the time of identification to the contract for sale . . . .” Id. at 546 (quoting
The Court therefore finds plaintiffs’ analogy to the treatment of software under the
Accordingly, the Court declines to treat the platforms as products by way of analogy
ii. The Court’s Defect-Specific Approach
As repeatedly emphasized herein, the allegations in the MAC warrant a more thorough analysis than the global approaches taken. Thus, the Court analyzes whether the various functionalities of defendants’ platforms challenged by plaintiffs are products.48 For each, the Court draws on the various considerations outlined above (i.e., whether the functionality can be analogized to tangible personal property or is more akin to ideas, content, and free expression) to inform the analysis. Depending on the functionality at issue, the Court’s analysis may be limited to one consideration; for other defects, multiple considerations may determine the outcome.
///
///
///
///
a. Defective Parental Controls and Age Verification (Defects i, ii, and iii)
The first three design defects relate to defendants’ allegedly defective parental controls and age verification systems, namely: (i) a failure to implement robust age verification processes to determine users’ ages (MAC ¶ 845(a); see also, e.g., id. at ¶¶ 59, 134, 140, 327-35 (Meta), 461-62 (Snap), 568-74 (TikTok)); (ii) a failure to implement effective parental controls (id. at ¶ 845(b); see also, e.g., id. at ¶¶ 134, 141, 262 & 346 (Meta), 566 & 579 (TikTok)); and (iii) a failure to implement effective parental notifications (id. at ¶ 845(c)).
The Court begins by asking whether these alleged defects are analogous to tangible personal property in the context of their use and distribution. See Third Restatement § 19(a). The answer is yes. Myriad tangible products contain parental locks or controls to protect young children. Take, for instance, parental locks on bottles containing prescription medicines. Other examples include parental locks on televisions that enable adults to determine which channels or shows young children should be permitted to watch while unsupervised.
The Court also considers whether these defects concern design elements of defendants’ platforms and are content-agnostic, as plaintiffs argue, or are more akin to ideas, content, and free expression upon which products liability claims cannot be based. Again, these identified defects primarily relate to the manner in which young users are able to access defendants’ apps, including whether their age is accurately assessed during the sign-up process and whether, subsequent to signing up, their activity and settings can be accessed and controlled by their parents.
These defects are therefore more akin to user interface/experience choices, such as those found to be products in Brookes, where a Florida intermediate appellate court determined the Lyft mobile app was a product. See generally Brookes, 2022 WL 19799628. As in Omegle, the defects alleged here also concern minors’ abilities to access online platforms. 614 F.Supp.3d at 817, 821. Such claims are therefore content-agnostic.
Defendants’ counterarguments do not persuade otherwise. They urge that these defects are not products because the alleged harm is derived from words, images, and content, relying primarily on James to do so.49 See, supra, Section VI.B.ii (discussing James). Defendants’ repackaging of the MAC for their own purposes (such as by asserting these defects are inseparable from content parents may wish to block) does not control, however. The Court determines plaintiffs’ pleadings plausibly support their contentions, and therefore distinguish this litigation from James.
For these reasons, these three design defects are classified as products.50
b. Failure to Assist Users in Limiting In-App Screen Time (Defects iv and v)
The next two design defects pertain to app session duration: (i) a failure to implement opt-in restrictions to the length and frequency of use sessions (MAC ¶ 845(f); see also, e.g., ¶¶ 195 & 263 (summarizing such allegations against Meta)); and (ii) a failure to implement default protective limits to the length and frequency of use sessions (id. ¶ 845(e); see also, e.g., ¶¶ 195 & 263 (Meta)).
Again, the Court begins with an analogy to tangible personal property. The most obvious analog to these identified defects is physical timers and alarms, which have long been in use. Modern examples are also available. For instance, many of us carry in our pockets smartphones, which are tangible products. These phones contain features that enable users to receive auto-notifications should they exceed pre-set “screen time” limits.51 These examples are sufficiently analogous to tangible personal property in terms of their use and distribution.52
Importantly, these alleged defects are also content-agnostic. Plaintiffs’ theory concerns the manner in which users access the apps (i.e., for uninterrupted, long periods of time), not the content they view there. For this reason, these alleged defects are not excluded on the grounds that they pertain to “ideas, thoughts, and expressive content” under Winter and its progeny. Cf. James, 300 F.3d at 701 (holding that intangible thoughts, ideas, and expressive content are not products).
Accordingly, the Court finds the two above-referenced design defects are product components and therefore appropriately fall within a product liability claim.
c. Creating Barriers to Account Deactivation and/or Deletion (Defect vii)
Plaintiffs allege that each defendant’s account deactivation/deletion process is needlessly complicated and serves to disincentivize users from leaving their respective social media platforms. (MAC ¶ 845(m); see also, e.g., ¶¶ 358–59, 362 (Meta), 489 (Snap), 639, 645, 647 (TikTok), 774 (YouTube).)
Here, defendants’ global arguments casting all of plaintiffs’ allegations as essentially content-related are particularly lacking. The manner in which an individual user is able to deactivate or delete an account does not pertain directly to ideas, content, or free expression and is content-agnostic. Cf. Winter, 938 F.2d at 1034 (concluding, in relevant part, that “ideas and [the] expression thereof” are not products). Defendants’ suggestion otherwise strains credulity.53
Further, the Court is not inclined to view this alleged defect as akin to a service. In some senses, account deletion and deactivation may be analogized to interactions a consumer might have with a service provider (such as closing an account with a bank or credit card company, for instance). The distinction here, however, is that account deletion and deactivation, as pled, is a user-directed process. The MAC does not assert that employees of defendants must assist users in processing such requests. This distinguishes the alleged barriers to account deletion and deactivation here from account-related services provided in other contexts.54
Given the procedural posture of the action, the Court therefore finds a sufficiently plausible basis to classify the defect as a product.
d. Failure to Label Edited Content (Defect vi)
Next, plaintiffs allege defendants fail to label images and videos that have been edited through in-app “filters” as edited content. (MAC ¶ 845(k); see also, e.g., ¶ 318 (“Meta has intentionally designed its products to not alert adolescent users when images have been altered through filters or edited. Meta has therefore designed its product so that users, including plaintiffs, cannot know which images are real and which are fake, deepening negative appearance comparison.”).)
This alleged defect concerns the design of defendants’ social media platforms rather than the content made available through such platforms. See, e.g., Brookes, 2022 WL 19799628, at *3. That said, the Court recognizes that labeling, or failing to label, content is in some respects tied to the nature of the content itself. See, e.g., James v. Meow Media, Inc., 90 F.Supp.2d 798, 810–11 (W.D. Ky. 2000), aff’d 300 F.3d 683 (6th Cir. 2002) (“[I]ntangible thoughts, ideas, and expressive content are not products within the realm of the strict liability doctrine.”) (cleaned up). However, that connection relates to the output of the labeling, not the labeling tool itself.
On balance, and given the posture of this litigation, the Court is required to accept plaintiffs’ allegations as true when testing the sufficiency of their claims. For this reason, the Court finds that this design defect may proceed as a product to the extent that plaintiffs’ allegations center on the design of the filter. For instance, labeling a photo as “edited” does not alter the underlying photo as much as it guides the user in better understanding how to interpret that photo. The Court finds this distinction meaningful. Accordingly, while a closer question, plaintiffs have plausibly stated the existence of a product relative to this defect.
e. Making Filters Available to Users to Manipulate Content (Defect viii)
The next alleged defect concerns defendants’ filters, which enable users to manipulate content prior to posting it on defendants’ platforms or otherwise sharing it with others. (MAC ¶ 864(d); see also, e.g., ¶¶ 88 (all defendants), 131 & 649–53 (TikTok), 210 & 314–26 (Instagram), 256 (Facebook), 513–19 (Snapchat).)
Plaintiffs challenge two main categories of filters. One, they target filters that permit users to “blur imperfections” and otherwise enhance their appearance in order to “create the perfect selfie.” (Id. at ¶ 514.) Plaintiffs assert the widespread use of such filters promotes unattainable beauty standards and facilitates social comparison, which combine to cause negative mental health outcomes for users, particularly young girls. Two, they target filters like Snapchat’s Speed Filter, which enable users to overlay content on top of existing content. Specifically, the Speed Filter is a functionality that enables users to overlay the speed they are traveling in real life onto a photo or video before sharing that content with others via the Snapchat app. (Id. at ¶¶ 132, 517–18 (describing the Speed Filter).)
The Court examines these categories of filters separately.
With respect to the filters that permit appearance alteration, the Court notes that defendants, admittedly in the First Amendment context, have referred to such filters as “tools that allow users to speak to one another,” such as by “creating or modifying their own expression (including with visual effects that change the look of images).” (MTD1 at 19:19–21 (emphasis supplied).) Defendants’ use of the word “tools” here is notable because defendants implicitly concede that a distinction exists between a “tool,” or functionality, that permits users to manipulate content and the content itself.55 Here, the concession inures to plaintiffs’ benefit as it bolsters their position that such filters are design functionalities distinct from the content they are used to alter.
With respect to Snapchat’s Speed Filter,56 the Court views the Ninth Circuit’s opinion in Lemmon as on point. See, supra, Section VI.C.i.d (discussing Lemmon). As in Lemmon, plaintiffs here challenge the design of Snapchat’s platform insofar as it provides users with the Speed Filter as a tool for overlaying their speed onto photos and videos. As plaintiffs challenge essentially the same functionality as was at issue in that case and plead their allegations in similar ways, the Court determines Lemmon applies here. Thus, the Court finds that plaintiffs have adequately pled that the Speed Filter is a product or component thereof, and that plaintiffs’ products liability claims may proceed as to that defect.57
Accordingly, the Court determines plaintiffs have plausibly alleged that both categories of filters are products and permits their claims to proceed on that basis.
f. Failure to Enable Processes to Report CSAM (Defect ix)
Finally, the Court analyzes plaintiffs’ allegations that defendants failed to design their platforms to include “reporting protocols [that] allow users or visitors” “to report CSAM and adult predator accounts specifically without the need to create or log in to the products prior to reporting.” (MAC ¶ 845(p) (emphasis supplied).)
The Court determines this allegation specifically concerns the design of defendants’ platforms. Plaintiffs seek to hold defendants accountable for requiring users to have logged into a registered account in order to report certain obscene content or profiles. This is quintessentially a matter of design, user interface, and system architecture rather than content. See generally Brookes, 2022 WL 19799628, at *3.
Accordingly, the Court determines that plaintiffs have adequately alleged that the designs of defendants’ CSAM and adult predator account reporting mechanisms are products.58
D. Conclusion
For the foregoing reasons, the Court determines that plaintiffs adequately plead the existence of product components as to each alleged defect analyzed herein.59 As such, the Court reaches the remaining elements of plaintiffs’ products liability claims: duty and causation.60
VI. DUTY
The Court now addresses whether plaintiffs have adequately pled the duty element of their product-based negligence claims (Priority Claims 3–4). The Court analyzes two issues: One, have plaintiffs pled that defendants owe a duty to users of their social media platforms? Two, do defendants owe a duty to prevent third parties, such as adult predators, from using defendants’ platforms to harm plaintiff users?
A. Duty to Users of Defendants’ Platforms
First, with respect to whether defendants owe a duty to plaintiff users of their social media platforms, the analysis is straightforward. It is well-established, including in this circuit, that manufacturers of products owe such duties to users. See Third Restatement § 1.61 The parties do not meaningfully dispute this general proposition.
Here, in the preceding section of this Order, the Court determined that plaintiffs adequately pled the existence of products in connection with the defects analyzed. Thus, defendants owe users the duty to design such products in a reasonably safe manner and to warn about risks they pose.62 This duty is informed by the context at issue, namely that plaintiffs are minor children. It is not, however, heightened on that basis.63
Similarly, plaintiffs’ preferred law, that of Georgia and New York, imposes duties on product manufacturers. See, e.g., Micallef v. Miehle Co., Division of Miehle-Goss Dexter, Inc., 39 N.Y.2d 376, 385 (1976) (“[W]e hold that a manufacturer is obligated to exercise that degree of care in his plan or design so as to avoid any unreasonable risk of harm to anyone who is likely to be exposed to the danger when the product is used in the manner for which the product was intended as well as an unintended yet reasonably foreseeable use . . . .”) (citations omitted); Reece v. J.D. Posillico, Inc., 164 A.D.3d 1285, 1287–88 (N.Y. App. Div. 2018) (“A product may be defective when it contains a manufacturing flaw, is defectively designed, or is not accompanied by adequate warnings for the use of the product. A manufacturer has a duty to warn against latent dangers resulting from foreseeable uses of its product of which it knew or should have known.”) (citations omitted); Chrysler Corp. v. Batten, 264 Ga. 723, 724 (1994) (“[A] manufacturer has a duty to exercise reasonable care in manufacturing its products so as to make products that are reasonably safe for intended or foreseeable uses; the manufacturer of a product which, to its actual or constructive knowledge, involves danger to users, has a duty to give warning of such danger.”) (cleaned up).
The Court recognizes that nuances exist, however. For example, under Oregon law, “there is typically no freestanding duty, only a fact question of foreseeability.” Pls’ Opp’n at 35 n.20 (citing Towe v. Sacagawea, Inc., 347 P.3d 766, 774–75 (Or. 2015)). That said, the Court declines to consider such granular distinctions at this early stage in the proceedings, especially given the posture and the insufficient briefing on the issue.
The Court also notes that plaintiffs raised a third issue, namely the impact of public policy in limiting or narrowing any duty recognized by the Court. Given the parties agree that product makers owe a duty with respect to introducing their products into the stream of commerce, the Court need not address that issue at this juncture.
B. Duty to Prevent Third Party Harm
Second, with respect to whether defendants’ duty extends to preventing third parties from harming plaintiff users by using the social media platforms, the parties disagree. Defendants argue that plaintiffs have not established the requisite misfeasance upon which such a duty could be based. Plaintiffs assert that the factual allegations in the MAC are sufficient.
As above, the Court begins with the Restatements. In general, entities do not owe a duty to prevent harm by third parties to their users, subject to two exceptions. Namely, duties may attach where (i) a “special relationship” exists between the entity and its users or between the entity and the third parties potentially causing the harm or (ii) the entity itself creates a risk of harm by third parties to its users.64
In terms of whether an entity is creating a risk of harm itself, common law principles draw a “distinction between misfeasance and nonfeasance.” See Dyroff, 2017 WL 5665670, at *12 (citation omitted). Duty is typically not imposed for nonfeasance, which is defined as “a failure to act.” Id. (citations omitted). By contrast, misfeasance can create a duty when the defendant is “responsible for making the plaintiff’s position worse, i.e., defendant has created a risk.” Id. (citations omitted); see also Ziencik v. Snap, Inc., 2023 WL 2638314, at *5 (C.D. Cal. Feb. 3, 2023) (acknowledging that, while defendants “generally owe[] no duty to protect another from the conduct of third parties,” “such a duty may arise when a defendant engages in risk-causing conduct”); Weirum v. RKO Gen., Inc., 15 Cal.3d 40, 49 (1975) (recognizing a duty to protect where “the defendant is responsible for making the plaintiff’s position worse, i.e., defendant has created a risk”).66 Thus, cases embrace a general proposition that a duty to protect users from third party harm is recognized where the actor has created a risk of harm to another or permitted the risk of such harm to increase.67 See Restatement (Third) of Torts: Liability for Physical and Emotional Harm (“Third Restatement: Phys. & Emot. Harm”) § 19.
Defendants read Vesley to require that a defendant “actively encourage a third party to commit the unlawful act” in order for a duty to attach. See MTD1 at 37:3–4 (citing Vesley, 762 F.3d at 666 (additional citations omitted)); see also Defs’ Reply at 20:3–6 (citing Vesley for the proposition that active assistance to a criminal third party is sufficient to trigger the existence of a duty). However, the Court declines to adopt such a high bar for finding misfeasance. This is for three reasons. First, the bulk of the authority cited by the parties and which the Court has reviewed stops short of requiring active encouragement of criminality. Second, the recognition of such a rule would further restrict the scope of the liability recognized by the Third Restatement in ways this Court does not believe were intended. Third, the Court in Vesley applied the law of Illinois, which is not the state selected by plaintiffs as their preferred law. That said, it is the controlling law for Illinois claims.
Having articulated a general standard, the next question is whether the MAC sufficiently alleges facts to support misfeasance. Again, the parties disagree as to the specificity required.
Defendants rely heavily on Jackson v. Airbnb, Inc. There, the court declined to impose a duty on the web-based, short-term rental service to safeguard renters from criminal acts by third parties on the grounds that plaintiffs’ claims relative to the platform were conclusory. 639 F.Supp.3d at 1009.68 Defendants further emphasize that courts have required affirmative, concerted conduct that increases the risk of harm in order to find misfeasance. See, e.g., Bucher v. State ex rel. Or. Corr. Div., 853 P.2d 798, 805 (Or. 1993) (en banc) (“[M]ere ‘facilitation’ of an unintended adverse result, where intervening intentional criminality of another person is the harm-producing force, does not cause the harm so as to support liability for it.”) (citation omitted).69
By contrast, plaintiffs focus on Ileto and Hacala. Those cases found a duty. Defendants distinguish the cases on myriad grounds, including that the products at issue were physical. The Court addresses each.70
First, in Ileto, the Ninth Circuit found that plaintiffs’ allegations that defendant gun manufacturers intentionally overproduced weapons, thereby creating “an illegal secondary firearms market,” were “more than sufficient to raise a factual question as to whether the Defendants owed the plaintiff a duty of care.” Ileto v. Glock, Inc., 349 F.3d 1191, 1204 (9th Cir. 2003). The Ninth Circuit emphasized that defendants themselves had “created an illegal secondary market targeting prohibited purchasers” and it was those actions that had “placed plaintiffs in a situation in which they were exposed to an unreasonable risk of harm through the foreseeable conduct of a prohibited purchaser,” such as the assailant who committed the mass shooting there at issue. Id. Thus, plaintiffs adequately pled that Glock, Inc.’s duty stemmed from placing its products, i.e., guns, essentially into the black market and by extension into criminals’ hands. See generally id. at 1204–05.
Plaintiffs suggest that Ileto supports the proposition that where a company’s business plan creates a foreseeable risk of harm by third parties, a duty to protect against that harm should be recognized.
Defendants distinguish Ileto on three grounds: “(1) firearms are tangible products; (2) they were the sole cause of the direct and immediate physical harms” at issue; and (3) the Ileto defendants were alleged to have deliberately created an illegal secondary market targeting prohibited purchasers.
The Court agrees with defendants’ third argument. In short, the allegations in Ileto explicitly connected the defendant manufacturers’ and distributors’ actions and knowledge with “criminals and underage end users.” Id. at 1197. Plaintiffs there further alleged that the Bureau of Alcohol, Tobacco, and Firearms had contacted defendants regarding their illegal gun distribution, and yet defendants failed to implement even basic safeguards, such as contractual provisions in their distributor contracts “to address the risks associated with prohibited purchasers.” Id. at 1198.
Here, plaintiffs allege that defendants designed their platforms to push children to use their platforms as much as possible. In that context, defendants enabled features that encourage children to connect with other users, such as adults. (See, e.g., MAC ¶ 530 (Snapchat “allows users to voice or video call one another in the app” which, “when paired with the many others that permit easy access to minors by predators, such as Quick Add and Snap Map,” can facilitate contact between adult predators and minors).) Plaintiffs urge that a duty should be imposed where defendants “knew or should have known that the design of [their] products attracts, enables, and facilitates child predators, and that such predators use [their] apps to recruit and sexually exploit children for the production of CSAM and its distribution on [their platforms].” (Id. at ¶ 164; see also id. at ¶ 144.) Ultimately, however, plaintiffs only allege that defendants sought to increase minors’ use of their platforms while “knowing or having reason to know” that adult predators also used the sites and therefore increased the risk to the minors.
The generality of these allegations is insufficient to show misfeasance.71 See, e.g., Ziencik, 2023 WL 2638314, at *5; Dyroff, 2017 WL 5665670, at *15 (collectively holding that merely operating a website or web-based platform used by malicious third parties is insufficient to constitute misfeasance). Moreover, at least with respect to defendant TikTok, plaintiffs would appear to acknowledge that defendants have taken more precautionary steps than the defendants in Ileto.72
Plaintiffs’ reliance on Hacala does not compel a different result. There, a California intermediate appellate court imposed a duty on the scooter ride-sharing company Bird to use due care in deploying and retrieving its scooters, including so as to protect pedestrians from tripping over scooters left abandoned on sidewalks. See Hacala v. Bird Rides, Inc., 90 Cal.App.5th 292 (2023).
Hacala is distinguishable for three reasons. First, the decision related to a California statutory duty. See id. at 310 (“[E]veryone is responsible . . . for an injury occasioned to another by his or her want of ordinary care or skill in the management of his or her property or person.”) (citation omitted). Second, the court’s decision was not based on a misfeasance analysis,73 but rather on Bird’s own responsibilities with respect to the deployment of scooters. Third and relatedly, the court’s decision was informed by Bird’s own agreement “to take measures to prevent” injuries to pedestrians, such as tripping over haphazardly abandoned scooters, “when it obtained [its] permit from the City [of Los Angeles]” to operate. Id. at 301. Given these distinctions, the decision has limited persuasive value here.
Accordingly, the MAC does not sufficiently allege misfeasance such that a duty should attach for third party conduct. However, it may be possible for plaintiffs, especially with the benefit of discovery, to amend their pleadings to more explicitly and specifically explain the basis for the misfeasance by defendants that they claim.74 The Court therefore DISMISSES WITH LEAVE TO AMEND plaintiffs’ product-based negligence claims to the extent such claims are premised on the existence of a duty to protect users from third party actors using their platforms.
* * *
For the foregoing reasons, the Court determines defendants owe plaintiff users of their social media platforms duties owing to their status as product makers, which are limited in scope to the defects previously determined by this Court to be product components.75 Defendants do not, however, owe plaintiffs a duty to protect them from harm from third party users of defendants’ platforms, though plaintiffs are granted leave to amend their pleadings on that theory as set forth herein.
VII. CAUSATION
Defendants move to dismiss on the issue of causation. Here, they focus on the inadequacy of the short-form complaints. With respect to the issue of general causation, the parties brief Ninth Circuit and California law. The Court need not separately determine the law of New York, Georgia, or other jurisdictions as they are sufficiently consistent for pleading purposes, and no party has suggested otherwise.76 In summary, plaintiffs have generally, and adequately, alleged causation for purposes of their strict products liability and product-based negligence claims.77
Issues regarding short-form complaints are more appropriately addressed in a parallel track of revised disclosures and, perhaps, additional motion practice.
A. Allegations of General Causation
i. Master Amended Complaint
The MAC includes two main categories of allegations on the causation element: those specific to how particular design features caused plaintiffs’ harms, and those asserting that research studies have tied use of defendants’ platforms to harms similar to those alleged.
As to the first category, plaintiffs contend that the “defective features” of defendants’ platforms have caused and contributed to a range of negative physical, mental, and emotional health outcomes, including anxiety, depression, and self-harm. (MAC ¶ 90.) Plaintiffs support these allegations by describing, in great detail, how defendants’ social media offerings work. (See, e.g., id. at ¶¶ 181-437 (Meta); 438-553 (Snap); 554-689 (ByteDance); 690-819 (Google).) Then, they tie the mechanics of the platforms to plaintiffs by asserting that the platforms are designed to induce compulsive use by minors, such as the MDL plaintiffs. Defendants are alleged to do this through efforts to, among other things, “addict users” to their platforms, id. at ¶ 247 (Facebook); “exploit[] and monetize[] social comparison,” id. at ¶ 312 (Instagram); “promote compulsive and excessive use,” id. at ¶¶ 491-97 (Snap); “inundate users with features” that “maximize the time users (including children) spend using” the platforms, id. at ¶¶ 727–29 (YouTube); and avoid controlling the spread of child sexual abuse material, id. at ¶¶ 654–74 (TikTok).
As to the second category, plaintiffs argue that myriad studies tie defendants’ design and operation of their platforms to the types of injuries alleged by plaintiffs. (See id. at ¶ 101; see also id. at ¶¶ 96–124 (compiling studies over many years).)
ii. Analysis
As stated above, the Court determines the law of the relevant state jurisdictions is sufficiently consistent for pleading purposes to permit a single analysis.
As outlined above, plaintiffs allege defendants made design choices with respect to their platforms which caused plaintiffs’ injuries, including adolescent addiction, and negative physical, mental, and emotional health outcomes. The allegations are rooted in academic studies empirically demonstrating causal connections. Thus, given the procedural posture and Rule 8 standards,80 plaintiffs’ allegations are sufficient.81 Defendants’ assertion that more is required at this preliminary stage lacks merit.82
B. Specific Causation
With respect to the adequacy of the plaintiffs’ short-form complaints, defendants’ objections were made previously and addressed.83 The Court sees no reason to depart from its prior position. MDLs are designed to allow for an efficient progression of litigation and the approach here is consistent with other, similar cases, such as In re Allergan Biocell Textured Breast Implant Products Liability Litigation.84 There, as here, an analysis of the “common facts” alleged in plaintiffs’ MAC is sufficient for plaintiffs to plausibly allege, at this preliminary stage, that defendants’ actions proximately caused plaintiffs’ injuries.
First, Snap repeatedly argued, both in their supplemental filings and at the hearing, that plaintiffs have not pled adequately severe (and therefore cognizable) harms arising from the design of Snapchat. This is not accurate. The MAC contains sufficiently detailed allegations as to the harms caused by the design of Snapchat to satisfy Rule 8.
Second, Snap appears to suggest that a recent district court case in California is relevant to the causation analysis conducted above, writing that the court there “rejected the argument that alleged Snapchat ‘design defects’ . . . were the source of harm suffered by Plaintiffs who experienced sexual exploitation by bad actors who abused the platform.” See Snap’s Supplemental Reply Brief at 1:16–19. However, that argument relies on L.W., a case decided on Section 230 grounds, which does not bear on the causation analysis conducted here.
For the reasons addressed here as well as those detailed, supra, at notes 3, 10, and 58, the Court DENIES Snap’s request for dismissal of plaintiffs’ claims relative to the Snapchat platform.
Defendants’ reliance on Adams v. BRG Sports, Inc. does not compel a different result.85 There, the court dismissed on the grounds that the complaint was insufficiently particularized and had “obfuscate[d] whether each and every plaintiff [alleged] that his injury [was] caused by the defendants’ negligence, defective design, and/or inadequate warnings.” Id. at *3. There is no such confusion here. The MDL plaintiffs allege harm stemming from defendants’ platforms, as set forth in the MAC and their individual short-form complaints.
Accordingly, the Court DENIES defendants’ motion to dismiss for failure to adequately plead causation.
VIII. CONCLUSION
For the foregoing reasons, the Court GRANTS IN PART and DENIES IN PART the pending motions to dismiss. As such, discovery will be allowed to proceed, and the Court will work with parties on the next phase of briefing.
The Court summarizes its key rulings as follows:
- MTD2 is GRANTED on Section 230 grounds as to plaintiffs’ products liability design defect claims (Claims 1 and 3) to the extent they are based on the defects alleged at paragraphs 845(e), (h), (i), (t), (u), (l),86 and (j), as well as paragraph 864(l) of the MAC. MTD2 is GRANTED on First Amendment grounds as to plaintiffs’ products liability design defect claims (Claims 1 and 3) arising out of the defect alleged at paragraph 845(l) of the MAC insofar as that defect concerns the timing and clustering of notifications of defendants’ content. MTD2 is otherwise DENIED as set forth herein.
- The Court FINDS plaintiffs’ negligence per se claim (Claim 5) not barred by Section 230 or the First Amendment.
- With respect to the functionalities that remain after the rulings with respect to Section 230 and the First Amendment, MTD1 is GRANTED WITH LEAVE TO AMEND as to plaintiffs’ claims that defendants had and breached a duty to protect platform users from harm by third parties, such as adult predators. MTD1 is otherwise DENIED as set forth herein.
- With respect to the arguments regarding the remaining elements of plaintiffs’ negligence per se claim (Claim 5), as they require adequate briefing, those arguments are deemed WITHDRAWN without prejudice.
- Snap’s supplemental filings requesting dismissal of plaintiffs’ claims specific to the Snapchat platform are DENIED.
This terminates Dkt. Nos. 237 and 320.
IT IS SO ORDERED.
Dated: Nov. 14, 2023
YVONNE GONZALEZ ROGERS
UNITED STATES DISTRICT JUDGE
