In the United States Court of Appeals For the Second Circuit
August Term, 2018
No. 18-397
STUART FORCE, individually and as Administrator on behalf of the Estate of TAYLOR FORCE, ROBBI FORCE, KRISTIN ANN FORCE, ABRAHAM RON FRAENKEL, individually and as Administrator on behalf of the Estate of YAAKOV NAFTALI FRAENKEL, and as the natural and legal guardian of minor plaintiffs A.H.H.F., A.L.F., N.E.F, N.S.F., and S.R.F., A.H.H.F., A.L.F., N.E.F., N.S.F., S.R.F., RACHEL DEVORA SPRECHER FRAENKEL, individually and as Administrator on behalf of the Estate of YAAKOV NAFTALI FRAENKEL and as the natural and legal guardian of minor plaintiffs A.H.H.F., A.L.F., N.E.F, N.S.F., and S.R.F., TZVI AMITAY FRAENKEL, SHMUEL ELIMELECH BRAUN, individually and as Administrator on behalf of the Estate of CHAYA ZISSEL BRAUN, CHANA BRAUN, individually and as Administrator on behalf of the Estate of CHAYA ZISSEL BRAUN, SHIMSHON SAM HALPERIN, SARA HALPERIN, MURRAY BRAUN, ESTHER BRAUN, MICAH LAKIN AVNI, individually and as Joint Administrator on behalf of the Estate of RICHARD LAKIN, MAYA LAKIN, individually and as Joint Administrator on behalf of the Estate of RICHARD LAKIN, MENACHEM MENDEL RIVKIN, individually and as the natural and legal guardian of minor plaintiffs S.S.R., M.M.R., R.M.R., S.Z.R., BRACHA RIVKIN, individually and as the natural and legal guardian of minor plaintiffs S.S.R., M.M.R., R.M.R., and S.Z.R., S.S.R., M.M.R., R.M.R., S.Z.R.,
Plaintiffs-Appellants,
v.
FACEBOOK, INC., Defendant-Appellee.
Appeal from the United States District Court for the Eastern District of New York. No. 16-cv-5158 — Nicholas G. Garaufis, Judge.
ARGUED: FEBRUARY 25, 2019
DECIDED: JULY 31, 2019
Before: KATZMANN, Chief Judge, DRONEY, and SULLIVAN, Circuit Judges.
Plaintiffs-Appellants, U.S. citizen victims of Hamas terrorist attacks in Israel (or their representatives), appeal from a final judgment of the United States District Court for the Eastern District of New York (Garaufis, J.). Plaintiffs brought federal civil anti-terrorism and Israeli law claims against Defendant-Appellee Facebook, Inc., alleging that Facebook unlawfully assisted Hamas in those attacks. The district court dismissed the claims on the basis of Section 230(c)(1) of the Communications Decency Act, 47 U.S.C. § 230(c)(1). We affirm the judgment as to plaintiffs’ federal claims and dismiss plaintiffs’ foreign law claims for lack of subject matter jurisdiction.
Chief Judge KATZMANN concurs in this opinion except as to Parts I and II of the Discussion, concurs in the judgment with respect to plaintiffs’ foreign law claims, and dissents from the judgment with respect to plaintiffs’ federal claims.
MEIR KATZ (Robert J. Tolchin, on the brief), The Berkman Law Office, LLC, Brooklyn, New York, for Plaintiffs-Appellants.
CRAIG S. PRIMIS (K. Winn Allen, Matthew S. Brooker, on the brief), Kirkland & Ellis, LLP, Washington, D.C., for Defendant-Appellee.
DRONEY, Circuit Judge:
The principal question presented in this appeal is whether Section 230(c)(1) of the Communications Decency Act, 47 U.S.C. § 230(c)(1), bars plaintiffs’ civil claims against Facebook.
The district court granted Facebook’s motion to dismiss plaintiffs’ First Amended Complaint under Federal Rule of Civil Procedure 12(b)(6), holding that Section 230(c)(1) barred plaintiffs’ claims.
On appeal, plaintiffs argue that the district court improperly dismissed their claims because Section 230(c)(1) does not provide immunity to Facebook under the circumstances of their allegations.
We conclude that the district court properly applied Section 230(c)(1) to plaintiffs’ federal claims. Also, upon our review of plaintiffs’ assertion of diversity jurisdiction over their foreign law claims, we conclude that subject matter jurisdiction over those claims is lacking, and we dismiss those claims without prejudice.
FACTUAL AND PROCEDURAL BACKGROUND
I. Allegations in Plaintiffs’ Complaint
Because this case comes to us on a motion to dismiss, we recount the facts as plaintiffs provide them to us, treating as true the allegations in their complaint. See Galper v. JP Morgan Chase Bank, N.A., 802 F.3d 437, 442 (2d Cir. 2015).
A. The Attacks
Hamas is a Palestinian Islamist organization centered in Gaza. It has been designated a foreign terrorist organization by the United States and Israel. Since it was formed in 1987, Hamas has conducted thousands of terrorist attacks against civilians in Israel.
Plaintiffs’ complaint describes terrorist attacks by Hamas against five Americans in Israel between 2014 and 2016. Yaakov Naftali Fraenkel, a teenager, was kidnapped by a Hamas operative in 2014 while walking home from school in Gush Etzion, near Jerusalem, and then was shot to death. Chaya Zissel Braun, a 3-month-old baby, was killed at a train station in Jerusalem in 2014 when a Hamas operative drove a car into a crowd. Richard Lakin died after Hamas members shot and stabbed him in an attack on a bus in Jerusalem in 2015. Graduate student Taylor Force was stabbed to death by a Hamas operative in Jaffa in 2016, and Menachem Mendel Rivkin was seriously wounded in a 2016 stabbing attack by a Hamas operative.
B. Facebook‘s Alleged Role in the Attacks
1. How Facebook Works
Facebook operates an “online social network platform and communications service[].” App’x 230. Facebook users populate their own “Facebook ‘pages’” with “content,” including personal identifying information and indications of their particular “interests.” App’x 250–51, 345. Organizations and other entities may also have Facebook pages. Users can post content on others’ Facebook pages, reshare each other’s content, and send messages to one another. The content can be text-based messages and statements, photos, web links, or other information.
Facebook users must first register for a Facebook account, providing their names, telephone numbers, and email addresses. When registering, users do not specify the nature of the content they intend to publish on the platform, nor does Facebook screen new users based on its expectation of what content they will share with other Facebook users. There is no charge to prospective users for joining Facebook.
Facebook does not preview or edit the content that its users post. Facebook‘s terms of service specify that a user “own[s] all of the content and information [the user] post[s] on Facebook, and [the user] can control how it is shared through [the user‘s] privacy and application settings.” App‘x 252 (alterations in original).
While Facebook users may view each other‘s shared content simply by visiting other Facebook pages and profiles, Facebook also provides a personalized “newsfeed” page for each user. Facebook uses algorithms — “a precisely defined set of mathematical or logical operations for the performance of a particular task,” Algorithm, Oxford English Dictionary (3d ed. 2012) — to determine the content to display to users on the newsfeed webpage. Newsfeed content is displayed within banners or modules and changes frequently. The newsfeed algorithms — developed by programmers employed by Facebook — automatically analyze Facebook users’ prior behavior on the Facebook website to predict and display the content that is most likely to interest and engage those particular users. Other algorithms similarly use Facebook users’ behavioral and demographic data to show those users third-party groups, products, services, and local events likely to be of interest to them.
Facebook‘s algorithms also provide “friend suggestions,” which, if accepted by the user, result in those users seeing each other‘s shared content. App‘x 346–47. The friend-suggestion algorithms are based on such factors as the users’ common membership in Facebook‘s online “groups,” geographic location, attendance at events, spoken language, and mutual friend connections on Facebook. App‘x 346.
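To make the alleged mechanism concrete, the factors described above can be expressed as a simple scoring heuristic. The following Python sketch is illustrative only: the weights, field names, and function are assumptions supplied for exposition, not anything drawn from the record or from Facebook’s actual systems.

```python
# Illustrative sketch of a friend-suggestion heuristic built from the factors
# the complaint alleges (common groups, location, events, language, mutual
# friends). All names and weights are hypothetical.
from dataclasses import dataclass, field

@dataclass
class User:
    groups: set = field(default_factory=set)
    events: set = field(default_factory=set)
    friends: set = field(default_factory=set)
    location: str = ""
    language: str = ""

def friend_suggestion_score(a: User, b: User) -> float:
    """Higher scores mean the two users are more likely to be suggested
    to each other as friends."""
    score = 3.0 * len(a.friends & b.friends)    # mutual friend connections
    score += 2.0 * len(a.groups & b.groups)     # common group memberships
    score += 1.5 * len(a.events & b.events)     # attendance at the same events
    if a.location == b.location:                # shared geographic location
        score += 1.0
    if a.language == b.language:                # shared spoken language
        score += 0.5
    return score
```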
Facebook’s advertising algorithms and “remarketing” technology also allow advertisers on Facebook to target specific ads to its users who are likely to be most interested in them.
2. Hamas’s Use of Facebook
Plaintiffs allege that Hamas used Facebook to post content that encouraged terrorist attacks in Israel during the time period of the attacks in this case. The attackers allegedly viewed that content on Facebook. The encouraging content ranged in specificity; for example, Fraenkel, although not a soldier, was kidnapped and murdered after Hamas members posted messages on Facebook that advocated the kidnapping of Israeli soldiers. The attack that killed the Braun baby at the light rail station in Jerusalem came after Hamas posts encouraged car-ramming attacks at light rail stations. By contrast, the killer of Force is alleged to have been a Facebook user, but plaintiffs do not set forth what specific content encouraged his attack, other than that “Hamas . . . use[d] Facebook to promote terrorist stabbings.” App‘x 335.
Hamas also used Facebook to celebrate these attacks and others, to transmit political messages, and to generally support further violence against Israel. The perpetrators were able to view this content because, although Facebook‘s terms and policies bar such use by Hamas and other designated foreign terrorist organizations, Facebook has allegedly failed to remove the “openly maintained” pages and associated content of certain Hamas leaders, spokesmen, and other members. App‘x 229. It is also alleged that Facebook‘s algorithms directed such content to the personalized newsfeeds of the individuals who harmed the plaintiffs. Thus, plaintiffs claim, Facebook enables Hamas “to disseminate its messages directly to its intended audiences,” App‘x 255, and to “carry out the essential communication components of [its] terror attacks,” App‘x 256.
II. Facebook‘s Antiterrorism Efforts
A. Intended Uses of Facebook
Facebook has Terms of Service that govern the use of Facebook and purport to incorporate Facebook’s Community Standards. In its Terms of Service, Facebook represents that its services are intended to “[c]onnect you with people and organizations you care about,” by, among other things, “[p]rovid[ing] a personalized experience” and “[h]elp[ing] you discover content, products, and services that may interest you.” Terms of Service, Facebook, https://www.facebook.com/terms.php (last visited June 26, 2019). To do so, Facebook “must collect and use your personal data,” id., subject to a detailed “Data Policy,” Data Policy, Facebook, https://www.facebook.com/about/privacy/update (last visited June 26, 2019).
B. Prohibited Uses of Facebook
According to the current version of Facebook‘s Community Standards, Facebook “remove[s] content that expresses support or praise for groups, leaders, or individuals involved in,” inter alia, “[t]errorist activity.” 2. Dangerous Individuals and Organizations, Community Standards, Facebook, https://www.facebook.com/communitystandards/dangerous_individuals_organizations (last visited June 26, 2019). “Terrorist organizations and terrorists” may not “maintain a presence” on Facebook, nor is “coordination of support” for them allowed. Id. Facebook “do[es] not allow symbols that represent any [terrorist] organizations or [terrorists] to be shared on [the] platform without context that condemns or neutrally discusses the content.” Id. In addition, Facebook purports to ban “hate speech” and to “remove content that glorifies violence or celebrates the suffering or humiliation of others.” Objectionable Content, Community Standards, Facebook, https://www.facebook.com/communitystandards/objectionable_content (last visited June 26, 2019).
Facebook’s Terms of Service also prohibit using its services “to do or share anything” that is, inter alia, “unlawful” or that “infringes or violates someone else’s rights.” Terms of Service, supra. Violating any of these policies may result in Facebook suspending or disabling a user’s account, removing the user’s content, blocking access to certain features, and contacting law enforcement. Id.
According to recent testimony by Facebook’s General Counsel in a United States Senate hearing, Facebook employs a multilayered strategy to enforce these policies and combat extremist content on its platform. Facebook claimed in the hearing that most of the content it removes is identified by Facebook’s internal procedures before it is reported by users. For example, terrorist photos or videos that users attempt to upload are matched against an inventory of known terrorist content. Facebook is also experimenting with artificial intelligence to block or remove “text that might be advocating for terrorism.” App’x 373. When Facebook detects terrorist-related content, it also uses artificial intelligence to identify similar, socially interconnected accounts, content, and pages that may themselves support terrorism.
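The upload-matching step described in the testimony can be sketched in a few lines. This is a deliberately simplified illustration: production systems are reported to use perceptual hashes, which survive minor edits to an image, whereas the exact-match digest below does not, and every name here is hypothetical.

```python
# Simplified sketch of matching uploads against an inventory of known
# terrorist content. Real systems use perceptual hashing rather than the
# exact SHA-256 comparison shown here; all names are hypothetical.
import hashlib

KNOWN_CONTENT_DIGESTS: set[str] = set()  # digests of previously removed media

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def should_block_upload(data: bytes) -> bool:
    """Block the upload if its digest matches the known-content inventory."""
    return digest(data) in KNOWN_CONTENT_DIGESTS

def register_known_content(data: bytes) -> None:
    """Add newly identified material to the inventory."""
    KNOWN_CONTENT_DIGESTS.add(digest(data))
```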
The General Counsel also testified that, for content that is not automatically detected, Facebook employs thousands of people who respond to user reports of inappropriate content and remove content that violates Facebook’s policies.
III. District Court Proceeding
Plaintiffs brought this action on July 10, 2016, in the United States District Court for the Southern District of New York. On consent of the parties, the action was transferred to the United States District Court for the Eastern District of New York on September 16, 2016. In their First Amended Complaint, plaintiffs claimed that, under the Anti-Terrorism Act’s civil remedies provision, 18 U.S.C. § 2333, Facebook was civilly liable for providing material support and resources to Hamas. Plaintiffs also asserted tort claims under Israeli law, alleging that the district court had diversity-based subject matter jurisdiction over those claims under 28 U.S.C. § 1332(a)(2).
Facebook moved to dismiss plaintiffs’ claims for lack of personal jurisdiction under Rule 12(b)(2) and for failure to state a claim under Rule 12(b)(6). The district court determined that it had personal jurisdiction over Facebook, a ruling that Facebook does not challenge on appeal. But the district court also held that Section 230(c)(1) barred plaintiffs’ claims and, resolving the parties’ choice-of-law dispute in Facebook’s favor, dismissed the First Amended Complaint in its entirety. Plaintiffs then filed a motion for leave to file a proposed Second Amended Complaint, which the district court denied as futile. This appeal followed.
STANDARD OF REVIEW
Because the district court determined that it was futile to allow plaintiffs to file a second amended complaint, we evaluate that proposed complaint “as we would a motion to dismiss, determining whether [it] contains enough facts to state a claim to relief that is plausible on its face.” Ind. Pub. Ret. Sys. v. SAIC, Inc., 818 F.3d 85, 92 (2d Cir. 2016) (citation and internal quotation marks omitted). We accept as true all alleged facts in both the First Amended Complaint and the proposed second amended complaint. See Ashcroft v. Iqbal, 556 U.S. 662, 678 (2009). We also review de novo a district court’s grant of a motion to dismiss.
DISCUSSION
On appeal, plaintiffs contend that the district court improperly held that Section 230(c)(1) bars their claims.
In response to plaintiffs’ claims, Facebook contends that Section 230(c)(1) shields it from liability for the third-party content at issue.
We first turn to the issues regarding Section 230(c)(1) and plaintiffs’ federal claims, and then address plaintiffs’ foreign law claims.
I. Background of Section 230(c)(1)
The primary purpose of the proposed legislation that ultimately resulted in the Communications Decency Act (“CDA”) “was to protect children from sexually explicit internet content.” FTC v. LeadClick Media, LLC, 838 F.3d 158, 173 (2d Cir. 2016) (citing 141 Cong. Rec. S1953 (daily ed. Feb. 1, 1995) (statement of Sen. Exon)). Section 230, though—added as an amendment to the CDA bill, id.—was enacted “to maintain the robust nature of Internet communication and, accordingly, to keep government interference in the medium to a minimum,” Ricci v. Teamsters Union Local 456, 781 F.3d 25, 28 (2d Cir. 2015) (quoting Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997)). Indeed, Congress stated in Section 230 that “[i]t is the policy of the United States—(1) to promote the continued development of the Internet and other interactive computer services and other interactive media; [and] (2) to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.” 47 U.S.C. § 230(b)(1)-(2).
In the seminal Fourth Circuit decision interpreting the immunity of Section 230 shortly after its enactment, Zeran v. America Online, Inc., that court described Congress‘s concerns underlying Section 230:
The amount of information communicated via interactive computer services is . . . staggering. The specter of . . . liability in an area of such prolific speech would have an obvious chilling effect. It would be impossible for service providers to screen each of their millions of postings for possible problems. Faced with potential liability for each message republished by their services, interactive computer service providers might choose to severely restrict the number and type of messages posted. Congress . . . chose to immunize service providers to avoid any such restrictive effect.

129 F.3d at 330-31.
The addition of Section 230 to the proposed CDA also “assuaged Congressional concern regarding the outcome of two inconsistent judicial decisions,” Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135 (S.D.N.Y. 1991) and Stratton Oakmont, Inc. v. Prodigy Servs. Co., No. 31063/94, 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995), both of which “appl[ied] traditional defamation law to internet providers,” LeadClick, 838 F.3d at 173. As we noted in LeadClick, “[t]he first [decision] held that an interactive computer service provider could not be liable for a third party’s defamatory statement . . . but the second imposed liability where a service provider filtered its content in an effort to block obscene material.” Id. (citations omitted) (citing 141 Cong. Rec. H8469-70 (daily ed. Aug. 4, 1995) (statement of Rep. Cox)).
To “overrule Stratton,” id., and to accomplish its other objectives, Congress enacted Section 230(c)(1), which provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1).
In light of Congress’s objectives, the Circuits are in general agreement that the text of Section 230(c)(1) should be construed broadly in favor of immunity.
II. Whether Section 230(c)(1) Protects Facebook’s Alleged Conduct
The parties agree that Facebook is a provider of an “interactive computer service,” but dispute whether plaintiffs’ claims allege that (1) Facebook is acting as the protected publisher of information, and (2) the challenged information is provided by Hamas, or by Facebook itself.
A. Whether Plaintiffs’ Claims Implicate Facebook as a “Publisher” of Information
Certain important terms are left undefined by Section 230, including what it means to be the “publisher” of information; we therefore give that term its ordinary meaning.
Plaintiffs seek to hold Facebook liable for “giving Hamas a forum with which to communicate and for actively bringing Hamas’ message to interested parties.” Appellants’ Reply Br. 37; see also, e.g., Appellants’ Br. 50–51 (arguing that the federal anti-terrorism statutes “prohibit[] Facebook from supplying Hamas a platform and communications services”). But that alleged conduct by Facebook falls within the heartland of what it means to be the “publisher” of information under Section 230(c)(1).
Plaintiffs also argue that Facebook does not act as the publisher of Hamas’s content within the meaning of Section 230(c)(1) because its friend- and content-suggestion algorithms “match” that content with the users most likely to be interested in it, making Facebook, in plaintiffs’ view, a “matchmaker” rather than a publisher. We are not persuaded.
Indeed, arranging and distributing third-party information inherently forms “connections” and “matches” among speakers, content, and viewers of content, whether in interactive internet forums or in more traditional media. That is an essential result of publishing. Accepting plaintiffs’ argument would eviscerate Section 230(c)(1); a defendant interactive computer service would be ineligible for immunity by virtue of simply organizing and displaying content exclusively provided by third parties.
Plaintiffs’ “matchmaking” argument would also deny immunity for the editorial decisions regarding third-party content that interactive computer services have made since the early days of the Internet. The services have always decided, for example, where on their sites (or other digital property) particular third-party content should reside and to whom it should be shown. Placing certain third-party content on a homepage, for example, tends to recommend that content to users more than if it were located elsewhere on a website. Internet services have also long been able to target the third-party content displayed to users based on, among other things, users’ geolocation, language of choice, and interests.
Seen in this context, plaintiffs’ argument that Facebook’s algorithms uniquely form “connections” or “matchmake” is wrong. That, again, has been a fundamental result of publishing third-party content on the Internet since its beginning. Like the decision to place third-party content on a homepage, for example, Facebook’s algorithms might cause more such “matches” than other editorial decisions. But that is not a basis to exclude the use of algorithms from the scope of what it means to be a “publisher” under Section 230(c)(1).
Second, plaintiffs argue, in effect, that Facebook’s use of algorithms is outside the scope of publishing because the algorithms automate Facebook’s editorial decision-making. That argument, too, fails because “so long as a third party willingly provides the essential published content, the interactive service provider receives full immunity regardless of the specific edit[orial] or selection process.” Carafano, 339 F.3d at 1124; see Marshall’s Locksmith, 925 F.3d at 1271 (holding that “automated editorial act[s]” are protected by Section 230) (quoting O’Kroley v. Fastcase, Inc., 831 F.3d 352, 355 (6th Cir. 2016)); cf., e.g., Roommates.Com, 521 F.3d at 1172; Herrick, 765 F. App’x at 591. We disagree with plaintiffs that in enacting Section 230 to, inter alia, “promote the continued development of the Internet,” 47 U.S.C. § 230(b)(1), Congress intended to deprive interactive computer services of protection when they automate their editorial decision-making.
Our dissenting colleague calls for a narrow textual interpretation of Section 230(c)(1) that would exclude Facebook’s friend- and content-suggestion algorithms from its protection. But that reading is difficult to square with the findings and policy that Congress set out in the statute itself, see 47 U.S.C. § 230(a), which reflect its understanding that interactive computer services connect users with information and with one another:
- The rapidly developing array of Internet and other interactive computer services available to individual Americans represent an extraordinary advance in the availability of educational and informational resources to our citizens.
- These services offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops.
- The Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.
- The Internet and other interactive computer services have flourished, to the benefit of all Americans, with a minimum of government regulation.
- Increasingly Americans are relying on interactive media for a variety of political, educational, cultural, and entertainment services.
We therefore conclude that plaintiffs’ claims fall within Facebook’s status as the “publisher” of information within the meaning of Section 230(c)(1).
B. Whether Facebook is the Provider of the Information
We turn next to whether Facebook is plausibly alleged to itself be an “information content provider,” or whether it is Hamas that provides all of the complained-of content. “The term ‘information content provider’ means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.” 47 U.S.C. § 230(f)(3).
The term “development” in Section 230(f)(3) is undefined. However, consistent with broadly construing “publisher” under Section 230(c)(1), we have recognized that a defendant will not be considered to have developed third-party content unless the defendant directly and “materially” contributed to what made the content itself “unlawful.” LeadClick, 838 F.3d at 174 (quoting Roommates.Com, 521 F.3d at 1168). This “material contribution” test, as the Ninth Circuit has described it, “draw[s] the line at the ‘crucial distinction between, on the one hand, taking actions . . . to . . . display actionable content and, on the other hand, responsibility for what makes the displayed content [itself] illegal or actionable.‘” Kimzey v. Yelp! Inc., 836 F.3d 1263, 1269 n.4 (9th Cir. 2016) (quoting Jones, 755 F.3d at 413-14).
Although it did not explicitly adopt the “material contribution” test, the D.C. Circuit’s recent decision in Marshall’s Locksmith Service v. Google, LLC, 925 F.3d 1263 (D.C. Cir. 2019), illustrates how a website’s display of third-party information does not cross the line into content development. There, “scam locksmiths”—who were apparently actual locksmiths seeking to mislead consumers with lock emergencies into believing that they were closer in proximity to the emergency location than they actually were—allegedly provided Google, Microsoft, and Yahoo!’s internet mapping services with false locations, some of which were exact street addresses and others which were “less-exact,” such as telephone area codes. Id. at 1265-70. The internet mapping services of Google, Microsoft, and Yahoo! translated this information into textual and pictorial “pinpoints” on maps that were displayed to the services’ users. Id. at 1269. The D.C. Circuit concluded that this “translation” of the third-party information by the interactive computer services did not develop that information (or create new content) because the underlying “information [was] entirely provided by the third party, and the choice of presentation” fell within the interactive computer services’ prerogative as publishers. Id. (emphasis added).
As to the “less-exact” location information, such as area codes, provided by the scam locksmiths, the plaintiffs also argued that the mapping services’ algorithmic translation of this information into exact pinpoint map locations developed or created the misleading information. Id. at 1269-70. The D.C. Circuit also rejected that argument, holding that “defendants’ translation of [imprecise] third-party information into map pinpoints does not convert them into ‘information content providers’ because defendants use a neutral algorithm to make that translation.” Id. at 1270. In using the term “neutral,” the court observed that the algorithms were alleged to make no distinction between “scam” and other locksmiths and that the algorithms did not materially alter (i.e., they “hew[ed] to“) the underlying information provided by the third parties. Id. at 1270 n.5, 1270-71.
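The court’s “neutral algorithm” point can be restated as a minimal sketch: the same translation rule applies to every submission, exact or approximate, without regard to who submitted it or why. The lookup tables and names below are hypothetical illustrations, not the defendants’ code.

```python
# Hypothetical illustration of a "neutral" translation of third-party
# location input into a map pinpoint. The same rule applies to every
# submitter; the lookup tables are invented for this sketch.
AREA_CODE_CENTROIDS = {"202": (38.905, -77.016)}                # rough centroid
GEOCODED_ADDRESSES = {"1600 Pennsylvania Ave NW": (38.8977, -77.0365)}

def to_pinpoint(submission: str) -> tuple[float, float] | None:
    """Translate a submitted location, exact or "less-exact," into map
    coordinates, hewing to whatever the third party provided."""
    if submission.isdigit() and len(submission) == 3:   # bare area code
        return AREA_CODE_CENTROIDS.get(submission)
    return GEOCODED_ADDRESSES.get(submission)           # exact street address
```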
Here, plaintiffs’ allegations about Facebook’s conduct do not render it responsible for the Hamas-related content. As an initial matter, Facebook does not edit (or suggest edits for) the content that its users publish.
Nor does Facebook‘s acquiring certain information from users render it a developer for the purposes of Section 230. Facebook requires users to provide only basic identifying information: their names, telephone numbers, and email addresses. In so doing, Facebook acts as a “neutral intermediary.” LeadClick, 838 F.3d at 174. Moreover, plaintiffs concede in the pleadings that Facebook does not publish that information, cf., e.g., Roommates.Com, 521 F.3d at 1172, and so such content plainly has no bearing on plaintiffs’ claims.
Plaintiffs’ allegations likewise indicate that Facebook’s algorithms are content “neutral” in the sense that the D.C. Circuit used that term in Marshall’s Locksmith: The algorithms take the information provided by Facebook users and “match” it to other users—again, materially unaltered—based on objective factors applicable to any content, whether it concerns soccer, Picasso, or plumbers. Merely arranging and displaying others’ content to users of Facebook through such algorithms—even if the content is not actively sought by those users—is not enough to hold Facebook responsible as the “develop[er]” or “creat[or]” of that content. See, e.g., Marshall’s Locksmith, 925 F.3d at 1269-71; Roommates.Com, 521 F.3d at 1169-70.
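A minimal sketch, again with invented names and data structures, shows what such content neutrality means in practice: the ranking rule is the same whether the tags concern soccer, Picasso, or plumbers.

```python
# Sketch of content-neutral matching: items are ranked purely by overlap
# with a user's recorded interests, with no inspection of the topics
# themselves. Structure and names are hypothetical.
def rank_items(user_interests: set[str],
               items: list[tuple[str, set[str]]]) -> list[str]:
    """Return item names ordered by interest overlap, highest first."""
    return [name
            for name, tags in sorted(items,
                                     key=lambda it: len(user_interests & it[1]),
                                     reverse=True)]

# Example: the rule scores every topic by the identical overlap criterion.
ranked = rank_items({"soccer", "painting"},
                    [("local plumber ad", {"plumbing"}),
                     ("Picasso exhibit", {"painting", "art"}),
                     ("soccer highlights", {"soccer", "sports"})])
```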
Plaintiffs’ arguments to the contrary are unpersuasive. For one, they point to the Ninth Circuit‘s decision in Roommates.Com as holding that requiring or encouraging users to provide any particular information whatsoever to the interactive computer service transforms a defendant into a developer of that information. The Roommates.Com holding, however, was not so broad; it concluded only that the site‘s conduct in requiring users to select from “a limited set of pre-populated answers” to respond to particular “discriminatory questions” had a content-development effect that was actionable in the context of the Fair Housing Act. See 521 F.3d at 1166. There is no comparable allegation here.
Plaintiffs also argue that Facebook develops Hamas’s content because Facebook’s algorithms make that content more “visible,” “available,” and “usable.” Appellants’ Br. at 45-46. But making information more available is, again, an essential part of traditional publishing; it does not amount to “developing” that information within the meaning of Section 230. Similarly, plaintiffs assert that Facebook’s algorithms suggest third-party content to users “based on what Facebook believes will cause the user to use Facebook as much as possible” and that Facebook intends to “influence” consumers’ responses to that content. Appellants’ Br. 48. This does not describe anything more than Facebook vigorously fulfilling its role as a publisher. Plaintiffs’ suggestion that publishers must have no role in organizing or distributing third-party content in order to avoid “develop[ing]” that content is both ungrounded in the statute’s text and contrary to the purposes of Section 230.
Finally, we note that plaintiffs also argue that Facebook should not be afforded Section 230 immunity because Facebook has chosen to undertake efforts to eliminate objectionable and dangerous content but has not been effective or consistent in those efforts. However, again, one of the purposes of Section 230 was to ensure that interactive computer services should not incur liability as developers or creators of third-party content merely because they undertake such efforts—even if they are not completely effective.
We therefore conclude from the allegations of plaintiffs’ complaint that Facebook did not “develop” the content of the Facebook postings by Hamas and that Section 230(c)(1) applies to Facebook‘s alleged conduct in this case.
III. Whether Applying Section 230(c)(1) to Plaintiffs’ Claims Would Impair the Enforcement of a Federal Criminal Statute
Plaintiffs also argue that Section 230(c)(1) may not be applied to their claims because that would impermissibly “impair the enforcement” of a “Federal criminal statute.” Appellants’ Br. at 52 (quoting 47 U.S.C. § 230(e)(1)).
We agree with the district court’s conclusion that Section 230(e)(1) is inapplicable in this civil action. Even accepting, arguendo, plaintiffs’ assertion that a civil litigant could be said to “enforce” a criminal statute through a separate civil remedies provision, any purported ambiguity in Section 230(e)(1) is resolved by its title, “No effect on criminal law.” “Criminal law” concerns “prosecuting and punishing offenders” and is “contrasted with civil law,” which, as here, concerns “private relations between individuals.” Criminal Law, Civil Law, Oxford English Dictionary (3d ed. 2010). Furthermore, as the First Circuit pointed out in Jane Doe No. 1 v. Backpage.com, LLC, “where Congress wanted to include both civil and criminal remedies in CDA provisions, it did so through broader language.” 817 F.3d at 23. Section 230(e)(4), for example, states that Section 230 “should not ‘be construed to limit the application of the Electronic Communications Privacy Act of 1986,’” a statute that provides for both criminal penalties and civil remedies. Id. Section 230(e)(1) contains no comparably broad language, and it therefore does not bar applying Section 230(c)(1) to plaintiffs’ civil claims.
IV. Whether the Anti-Terrorism Act‘s Civil Remedies Provision, 18 U.S.C. § 2333, Implicitly Narrowed or Repealed Section 230(c)(1)
Plaintiffs also argue that the ATA’s civil remedies provision, 18 U.S.C. § 2333, as amended by the Justice Against Sponsors of Terrorism Act (“JASTA”) to permit secondary liability, implicitly narrowed or repealed Section 230(c)(1). We disagree.
“[R]epeals by implication are not favored and will not be presumed unless the intention of the legislature to repeal is clear and manifest.” Nat‘l Ass‘n of Home Builders v. Defs. of Wildlife, 551 U.S. 644, 662 (2007) (citation, internal quotation marks, and alterations omitted). In other words, “[a]n implied repeal will only be found where provisions in two statutes are in irreconcilable conflict, or where the latter Act covers the whole subject of the earlier one and is clearly intended as a substitute.” Branch v. Smith, 538 U.S. 254, 273 (2003) (citation and internal quotation marks omitted). Here, there is no irreconcilable conflict between the statutes. Section 230 provides an affirmative defense to liability under Section 2333 for only the narrow set of defendants and conduct to which Section 230 applies. JASTA merely expanded Section 2333‘s cause of action to secondary liability; it provides no obstacle—explicit or implicit—to applying Section 230.
V. Whether Applying Section 230(c)(1) to Plaintiffs’ Claims Would Be Impermissibly Extraterritorial
Plaintiffs also argue that the presumption against the extraterritorial application of federal statutes bars applying Section 230(c)(1) to their claims because Hamas posted content and conducted the attacks from overseas, and because Facebook’s employees who failed to take down Hamas’s content were allegedly located outside the United States, in Facebook’s offices abroad. This argument also fails.
Under the canon of statutory interpretation known as the “presumption against extraterritoriality,” “[a]bsent clearly expressed congressional intent to the contrary, federal laws will be construed to have only domestic application.” RJR Nabisco, Inc. v. European Cmty., 136 S. Ct. 2090, 2100 (2016). The Supreme Court has instructed courts to apply “a two-step framework for analyzing extraterritoriality issues.” Id. at 2101. “At the first step, we ask whether the presumption against extraterritoriality has been rebutted—that is, whether the statute gives a clear, affirmative indication that it applies extraterritorially.” Id.
If the statute is not extraterritorial on its face, then “at the second step we determine whether the case involves a domestic application of the statute, and we do this by looking to the statute‘s ‘focus.‘” Id. “The focus of a statute is the object of its solicitude, which can include the conduct it seeks to regulate, as well as the parties and interests it seeks to protect or vindicate.” WesternGeco LLC v. ION Geophysical Corp., 138 S. Ct. 2129, 2137 (2018) (citation, internal quotation marks, and alterations omitted). “If the conduct relevant to the statute‘s focus occurred in the United States, then the case involves a permissible domestic application even if other conduct occurred abroad . . . .” RJR Nabisco, 136 S. Ct. at 2101. “[B]ut if the conduct relevant to the focus occurred in a foreign country, then the case involves an impermissible extraterritorial application regardless of any other conduct that occurred in U.S. territory.” Id.
The two-step framework arguably does not easily apply to a statutory provision that affords an affirmative defense to civil liability. Indeed, it is unclear how an American court could apply such a provision “extraterritorially.” Even if it could be applied extraterritorially—say, by somehow treating the defendant‘s conduct rather than the lawsuit itself as the “focus” of a liability-limiting provision—the presumption against extraterritoriality primarily “serves to avoid the international discord that can result when U.S. law is applied to conduct in foreign countries.” Id. at 2100. Allowing a plaintiff‘s claim to go forward because the cause of action applies extraterritorially, while then applying the presumption to block a different provision setting out defenses to that claim, would seem only to increase the possibility of international friction. Such a regime could also give plaintiffs an advantage when they sue over extraterritorial wrongdoing that they would not receive if the defendant‘s conduct occurred domestically. It is doubtful that Congress ever intends such a result when it writes provisions limiting civil liability.
The Ninth Circuit addressed this issue in Blazevska v. Raytheon Aircraft Co., 522 F.3d 948 (9th Cir. 2008), which was decided prior to the Supreme Court’s adoption of the two-step extraterritoriality framework. The plaintiffs in Blazevska argued that the General Aviation Revitalization Act’s (“GARA”) statute of repose could not limit the defendant’s liability because, as here, certain events related to plaintiffs’ claims occurred overseas. Id. at 950. The Ninth Circuit disagreed, holding that the presumption against extraterritoriality was inapplicable to a liability-limiting statute. It found that GARA did not “impermissibly regulate conduct that has occurred abroad,” and instead,
merely eliminates the power of any party to bring a suit for damages against a general aviation aircraft manufacturer, in a U.S. federal or state court, after the limitation period. The only conduct it could arguably be said to regulate is the ability of a party to initiate an action for damages against a manufacturer in American courts—an entirely domestic endeavor. Congress has no power to tell courts of foreign countries whether they could entertain a suit against an American defendant.
Id. at 953. “Accordingly,” the Ninth Circuit held, “the presumption against extraterritoriality simply is not implicated by GARA‘s application.” Id.
The Supreme Court has left open the question of whether certain types of statutes might not be subject to the presumption against extraterritoriality. See WesternGeco, 138 S. Ct. at 2136 (noting, without deciding, the question whether “the presumption against extraterritoriality should never apply to statutes . . . that merely provide a general damages remedy for conduct that Congress has declared unlawful”). However, we need not decide here whether the presumption against extraterritoriality is “simply . . . not implicated,” Blazevska, 522 F.3d at 953, by statutes that merely limit civil liability, or whether the two-step RJR Nabisco framework must be applied, because that framework is workable in this context and compels the same result. At step two, we conclude from the text of Section 230(c)(1) that the statute’s focus is the limitation of civil liability imposed through litigation in American courts. That conduct occurs in the United States regardless of where the underlying content was created or viewed, so applying Section 230(c)(1) to plaintiffs’ claims is a permissible domestic application of the statute.

VI. Foreign Law Claims
Turning next to plaintiffs’ foreign tort claims, the parties disagree as to the reach of the diversity jurisdiction statute, 28 U.S.C. § 1332, which plaintiffs invoke as the sole basis for subject matter jurisdiction over those claims.
Plaintiffs allege that, under 28 U.S.C. § 1332(a)(2), the district court had jurisdiction over their Israeli law claims as a controversy between “citizens of a State” and “citizens or subjects of a foreign state.” United States citizens domiciled abroad, however, are neither “citizens of a State” nor “citizens or subjects of a foreign state,” and their presence as parties defeats jurisdiction under Section 1332. See Newman-Green, Inc. v. Alfonzo-Larrain, 490 U.S. 826, 828-29 (1989).
Here, a substantial majority of the plaintiffs are alleged to be United States citizens domiciled in Israel. A suit based on diversity jurisdiction may not proceed with these plaintiffs as parties.
In addition, “[i]t is well established that for a case to come within [Section 1332] there must be complete diversity.” See, e.g., Owen Equip. & Erection Co. v. Kroger, 437 U.S. 365, 373-74 (1978). Several plaintiffs, moreover, allege no state citizenship at all.
The joinder of Israel-domiciled U.S.-citizen plaintiffs requires us either to dismiss the diversity-based claims altogether, or exercise our discretion to: 1) dismiss those plaintiffs who we determine are “dispensable jurisdictional spoilers”; or 2) vacate in part the judgment of the district court and remand for it to make that indispensability determination and to determine whether dismissal of those individuals would be appropriate. SCS Commc’ns, Inc. v. Herrick Co., 360 F.3d 329, 335 (2d Cir. 2004). As for the plaintiffs for whom no state citizenship is alleged, we have discretionary authority to accept submissions for the purpose of amending the complaint on appeal, or we could remand for amendment. See Leveraged Leasing Admin. Corp. v. PacifiCorp Capital, Inc., 87 F.3d 44, 47 (2d Cir. 1996) (“Defective allegations of jurisdiction may be amended, upon terms, in the trial or appellate courts.” (quoting 28 U.S.C. § 1653)).
We decline to exercise our discretion to attempt to remedy these jurisdictional defects. This is not a case in which a small number of nondiverse parties defeats jurisdiction, but rather one in which—after multiple complaints have been submitted—most of the plaintiffs are improperly joined. Moreover, the case remains at the pleading stage, with discovery not yet having begun. Proceeding with the few diverse plaintiffs would be inefficient given the expenditure of judicial and party resources that would be required to address the jurisdictional defects. The most appropriate course is for any diverse plaintiffs to bring a new action and demonstrate subject matter jurisdiction in that action. Accordingly, plaintiffs’ foreign law claims are dismissed, without prejudice.
CONCLUSION
For the foregoing reasons, we AFFIRM the judgment of the district court as to plaintiffs’ federal claims and DISMISS plaintiffs’ foreign law claims.
KATZMANN, Chief Judge, concurring in part and dissenting in part:
I agree with much of the reasoning in the excellent majority opinion, and I join that opinion except for Parts I and II of the Discussion. But I must respectfully part company with the majority on its treatment of Facebook’s friend- and content-suggestion algorithms under the Communications Decency Act (“CDA”).
As to the reasons for my disagreement, consider a hypothetical. Suppose that you are a published author. One day, an acquaintance calls. “I‘ve been reading over everything you‘ve ever published,” he informs you. “I‘ve also been looking at everything you‘ve ever said on the Internet. I‘ve done the same for this other author. You two have very similar interests; I think you‘d get along.” The acquaintance then gives you the other author‘s contact information and photo, along with a link to all her published works. He calls back three more times over the next week with more names of writers you should get to know.
Now, you might say your acquaintance fancies himself a matchmaker. But would you say he‘s acting as the publisher of the other authors’ work?
Facebook and the majority would have us answer this question “yes.” I, however, cannot do so. For the scenario I have just described is little different from how Facebook’s algorithms allegedly work. And while those algorithms do end up showing users profile, group, or event pages written by other users, it strains the English language to say that in targeting and recommending these writings to users, and thereby forging connections and developing new social networks, Facebook is acting as “the publisher of . . . information provided by another information content provider.” 47 U.S.C. § 230(c)(1).
It would be one thing if congressional intent compelled us to adopt the majority’s reading. It does not. Instead, we today extend a provision that was designed to encourage computer service providers to shield minors from obscene material so that it now immunizes those same providers for allegedly connecting terrorists to one another. Neither the impetus for nor the text of Section 230(c)(1) requires that result. In my view, the statute leaves room for claims that do not inherently treat an interactive computer service as the publisher of another’s information.
The Anti-Terrorism Act (“ATA”) claims in this case fit this bill. According to plaintiffs’ Proposed Second Amended Complaint (“PSAC”)—which we must take as true at this early stage—Facebook has developed “sophisticated algorithm[s]” for bringing its users together. App’x 347 ¶ 622. After collecting mountains of data about each user’s activity on and off its platform, Facebook unleashes its algorithms to generate friend, group, and event suggestions based on what it perceives to be the user’s interests. Id. at 345-46 ¶¶ 608-14. If a user posts about a Hamas attack or searches for information about a Hamas leader, Facebook may “suggest” that that user become friends with Hamas terrorists on Facebook or join Hamas-related Facebook groups. By “facilitat[ing] [Hamas’s] ability to reach and engage an audience it could not otherwise reach as effectively,” plaintiffs allege that Facebook’s algorithms provide material support and personnel to terrorists. Id. at 347 ¶ 622; see id. at 352-58 ¶¶ 646-77. As applied to the algorithms, plaintiffs’ claims do not seek to punish Facebook for the content others post, for deciding whether to publish third parties’ content, or for editing (or failing to edit) others’ content before publishing it. In short, they do not rely on treating Facebook as “the publisher” of others’ information. Instead, they would hold Facebook liable for its affirmative role in bringing terrorists together.
When it comes to Facebook‘s algorithms, then, plaintiffs’ causes of action do not run afoul of the CDA. Because the court below did not pass on the merits of the ATA claims pressed below, I would send this case back to the district court to decide the merits in the first instance. The majority, however, cuts off all possibility for relief based on algorithms like Facebook‘s, even if these or future plaintiffs could prove a sufficient nexus between those algorithms and their injuries. In light of today‘s decision and other judicial interpretations of the statute that have generally immunized social media companies—and especially in light of the new reality that has evolved since the CDA‘s passage—Congress may wish to revisit the CDA to better calibrate the circumstances where such immunization is appropriate and inappropriate in light of congressional purposes.
I.
To see how far we have strayed from the path on which Congress set us out, we must consider where that path began. What is now 47 U.S.C. § 230 began as a congressional response to the proliferation of indecent material on the Internet.
The action began in the Senate. Senator James J. Exon introduced the CDA on February 1, 1995. See 141 Cong. Rec. 3,203. He presented a revised bill on June 9, 1995, “[t]he heart and the soul” of which was “its protection for families and children.” Id. at 15,503 (statement of Sen. Exon). The Exon Amendment sought to reduce the proliferation of pornography and other obscene material online by subjecting to civil and criminal penalties those who use interactive computer services to make, solicit, or transmit offensive material. Id. at 15,505.
The House of Representatives had the same goal—to protect children from inappropriate online material—but a very different sense of how to achieve it. Congressmen Christopher Cox (R-California) and Ron Wyden (D-Oregon) introduced an amendment to the Telecommunications Act, entitled “Online Family Empowerment,” about two months after the revised CDA appeared in the Senate. See id. at 22,044. Making the argument for their amendment during the House floor debate, Congressman Cox stated:
We want to make sure that everyone in America has an open invitation and feels welcome to participate in the Internet. But as you know, there is some reason for people to be wary because, as a Time Magazine cover story recently highlighted, there is in this vast world of computer information, a literal computer library, some offensive material, some things in the bookstore, if you will, that our children ought not to see.
As the parent of two, I want to make sure that my children have access to this future and that I do not have to worry about what they might be running into on line. I would like to keep that out of my house and off my computer.
Id. at 22,044-45. Likewise, Congressman Wyden said: “We are all against smut and pornography, and, as the parents of two small computer-literate children, my wife and I have seen our kids find their way into these chat rooms that make their middle-aged parents cringe.” Id. at 22,045.
As both sponsors noted, the debate between the House and the Senate was not over the CDA’s primary purpose but rather over the best means to that shared end. See id. (statement of Rep. Cox) (“How should we do this? . . . Mr. Chairman, what we want are results. We want to make sure we do something that actually works.”); id. (statement of Rep. Wyden) (“So let us all stipulate right at the outset the importance of protecting our kids and going to the issue of the best way to do it.”). While the Exon Amendment would have the FCC regulate online obscene materials, the sponsors of the House proposal “believe[d] that parents and families are better suited to guard the portals of cyberspace and protect our children than our Government bureaucrats.” Id. at 22,045 (statement of Rep. Wyden). They also feared the effects the Senate’s approach might have on the Internet itself. See id. (statement of Rep. Cox) (“[The amendment] will establish as the policy of the United States that we do not wish to have content regulation by the Federal Government of what is on the Internet, that we do not wish to have a Federal Computer Commission with an army of bureaucrats regulating the Internet . . . .”).
There was only one problem with this approach, as the House sponsors saw it. A New York State trial court had recently ruled that the online service Prodigy, by deciding to remove certain indecent material from its site, had become a “publisher” and thus was liable for defamation when it failed to remove other objectionable content. Stratton-Oakmont, Inc. v. Prodigy Servs. Co., 1995 WL 323710, at *4 (N.Y. Sup. Ct. May 24, 1995) (unpublished). The authors of Section 230 sought to eliminate that disincentive to self-regulation, ensuring that a provider’s voluntary screening of objectionable material would not expose it to publisher liability. See 141 Cong. Rec. 22,045 (statement of Rep. Cox).
The House having passed the Cox-Wyden Amendment and the Senate the Exon Amendment, the conference committee had before it two alternative visions for countering the spread of indecent online material to minors. The committee chose not to choose. Congress instead adopted both amendments as part of a final Communications Decency Act. See Telecommunications Act of 1996, Pub. L. No. 104-104, tit. V, 110 Stat. 56, 133-43.
Section 230 overruled Stratton-Oakmont through two interlocking provisions, both of which survived the legislative process unscathed. The first, which is at issue in this case, states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1). The second shields providers and users from liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene . . . or otherwise objectionable.” Id. § 230(c)(2)(A).
The legislative history illustrates that in passing Section 230, Congress had two aims: to encourage interactive computer services to police their platforms for indecent material voluntarily, and to keep government regulation of Internet speech to a minimum.
None of this is to say that Section 230(c)(1) is limited to the indecency and defamation contexts that prompted its enactment; courts have properly applied its text to a range of claims that treat a service provider as the publisher of another’s information.
Illuminating Congress’s original intent does, however, underscore how far today’s decision extends the statute beyond the concerns that animated it.
II.
With the CDA’s background in mind, I turn to the text. By its plain terms, Section 230(c)(1) bars only those claims that would treat an interactive computer service as “the publisher or speaker” of “information provided by another information content provider.” 47 U.S.C. § 230(c)(1).
The word “publisher” in this statute is thus inextricably linked to the “information provided by another.” The question is whether a plaintiff‘s claim arises from a third party‘s information, and—crucially—whether to establish the claim the court must necessarily view the defendant, not as a publisher in the abstract, but rather as the publisher of that third-party information. See FTC v. LeadClick Media, LLC, 838 F.3d 158, 175 (2d Cir. 2016) (stating inquiry as “whether the cause of action inherently requires the court to treat the defendant as the ‘publisher or speaker’ of content provided by another“).
For this reason, Section 230(c)(1) does not immunize an interactive computer service for everything it does as a publisher in the abstract; the particular claim must seek to hold the service liable as the publisher of specific information provided by another.
Accordingly, our precedent does not grant publishers CDA immunity for the full range of activities in which they might engage. Rather, it “bars lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content” provided by another for publication. LeadClick, 838 F.3d at 174 (citation and internal quotation marks omitted); accord Oberdorf, 2019 WL 2849153, at *10; Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 19 (1st Cir. 2016); Jones v. Dirty World Entm’t Recordings LLC, 755 F.3d 398, 407 (6th Cir. 2014); Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1102 (9th Cir. 2009); Zeran, 129 F.3d at 330; see Klayman v. Zuckerberg, 753 F.3d 1354, 1359 (D.C. Cir. 2014); Ben Ezra, Weinstein, & Co., Inc. v. Am. Online Inc., 206 F.3d 980, 986 (10th Cir. 2000). For instance, a claim against a newspaper based on the content of a classified ad (or the decision to publish or withdraw that ad) would fail under the CDA not because newspapers traditionally publish classified ads, but rather because such a claim would necessarily treat the newspaper as the publisher of the ad-maker’s content. Similarly, the newspaper does not act as an “information content provider”—and thus would not lose any immunity the CDA might afford—merely by deciding where in the paper to place the ad.
This case is different. Looking beyond Facebook’s “broad statements of immunity” and relying “rather on a careful exegesis of the statutory language,” Barnes, 570 F.3d at 1100, the CDA does not protect Facebook’s friend- and content-suggestion algorithms. A combination of two factors, in my view, confirms that claims based on these algorithms do not inherently treat Facebook as the publisher of third-party content. First, Facebook uses the algorithms to create and communicate its own message: that it thinks you, the reader—you, specifically—will like this content. And second, Facebook’s suggestions contribute to the creation of real-world social networks. The result of at least some suggestions is not just that the user consumes a third party’s content. Sometimes, Facebook’s suggestions allegedly lead the user to become part of a unique global community, the creation and maintenance of which goes far beyond and differs in kind from traditional editorial functions.
It is true, as the majority notes, see ante, at 47, that Facebook’s algorithms rely on and display users’ content. However, this is not enough to trigger the protections of Section 230(c)(1), because claims directed at the algorithms do not inherently seek to treat Facebook as the publisher of that content.
If a third party got access to Facebook users’ data, analyzed it using a proprietary algorithm, and sent its own messages to Facebook users suggesting that people become friends or attend one another’s events, the third party would not be protected as “the publisher” of the users’ information. Similarly, if Facebook were to use the algorithms to target its own material to particular users, such that the resulting posts consisted of “information provided by” Facebook rather than by “another information content provider,” Section 230(c)(1) would not apply.
Yet that is ultimately what plaintiffs allege Facebook is doing. The PSAC alleges that Facebook “actively provides ‘friend suggestions’ between users who have expressed similar interests,” and that it “actively suggests groups and events to users.” App’x 346 ¶¶ 612-13. Facebook’s algorithms thus allegedly provide the user with a message from Facebook. Facebook is telling users—perhaps implicitly, but clearly—that they would like these people, these groups, and these events.
Moreover, in part through its use of friend, group, and event suggestions, Facebook is doing more than just publishing content: it is proactively creating networks of people. Its algorithms forge real-world (if digital) connections through friend and group suggestions, and they attempt to create similar connections in the physical world through event suggestions. The cumulative effect of recommending several friends, or several groups or events, has an impact greater than the sum of each suggestion. It envelops the user, immersing her in an entire universe filled with people, ideas, and events she may never have discovered on her own. According to the allegations in the complaint, Facebook designed its website for this very purpose. “Facebook has described itself as a provider of products and services that enable users . . . to find and connect with other users . . . .” App’x 250 ¶ 129. CEO Mark Zuckerberg has similarly described Facebook as “build[ing] tools to help people connect with the people they want,” thereby “extending people’s capacity to build and maintain relationships.” Id. at 251 ¶ 132. Of course, Facebook is not the only company that tries to bring people together this way, and perhaps other publishers try to introduce their readers to one another. Yet the creation of social networks goes far beyond the traditional editorial functions that the CDA immunizes.
Another way to consider the CDA immunity question is to “look . . . to what the duty at issue actually requires: specifically, whether the duty would necessarily require an internet company to monitor[, alter, or remove] third-party content.” HomeAway.com, Inc. v. City of Santa Monica, 918 F.3d 676, 682 (9th Cir. 2019). Here, too, the claims regarding the algorithms are a poor fit for statutory immunity. The duty not to provide material support to terrorism, as applied to Facebook’s use of the algorithms, simply requires that Facebook not actively use that material to determine which of its users to connect to each other. It could stop using the algorithms altogether, for instance. Or, short of that, Facebook could modify its algorithms to stop them from introducing terrorists to one another. None of this would change any underlying content, nor would it necessarily require courts to assess further the difficult question of whether there is an affirmative obligation to monitor that content.
In reaching this conclusion, I note that ATA torts are atypical. Most of the common torts that might be pleaded in relation to Facebook‘s algorithms “derive liability from behavior that is identical to publishing or speaking“—for instance, “publishing defamatory material; publishing material that inflicts emotional distress; or . . . attempting to de-publish hurtful material but doing it badly.” Barnes, 570 F.3d at 1107.
For these reasons, I would hold that Section 230(c)(1) does not bar plaintiffs’ claims insofar as they are premised on Facebook’s friend- and content-suggestion algorithms.
III.
Even if we sent this case back to the district court, as I believe to be the right course, these plaintiffs might have proven unable to allege that Facebook’s matchmaking algorithms played a role in the attacks that harmed them. However, assuming arguendo that such might have been the situation here, I do not think we should foreclose the possibility of relief in future cases if victims can plausibly allege that a website knowingly brought terrorists together and that an attack occurred as a direct result of the site’s actions. Though the majority shuts the door on such claims, today’s decision also illustrates the extensive immunity that the current formulation of the CDA already extends to social media companies for activities that were undreamt of in 1996. It therefore may be time for Congress to reconsider the scope of Section 230.
As is so often the case with new technologies, the very qualities that drive social media’s success—its ease of use, open access, and ability to connect the world—have also spawned its demons. Plaintiffs’ complaint illustrates how pervasive and blatant a presence Hamas and its leaders have maintained on Facebook. Hamas is far from alone—Hezbollah, Boko Haram, the Revolutionary Armed Forces of Colombia, and many other designated terrorist organizations use Facebook to recruit and rouse supporters. Vernon Silver & Sarah Frier, Terrorists Are Still Recruiting on Facebook, Despite Zuckerberg’s Reassurances, Bloomberg Businessweek (May 10, 2018), http://www.bloomberg.com/news/articles/2018-05-10/terrorists-creep-onto-facebook-as-fast-as-it-can-shut-them-down. Recent news reports suggest that many social media sites have been slow to remove the plethora of terrorist and extremist accounts populating their platforms, and that such efforts, when they are undertaken, have often proved incomplete.
Of course, the failure to remove terrorist content, while an important policy concern, is immunized under the CDA. The active role that a website’s algorithms play in connecting terrorists with one another, however, is another matter.
Take Facebook. As plaintiffs allege, its friend-suggestion algorithm appears to connect terrorist sympathizers with pinpoint precision. For instance, while two researchers were studying Islamic State ("IS") activity on Facebook, one "received dozens of pro-IS accounts as recommended friends after friending just one pro-IS account." Waters & Postings, supra, at 78. More disturbingly, the other "received an influx of Philippines-based IS supporters and fighters as recommended friends after liking several non-extremist news pages about Marawi and the Philippines during IS’s capture of the city." Id. News reports indicate that the friend-suggestion feature has introduced thousands of IS sympathizers to one another. See Martin Evans, Facebook Accused of Introducing Extremists to One Another Through ‘Suggested Friends’ Feature, The Telegraph (May 5, 2018), http://www.telegraph.co.uk/news/2018/05/05/facebook-accused-introducing-extremists-one-another-suggested.
And this is far from the only Facebook algorithm that may steer people toward terrorism. Another turns users’ declared interests into audience categories to enable microtargeted advertising. In 2017, acting on a tip, ProPublica sought to direct an ad at the algorithmically created category "Jew hater"—which turned out to be real, as were "German Schutzstaffel," "Nazi Party," and "Hitler did nothing wrong." Julia Angwin et al., Facebook Enabled Advertisers to Reach ‘Jew Haters,’ ProPublica (Sept. 14, 2017), https://www.propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters. As the "Jew hater" category was too small for Facebook to run an ad campaign, "Facebook’s automated system suggested ‘Second Amendment’ as an additional category . . . presumably because its system had correlated gun enthusiasts with anti-Semites." Id.
That’s not all. Another Facebook algorithm auto-generates business pages by scraping employment information from users’ profiles; other users can then "like" these pages, follow their posts, and see who else has liked them. Butler & Ortutay, supra. The Associated Press reports that extremist organizations including al-Qaida, al-Shabab, and IS have such auto-created pages, allowing them to recruit the pages’ followers. Id. The page for al-Qaida in the Arabian Peninsula, for example, reportedly attracted thousands of such followers. Id.
This case, and our CDA analysis, have centered on the use of algorithms to foment terrorism. Yet the consequences of a CDA-driven, hands-off approach to social media extend much further. Social media can be used by foreign governments to interfere in American elections. For example, Justice Department prosecutors recently concluded that Russian intelligence agents created false Facebook groups and accounts in the years leading up to the 2016 election campaign, bootstrapping Facebook’s algorithm to spew propaganda that reached between 29 million and 126 million Americans. See 1 Robert S. Mueller III, Special Counsel, Report on the Investigation Into Russian Interference in the 2016 Presidential Election 24-26, U.S. Dep’t of Justice (March 2019), http://www.justice.gov/storage/report.pdf. Russia also purchased over 3,500 advertisements on Facebook to publicize its fake Facebook groups, several of which grew to have hundreds of thousands of followers. Id. at 25-26. On Twitter, Russia developed false accounts that impersonated American people or groups and issued content designed to influence the election; it then created thousands of automated "bot" accounts to amplify the sham Americans’ messages. Id. at 26-28. One fake account received over six million retweets, the vast majority of which appear to have come from real Twitter users. See Gillian Cleary, Twitterbots: Anatomy of a Propaganda Campaign, Symantec (June 5, 2019), http://www.symantec.com/blogs/threat-intelligence/twitterbots-propaganda-disinformation. Russian intelligence also harnessed the reach that social media gave its false identities to organize "dozens of U.S. rallies," some of which "drew hundreds" of real-world Americans. Mueller, Report, supra, at 29. Russia could do all this only because social media is designed to target messages like Russia’s to the users most susceptible to them.
While Russia’s interference in the 2016 election is the best-documented example of foreign meddling through social media, it is not the only one. Federal intelligence agencies expressed concern in the weeks before the 2018 midterm election "about ongoing campaigns by Russia, China and other foreign actors, including Iran," to "influence public sentiment" through means "including using social media to amplify divisive issues." Press Release, Office of Dir. of Nat’l Intelligence, Joint Statement from the ODNI, DOJ, FBI, and DHS: Combatting Foreign Influence in U.S. Elections (Oct. 19, 2018), https://www.dni.gov/index.php/newsroom/press-releases/item/1915-joint-statement-from-the-odni-doj-fbi-and-dhs-combating-foreign-influence-in-u-s-elections. News reports also suggest that China targets state-sponsored propaganda to Americans on Facebook and purchases Facebook ads to amplify its communications. See Paul Mozur, China Spreads Propaganda to U.S. on Facebook, a Platform It Bans at Home, N.Y. Times (Nov. 8, 2017), https://www.nytimes.com/2017/11/08/technology/china-facebook.html.
Widening the aperture further, malefactors at home and abroad can manipulate social media to promote extremism. "Behind every Facebook ad, Twitter feed, and YouTube recommendation is an algorithm that’s designed to keep users using: It tracks preferences through clicks and hovers, then spits out a steady stream of [content matching those preferences]." Wu, Radical Ideas, supra.
There is also growing attention to whether social media has played a significant role in increasing nationwide political polarization. See Andrew Soergel, Is Social Media to Blame for Political Polarization in America?, U.S. News & World Rep. (Mar. 20, 2017), https://www.usnews.com/news/articles/2017-03-20/is-social-media-to-blame-for-political-polarization-in-america. The concern is that "web surfers are being nudged in the direction of political or unscientific propaganda, abusive content, and conspiracy theories." Wu, Radical Ideas, supra. By surfacing ideas that were previously deemed too radical to take seriously, social media mainstreams them, which studies show makes people "much more open" to those concepts. Max Fisher & Amanda Taub, How Everyday Social Media Users Become Real-World Extremists, N.Y. Times (Apr. 25, 2018), http://www.nytimes.com/2018/04/25/world/asia/facebook-extremism.html. At its worst, there is evidence that social media may even be used to push people toward violence.7 The sites are not entirely to blame, of course—they would not have such success if users were not drawn to the extreme content they promote.
While the majority and I disagree about whether the CDA, as currently written, immunizes Facebook’s algorithms from plaintiffs’ claims, we share common ground on the seriousness of the harms that social media can enable.
Whether, and to what extent, Congress should allow liability for tech companies that encourage terrorism, propaganda, and extremism is a question for legislators, not judges. Over the past two decades "the Internet has outgrown its swaddling clothes," Roommates.Com, 521 F.3d at 1175 n.39, and it is fair to ask whether the rules that governed its infancy should still oversee its adulthood. It is undeniable that the Internet and social media have had many positive effects worth preserving and promoting, such as facilitating open communication, dialogue, and education. At the same time, as outlined above, social media can be used by evildoers who pose real threats to our democratic society. A healthy debate has begun both in the legal academy9 and in the policy community10 about changing the scope of the immunity the CDA confers.
Notes
However, as detailed post,
