Lead Opinion
The principal question presented in this appeal is whether Section 230(c)(1) of the Communications Decency Act shields Facebook from civil liability for plaintiffs' claims.
The district court granted Facebook's motion to dismiss plaintiffs' First Amended Complaint under Federal Rule of Civil Procedure 12(b)(6) on the basis of Section 230(c)(1) immunity, an affirmative defense. It entered judgment without prejudice to plaintiffs' moving for leave to amend, but later denied with prejudice plaintiffs' motion for leave to file a second amended complaint, concluding that the proposed complaint did not cure the deficiencies of the First Amended Complaint.
On appeal, plaintiffs argue that the district court improperly dismissed their claims because Section 230(c)(1) does not provide immunity to Facebook under the circumstances of their allegations.
We conclude that the district court properly applied Section 230(c)(1) to plaintiffs' federal claims. Also, upon our review of plaintiffs' assertion of diversity jurisdiction over their foreign law claims, we conclude that complete diversity is lacking, and we therefore dismiss those claims for want of subject matter jurisdiction.
FACTUAL AND PROCEDURAL BACKGROUND
I. Allegations in Plaintiffs' Complaint
Because this case comes to us on a motion to dismiss, we recount the facts as plaintiffs provide them to us, treating as true the allegations in their complaint.
See Galper v. JP Morgan Chase Bank, N.A.
A. The Attacks
Hamas is a Palestinian Islamist organization centered in Gaza. It has been designated a foreign terrorist organization by the United States and Israel. Since it was formed in 1987, Hamas has conducted thousands of terrorist attacks against civilians in Israel.
Plaintiffs' complaint describes terrorist attacks by Hamas against five Americans in Israel between 2014 and 2016. Yaakov Naftali Fraenkel, a teenager, was kidnapped by a Hamas operative in 2014 while walking home from school in Gush Etzion, near Jerusalem, and then was shot to death. Chaya Zissel Braun, a 3-month-old baby, was killed at a train station in Jerusalem in 2014 when a Hamas operative drove a car into a crowd. Richard Lakin died after Hamas members shot and stabbed him in an attack on a bus in Jerusalem in 2015. Graduate student Taylor Force was stabbed to death by a Hamas attacker while walking on the Jaffa boardwalk in Tel Aviv in 2016. Menachem Mendel Rivkin was stabbed in the neck in 2016 by a Hamas operative while walking to a restaurant in a town near Jerusalem. He suffered serious injuries but survived. Except for Rivkin, plaintiffs are the representatives of the estates of those who died in these attacks and family members of the victims.
B. Facebook's Alleged Role in the Attacks
1. How Facebook Works
Facebook operates an "online social network platform and communications service[ ]." App'x 230. Facebook users populate their own "Facebook 'pages' " with "content," including personal identifying information and indications of their particular "interests." App'x 250-51, 345. Organizations and other entities may also have Facebook pages. Users can post content on others' Facebook pages, reshare each other's content, and send messages to one another. The content can be text-based messages and statements, photos, web links, or other information.
Facebook users must first register for a Facebook account, providing their names, telephone numbers, and email addresses. When registering, users do not specify the nature of the content they intend to publish on the platform, nor does Facebook screen new users based on its expectation of what content they will share with other Facebook users. There is no charge to prospective users for joining Facebook.
Facebook does not preview or edit the content that its users post. Facebook's terms of service specify that a user "own[s] all of the content and information [the user] post[s] on Facebook, and [the user] can control how it is shared through [the user's] privacy and application settings." App'x 252 (alterations in original).
While Facebook users may view each other's shared content simply by visiting other Facebook pages and profiles, Facebook also provides a personalized "newsfeed" page for each user. Facebook uses algorithms-"a precisely defined set of mathematical or logical operations for the performance of a particular task," Algorithm , Oxford English Dictionary (3d ed. 2012)-to determine the content to display to users on the newsfeed webpage. Newsfeed content is displayed within banners or modules and changes frequently. The newsfeed algorithms-developed by programmers employed by Facebook-automatically analyze Facebook users' prior behavior on the Facebook website to predict and display the content that is most likely to interest and engage those particular users. Other algorithms similarly use Facebook users' behavioral and demographic data to show those users third-party groups, products, services, and local events likely to be of interest to them.
Facebook's algorithms also provide "friend suggestions," which, if accepted by the user, result in those users seeing each other's shared content. App'x 346-47. The friend-suggestion algorithms are based on such factors as the users' common membership in Facebook's online "groups," geographic location, attendance at events, spoken language, and mutual friend connections on Facebook. App'x 346.
Facebook's advertising algorithms and "remarketing" technology also allow advertisers on Facebook to target specific ads to its users who are likely to be most interested in them and thus to be most beneficial to those advertisers. App'x 347. Those advertisements are displayed on the users' pages and other Facebook webpages. A substantial portion of Facebook's revenues is from such advertisers.
2. Hamas's Use of Facebook
Plaintiffs allege that Hamas used Facebook to post content that encouraged terrorist attacks in Israel during the time period of the attacks in this case. The attackers allegedly viewed that content on Facebook. The encouraging content ranged in specificity; for example, Fraenkel, although not a soldier, was kidnapped and murdered after Hamas members posted messages on Facebook that advocated the kidnapping of Israeli soldiers. The attack that killed the Braun baby at the light rail station in Jerusalem came after Hamas posts encouraged car-ramming attacks at light rail stations. By contrast, the killer of Force is alleged to have been a Facebook user, but plaintiffs do not set forth what specific content encouraged his attack, other than that "Hamas ... use[d] Facebook to promote terrorist stabbings." App'x 335.
Hamas also used Facebook to celebrate these attacks and others, to transmit political messages, and to generally support further violence against Israel. The perpetrators were able to view this content because, although Facebook's terms and policies bar such use by Hamas and other designated foreign terrorist organizations, Facebook has allegedly failed to remove the "openly maintained" pages and associated content of certain Hamas leaders, spokesmen, and other members. App'x 229. It is also alleged that Facebook's algorithms directed such content to the personalized newsfeeds of the individuals who harmed the plaintiffs. Thus, plaintiffs claim, Facebook enables Hamas "to disseminate its messages directly to its intended audiences," App'x 255, and to "carry out the essential communication components of [its] terror attacks," App'x 256.
II. Facebook's Antiterrorism Efforts
A. Intended Uses of Facebook
Facebook has Terms of Service that govern the use of Facebook and purport to incorporate Facebook's Community Standards.
B. Prohibited Uses of Facebook
According to the current version of Facebook's Community Standards, Facebook "remove[s] content that expresses support or praise for groups, leaders, or individuals involved in," inter alia , "[t]errorist activity." 2. Dangerous Individuals and Organizations , Community Standards , Facebook, https://www.facebook.com/communitystandards/dangerous_individuals_organizations (last visited June 26, 2019). "Terrorist organizations and terrorists" may not "maintain a presence" on Facebook, nor is "coordination of support" for them allowed. Id . Facebook "do[es] not allow symbols that represent any [terrorist] organizations or [terrorists] to be shared on [the] platform without context that condemns or neutrally discusses the content." Id . In addition, Facebook purports to ban "hate speech" and to "remove content that glorifies violence or celebrates the suffering or humiliation of others." Objectionable Content , Community Standards , Facebook, https://www.facebook.com/communitystandards/objectionable_content (last visited June 26, 2019).
Facebook's Terms of Service also prohibit using its services "to do or share anything" that is, inter alia, "unlawful" or that "infringes or violates someone else's rights."
According to recent testimony by Facebook's General Counsel in a United States Senate hearing, Facebook employs a multilayered strategy to enforce these policies and combat extremist content on its platform.
The General Counsel also testified that, for content that is not automatically detected, Facebook employs thousands of people who respond to user reports of inappropriate content and remove such content. Id.
Facebook also has a 150-person team of "counterterrorism specialists," including academics, engineers, and former prosecutors and law enforcement officers.
III. District Court Proceeding
Plaintiffs brought this action on July 10, 2016, in the United States District Court for the Southern District of New York. On consent of the parties, the action was transferred to the United States District Court for the Eastern District of New York on September 16, 2016.
Facebook moved to dismiss plaintiffs' claims for lack of personal jurisdiction under Rule 12(b)(2) and for failure to state a claim under Rule 12(b)(6). The district court determined that it had personal jurisdiction over Facebook, a ruling that Facebook does not challenge on appeal. But the district court also held that Section 230(c)(1) barred plaintiffs' claims, granted the Rule 12(b)(6) motion, and dismissed the First Amended Complaint.
Plaintiffs then filed a Rule 59(e) motion to alter the judgment, asking the district court to reconsider its dismissal of their First Amended Complaint, and filed a motion seeking leave to file a second amended complaint. The proposed complaint retained all of plaintiffs' prior claims for relief and added a claim that Facebook had concealed its alleged material support to Hamas. In January 2018, the district court denied plaintiffs' motions with prejudice, holding that plaintiffs' proposed second amended complaint was futile in light of Section 230(c)(1).
STANDARD OF REVIEW
Because the district court determined that it was futile to allow plaintiffs to file a second amended complaint, we evaluate that proposed complaint "as we would a motion to dismiss, determining whether [it] contains enough facts to state a claim to relief that is plausible on its face."
DISCUSSION
On appeal, plaintiffs contend that the district court improperly held that Section 230(c)(1) barred their claims. Plaintiffs argue that their claims do not treat Facebook as the "publisher" or "speaker" of content provided by Hamas, and that Facebook's algorithms in any event render it a developer of that content, placing it outside the statute's protection.
In response to plaintiffs' claims, Facebook contends that Section 230(c)(1) provides it immunity and that, even absent such immunity, plaintiffs fail to plausibly allege that Facebook assisted Hamas in the ways required for their federal antiterrorism claims and Israeli law claims.
We first turn to the issues regarding Section 230(c)(1).
I. Background of Section 230(c)(1)
The primary purpose of the proposed legislation that ultimately resulted in the Communications Decency Act ("CDA") "was to protect children from sexually explicit internet content." FTC v. LeadClick Media, LLC.
In the seminal Fourth Circuit decision interpreting the immunity of Section 230 shortly after its enactment, Zeran v. America Online, Inc., that court described Congress's concerns underlying Section 230:
The amount of information communicated via interactive computer services is ... staggering. The specter of ... liability in an area of such prolific speech would have an obvious chilling effect. It would be impossible for service providers to screen each of their millions of postings for possible problems. Faced with potential liability for each message republished by their services, interactive computer service providers might choose to severely restrict the number and type of messages posted. Congress ... chose to immunize service providers to avoid any such restrictive effect.
The addition of Section 230 to the proposed CDA also "assuaged Congressional concern regarding the outcome of two inconsistent judicial decisions," Cubby, Inc. v. CompuServe, Inc., and Stratton Oakmont, Inc. v. Prodigy Services Co.
To "overrule
Stratton
,"
id
., and to accomplish its other objectives, Section 230(c)(1) provides that "[n]o provider ... of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information
content provider."
In light of Congress's objectives, the Circuits are in general agreement that the text of Section 230(c)(1) should be construed broadly in favor of immunity. See LeadClick.
II. Whether Section 230(c)(1) Protects Facebook's Alleged Conduct
The parties agree that Facebook is a provider of an "interactive computer service," but dispute whether plaintiffs' claims allege that (1) Facebook is acting as the protected publisher of information, and (2) the challenged information is provided by Hamas, or by Facebook itself.
A. Whether Plaintiffs' Claims Implicate Facebook as a "Publisher" of Information
Certain important terms are left undefined by Section 230(c)(1), including "publisher."
Plaintiffs seek to hold Facebook liable for "giving Hamas a forum with which to communicate and for actively bringing Hamas' message to interested parties." Appellants' Reply Br. 37; see also, e.g., Appellants' Br. 50-51 (arguing that the federal anti-terrorism statutes "prohibit[ ] Facebook from supplying Hamas a platform and communications services"). But that alleged conduct by Facebook falls within the heartland of what it means to be the "publisher" of information under Section 230(c)(1). So, too, does Facebook's alleged failure to delete content from Hamas members' Facebook pages. See LeadClick.
Plaintiffs also argue that Facebook does not act as the publisher of Hamas's content within the meaning of Section 230(c)(1) because it uses algorithms to suggest content to users, resulting in "matchmaking." Appellants' Br. 51-52. For example, plaintiffs allege that Facebook's "newsfeed" uses algorithms that predict and show the third-party content that is most likely to interest and engage users. Facebook's algorithms also provide "friend suggestions," based on analysis of users' existing social connections on Facebook and other behavioral and demographic data. And, Facebook's advertising algorithms and "remarketing" technology allow advertisers to target ads to its users who are likely most interested in those ads.
We disagree with plaintiffs' contention that Facebook's use of algorithms renders it a non-publisher. First, we find no basis in the ordinary meaning of "publisher," the other text of Section 230, or decisions interpreting Section 230, for concluding that an interactive computer service is not the "publisher" of third-party information when it uses tools such as algorithms that are designed to match that information with a consumer's interests.
Indeed, arranging and distributing third-party information inherently forms "connections" and "matches" among speakers, content, and viewers of content, whether in interactive internet forums or in more traditional media.
Plaintiffs' "matchmaking" argument would also deny immunity for the editorial decisions regarding third-party content that interactive computer services have made since the early days of the Internet. The services have always decided, for example, where on their sites (or other digital property) particular third-party content should reside and to whom it should be shown. Placing certain third-party content on a homepage, for example, tends to recommend that content to users more than if it were located elsewhere on a website. Internet services have also long been able to target the third-party content displayed to users based on, among other things, users' geolocation, language of choice, and registration information. And, of course, the services must also decide what type and format of third-party content they will display, whether that be a chat forum for classic car lovers, a platform for blogging, a feed of recent articles from news sources frequently visited by the user, a map or directory of local businesses, or a dating service to find romantic partners. All of these decisions, like the decision to host third-party content in the first place, result in "connections" or "matches" of information and individuals, which would have not occurred but for the internet services' particular editorial choices regarding the display of third-party content. We, again, are unaware of case law denying Section 230(c)(1) immunity because of the "matchmaking" results of such editorial decisions.
Seen in this context, plaintiffs' argument that Facebook's algorithms uniquely form "connections" or "matchmake" is wrong. That, again, has been a fundamental result of publishing third-party content on the Internet since its beginning. Like the decision to place third-party content on a homepage, for example, Facebook's algorithms might cause more such "matches" than other editorial decisions. But that is not a basis to exclude the use of algorithms from the scope of what it means to be a "publisher" under Section 230(c)(1). The matches also might-as compared to those resulting from other editorial decisions-present users with targeted content of even more interest to them, just as an English speaker, for example, may be best matched with English-language content. But it would turn Section 230(c)(1) upside down to hold that Congress intended that when publishers of third-party content become especially adept at performing the functions of publishers, they are no longer immunized from civil liability.
Second, plaintiffs argue, in effect, that Facebook's use of algorithms is outside the scope of publishing because the algorithms automate Facebook's editorial decision-making. That argument, too, fails because "so long as a third party willingly provides the essential published content, the interactive service provider receives full immunity regardless of the specific edit[orial] or selection process." Carafano.
Our dissenting colleague calls for a narrow textual interpretation of Section 230(c)(1) by contending that the Internet was an "afterthought" of Congress in the CDA because the medium received less "committee attention" than other forms of media and that Congress, with Section 230, "tackled only ... the ease with which the Internet delivers indecent or offensive material, especially to minors." Dissent at 78. But such a constrained view of Section 230 simply is not supported by the actual text of the statute that Congress passed. In addition to the broad language of Section 230(c)(1) and the pro-Internet-development policy statements in Section 230 (discussed supra at 63, 67), Congress announced the following specific findings in Section 230 :
(1) The rapidly developing array of Internet and other interactive computer services available to individual Americans represent an extraordinary advance in the availability of educational and informational resources to our citizens.
(2) These services offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops.
(3) The Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.
(4) The Internet and other interactive computer services have flourished, to the benefit of all Americans, with a minimum of government regulation.
(5) Increasingly Americans are relying on interactive media for a variety of political, educational, cultural, and entertainment services.
We therefore conclude that plaintiffs' claims fall within Facebook's status as the "publisher" of information within the meaning of Section 230(c)(1).
B. Whether Facebook is the Provider of the Information
We turn next to whether Facebook is plausibly alleged to itself be an "information content provider," or whether it is Hamas that provides all of the complained-of content. "The term 'information content provider' means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service."
The term "development" in Section 230(f)(3) is undefined. However, consistent with broadly construing "publisher" under Section 230(c)(1), we have recognized that a defendant will not be considered to have developed third-party content unless the defendant directly and "materially" contributed to what made the content itself "unlawful."
LeadClick
,
Employing this "material contribution" test, we held in
FTC v. LeadClick
that the defendant LeadClick had "developed" third parties' content by giving specific instructions to those parties on how to edit "fake news" that they were using in their ads to encourage consumers to purchase their weight-loss products.
LeadClick
,
Although it did not explicitly adopt the "material contribution" test, the D.C. Circuit's recent decision in Marshall's Locksmith Service v. Google, which addressed claims that Google and other mapping services had developed misleading location information submitted by "scam" locksmiths, is consistent with that approach.
As to the "less-exact" location information, such as area codes, provided by the scam locksmiths, the plaintiffs also argued that the mapping services' algorithmic translation of this information into exact pinpoint map locations developed or created the misleading information. Id . at 1269-70. The D.C. Circuit also rejected that argument, holding that "defendants' translation of [imprecise] third-party information into map pinpoints does not convert them into 'information content providers' because defendants use a neutral algorithm to make that translation." Id . at 1270. In using the term "neutral," the court observed that the algorithms were alleged to make no distinction between "scam" and other locksmiths and that the algorithms did not materially alter (i.e., they "hew[ed] to") the underlying information provided by the third parties. Id . at 1270 n.5, 1270-71.
Here, plaintiffs' allegations about Facebook's conduct do not render it responsible for the Hamas-related content. As an initial matter, Facebook does not edit (or suggest edits) for the content that its users-including Hamas-publish. That practice is consistent with Facebook's Terms of Service, which emphasize that a Facebook user "own[s] all of the content and information [the user] post[s] on Facebook, and [the user] can control how it is shared through [the user's] privacy and application settings." App'x 252.
Nor does Facebook's acquiring certain information from users render it a developer for the purposes of Section 230. Facebook requires users to provide only basic identifying information: their names, telephone numbers, and email addresses. In so doing, Facebook acts as a "neutral intermediary." LeadClick.
Plaintiffs' allegations likewise indicate that Facebook's algorithms are content "neutral" in the sense that the D.C. Circuit used that term in Marshall's Locksmith: The algorithms take the information provided by Facebook users and "match" it to other users-again, materially unaltered-based on objective factors applicable to any content, whether it concerns soccer, Picasso, or plumbers.
Plaintiffs' arguments to the contrary are unpersuasive. For one, they point to the Ninth Circuit's decision in Roommates.Com as holding that requiring or encouraging users to provide any particular information whatsoever to the interactive computer service transforms a defendant into a developer of that information. The Roommates.Com holding, however, was not so broad; it concluded only that the site's conduct in requiring users to select from "a limited set of pre-populated answers" to respond to particular "discriminatory questions" had a content-development effect that was actionable in the context of the Fair Housing Act. See Roommates.Com.
Plaintiffs also argue that Facebook develops Hamas's content because Facebook's algorithms make that content more "visible," "available," and "usable." Appellants' Br. at 45-46. But making information more available is, again, an essential part of traditional publishing; it does not amount to "developing" that information within the meaning of Section 230. Similarly, plaintiffs assert that Facebook's algorithms suggest third-party content to users "based on what Facebook believes will cause the user to use Facebook as much as possible" and that Facebook intends to "influence" consumers' responses to that content. Appellants' Br. 48. This does not describe anything more than Facebook vigorously fulfilling its role as a publisher. Plaintiffs' suggestion that publishers must have no role in organizing or distributing third-party content in order to avoid "develop[ing]" that content is both ungrounded in the text of Section 230 and contrary to its purpose.
Finally, we note that plaintiffs also argue that Facebook should not be afforded Section 230 immunity because Facebook has chosen to undertake efforts to eliminate objectionable and dangerous content but has not been effective or consistent in those efforts. However, again, one of the purposes of Section 230 was to ensure that interactive computer services should not incur liability as developers or creators of third-party content merely because they undertake such efforts-even if they are not completely effective.
We therefore conclude from the allegations of plaintiffs' complaint that Facebook did not "develop" the content of the Facebook postings by Hamas and that Section 230(c)(1) applies to Facebook's alleged conduct in this case.
III. Whether Applying Section 230(c)(1) to Plaintiffs' Claims Would Impair the Enforcement of a Federal Criminal Statute
Plaintiffs also argue that Section 230(c)(1) may not be applied to their claims because that would impermissibly "impair the enforcement" of a "Federal criminal statute." Appellants' Br. at 52 (quoting 47 U.S.C. § 230(e)(1)).
We agree with the district court's conclusion that Section 230(e)(1) is inapplicable in this civil action. Even accepting, arguendo, plaintiffs' assertion that a civil litigant could be said to "enforce" a criminal statute through a separate civil remedies provision, any purported ambiguity in Section 230(e)(1) is resolved by its title, "No effect on criminal law."
IV. Whether the Anti-Terrorism Act's Civil Remedies Provision Impliedly Repealed Section 230(c)(1)
Plaintiffs also argue that the ATA's civil remedies provision impliedly repealed Section 230(c)(1) with respect to claims like theirs. We are not persuaded.
"[R]epeals by implication are not favored and will not be presumed unless the intention of the legislature to repeal is clear and manifest."
Nat'l Ass'n of Home Builders v. Defs. of Wildlife
,
V. Whether Applying Section 230(c)(1) to Plaintiffs' Claims Would Be Impermissibly Extraterritorial
Plaintiffs also argue that the presumption against the extraterritorial application of federal statutes bars applying Section 230(c)(1) to their claims because Hamas posted content and conducted the attacks from overseas, and because Facebook's employees who failed to take down Hamas's content were allegedly located outside the United States, in Facebook's foreign facilities. In response, Facebook contends that Section 230(c)(1) merely limits civil liability in American courts, a purely domestic application.
Under the canon of statutory interpretation known as the "presumption against extraterritoriality," "[a]bsent clearly expressed congressional intent to the contrary, federal laws will be construed to have only domestic application." RJR Nabisco, Inc. v. European Cmty. At the first step of the two-step framework, we ask whether the presumption has been rebutted by a clear, affirmative indication that the statute applies extraterritorially.
If the statute is not extraterritorial on its face, then "at the second step we determine whether the case involves a domestic application of the statute, and we do this by looking to the statute's 'focus.'" Id. "The focus of a statute is the object of its solicitude, which can include the conduct it seeks to regulate, as well as the parties and interests it seeks to protect or vindicate." WesternGeco LLC v. ION Geophysical Corp.
The two-step framework arguably does not easily apply to a statutory provision that affords an affirmative defense to civil liability. Indeed, it is unclear how an American court could apply such a provision "extraterritorially." Even if it could be applied extraterritorially-say, by somehow treating the defendant's conduct rather than the lawsuit itself as the "focus" of a liability-limiting provision-the presumption against extraterritoriality primarily "serves to avoid the international discord that can result when U.S. law is applied to conduct in foreign countries." Id . at 2100. Allowing a plaintiff's claim to go forward because the cause of action applies extraterritorially, while then applying the presumption to block a different provision setting out defenses to that claim, would seem only to increase the possibility of international friction. Such a regime could also give plaintiffs an advantage when they sue over extraterritorial wrongdoing that they would not receive if the defendant's conduct occurred domestically. It is doubtful that Congress ever intends such a result when it writes provisions limiting civil liability.
The Ninth Circuit addressed this issue in Blazevska v. Raytheon Aircraft Co., explaining that the General Aviation Revitalization Act's ("GARA") statute of repose
merely eliminates the power of any party to bring a suit for damages against a general aviation aircraft manufacturer, in a U.S. federal or state court, after the limitation period. The only conduct it could arguably be said to regulate is the ability of a party to initiate an action for damages against a manufacturer in American courts-an entirely domestic endeavor. Congress has no power to tell courts of foreign countries whether they could entertain a suit against an American defendant.
Id. at 953. "Accordingly," the Ninth Circuit held, "the presumption against extraterritoriality simply is not implicated by GARA's application." Id.
The Supreme Court has left open the question of whether certain types of statutes might not be subject to the presumption against extraterritoriality. See WesternGeco. We need not resolve that question here: consistent with the reasoning of Blazevska, we conclude that applying Section 230(c)(1) to limit liability in an American lawsuit is a domestic application of the statute, and the presumption against extraterritoriality therefore does not bar its application to plaintiffs' claims.
VI. Foreign Law Claims
Turning next to plaintiffs' foreign tort claims, we note that the parties disagree as to the reach of Section 230 immunity. The district court held that Section 230 applies to foreign law claims brought in United States courts, but it did not address the basis for its exercise of subject matter jurisdiction over those claims. Before we can reach the merits of those causes of action, including the applicability of Section 230, we must independently assure ourselves that there is a basis for federal subject matter jurisdiction. Ruhrgas AG v. Marathon Oil Co.
Plaintiffs allege that, under 28 U.S.C. § 1332, there is diversity jurisdiction over their foreign law claims. Here, however, a substantial majority of the plaintiffs are alleged to be United States citizens domiciled in Israel. A United States citizen domiciled abroad is neither a citizen of any State nor a citizen or subject of a foreign state, and therefore may not sue or be sued in diversity.
In addition, "[i]t is well established that for a case to come within [ § 1332 ] there must be complete diversity,"
Cresswell
,
The joinder of Israel-domiciled U.S.-citizen plaintiffs requires us either to dismiss the diversity-based claims altogether, or exercise our discretion to: 1) dismiss those plaintiffs who we determine are "dispensable jurisdictional spoilers;" or 2) vacate in part the judgment of the district court and remand for it to make that indispensability determination and to determine whether dismissal of those individuals would be appropriate. SCS Commc'ns, Inc. v. Herrick Co.
We decline to exercise our discretion to attempt to remedy these jurisdictional defects. This is not a case in which a small number of nondiverse parties defeats jurisdiction, but rather one in which-after multiple complaints have been submitted-most of the plaintiffs are improperly joined. Moreover, the case remains at the pleading stage, with discovery not yet having begun. Proceeding with the few diverse plaintiffs would be inefficient given the expenditure of judicial and party resources that would be required to address the jurisdictional defects. The most appropriate course is for any diverse plaintiffs to bring a new action and demonstrate subject matter jurisdiction in that action.
CONCLUSION
For the foregoing reasons, we AFFIRM the judgment of the district court as to plaintiffs' federal claims and DISMISS plaintiffs' foreign law claims.
As used here, the term "complaint" refers to both the allegations of the First Amended Complaint and those of the proposed second amended complaint, which sought to supplement the prior complaint.
According to Facebook, hundreds of millions of Facebook pages are maintained on its platform.
When we refer to "Hamas" as users of Facebook in this opinion, we mean individuals alleged to be Hamas members or supporters, as well as various Hamas entities that are alleged to have Facebook pages.
Plaintiffs' complaint relies extensively on, and incorporates by reference, Facebook's Terms of Service and Community Standards (together, "terms"). The publicly available terms are also subject to judicial notice.
See Fed. R. Evid. 201(b)(2); see also, e.g., 23-34 94th St. Grocery Corp. v. N.Y.C. Bd. of Health.
Facebook's sign-up webpage states that by clicking "Sign Up," prospective users agree to Facebook's Terms of Service, Data Policy, and Cookies Policy-all of which are hyperlinked from that page. Create a New Account , Facebook, https://www.facebook.com/r.php (last visited June 26, 2019). As indicated above, the Terms of Service also purport to incorporate Facebook's Community Standards.
Plaintiffs included this testimony in the appendix on appeal and attached and referred to the testimony in their brief responding to the district court's order to show cause for why their proposed second amended complaint was not futile. We recount such testimony only for the purposes described supra n.5.
Facebook has been criticized recently-and frequently-for not doing enough to take down offensive or illegal content. E.g., Cecilia Kang, Nancy Pelosi Criticizes Facebook for Handling of Altered Videos, N.Y. Times (May 29, 2019), https://www.nytimes.com/2019/05/29/technology/facebook-pelosi-video.html; Kalev Leetaru, Countering Online Extremism Is Too Important to Leave to Facebook, Forbes (May 9, 2019), https://www.forbes.com/sites/kalevleetaru/2019/05/09/countering-online-extremism-is-too-important-to-leave-to-facebook; Julia Fioretti, Internet Giants Not Doing Enough to Take Down Illegal Content: EU, Reuters (Jan. 9, 2018), https://www.reuters.com/article/us-eu-internet-meeting/internet-giants-not-doing-enough-to-take-down-illegal-content-eu-idUSKBN1EY2BL; see Staehr v. Hartford Fin. Servs. Grp., Inc.
The parties moved jointly to transfer the action to the United States District Court for the Eastern District of New York.
In the same opinion, the district court also dismissed for lack of Article III standing the claims brought in a separate action by 20,000 Israeli citizens who, according to the district court, claimed "to be threatened only by potential future attacks." S. App'x 3. The district court referred to those plaintiffs as the "Cohen Plaintiffs" and to the plaintiffs in this appeal as the "Force Plaintiffs." Id . at 1. The Cohen Plaintiffs did not appeal. Cohen v. Facebook , 16-cv-04453-NGG-LB (E.D.N.Y.).
We have jurisdiction over this appeal from a final judgment.
Plaintiffs do not distinguish their arguments between their First Amended Complaint, which the district court dismissed, and their proposed second amended complaint, which the district court determined was futile. We agree that the Section 230(c)(1) issues raised by both complaints are materially indistinguishable.
We refer to "content" and "information" synonymously in this opinion.
Plaintiffs argue that the district court prematurely applied Section 230(c)(1), an affirmative defense, because discovery might show that Facebook was indeed a "developer" of Hamas's content. However, the application of Section 230(c)(1) is appropriate at the pleading stage when, as here, the "statute's barrier to suit is evident from the face of" plaintiffs' proposed complaint. Ricci.
Section 230(c)(2), which, like Section 230(c)(1), is contained under the subheading "Protection for 'Good Samaritan' Blocking and Screening of Offensive Material,"
Because, as is discussed later in this opinion, plaintiffs' foreign law claims are dismissed on jurisdictional grounds, our discussion of Section 230(c)(1) immunity is confined to plaintiffs' federal claims.
Plaintiffs also argue that because publication is not an explicit element of their federal anti-terrorism claims, Section 230(c)(1) does not provide Facebook with immunity. However, it is well established that Section 230(c)(1) applies not only to defamation claims, where publication is an explicit element, but also to claims where "the duty that the plaintiff alleges the defendant violated derives from the defendant's status or conduct as a publisher or speaker." LeadClick.
"When a term goes undefined in a statute, we give the term its ordinary meaning."
Taniguchi v. Kan Pac. Saipan, Ltd.
,
To the extent that plaintiffs rely on their undeveloped contention that the algorithms are "designed to radicalize," Appellants' Br. 51, we deem that argument waived. In addition, this allegation is not made in plaintiffs' complaints.
While lacking precedential value, "[w]e are, of course, permitted to consider summary orders for their persuasive value, and often draw guidance from them in later cases." Brault v. Soc. Sec. Admin., Comm'r.
As journalist and author Tom Standage has observed, "[M]any of the ways in which we share, consume, and manipulate information, even in the Internet era, build upon habits and conventions that date back centuries." Tom Standage, Writing on the Wall: Social Media - The First 2000 Years 5 (2013). See also Tom Standage, Benjamin Franklin, Social Media Pioneer , Medium (Dec. 10, 2013), https://medium.com/new-media/benjamin-franklin-social-media-pioneer-3fb505b1ce7c ("Small and local, with circulations of a few hundred copies at best, [colonial] newspapers consisted in large part of letters from readers, and reprinted speeches, pamphlets and items from other papers. They provided an open platform through which people could share and discuss their views with others. They were, in short, social media.").
The dissent contends that our holding would necessarily immunize the dissent's hypothetical phone-calling acquaintance who brokers a connection between two published authors and facilitates the sharing of their works. See Dissent at 76. We disagree, for the simple reason that Section 230(c)(1) immunizes publishing activity only insofar as it is conducted by an "interactive computer service." Moreover, the third-party information must be "provided through the Internet or any other interactive computer service."
We do not mean that Section 230 requires algorithms to treat all types of content the same. To the contrary, Section 230 would plainly allow Facebook's algorithms to, for example, de-promote or block content it deemed objectionable. We emphasize only-assuming that such conduct could constitute "development" of third-party content-that plaintiffs do not plausibly allege that Facebook augments terrorist-supporting content primarily on the basis of its subject matter.
See supra , Discussion, Part I.
"[W]here the text is ambiguous, a statute's titles can offer 'a useful aid in resolving [the] ambiguity.' "
Lawson v. FMR LLC
,
We do not here decide whether the word "enforcing" in a different provision, Section 230(e)(3), necessarily has the same meaning as "enforcement" in Section 230(e)(1), given their different linguistic contexts.
Although "a finding of extraterritoriality at step one will obviate step two's 'focus' inquiry," courts may instead "start[ ] at step two in appropriate cases."
RJR Nabisco,
Because we conclude that the affirmative defense of Section 230(c)(1) applies, we need not reach Facebook's alternative argument that plaintiffs' complaint does not plausibly allege that, absent such immunity, Facebook assisted Hamas under the federal antiterrorism claims.
A representative of a decedent's estate is "deemed to be a citizen only of the same State as the decedent."
Plaintiffs do not assert supplemental jurisdiction under 28 U.S.C. § 1367 over their foreign law claims.
Because plaintiffs' foreign law claims are dismissed on jurisdictional grounds, we express no opinion as to the district court's conclusion that Section 230 applies to foreign law claims brought in United States courts.
Concurrence in Part
I agree with much of the reasoning in the excellent majority opinion, and I join that opinion except for Parts I and II of the Discussion. But I must respectfully part company with the majority on its treatment of Facebook's friend- and content-suggestion algorithms under the Communications Decency Act ("CDA").
As to the reasons for my disagreement, consider a hypothetical. Suppose that you are a published author. One day, an acquaintance calls. "I've been reading over everything you've ever published," he informs you. "I've also been looking at everything you've ever said on the Internet. I've done the same for this other author. You two have very similar interests; I think you'd get along." The acquaintance then gives you the other author's contact information and photo, along with a link to all her published works. He calls back three more times over the next week with more names of writers you should get to know.
Now, you might say your acquaintance fancies himself a matchmaker. But would you say he's acting as the publisher of the other authors' work?
Facebook and the majority would have us answer this question "yes." I, however, cannot do so. For the scenario I have just described is little different from how Facebook's algorithms allegedly work. And while those algorithms do end up showing users profile, group, or event pages written by other users, it strains the English language to say that in targeting and recommending
these writings to users-and thereby forging connections, developing new social networks-Facebook is acting as "the
publisher
of ... information provided by another information content provider."
It would be one thing if congressional intent compelled us to adopt the majority's reading. It does not. Instead, we today extend a provision that was designed to encourage computer service providers to shield minors from obscene material so that it now immunizes those same providers for allegedly connecting terrorists to one another. Neither the impetus for nor the text of § 230(c)(1) requires such a result. When a plaintiff brings a claim that is based not on the content of the information shown but rather on the connections Facebook's algorithms make between individuals, the CDA does not and should not bar relief.
The Anti-Terrorism Act ("ATA") claims in this case fit this bill. According to plaintiffs' Proposed Second Amended Complaint ("PSAC")-which we must take as true at this early stage-Facebook has developed "sophisticated algorithm[s]" for bringing its users together. App'x 347 ¶ 622. After collecting mountains of data about each user's activity on and off its platform, Facebook unleashes its algorithms to generate friend, group, and event suggestions based on what it perceives to be the user's interests.
When it comes to Facebook's algorithms, then, plaintiffs' causes of action do not run afoul of the CDA. Because the court below did not pass on the merits of the ATA claims pressed below, I would send this case back to the district court to decide the merits in the first instance. The majority, however, cuts off all possibility for relief based on algorithms like Facebook's, even if these or future plaintiffs could prove a sufficient nexus between those algorithms and their injuries. In light of today's decision and other judicial interpretations of the statute that have generally immunized social media companies-and especially in light of the new reality that has evolved since the CDA's passage-Congress may wish to revisit the CDA to better calibrate the circumstances where such immunization is appropriate and inappropriate in light of congressional purposes.
I.
To see how far we have strayed from the path on which Congress set us out, we must consider where that path began. What is now § 230 emerged from two competing congressional proposals for protecting children from indecent online material, both offered during the passage of the Telecommunications Act of 1996.
The action began in the Senate. Senator James J. Exon introduced the CDA on February 1, 1995. See 141 Cong. Rec. 3,203. He presented a revised bill on June 9, 1995, "[t]he heart and the soul" of which was "its protection for families and children." Id. at 15,503 (statement of Sen. Exon). The Exon Amendment sought to reduce the proliferation of pornography and other obscene material online by subjecting to civil and criminal penalties those who use interactive computer services to make, solicit, or transmit offensive material. Id. at 15,505.
The House of Representatives had the same goal-to protect children from inappropriate online material-but a very different sense of how to achieve it. Congressmen Christopher Cox (R-California) and Ron Wyden (D-Oregon) introduced an amendment to the Telecommunications Act, entitled "Online Family Empowerment," about two months after the revised CDA appeared in the Senate. See id. at 22,044. Making the argument for their amendment during the House floor debate, Congressman Cox stated:
We want to make sure that everyone in America has an open invitation and feels welcome to participate in the Internet. But as you know, there is some reason for people to be wary because, as a Time Magazine cover story recently highlighted, there is in this vast world of computer information, a literal computer library, some offensive material, some things in the bookstore, if you will, that our children ought not to see.
As the parent of two, I want to make sure that my children have access to this future and that I do not have to worry about what they might be running into on line. I would like to keep that out of my house and off my computer.
Id. at 22,044-45. Likewise, Congressman Wyden said: "We are all against smut and pornography, and, as the parents of two small computer-literate children, my wife and I have seen our kids find their way into these chat rooms that make their middle-aged parents cringe." Id. at 22,045.
As both sponsors noted, the debate between the House and the Senate was not over the CDA's primary purpose but rather over the best means to that shared end. See id. (statement of Rep. Cox) ("How should we do this? ... Mr. Chairman, what we want are results. We want to make sure we do something that actually works."); id. (statement of Rep. Wyden) ("So let us all stipulate right at the outset the importance of protecting our kids and going to the issue of the best way to do it."). While the Exon Amendment would have the FCC regulate online obscene materials, the sponsors of the House proposal "believe[d] that parents and families are better suited to guard the portals of cyberspace and protect our children than our Government bureaucrats." Id. at 22,045 (statement of Rep. Wyden). They also feared the effects the Senate's approach might have on the Internet itself. See id. (statement of Rep. Cox) ("[The amendment] will establish as the policy of the United States that we do not wish to have content regulation by the Federal Government of what is on the Internet, that we do not wish to have a Federal Computer Commission with an army of bureaucrats regulating the Internet ...."). The Cox-Wyden Amendment therefore sought to empower interactive computer service providers to self-regulate, and to provide tools for parents to regulate, children's access to inappropriate material. See S. Rep. No. 104-230, at 194 (1996) (Conf. Rep.); 141 Cong. Rec. 22,045 (statement of Rep. Cox).
There was only one problem with this approach, as the House sponsors saw it. A New York State trial court had recently ruled that the online service Prodigy, by deciding to remove certain indecent material from its site, had become a "publisher" and thus was liable for defamation when it failed to remove other objectionable content. Stratton Oakmont, Inc. v. Prodigy Servs. Co.
The House having passed the Cox-Wyden Amendment and the Senate the Exon Amendment, the conference committee had before it two alternative visions for countering the spread of indecent online material to minors. The committee chose not to choose. Congress instead adopted both amendments as part of a final Communications Decency Act.
See Telecommunications Act of 1996, §§ 502, 509, 110 Stat. at 133-39; Reno, 521 U.S. at 858 n.24.
Section 230 overruled Stratton Oakmont through two interlocking provisions, both of which survived the legislative process unscathed. The first, which is at issue in this case, states that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
The legislative history illustrates that in passing § 230 Congress was focused squarely on protecting minors from offensive online material, and that it sought to do so by "empowering parents to determine the content of communications their children receive through interactive computer services." S. Rep. No. 104-230, at 194. The "policy" section of § 230's text reflects this goal. See 47 U.S.C. § 230(b).
None of this is to say that § 230(c)(1) exempts interactive computer service providers from publisher treatment only when they remove indecent content. Statutory text cannot be ignored, and Congress grabbed a bazooka to swat the Stratton Oakmont fly. Whatever prototypical situation its drafters may have had in mind, § 230(c)(1) does not limit its protection to situations involving "obscene material" provided by others, instead using the expansive word "information."
II.
With the CDA's background in mind, I turn to the text. By its plain terms, § 230 does not apply whenever a claim would treat the defendant as "a publisher" in the abstract, immunizing defendants from liability stemming from any activity in which one thinks publishing companies commonly engage. Contra ante, at 65, 66, 70. It states, more specifically, that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
For this reason, § 230(c)(1) does not necessarily immunize defendants from claims based on promoting content or selling advertising, even if those activities might be common among publishing companies nowadays. A publisher might write an email promoting a third-party event to its readers, for example, but the publisher would be the author of the underlying content and therefore not immune from suit based on that promotion.
Accordingly, our precedent does not grant publishers CDA immunity for the full range of activities in which they might engage. Rather, it "bars lawsuits seeking to hold a service provider liable for its exercise of a publisher's traditional editorial functions-such as deciding whether to publish, withdraw, postpone or alter content" provided by another for publication. LeadClick.
This case is different. Looking beyond Facebook's "broad statements of immunity" and relying "rather on a careful exegesis of the statutory language," Barnes, I conclude that plaintiffs' claims premised on Facebook's friend- and content-suggestion algorithms do not treat Facebook as the publisher of information provided by another information content provider.
It is true, as the majority notes, see ante, at 70, that Facebook's algorithms rely on and display users' content. However, this is not enough to trigger the protections of § 230(c)(1). The CDA does not mandate "a 'but-for' test that would provide immunity ... solely because a cause of action would not otherwise have accrued but for the third-party content." HomeAway.com, Inc. v. City of Santa Monica.
If a third party got access to Facebook users' data, analyzed it using a proprietary algorithm, and sent its own messages to Facebook users suggesting that people become friends or attend one another's events, the third party would not be protected as "the publisher" of the users' information. Similarly, if Facebook were to use the algorithms to target its own material to particular users, such that the resulting posts consisted of "information provided by" Facebook rather than by "another information content provider," § 230(c)(1), Facebook clearly would not be immune for that independent message.
Yet that is ultimately what plaintiffs allege Facebook is doing. The PSAC alleges that Facebook "actively provides 'friend suggestions' between users who have expressed similar interests," and that it "actively suggests groups and events to users." App'x 346 ¶¶ 612-13. Facebook's algorithms thus allegedly provide the user with a message from Facebook. Facebook is telling users-perhaps implicitly, but clearly-that they would like these people, groups, or events. In this respect, Facebook "does not merely provide a framework that could be utilized for proper or improper purposes; rather, [Facebook's] work in developing" the algorithm and suggesting connections to users based on their prior activity on Facebook, including their shared interest in terrorism, "is directly related to the alleged illegality of the site." Fair Housing Council of San Fernando Valley v. Roommates.Com, LLC.
Moreover, in part through its use of friend, group, and event suggestions, Facebook is doing more than just publishing content: it is proactively creating networks of people. Its algorithms forge real-world (if digital) connections through friend and group suggestions, and they attempt to create similar connections in the physical world through event suggestions. The cumulative effect of recommending several friends, or several groups or events, has an impact greater than the sum of each suggestion. It envelops the user, immersing her in an entire universe filled with people, ideas, and events she may never have discovered on her own. According to the allegations in the complaint, Facebook designed its website for this very purpose. "Facebook has described itself as a provider of products and services that enable users ... to find and connect with other users ...." App'x 250 ¶ 129. CEO Mark Zuckerberg has similarly described Facebook as "build[ing] tools to help people connect with the people they want," thereby "extending people's capacity to build and maintain relationships."
Another way to consider the CDA immunity question is to "look ... to what the duty at issue actually requires: specifically, whether the duty would necessarily require an internet company to monitor[, alter, or remove] third-party content." HomeAway.com. Plaintiffs' algorithm-based claims would impose no such duty; they are directed at the connections Facebook's algorithms themselves forge between users, not at any obligation to monitor, alter, or remove the content that Hamas posts.
In reaching this conclusion, I note that ATA torts are atypical. Most of the common torts that might be pleaded in relation to Facebook's algorithms "derive liability from behavior that is identical to publishing or speaking"-for instance, "publishing defamatory material; publishing material that inflicts emotional distress; or ... attempting to de-publish hurtful material but doing it badly." Barnes.
The fact that Facebook has figured out how to target material to people more likely to read it does not matter to a defamation claim, for instance, because the mere act of publishing in the first place creates liability.
The ATA works differently. Plaintiffs' material support and aiding and abetting claims premise liability, not on publishing qua publishing, but rather on Facebook's provision of services and personnel to Hamas. It happens that the way in which Facebook provides these benefits includes republishing content, but Facebook's duties under the ATA arise separately from the republication of content.
For these reasons, § 230(c)(1) does not bar plaintiffs' claims.
III.
Even if we sent this case back to the district court, as I believe to be the right course, these plaintiffs might have proven unable to allege that Facebook's matchmaking algorithms played a role in the attacks that harmed them. However, assuming arguendo that such might have been the situation here, I do not think we should foreclose the possibility of relief in future cases if victims can plausibly allege that a website knowingly brought terrorists together and that an attack occurred as a direct result of the site's actions. Though the majority shuts the door on such claims, today's decision also illustrates the extensive immunity that the current formulation of the CDA already extends to social media companies for activities that were undreamt of in 1996. It therefore may be time for Congress to reconsider the scope of § 230.
As is so often the case with new technologies, the very qualities that drive social media's success-its ease of use, open access, and ability to connect the world-have also spawned its demons. Plaintiffs' complaint illustrates how pervasive and blatant a presence Hamas and its leaders have maintained on Facebook. Hamas is far from alone-Hezbollah, Boko Haram, the Revolutionary Armed Forces of Colombia, and many other designated terrorist organizations use Facebook to recruit and rouse supporters. Vernon Silver & Sarah Frier, Terrorists Are Still Recruiting on Facebook, Despite Zuckerberg's Reassurances, Bloomberg Businessweek (May 10, 2018), http://www.bloomberg.com/news/articles/2018-05-10/terrorists-creep-onto-facebook-as-fast-as-it-can-shut-them-down. Recent news reports suggest that many social media sites have been slow to remove the plethora of terrorist and extremist accounts populating their platforms.
Of course, the failure to remove terrorist content, while an important policy concern, is immunized under § 230 as currently written. Until today, the same could not have been said for social media's unsolicited, algorithmic spreading of terrorism. Shielding internet companies that bring terrorists together using algorithms could leave dangerous activity unchecked.
Take Facebook. As plaintiffs allege, its friend-suggestion algorithm appears to connect terrorist sympathizers with pinpoint precision. For instance, while two researchers were studying Islamic State ("IS") activity on Facebook, one "received dozens of pro-IS accounts as recommended friends after friending just one pro-IS account." Waters & Postings, supra, at 78. More disturbingly, the other "received an influx of Philippines-based IS supporters and fighters as recommended friends after liking several non-extremist news pages about Marawi and the Philippines during IS's capture of the city." Id. News reports indicate that the friend-suggestion feature has introduced thousands of IS sympathizers to one another. See Martin Evans, Facebook Accused of Introducing Extremists to One Another Through 'Suggested Friends' Feature, The Telegraph (May 5, 2018), http://www.telegraph.co.uk/news/2018/05/05/facebook-accused-introducing-extremists-one-another-suggested.
And this is far from the only Facebook algorithm that may steer people toward terrorism. Another turns users' declared interests into audience categories to enable microtargeted advertising. In 2017, acting on a tip, ProPublica sought to direct an ad at the algorithmically created category "Jew hater"-which turned out to be real, as were "German Schutzstaffel," "Nazi Party," and "Hitler did nothing wrong." Julia Angwin et al., Facebook Enabled Advertisers to Reach 'Jew Haters,' ProPublica (Sept. 14, 2017), https://www.propublica.org/article/facebookenabled-advertisers-to-reach-jew-haters. As the "Jew hater" category was too small for Facebook to run an ad campaign, "Facebook's automated system suggested 'Second Amendment' as an additional category ... presumably because its system had correlated gun enthusiasts with anti-Semites." Id.
That's not all. Another Facebook algorithm auto-generates business pages by scraping employment information from users' profiles; other users can then "like" these pages, follow their posts, and see who else has liked them. Butler & Ortutay, supra. The Associated Press reports that extremist organizations including al-Qaida, al-Shabab, and IS have such auto-created pages, allowing them to recruit the pages' followers. Id. The page for al-Qaida in the Arabian Peninsula included the group's Wikipedia entry and a propaganda photo of the damaged USS Cole, which the group had bombed in 2000. Id. Meanwhile, a fourth algorithm integrates users' photos and other media to generate videos commemorating their previous year. Id. Militants get a ready-made propaganda clip, complete with a thank-you message from Facebook. Id.
This case, and our CDA analysis, has centered on the use of algorithms to foment terrorism. Yet the consequences of a CDA-driven, hands-off approach to social media extend much further. Social media can be used by foreign governments to interfere in American elections. For example, Justice Department prosecutors recently concluded that Russian intelligence agents created false Facebook groups and accounts in the years leading up to the 2016 election campaign, bootstrapping Facebook's algorithm to spew propaganda that reached between 29 million and 126 million Americans. See 1 Robert S. Mueller III, Special Counsel, Report on the Investigation Into Russian Interference in the 2016 Presidential Election 24-26, U.S. Dep't of Justice (March 2019), http://www.justice.gov/storage/report.pdf. Russia also purchased over 3,500 advertisements on Facebook to publicize its fake Facebook groups, several of which grew to have hundreds of thousands of followers. Id. at 25-26. On Twitter, Russia developed false accounts that impersonated American people or groups and issued content designed to influence the election; it then created thousands of automated "bot" accounts to amplify the sham Americans' messages. Id. at 26-28. One fake account received over six million retweets, the vast majority of which appear to have come from real Twitter users. See Gillian Cleary, Twitterbots: Anatomy of a Propaganda Campaign, Symantec (June 5, 2019), http://www.symantec.com/blogs/threat-intelligence/twitterbots-propaganda-disinformation. Russian intelligence also harnessed the reach that social media gave its false identities to organize "dozens of U.S. rallies," some of which "drew hundreds" of real-world Americans. Mueller, Report, supra, at 29. Russia could do all this only because social media is designed to target messages like Russia's to the users most susceptible to them.
While Russia's interference in the 2016 election is the best-documented example of foreign meddling through social media, it is not the only one. Federal intelligence agencies expressed concern in the weeks before the 2018 midterm election "about ongoing campaigns by Russia, China and other foreign actors, including Iran," to "influence public sentiment" through means "including using social media to amplify divisive issues." Press Release, Office of Dir. of Nat'l Intelligence, Joint Statement from the ODNI, DOJ, FBI, and DHS: Combatting Foreign Influence in U.S. Elections (Oct. 19, 2018), https://www.dni.gov/index.php/newsroom/press-releases/item/1915-joint-statement-from-the-odni-doj-fbi-and-dhs-combating-foreign-influence-in-u-s-elections. News reports also suggest that China targets state-sponsored propaganda to Americans on Facebook and purchases Facebook ads to amplify its communications. See Paul Mozur, China Spreads Propaganda to U.S. on Facebook, a Platform It Bans at Home, N.Y. Times (Nov. 8, 2017), https://www.nytimes.com/2017/11/08/technology/china-facebook.html.
Widening the aperture further, malefactors at home and abroad can manipulate social media to promote extremism. "Behind every Facebook ad, Twitter feed, and YouTube recommendation is an algorithm that's designed to keep users using: It tracks preferences through clicks and hovers, then spits out a steady stream of content that's in line with your tastes." Katherine J. Wu, Radical Ideas Spread Through Social Media. Are the Algorithms to Blame?, PBS (Mar. 28, 2019), https://www.pbs.org/wgbh/nova/article/radical-ideas-social-media-algorithms. All too often, however, the code itself turns those tastes sour. For example, one study suggests that manipulation of Facebook's news feed influences the mood of its users: place more positive posts on the feed and users get happier; focus on negative information instead and users get angrier. Adam D. I. Kramer et al., Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks, 111 PNAS 8788, 8789 (2014). This can become a problem, as Facebook's algorithm "tends to promote the most provocative content" on the site. Max Fisher, Inside Facebook's Secret Rulebook for Global Political Speech, N.Y. Times (Dec. 27, 2018), http://www.nytimes.com/2018/12/27/world/facebook-moderators.html. Indeed, "[t]he Facebook News Feed environment brings together, in one place, many of the influences that have been shown to drive psychological aspects of polarization." Jaime E. Settle, Frenemies: How Social Media Polarizes America (2018). Likewise, YouTube's video recommendation algorithm-which leads to more than 70 percent of the time people spend on the platform-has been criticized for shunting visitors toward ever more extreme and divisive videos. Roose & Conger, supra; see Jack Nicas, How YouTube Drives People to the Internet's Darkest Corners, Wall St. J. (Feb. 7, 2018), https://www.wsj.com/articles/how-youtube-drives-viewers-to-the-internets-darkest-corners-1518020478. YouTube has fine-tuned its algorithm to recommend videos that recalibrate users' existing areas of interest and steadily steer them toward new ones-a modus operandi that has reportedly proven a real boon for far-right extremist content. See Kevin Roose, The Making of a YouTube Radical, N.Y. Times (June 8, 2019), http://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html.
There is also growing attention to whether social media has played a significant role in increasing nationwide political polarization. See Andrew Soergel, Is Social Media to Blame for Political Polarization in America?, U.S. News & World Rep. (Mar. 20, 2017), https://www.usnews.com/news/articles/2017-03-20/is-social-media-to-blame-for-political-polarization-in-america. The concern is that "web surfers are being nudged in the direction of political or unscientific propaganda, abusive content, and conspiracy theories." Wu, Radical Ideas, supra. By surfacing ideas that were previously deemed too radical to take seriously, social media mainstreams them, which studies show makes people "much more open" to those concepts. Max Fisher & Amanda Taub, How Everyday Social Media Users Become Real-World Extremists, N.Y. Times (Apr. 25, 2018), http://www.nytimes.com/2018/04/25/world/asia/facebook-extremism.html. At its worst, there is evidence that social media may even be used to push people toward violence.
While the majority and I disagree about whether § 230 immunizes interactive computer services from liability for all these activities or only some, it is pellucid that Congress did not have any of them in mind when it enacted the CDA. The text and legislative history of the statute shout to the rafters Congress's focus on reducing children's access to adult material. Congress could not have anticipated the pernicious spread of hate and violence that the rise of social media likely has since fomented. Nor could Congress have divined the role that social media providers themselves would play in this tale. Mounting evidence suggests that providers designed their algorithms to drive users toward content and people the users agreed with-and that they have done it too well, nudging susceptible souls ever further down dark paths. By contrast, when the CDA became law, the closest extant ancestor to Facebook (and it was still several branches lower on the evolutionary tree) was the chatroom or message forum, which acted as a digital bulletin board and did nothing proactive to forge off-site connections.
Whether, and to what extent, Congress should allow liability for tech companies that encourage terrorism, propaganda, and extremism is a question for legislators, not judges. Over the past two decades "the Internet has outgrown its swaddling clothes," Roommates.Com,
I agree with the majority that the CDA's exception for enforcement of criminal laws,
However, as detailed post, § 230 was designed as a private-sector-driven alternative to a Senate plan that would allow the FCC "either civilly or criminally, to punish people" who put objectionable material on the Internet. 141 Cong. Rec. 22,045 (1995) (statement of Rep. Cox); accord
It helped that the Cox-Wyden Amendment exempted from its deregulatory regime the very provisions that the Exon Amendment strengthened, see Telecommunications Act of 1996, §§ 502, 507-508, 509(d)(1), 110 Stat. at 133-39, and that Congress stripped from the House bill a provision that would have denied jurisdiction to the FCC to regulate the Internet, compare id. § 509, 110 Stat. at 138 (eliminating original § 509(d)), with 141 Cong. Rec. 22,044 (including original § 509(d)).
The policy section of the statute also expresses Congress's desire "to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation." 47 U.S.C. § 230(b)(2).
This point-that Congress chose broader language than may have been necessary to accomplish its primary goal-should not be confused with the Seventh Circuit's rationale for § 230(c)(1)'s general application: that "a law's scope often differs from its genesis." See Chi. Lawyers' Cmte. for Civil Rights Under Law, Inc. v. Craigslist, Inc.,
Many of Facebook's algorithms mentioned in the PSAC, such as its third-party advertising algorithm, its algorithm that places content in a user's newsfeed, and (based on the limited description in the PSAC) its video recommendation algorithm, remain immune under the analysis I set out here.
See, e.g., Gregory Waters & Robert Postings, Spiders of the Caliphate: Mapping the Islamic State's Global Support Network on Facebook 8, Counter Extremism Project (May 2018), http://www.counterextremism.com/sites/default/files/Spiders%20of%20the%20Caliphate%20%28May%202018%29.pdf; Yaacov Benmeleh & Felice Maranz, Israel Warns Twitter of Legal Action Over Requests to Remove Content, Bloomberg (Mar. 20, 2018), http://www.bloomberg.com/news/articles/2018-03-20/israel-warns-twitter-of-legal-steps-over-incitement-to-terrorism; Mike Isaac, Twitter Steps Up Efforts to Thwart Terrorists' Tweets, N.Y. Times (Feb. 5, 2016), http://www.nytimes.com/2016/02/06/technology/twitter-account-suspensions-terrorism.html; Kevin Roose & Kate Conger, YouTube to Remove Thousands of Videos Pushing Extreme Views, N.Y. Times (June 5, 2019), http://www.nytimes.com/2019/06/05/business/youtube-remove-extremist-videos.html.
See, e.g., Sarah Marsh, Social Media Related to Violence by Young People, Say Experts, The Guardian (Apr. 2, 2018), https://www.theguardian.com/media/2018/apr/02/social-media-violence-young-people-gangs-say-experts; Kevin Roose, A Mass Murder of, and for, the Internet, N.Y. Times (Mar. 15, 2019), https://www.nytimes.com/2019/03/15/technology/facebook-youtube-christchurch-shooting.html; Craig Timberg et al., The New Zealand Shooting Shows How YouTube and Facebook Spread Hate and Violent Images-Yet Again, Wash. Post (Mar. 15, 2019), https://www.washingtonpost.com/technology/2019/03/15/facebook-youtube-twitter-amplified-video-christchurch-mosque-shooting; Julie Turkewitz & Kevin Roose, Who Is Robert Bowers, the Suspect in the Pittsburgh Synagogue Shooting?, N.Y. Times (Oct. 27, 2018), https://www.nytimes.com/2018/10/27/us/robert-bowers-pittsburgh-synagogue-shooter.html.
See Caitlin Dewey, A Complete History of the Rise and Fall-and Reincarnation!-of the Beloved '90s Chatroom, Wash. Post (Oct. 30, 2014), http://www.washingtonpost.com/news/the-intersect/wp/2014/10/30/a-complete-history-of-the-rise-and-fall-and-reincarnation-of-the-beloved-90s-chatroom; see also Then and Now: A History of Social Networking Sites, CBS News, http://www.cbsnews.com/pictures/then-and-now-a-history-of-social-networking-sites (last accessed July 9, 2019) (detailing the evolution of social media sites from Classmates, launched only "as a list of school affiliations" in December 1995; to "the very first social networking site" Six Degrees, which launched in May 1997 but whose networks were limited "due to the lack of people connected to the Internet"; to Friendster, launched in March 2002 and "credited as giving birth to the modern social media movement"; to Facebook, which was "rolled out to the public in September 2006").
See, e.g., Danielle Keats Citron & Benjamin Wittes, The Problem Isn't Just Backpage: Revising Section 230 Immunity, 2 Geo. L. Tech. Rev. 453, 454-55 (2018); Jeff Kosseff, Defending Section 230: The Value of Intermediary Immunity,
See, e.g. , Tarleton Gillespie, How Social Networks Set the Limits of What We Can Say Online , Wired (June 26, 2018), http://www.wired.com/story/how-social-networks-set-the-limits-of-what-we-can-say-online; Christiano Lima, How a Widening Political Rift Over Online Liability Is Splitting Washington , Politico (July 9, 2019), http://www.politico.com/story/2019/07/09/online-industry-immunity-section-230-1552241; Mark Sullivan, The 1996 Law That Made the Web Is in the Crosshairs , Fast Co. (Nov. 29, 2018), http://www.fastcompany.com/90273352/maybe-its-time-to-take-away-the-outdated-loophole-that-big-tech-exploits; cf. Darrell M. West & John R. Allen, How Artificial Intelligence Is Transforming the World , Brookings (Apr. 24, 2018), http://www.brookings.edu/research/how-artificial-intelligence-is-transforming-the-world ("The malevolent use of AI exposes individuals and organizations to unnecessary risks and undermines the virtues of the emerging technology.").
