This piece was originally published on Transnational Litigation Blog on 17 October 2023.
In the recent case of Twitter, Inc. v. Taamneh, the U.S. Supreme Court held that the plaintiffs failed to demonstrate that Facebook, Twitter, and Google knowingly provided assistance to the Islamic State of Iraq and the Levant (ISIS) in connection with its attack on the Reina nightclub in Istanbul, Turkey in 2017. The plaintiffs, family members of a victim killed in the attack, brought a civil claim against the companies under §2333(d)(2) of the Antiterrorism Act for aiding and abetting an act of international terrorism. They claimed that the companies’ social media platforms and recommendation algorithms supported ISIS’ efforts to recruit members and raise funds. They also alleged that the companies “knew that ISIS was using their platforms but failed to stop it from doing so.”
In a unanimous opinion by Justice Thomas, the Court held that the plaintiffs failed to establish that the companies knowingly provided substantial assistance to the Reina attack, thus failing to state a claim under the Antiterrorism Act. While this result may be correct, the Court mischaracterized the companies’ alleged assistance as “passive nonfeasance.” The alleged assistance, according to the Court, was not provided through affirmative conduct but stemmed from the companies’ failure to stop ISIS from using the platforms. The Court erred by interpreting the plaintiffs’ claim in this way. By providing services to ISIS, which included hosting ISIS content, the companies carried out continuous acts. This important distinction could impact how liability is assigned in future cases involving automated services.
What the Court Said about the Assistance
According to the Court, the social media companies provided services that are “generally available to the internet-using public with little to no front-end screening.” ISIS was able to upload content to the platforms “just like everyone else.” The Court compared social media platforms to other communication tools, namely “cell phones, email, or the internet generally,” and noted that internet and cell service providers do not usually face liability “merely for providing their services to the public writ large.”
The plaintiffs claimed that the platforms’ recommendation algorithms provided active and substantial assistance to ISIS. But the Court disagreed, describing the algorithms as simply part of the platforms’ infrastructure through which all content was filtered. The algorithms, said the Court, “appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content.” The Court continued, “[t]he fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting.”
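For readers unfamiliar with how such systems work, the short Python sketch below is a purely illustrative toy (not drawn from any company’s actual code, and with invented names and data) of what a content-agnostic recommender looks like: items are ranked solely by engagement-derived similarity, and nothing in the scoring step examines what the content actually says.

```python
# Illustrative sketch of a content-agnostic recommender: items are ranked purely
# by similarity between a user's engagement history and each item's interaction
# profile. Nothing in the scoring step looks at what the content says.
# All vectors and identifiers here are hypothetical.

import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_items(user_vector, items):
    """Return item ids ordered by predicted interest, highest first.

    `items` maps item id -> interaction-derived vector. The ranking is
    agnostic to the nature of the content: any item is matched with any
    user who is statistically more likely to view it.
    """
    scores = {item_id: cosine(user_vector, vec) for item_id, vec in items.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical engagement vectors; the algorithm never sees the content itself.
user = [0.9, 0.1, 0.4]
catalog = {"video_a": [0.8, 0.2, 0.5], "video_b": [0.1, 0.9, 0.3]}
print(rank_items(user, catalog))  # e.g. ['video_a', 'video_b']
```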
The Court concluded that “[o]nce the platform and sorting-tool algorithms were up and running, defendants at most allegedly stood back and watched; they are not alleged to have taken any further action with respect to ISIS.” Throughout the opinion, the Court referred to the companies’ alleged conduct as “passive assistance,” “passive nonfeasance,” a “failure to stop,” and a “failure to act.” Put differently, the alleged assistance was provided through omissions. The companies created and provided services that, once operating, were widely available to the public. The provision of generally available platform services did not amount to active assistance. Instead, the Court found that the allegations of assistance were based on the companies’ failure to cease activities that supported ISIS.
In general, omissions are not culpable unless they constitute a breach of a legal duty or are directly criminalized by statute. The Court held that “[b]ecause plaintiffs’ complaint rests so heavily on defendants’ failure to act, their claims might have more purchase if they could identify some independent duty in tort that would have required defendants to remove ISIS’ content.” By characterizing the companies’ assistance as a failure to act, or an omission, the Court required the plaintiffs to demonstrate that the companies had breached a legal duty to act. This showing would not have been necessary had the companies performed an affirmative act that contributed to the harm.
Was the Conduct Actually “Passive Nonfeasance”?
The Court mischaracterized the companies’ assistance as passive nonfeasance. This error rests on the premise that providing services through openly available infrastructure is an act that occurs at a fixed point in time rather than a continuing act. In the Court’s view, the companies acted affirmatively by creating and providing social media platforms and algorithms. Once these services were available, their operation no longer constituted an affirmative act. Instead, the Court reasoned that the companies would act only if they stopped providing services or removed content. Thus, it was the companies’ failure to act that constituted the alleged assistance to ISIS. And, in the absence of any duty to act, liability could not be assigned for assistance through omission.
But social media platforms and the algorithms that underlie their functionality do not exist independently of the companies that provide them. The companies operate, maintain, and update their platforms, and they rely on recommendation algorithms to automate certain activities. If the companies truly did nothing, the platforms would cease to function, perhaps not immediately but inevitably. By hosting users’ content on their platforms, social media companies act continuously until the content is removed or until they stop providing services to users. A company could be held liable for aiding and abetting if it becomes aware that it is substantially assisting a crime by hosting content and, despite this knowledge, continues to host that content. The company would cease its contributory activities only when it removes the content or restricts the offending users from its services.
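To illustrate the point, the following toy sketch (again hypothetical Python, not any platform’s real code) shows why hosting can be understood as a continuing act: the platform affirmatively serves stored content on every request, and that serving stops only when the operator removes the content or cuts off the account.

```python
# Minimal sketch (hypothetical, not any platform's real code) of hosting as a
# continuing act: the platform serves the stored content on every request, and
# that serving ends only when the operator removes the content or suspends the
# account.

class Platform:
    def __init__(self):
        self.hosted = {}          # content_id -> content
        self.suspended = set()    # accounts cut off from the service

    def upload(self, account, content_id, content):
        if account not in self.suspended:
            self.hosted[content_id] = content

    def serve(self, content_id):
        """Each call is a fresh act of delivery, repeated for as long as
        the content stays hosted."""
        return self.hosted.get(content_id)

    def remove(self, content_id):
        self.hosted.pop(content_id, None)   # the act of hosting ends here

platform = Platform()
platform.upload("user_1", "post_42", "some content")
print(platform.serve("post_42"))   # served again and again until removed
platform.remove("post_42")
print(platform.serve("post_42"))   # None: the continuing act has ceased
```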
The opinion also raises another important question: would a social media company act by removing content or ceasing services? Suppose the operation of a platform prevents crimes from occurring. Perhaps protesters use a platform in an authoritarian state to share information that allows them to evade persecution and arbitrary detention, both of which are international crimes. If the company running the platform decides to stop providing services to the protesters, would the company actively assist the state’s authorities in committing crimes? If the Court believes that not stopping services is a failure to do something, then stopping would be an act. Companies could be compelled to continue providing services to avoid liability for actively assisting crimes.
There are some interesting cases concerning the cessation of ongoing services that could have informed the Court’s analysis here. In Barber v. Superior Court, the California Court of Appeal found that “the cessation of ‘heroic’ life support measures is not an affirmative act but rather a withdrawal or omission of further treatment.” To reach this finding, the court noted that “[e]ven though these life support devices are, to a degree, ‘self-propelled,’ each pulsation of the respirator or each drop of fluid introduced into the patient’s body by intravenous feeding devices is comparable to a manually administered injection or item of medication.” The Court of Appeal concluded that disconnecting mechanical devices was comparable to withholding manually administered medication. Because the withdrawal of treatment was characterized as an omission, the doctors could not be prosecuted for murder in the absence of a duty to maintain ineffective treatment.
Both Barber and the present case involved the use of “self-propelled” technologies: life support devices and recommendation algorithms. Social media companies use technological tools, like algorithms, to carry out acts automatically. But the fact of automation should not change our understanding that social media algorithms perform acts comparable to manual, or human, acts every time they determine how content should be displayed or organized. The conduct and results of algorithms should be attributable as acts to the companies that utilize them. Accordingly, companies would not simply omit to do something if they “stood back and watched” their algorithms operate.
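If one accepts this premise, a simple illustration (hypothetical Python, with invented names) makes it concrete: wrapping a deployed ranking function so that every automated run is recorded as a discrete, timestamped decision attributable to the operator shows that “standing back and watching” still generates a stream of acts, not a single act completed at deployment.

```python
# Hypothetical illustration of attributing automated decisions to the operator:
# each time the deployed ranking function runs, the wrapper records a discrete,
# timestamped act in the operator's name, even though no human intervenes.

from datetime import datetime, timezone

def attribute_acts(operator, ranking_fn):
    """Wrap a ranking function so each invocation is logged as an act of `operator`."""
    log = []

    def ranked(*args, **kwargs):
        result = ranking_fn(*args, **kwargs)
        log.append({
            "actor": operator,
            "time": datetime.now(timezone.utc).isoformat(),
            "decision": result,
        })
        return result

    return ranked, log

# A stand-in ranking function; real systems are far more complex.
def rank(items):
    return sorted(items)

rank_for_company, acts = attribute_acts("ExampleCo", rank)
rank_for_company(["b", "a"])
rank_for_company(["d", "c"])
print(len(acts))  # 2 distinct acts, one per automated ranking run
```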
The Court’s decision to characterize the social media companies’ conduct as a failure to act stems, in part, from a fear that “a contrary holding would effectively hold any sort of communication provider liable for any sort of wrongdoing merely for knowing that the wrongdoers were using its services and failing to stop them.” The Court warned that such a “conclusion would run roughshod over the typical limits on tort liability and take aiding and abetting far beyond its essential culpability moorings.”
But this fear is unfounded. Given the nature of aiding and abetting liability, any type of act can assist the principal wrongdoing. All businesses face the possibility of aiding and abetting liability if they knowingly provide substantial assistance to crimes or torts through their goods or services, and communication services are not unique in this respect. Even services provided to the public at large could constitute aiding and abetting. What prevents excessive liability is the requirement of knowledge and of a nexus between the assistance and the wrongdoing, not the characterization of the act. In the present case, the outcome would probably not have changed even if the Court had found that the social media companies provided active assistance to ISIS. It appears, at least from the Court’s analysis, that the companies were unaware of the Reina attack and that there was no sufficient nexus between the companies’ general assistance to ISIS and the specific act of terrorism.
Conclusion
The Court erred in Twitter, Inc. v. Taamneh by characterizing the alleged conduct as inaction rather than continuing acts. The social media companies did not assist ISIS by failing to remove content or by failing to stop services. They assisted ISIS by continuing to host content and by continuing to provide services.
In a concurrence, Justice Jackson stated that the opinion was limited to the specific allegations in the complaints and the application of §2333(d)(2) of the Antiterrorism Act. Hopefully, Justice Jackson’s concurrence proves persuasive. Otherwise, the opinion in Twitter, Inc. v. Taamneh sets a bad precedent for future cases involving automated technologies. It would allow companies to use algorithms and other automated technologies, like artificial intelligence, to carry out acts on their behalf, while affording those companies a degree of additional protection when the technologies cause or contribute to wrongdoing. If the companies’ involvement in the resulting wrongdoing is construed as a “failure to act,” it will be difficult to assign liability in the absence of any legal duty, even if the companies knowingly and substantially contribute to crimes or torts. Automation would become a shield from liability, requiring prosecutors and plaintiffs to demonstrate that the companies had a duty to stop their creations from carrying out or assisting harms.