A. Overview

In recent years, there has been a steady increase in online disinformation and extremism, the effects of which have been felt offline. Through various online platforms, politicians and military leaders incite violence, far-right groups organise riots and violent attacks, forums inspire domestic terrorists to carry out mass shootings, and conspiracy theories spread like wildfire. From these examples, a common theme emerges – people use online platforms, primarily social media, to help and influence others to commit crimes offline.

How should criminal law address this issue? The approach explored in this blog is not a panacea, but in some cases of online extremism it may offer a route to accountability for those who assist crimes: secondary criminal liability. This liability applies to individuals who help or influence the criminal activities of others. It includes aiding and abetting, instigating, facilitating, soliciting, inducing, and more.1

Despite its suitability on paper, in practice secondary liability is not easily assigned to those who assist crimes through online platforms. Unique features of online communication complicate the assessment of liability. This blog provides examples of online platform users assisting offline crimes in extremist contexts, outlines the main elements of secondary criminal liability, and explores issues arising from the application of this liability to the examples.

B. People assist crimes through online platforms

Over the past several years, governments and civil society have voiced increasing concern over a rapid rise in online extremism, particularly far-right nationalism, racism, and conspiracy theories. The platforms used to carry out these activities are varied, from social media websites such as Facebook and Twitter, to messaging applications such as Telegram and WhatsApp, to live-streaming sites such as Twitch and DLive, to websites with more direct ties to far-right movements, such as 4chan, 8chan, Gab, and Parler.2 These platforms all share a central characteristic: they are online social and communication tools. They allow users to connect with others and share media either privately or publicly. In a never-ending game of cat and mouse, when one platform clamps down on extremist activity or is forcibly taken down, users migrate to a different platform or become savvier at hiding their activities.

Increasingly, online extremist activities can be linked to crimes that occur offline. The following two examples highlight possible occurrences of online assistance to offline crimes. These examples typify the conduct which this blog seeks to examine. In part D, a separate assessment will be carried out to determine whether the acts in these examples could trigger criminal liability.

i. 8chan and mass shootings

8chan, now known as 8kun, is an imageboard website and bastion of extremist content related to white supremacism, neo-Nazism, racism, antisemitism, hate crimes, and more. Users of 8chan share content anonymously, and site administrators provide minimal oversight. The worst of this content was shared on 8chan’s /pol/ board, where many users took pride in radicalising others to commit acts of violence in the real world.3

In 2019, 8chan was linked to three mass shootings: the Christchurch mosque shootings in New Zealand, the Poway synagogue shooting in the U.S., and the El Paso shooting also in the U.S. In each case, the perpetrators allegedly announced their attacks beforehand on 8chan’s /pol/ board, sharing hate-filled manifestos.

Before killing 51 people on 15 March, the Christchurch shooter posted to 8chan’s /pol/ board. He thanked other /pol/ users for their friendship through a ‘long ride’ and asked them to share his manifesto. He also provided a link to his live-stream of the attack on Facebook Live. After the attack, the /pol/ board widely celebrated the Christchurch shooter, and subsequent shooters referred to him as a source of inspiration.

Just over a month later, on 27 April, the Poway shooter allegedly posted his manifesto and live-stream link to /pol/ minutes before his attack. The live-stream link apparently did not work. The shooter thanked the /pol/ board for ‘everything’, noting that what he learnt from the forum was ‘priceless’ while ‘lurking for a year and a half’. The first user to comment underneath this post encouraged the shooter to ‘get the high score’. Following the attack, the FBI applied for a search warrant of 8chan’s premises for information related to the shooter’s post. The FBI noted that ‘posters [who commented in response to the original post] may be potential witnesses, co-conspirators, and/or individuals who are inspired by the subject posting.’

A few months later, on 3 August, the El Paso shooter allegedly posted his manifesto to /pol/ minutes before killing 23 people and injuring 23 others. One user replied to the shooter’s post before the shooting began with ‘every shabbat’, an apparent hope that similar attacks would occur weekly.

8chan /pol/ users celebrated these shootings and encouraged similar attacks. Users of the site’s newer iteration, 8kun, continue to share content aimed at inspiring and encouraging violence. The content is often coded and passed off as irony, making it difficult for even discerning observers to interpret. 8chan and 8kun users have also been known to share practical information on weapons and bombmaking. Commentators have described this online encouragement of live-streamed shootings and competition over ‘kill counts’ and ‘high scores’ as the ‘gamification of terror’.

ii. The U.S. Capitol attack

The recent attack on the U.S. Capitol was preceded and accompanied by extensive online activity. In the weeks leading up to the attack, supporters of Donald Trump and members of far-right groups used various online platforms to plan and organise the 6 January protests, with increasing calls for violence.

Users of Gab and Parler allegedly shared information on how to avoid the police and what tools should be used to pry open doors. Leaders of the neo-fascist group the Proud Boys used Parler to rally supporters and to request equipment and donations. On a far-right message board known as TheDonald.win, members openly discussed plans for violence and how to bypass Washington, D.C.’s strict gun laws. On 8kun, users shared conspiracy theories related to the elections and debated plans of attack, including which politicians to target and how to find them.

On 6 January, the day of the attack, rioters used online platforms to broadcast and publicise their activities in real time. Persons who were not physically at the Capitol could watch the attack unfold while simultaneously interacting with the mob through online platforms.

Tim Gionet, a white nationalist who took part in the attack that day, live-streamed his activities from inside the Capitol building on DLive. Gionet now faces charges of violent entry and disorderly conduct on Capitol grounds, and of knowingly entering or remaining in a restricted building or grounds without lawful authority. Gionet’s followers communicated with him through his live-stream while he carried out the acts for which he is now charged. They encouraged him, advised him on how to evade the police, and even donated money to him. Similar encouragement was communicated to other rioters through various online platforms.

It remains to be seen whether, and to what degree, authorities will extend their investigations and prosecutions beyond persons who were physically present at the Capitol on 6 January. Currently, criminal investigations and charges appear focused on those who physically took part in the attack.4 Like Gionet, many of those physically involved have since been charged with crimes including assault, disorderly conduct, conspiracy, aiding and abetting, and more.5 The charges in some indictments, such as here and here, are based, in part, on acts of planning and co-ordinating the Capitol attack through online platforms. But the activities of those who physically carried out the attack are only part of the picture of what happened on 6 January. Many others did not travel to the Capitol that day but still provided support for the attack through online platforms. Whether this support can trigger criminal liability will be explored in part D.

iii. Characteristics of online assistance

The above examples of 8chan-inspired mass shootings and the U.S. Capitol attack demonstrate some of the ways that online actors can assist perpetrators who commit crimes offline. Mostly, this assistance takes the form of encouragement or calls for others to commit crimes, but it can also amount to practical assistance. The assistance can come moments before or during the principal crime, or it can build up over the weeks and months prior to the crime’s commission. Individuals provide the assistance with varying degrees of knowledge regarding the details of the principal crime and the likelihood of its commission.

People can provide similar types of assistance through other means of communication, such as telephone calls, written documents, and verbal communication. However, several features of online platforms complicate applying criminal liability to cases of online assistance. First, online platforms enable people to spread information easily and efficiently over great distances. Second, online platforms are cheap and effective mass communication tools, accessible to anyone with an internet connection. Third, people can communicate and distribute information anonymously through online platforms. Finally, online platforms enable people with ordinarily fringe beliefs to connect and organise in one place, creating echo chambers that can foster criminality and nurture extremist viewpoints. As a result, online assistance is often provided anonymously, distributed to vast audiences, undirected towards specific individuals or acts, and made up of numerous individual contributions to a larger group effort.

C. A brief overview of secondary liability

Most domestic (e.g., U.K., U.S., the Netherlands, and France) and all international(ised) jurisdictions (e.g., ICC, ICTY, and ECCC) have laws that impose criminal liability on individuals, referred to as accomplices, who knowingly or intentionally assist crimes perpetrated by others. This secondary liability includes aiding and abetting, instigating, facilitating, soliciting, inducing, and more. While different jurisdictions use different terminology to describe their modes of liability, the spirit of secondary liability remains the same. It applies to those who help or influence others to commit a crime. This part of the blog briefly outlines the main elements of secondary criminal liability.6

Secondary liability requires establishing a conduct element (actus reus) and a mental element (mens rea). The question of what acts constitute the actus reus element of secondary liability is open-ended and must be assessed on a case-by-case basis. Linked to this is the question of whether an act of assistance must have a causal effect on the principal crime. Legal systems differ over this requirement. Some jurisdictions do not require any proof that an accomplice’s act positively affected a principal’s actions or the crime, so long as an accomplice provided realistic assistance or encouragement (e.g., U.K.).7 Other jurisdictions have a low-threshold causal requirement, whereby an act of assistance must have some effect on the principal offence (e.g., the Netherlands) or a minimal effect on at least part of the crime’s phases or elements (e.g., U.S.). In contrast, some international courts and tribunals require a ‘substantial contribution’ to the principal crime and are likely to rule out marginal contributions. Notably, none of the jurisdictions mentioned in this blog require sine qua non contributions, and the principal crime can already be underway when support is provided.

The mens rea element of secondary liability is twofold. The accomplice must: (i) intend their own underlying act of help or influence; and (ii) know or intend that their act will contribute to the commission of a crime. Jurisdictions differ over the mens rea required for this second part. Some require only foreseeability of a contribution to criminality (e.g., the Netherlands), others require knowledge (e.g., U.K., U.S., France, ICTY), and some require purposive intent (e.g., potentially the ICC through Article 25(3)(c)).

D. Applying secondary liability to online assistance

This part considers general issues that may arise from applying the elements of secondary liability to the examples outlined in part B. Given the limitations of a blog, no conclusive findings are offered as to whether liability applies in any particular case. It bears mentioning here that assigning secondary criminal liability to individuals is a serious response. Not all cases of online assistance to crimes will trigger liability, only those in which the necessary legal elements are proven to the required standard.

Regarding the actus reus element, the acts of assistance described in the examples in part B could fall within the scope of secondary liability. In some cases, online platform users practically assisted crimes by providing financial assistance and information useful to criminality. More frequently, users morally supported and encouraged criminality. Given the broad actus reus of secondary liability, comments, likes, shares, and other online interactions can, at least in theory, constitute moral support or encouragement for crimes. Additionally, users can instigate, solicit, or induce crimes online. Prominent conspiracy theorists, such as Q of QAnon, could potentially be liable for inducing others to commit crimes given their significant influence over their followers.

However, not every supportive comment or like will have a causal effect on a crime. The causal effect of assistance should be assessed on an individual, case-by-case basis. Minor interactions such as likes or upvotes are less likely to affect a crime positively than an effusively supportive comment. The proximity of a secondary act to the principal crime will also factor into this assessment. And usually, certain types of assistance, like encouragement or moral support, will only affect a crime if the principal is aware of it.

The support given to Tim Gionet during the U.S. Capitol attack offers an interesting case study in causality. Supporters sent Gionet money on DLive as he live-streamed his now-charged crimes. While this money might not have practically assisted Gionet’s alleged crimes-in-progress (DLive later froze the money), the cash flow might have encouraged and spurred him to continue his acts in the Capitol. Still, it may be difficult to determine how, if at all, each of Gionet’s supporters individually affected his crimes through their donations or encouraging comments, considering the stream peaked at 17,000 viewers.

Likewise, the 8chan example prompts some reflection on causality. Moments before and during each attack, users of /pol/ commented on the shooters’ posts in support of their crimes. It is unclear whether any shooter saw these comments. But if a shooter did see a supportive comment before or even during their attack, the encouragement gleaned from that comment might have affected their crimes. At the very least, this possibility existed. Put differently, a /pol/ user might have affected a crime if they posted encouragement for a particular shooting, in a manner accessible to the shooter, before or during that shooting.

It is more difficult to establish whether /pol/ users causally affected the shootings with their general calls for violence in the days, weeks, and months before each shooting. On the one hand, users encouraged crimes in a broad sense, often undirected towards specific events or individuals. The remoteness of these calls for violence from the shootings and perpetrators may negate any causal effect on the crimes. On the other hand, users often encouraged specific types of criminal acts, like shootings, and promoted violence against specific groups.

Notably, two of the shooters thanked 8chan’s /pol/ before their attacks, so we can assume they garnered some encouragement from the board. However, New Zealand’s Royal Commission of Inquiry into the Terrorist Attack on Christchurch Mosques noted the complexity of this issue. Commenting on the Christchurch shooter’s use of online platforms, including 8chan, the Commission of Inquiry found “[h]is exposure to such content may have contributed to his actions on 15 March 2019 - indeed, it is plausible to conclude that it did. We have, however, seen no evidence to suggest anything along the lines of personalised encouragement or the like.”

The nature of online assistance raises questions about causality. How should we quantify the impact of individual users’ comments or posts? Should we examine whether each comment directly affects the principal, or should we consider how each comment contributes to an overall environment conducive to encouraging criminal conduct? Any causal assessment is made especially difficult by the number of users encouraging violence and the coded language used in forums, like /pol/.

Next, the nature of online communication complicates the mens rea assessment for secondary liability. The ability to communicate remotely and anonymously can provide platform users with an excuse, or shield, of ignorance. They can act online without seeing, or fully understanding, the consequences of their actions. They may not know who they communicate with or how this communication will affect its recipients. Moreover, the potential for anonymity and misrepresentation can cast general doubt on the veracity of information provided online, even when persons ostensibly identify themselves and their intentions. Consequently, platform users can provide assistance with both physical and psychological distance from the principal crime. In some cases, this distance will make it difficult to establish the degree to which a platform user knew of or intended their contribution to a crime.

In the 8chan example, those users who commented encouragingly on a shooter’s post moments before or during each attack might have known that crimes were imminent or underway. They could read the shooter’s post and manifesto and, in the case of the Christchurch attack, view the crimes through a live-stream. Granted, they might claim that they believed the shooter’s post was fake at the time of their comments.

It is more difficult to assess the contribution of users in /pol/ who encouraged violence before knowing the details of a particular shooting. Those users could claim that they were unaware a particular crime would occur and, therefore, did not have the adequate mens rea to assist that crime. They could claim that they did not know the principal and were not privy to his criminal intent. Indeed, there are no reports that any /pol/ users knew of the shooters’ intentions prior to the manifestos appearing online. However, one could argue that users of 8chan’s /pol/ or similar forums should know of, or at least foresee, their contribution to crimes when they direct encouragement or support for criminal activities towards an online community with links to extremism.

Other times, the mens rea element will be more apparent. In the example of the U.S. Capitol attack, there is considerable evidence that members of extremist groups and others might have planned, assisted, and encouraged criminal activity through online platforms. If the attack was planned and discussed beforehand online, it is conceivable that individuals, including those who were not physically present at the Capitol, could have knowingly or intentionally contributed to criminality by providing encouragement and other forms of support online. Conversations and posts saved to online platforms may reveal that persons who did not physically travel to the Capitol on 6 January knowingly assisted or influenced crimes which took place that day.

On a final note, live-streamed crime, an emerging phenomenon in recent years, partly overcomes the barriers to liability posed by remoteness and anonymity. Live-stream viewers can watch crimes as they occur and, in some cases, communicate directly with the principal perpetrator. Those viewing a principal’s live-stream of a crime in progress will almost certainly know that their words of encouragement or advice can contribute to criminality.

E. Some final thoughts

Most people use online platforms to connect with others in positive ways. Unfortunately, some are using online platforms to stoke division and hatred. Increasingly, the effects of online extremism are spilling over into the ‘real world’ as physical violence. Before creating new criminal laws to address this harm, we should consider whether existing criminal provisions can adequately do so. If not, we may have an accountability gap.

While the novel and unique features of online communication present new challenges to tackling extremism, at least one of these challenges resembles something we have faced for years offline: individuals help and influence others to perpetrate crime.

Most domestic and international criminal legal systems have well-established principles of secondary criminal liability. Utilising this liability to tackle online extremism has two major benefits: it already exists, and it is well tested. As we search for ways to prevent and reduce online extremism, secondary liability provides an appropriate legal framework for holding individuals accountable for their online contributions to criminality. By holding such individuals criminally accountable, we might also deter others from engaging in similar activities.

However, the nature of communication through online platforms presents several challenges to the application of secondary liability:

  • Online platforms enable people to spread information easily and cheaply, across great distances. There are more opportunities to engage in criminality online, and the burdens normally associated with in-person engagement are greatly reduced. Users can interact easily with people and events they may not ordinarily encounter, including criminality. They can also act without seeing or fully comprehending the consequences of their actions. Consequently, platform users can provide criminal assistance with physical and psychological distance from the principal crime. 
  • Online platforms enable people to spread information to vast audiences. Online platforms provide cheap and effective mass communication tools to anyone with an internet connection. It is easier to assign liability to assistance provided through direct, interpersonal communication than to assistance provided through mass communication. When a platform user mass communicates advice or encouragement for crimes to a broad audience, and someone in that audience commits a crime, it can be difficult to establish the elements of liability, particularly the mens rea and the causal effect on the principal crime.
  • Online platforms enable people to act anonymously. Not all internet interactions are carried out anonymously, and many of those that are ostensibly ‘anonymous’ can be traced. Nonetheless, the internet affords people greater anonymity than most other forms of communication. Anonymity presents two distinct challenges to the application of secondary criminal liability. First, it creates a relationship between accomplices and principals that complicates applying the mens rea element of liability. Platform users can interact with one another in grey areas, without certainty of the impact or veracity of their interactions. Second, anonymity presents evidentiary barriers to prosecution.8
  • Online platforms allow people to act collectively. Individuals with fringe beliefs can find commonality with others through online platforms. They can gather in one place, form online networks, and foster environments which further promote their belief systems. Some of these networks and environments are conducive to extremism and criminality. Numerous platform users can contribute to overall environments that encourage or support crimes. This complicates individual assessments of the mens rea and causal elements of liability. When large groups collectively support crimes, it can be difficult to establish the causal effect of each individual’s contribution.
  • People act differently through online platforms than they would in-person. Psychologists have attributed this to the online disinhibition effect. In part, online disinhibition stems from the characteristics of online communication described above. People have lower inhibitions when acting online because of their ability to act anonymously and detached from the immediate consequences of their actions. Online communication involves fewer social costs and confrontations than those associated with in-person communication. It is beyond the scope of this blog to explore this phenomenon in depth, but it is worth highlighting that online disinhibition may influence the likelihood and ways that platform users assist crimes.

The above challenges are broadly applicable to cases of online assistance. They do not entirely frustrate the application of secondary criminal liability to situations involving online assistance, but they certainly complicate it. The question of whether online assistance triggers secondary liability should be assessed on a case-by-case basis. Given the seriousness of criminal liability, accomplices should only fall within the scope of secondary liability when all the necessary elements are present and proven to the required standard.

As life moves increasingly online, it will be interesting to see how courts treat certain online acts in comparison with their in-person counterparts. How different is a comment on a post from words spoken in person? How different is a like on a live-stream video from a thumbs-up given in person? Or how different is physical presence from virtual presence?

In conclusion, people use online platforms to assist offline crimes. This is a growing issue in situations involving online extremism. Secondary criminal liability is an appropriate tool for holding individuals accountable for assisting criminality. When it comes to addressing online criminal assistance, we may not need to reinvent the wheel. Perhaps, instead, we can apply well-established principles of criminal law to new situations. Still, it remains to be seen if wheels are enough for these new roads.

1 Other modes of liability, such as incitement or conspiracy, may address similar conduct, and some jurisdictions directly criminalise the creation and dissemination of certain content. Nevertheless, this blog discusses secondary liability only, given its prevalence in international and domestic jurisdictions and its suitability for addressing the conduct in question.

2 The term ‘online platform’ is used broadly in this blog. In the EU’s proposed Digital Services Act, an ‘online platform’ is defined, in part, as ‘a provider of a hosting service which, at the request of a recipient of the service, stores and disseminates to the public information, unless that activity is a minor and purely ancillary feature of another service […]’. This definition includes social media, imageboard websites, blogs, video-sharing services, and more. Messaging applications, such as WhatsApp and Telegram, do not fall within this definition given they provide private communication services. Nevertheless, for the purposes of this blog, references to ‘online platforms’ include such messaging applications given their shared characteristics as online communication tools and their prevalent use in the examined activities.

3 Since rebranding as 8kun, the website no longer has a /pol/ board. Nonetheless, other boards on 8kun and other online platforms continue to host similar content to /pol/.

4 The recently formed U.S. House Select Committee on the January 6 Attack will undertake a broader investigation. Among other matters, the Committee will investigate the involvement of members of the Trump administration and other Republican politicians in coordinating the attack. Additionally, the Committee will investigate “influencing factors that contributed to the domestic terrorist attack on the Capitol and how technology, including online platforms, […] may have factored into the motivation, organization, and execution of the domestic terrorist attack on the Capitol.” Of note, on 27 August 2021, the Committee demanded records related to the Capitol Hill attack from 15 social media companies.

5 At the time of writing this blog, hundreds of people were charged in connection with the attack on 6 January in what has become one of the largest criminal investigations in U.S. history. A small number of defendants pled guilty, and some received sentences. Further developments in these cases are expected or may have occurred by the time of this blog’s publication.

6 Part C provides a broad overview of secondary criminal liability that necessarily overlooks the nuances of particular laws and jurisdictions given the limitations of a blog. Nonetheless, it hopefully provides a generally useful legal framework to examine online acts of assistance in part D.

7 In cases of procurement, a causal link is necessary under U.K. law.

8 Police and prosecutors face many practical challenges when it comes to investigating and prosecuting online activities. Anonymity is one of these challenges. Another challenge is the fact that platform users can support crimes across borders.