Coming of rage

A new understanding of digital hatred.

William Moursund

May 3, 2024

In 1984, 14 years before Google, 20 years before Facebook, and 32 years before TikTok, Louis Beam knew that the internet would change the world. That year, Beam, armed with a Commodore 64 computer, proclaimed a new era for white nationalists: “Imagine, if you can, a single computer to which all leaders and strategists of the patriotic movement are connected. Imagine further that any patriot in the country is able to tap into this computer at will… We hereby announce the Aryan Liberty Net.” To Beam and his followers, the internet represented the future. In an America still locked in the Cold War, Beam spoke of a new era in which “lone wolves,” free from the traditional organizational structures of old groups like the Ku Klux Klan, would act whenever they felt the time was ripe, terrorizing the country with random acts of violence.

Those outside white-nationalist communities didn’t take Beam seriously. Few predicted the internet would totally transform our lives. Fewer thought domestic extremism would pose such a far-reaching threat. In 1985, the American Civil Liberties Union, or ACLU, reassured the public, writing that “there is little to suggest that this represents a great leap forward in the spread of anti-Semitic and racist propaganda.”

The following decades would prove the ACLU wrong. In 2017, the Charlottesville “Unite the Right” rally catapulted the topic of online extremism into the public eye. In the aftermath of the murder of counterprotester Heather Heyer, people who had never heard of Louis Beam and thought that organizations like the KKK would never make headlines again sought answers. A wave of op-eds and news articles attempted to trace the outlines of online radical communities and popularized a specific explanatory model of online extremism. According to this model, vulnerable young men could be clearly distinguished from the online extremists who radicalized them. 

Perhaps no article better exemplified this model of online radicalization than Joanna Schroeder’s 2019 op-ed “Racists Are Recruiting. Watch Your White Sons.” Schroeder, who typically writes about parenting and relationships, framed online radicalization as the desired result of tech-savvy extremists preying on unwitting young men who crave fellowship and are too naïve to understand what is really happening. “Participating in the alt-right community online ‘offers the seductive feeling of being part of a brotherhood, which in turn validates their manhood,’” Schroeder wrote, quoting Jackson Katz, an educator and author known for his work on masculinity and online media literacy.

It’s no wonder that Schroeder’s writing and the opinions of commentators like her dominated headlines in the years following Charlottesville. By portraying online radicalization as a problem that could be solved by more diligent parenting, columnists assured the public that the threat of domestic extremism could be curbed without the need for more dramatic solutions and the painful, national reflection that would accompany them. 

Of course, there’s some truth to this conventional model. By the time online extremism entered the public consciousness, Aryan Liberty Net and other early message boards designed to connect and inform already-radicalized individuals were a thing of the past. They had been replaced by new websites, such as 8chan and Stormfront. This new generation of extremist sites placed a greater emphasis on community. However, this narrative of radicalization fails to account for the complexities that often accompany the spread of extremist content online. In recent years, researchers have noticed a wave of extremists practicing what they call “salad bar ideology.” These individuals often lack formal connections to extremist groups. They write their own manifestos, choose the elements of radical ideology that most appeal to them, and often use extremist rhetoric to justify a preexisting fascination with violence. The existence of “salad bar” extremists, from the Christchurch shooter to the would-be Biden assassin Alexander Treisman, defies the dominant narrative of radicalization. These individuals were not seduced by the promise of brotherhood, and their desire to enact violence often predated their radicalization.

To understand the link between online content and extremism, we must move past the neat categories of ardent extremists and naïve targets. We must instead recognize that the very structure and culture of the internet allow extremist content and terrorist apologia to seep into the mainstream. Unlike Louis Beam and the op-ed writers and commentators, each of whom viewed the internet as a mere tool, we must acknowledge that the internet has distinct mores, and that the internet’s structure and mechanisms change the nature of the conversations that it hosts. 

From John Perry Barlow’s 1996 “A Declaration of the Independence of Cyberspace” to Mark Zuckerberg’s famous adage “move fast and break things,” early adopters of the internet shared the convictions that information ought to be free of regulation and that innovation should be pursued regardless of the trade-offs or consequences. This culture of permissive communication and free-flowing information led early adopters to tolerate envelope-pushing humor.

Nowhere was this more evident than at ROFLCon, a biennial conference hosted at MIT that served as the locus of internet culture in the late 2000s. ROFLCon attracted authorities from prestigious universities and advertising firms, as well as internet celebrities like Christopher “moot” Poole, the creator of the anonymous imageboard 4chan. Poole served as both an organizer and a panelist for the 2010 and 2012 ROFLCon events. His own offensive sense of humor was evident in his ROFLCon website bio, which contained a reference to a viral video of a white man attacking a black man. Even as Poole himself enjoyed mainstream acceptance amongst the early architects of the internet, the platform he created played a central role in propagating the edgy, ironic humor that became the mainstay of internet culture. Because threads on 4chan expire quickly and are not archived by the site, users would often upload their memes, artwork, and jokes to other sites to record them for posterity. By the mid-2010s, it had become clear that irony and a permissive attitude toward edgy content formed pillars of the internet’s newly cemented culture.

White nationalists were able to take advantage of internet culture, employing irony to normalize their ideas and attract new members. Several months after the Charlottesville rally, HuffPost obtained and published a style guide distributed to writers for The Daily Stormer, a white supremacist news website. In it, site editor Andrew Anglin encouraged writers to keep in mind that the “tone of the site should be light. Most people are not comfortable with material that comes across as vitriolic, raging, non-ironic hatred.” The uninitiated “should not be able to tell if we are joking or not.” Anglin continued, explaining that there “should also be a conscious awareness of mocking stereotypes of hateful racists. I usually think of this as self-deprecating humor—I am a racist making fun of stereotypes of racists because I don’t take myself super-seriously.” By disguising earnest racism with the half-joking edgy culture of the internet, The Daily Stormer could, in Anglin’s words, reach the “target audience [of] people who are just becoming aware of this type of thinking.” The success of The Daily Stormer and other publications in attracting new readers may mirror the dynamics of the conventional model of online radicalization, in that there remains a boundary between the indoctrinated and unindoctrinated. However, these publications could only enjoy their success within an online culture that already flirted with humorous racist language. Anglin could only boast that his “site continues to grow month by month” because his target audience already held tolerant attitudes toward the racial slurs and conspiratorial thinking that fill The Daily Stormer’s articles. Because the conventional model of radicalization does not account for a pre-existing culture of racist or “edgy” content, any solutions based on this model will fall short.

Policymakers wondering how to distinguish between serious and disingenuous radical content can find an interesting historical analogue: US domestic surveillance during World War II. In Warfare State, historian James Sparrow discusses the US government’s efforts to monitor the spread of rumors amongst the civilian population. During the war, the Office of War Information, or OWI, developed a sophisticated network of informants and pollsters to document the spread of rumors across the country. This network recorded rumors with incredible precision, tracking racist and anti-Semitic songs and conspiracy theories that could prove damaging to morale. However, when the government tried to contextualize or act on this information, it faced difficulties. As Sparrow notes, “the meaning of rumors proved difficult to pin down… They were expressive, ambiguous, usually told for curiosity’s sake rather than to establish clear beliefs or points of view… A willingness to pass along a story did not necessarily indicate absolute belief in its contents.” The government, in short, faced difficulty assigning meaning to the spread of rumors because plenty of factors besides earnest belief could motivate people to repeat them.

Similarly, there is reason to believe that many internet users who propagate racist or extremist content do so not out of earnest conviction. Instead, because the internet’s culture encourages irony and pushing the envelope, and because they often enjoy anonymity, users may simply relish the titillating feeling of spreading taboo information or offending others. This possibility complicates the conventional model’s central claim that the internet is divided between naïve innocents and hardened radicals, and it makes efforts to surveil and censor radical content online much more difficult. When spreading extremist content, earnestly or not, is commonplace, it is difficult for authorities to identify who poses a threat to public safety, just as the OWI had trouble determining whether Americans actually believed the rumors they spread. When radical content becomes an accepted part of internet discourse, it becomes trivial for potential “salad bar” extremists to express or experiment with radical views. They are no longer forced to seek out white-supremacist platforms. The taboo becomes normalized, and it becomes much easier to internalize elements of radical thought without straying from the bounds of acceptable discussion. By contributing to an online environment in which content like The Daily Stormer’s articles doesn’t seem immediately out of place to unassuming readers, those who peddle racist language for humorous or disingenuous reasons normalize the very views they claim to satirize.

Recently, a new sort of meme started appearing on my Instagram feed. The meme typically contained text along the lines of “me adding ‘in Minecraft’ to the end of my Google search so I don’t get put on a watchlist,” coupled with some unrelated image. Memes about government surveillance are a common enough trope—remember the endless stream of jokes that followed the formula, “My FBI agent watching while I…”? Still, the substance of this new meme mirrored the words of the Poway synagogue shooter, who reminded his followers that “it is so easy to log on to Minecraft and get away with burning a synagogue (or mosque) to the ground if you’re smart about it.” I have no idea if there was a direct line of inspiration from these memes back to the shooter’s post. Even so, the similarities between the two are uncanny. They point to an uncomfortable fact with which many are already familiar: people love saying awful things online. It’s hardly a novel observation, but it’s one that is largely absent from discussions in mainstream publications or among policymakers concerned with online extremism. I have neither the expertise nor the credibility to offer solutions to the threat posed by online radicalization. However, I do know this: until the public abandons its comfortable, moralized binaries and acknowledges the extent to which radical thought pervades all online discourse, we will be unable to understand or address the continuing threat of extremism.

William Moursund is a contributing editor at The Harper Review and a second-year undergraduate at the University of Chicago studying history and public policy.