Society has largely come to accept that internet activity and search results are heavily influenced by bots and algorithmic curation, which reduce organic activity and replace it with non-organic content focused on the “consumer experience”. Although there is no consensus on when the manipulation began, there was a time when discussing how bots were flooding social media and how search engine results were being manipulated was a taboo topic. Perhaps that is because the idea initially gained notoriety around the early 2010s on 4Chan, a website that has been considered a breeding ground for some of the worst content available outside of the dark web.
The “dead internet” theory was initially dismissed as a conspiracy theory in the early- to mid-2010s because of where it gained notoriety and the extremes to which users would stretch it. At the same time, university professors were teaching students that algorithms were in fact shaping the online experience, but the outside world was not paying attention to any of that. It was only after the 2016 US Presidential election that society realized the internet experience that most had been introduced to and grown up with had quietly passed away.
Today, it is almost impossible to find photos, videos, and other content that was retrievable with specific searches less than a decade ago. Those same searches now often return unrelated content, the beneficiary of clever optimization techniques. Had the “dead internet” theory originated anywhere other than 4Chan, then maybe it would not have taken two consecutive US Presidential Elections for the rest of the world to see what 4Chan users had been writing about. A trip down memory lane serves as a reminder that societies will continue to get things wrong, and of why it took us so long to come to terms with the passing of the internet.
4Chan is the Doorstep to the Dark Web
Until 2008, almost all early media reporting described 4Chan as a gathering point for individuals living on the fringes. During the 2008 US Presidential Election, the Republican vice-presidential nominee, Sarah Palin, had her email breached, and the hackers posted a screenshot on 4Chan. Then news broke that challenged everyone’s understanding of the internet: the FBI and Secret Service had questioned the son of a Democratic Tennessee State Representative in relation to the hack. Shortly afterward, a Fox-affiliated news channel would introduce the world to 4Chan and the hacker group “Anonymous”, calling them “hackers on steroids”.
As the internet evolved, 4Chan users began sharing ways to procure illicit goods and pirate content, everything from counterfeit money and narcotics to movies and software. Eventually, film production companies decided to go after some of the pirating services that 4Chan users were championing, hiring tech companies to launch cyberattacks against the likes of The Pirate Bay. In response, 4Chan users initiated “Operation Payback”, targeting the likes of the Motion Picture Association of America and the Recording Industry Association of America before going after anti-piracy firms.
One of the best ways to think about 4Chan is as the doorstep to the dark web, a place where fringe internet subcultures flourish. What makes 4Chan unique is that it was designed as an alternative to traditional bulletin-board websites by providing anonymity to posters. As a result, “doing it for the lulz” became the slogan most associated with the website: obtaining amusement at another person’s expense, with total anonymity. That is probably why nobody gave any credence to the “dead internet” theory.
Legitimizing the “Dead Internet” Theory
What shocked the masses after the US Presidential Election in 2016, and again in 2020, was the realization of how powerful social network platforms had become and the extent to which they were being exploited to disseminate misinformation and disinformation. Many of the most popular social network pages and profiles in the US related to faith, race, and activism were identified as the creations of social network bot farms located on the other side of the world. These bot farms were intent on socially engineering public opinion and creating political tensions around the world.
Within the US, some of the most popular Black American and Christian American pages on Facebook were being run by Eastern European bot farms. Pages like these were able to reach nearly half of all Americans by exploiting Facebook’s engagement algorithms. Once the bot farms had established a network of fake pages and profiles, they began to mass-share divisive messages, elevating other fake pages and profiles as well as amplifying real radical voices.
Perhaps the most shocking realization regarding the fake pages and profiles is that they eventually began to send and receive money “legitimately”, opening accounts with money-transfer services using stolen identities. One of those services was PayPal, used to send money elsewhere but also to receive it once the bot farms started to monetize more of their operations. Many of these profiles also successfully posed as genuine activists, donating to radical causes and socially divisive issues. The bot farms managed to leverage the fake pages and profiles to recruit around 100 US activists to organize events across the country, from fundraisers to protests.
Another way the fake pages and profiles were leveraged to disseminate misinformation and disinformation was by propping up non-existent digital media outlets like “BlackMattersUS”. This website described itself as a nonprofit news outlet that came into existence because of changes in society. The domain name was registered through a proxy in 2015, and the website branded itself as a digital media outlet that focused on Black matters and employed Black writers. Over $100,000 would be spent across different social network platforms to boost the visibility of “BlackMattersUS”, with similar amounts spent on ads across Google, YouTube, and Gmail. When news finally broke of the website’s shady background, many still struggled to understand how a website that was so popular and seemed so real could be so fake.
All the major social network platforms would eventually purge suspected fake pages and profiles, but the way the purges were executed impacted many real accounts. Complicating the process, the purges focused on wiping fake pages and profiles but also wiped real accounts that had cross-promoted and interacted with the fake ones. Many real accounts used for community outreach, important tools for racial justice organizers and advocacy workers at the community level, were purged alongside the fake ones.
After each election, as more information was shared with the public, the scope and prevalence of the bot farm operations proved to be something few could have imagined. For starters, social network platforms shared how they were blocking hundreds of thousands of login attempts per day and taking down around one million fake accounts each day. Eventually, a trove of social network platform information was disclosed, and many were left stunned at the scope of the activity.
On Facebook, bot activity peaked post-2016, reaching approximately 140 million accounts per month, with 75% of the users reached not following any of the pages or bots in question. On Google, around 1,100 videos were identified as disinformation or misinformation, with a “catch”. Google executives attempted to argue that the videos were viewable by everyone, and that although users could create videos intended for certain audiences, there was no way for anyone to target views based on race. That stretched the truth, disregarding the different strategies and techniques that made it possible to optimize content for organic, unpaid traffic, reach that could be amplified by paying for boosting services. That optimization process could be focused on topics, issues, and incidents relevant to the interests of specific communities, including Black Americans at large.
An analysis of 14 million tweets and 400,000 articles shared over the span of ten months (2016-2017) found that bots played a disproportionate role in spreading misinformation and disinformation. Accounts with higher followings were leveraged to legitimize the misinformation and disinformation, leading other users to believe, engage with, and reshare bot-posted content. Then, halfway into 2022, during his acquisition of Twitter, Elon Musk stated that, after analyzing data across the platform, he believed up to 20% of all accounts on Twitter were likely fake bot accounts. That was a startling claim considering all the time and resources that went into “cleaning up” Twitter in the years prior to Musk’s acquisition.
More of the Same
Shortly after the 2024 assassination attempt on former US President Donald Trump, X (formerly Twitter) exploded with tweets related to the shooting. A few weeks later, X’s “For you” section was still going off the rails, recommending rather extreme tweets from other users “for me”. The only problem was that I had never followed or interacted with any of these accounts. So, I started clicking on the recommended tweets, selecting “not interested in this post”, then either “show fewer posts from …” or “this post isn’t relevant”. I found myself flagging tweets, rotating between the two options, but the same accounts, or other accounts reposting the same tweets, kept popping up over and over again, as if the flagging option were broken.
Not to underestimate the magnitude of the assassination attempt, but how do such extreme tweets get recommended to someone who practically only follows universities, human rights organizations, policing and public safety organizations, international organizations, scientific journals, and Hollywood accounts? Maybe the issues at the heart of the bot farming of the mid-2010s and onward were overlooked, or maybe there is a financial disincentive to fix them.
Either way, what made matters worse was that a reader of The Voice Magazine who follows me on X notified me that my tweets, largely sharing links to our magazine’s articles, were being shadow banned (they sent over a screen recording). Other accounts I checked in with also confirmed the issue. But how could that be, given that I pay for X Premium and have verified my identity with X by submitting ID?
Canada’s Best Practices Against the Dead Internet
Looking back at how the evolution of the internet seemed to catch everyone by surprise, academic discussions of the “dead internet” theory never suggested that all internet activity was being manipulated. Instead, they explained how the internet was no longer as organic as it once was. Yet those academic discussions were drowned out by 4Chan posts that managed to transform a real issue into something perceived as a conspiracy theory. Perhaps it is a reminder that people often do not know what they do not know, and that the digital world is now the new battleground for hearts and minds.
What limited success there has been in the fight against the dissemination of misinformation and disinformation over social media is largely thanks to Canada’s leadership while at the helm of the G7. During Stage 1 of the Public Inquiry into Foreign Interference in Federal Electoral Processes and Democratic Institutions (the Foreign Interference Commission), the former Minister of Democratic Institutions testified that all Western democracies were caught off guard by the bot farming operations. The testimony also mentioned how Canada helped establish a playbook for how democracies could counter misinformation and disinformation attacks originating on social network platforms.
Whichever way we look at the “dead internet” theory, lax digital laws have allowed social network platforms to become the sole arbiters of digital activity across national domains, a situation that requires new legislation. This needs to happen before generative AI floods the internet with AI-generated content, drowning out what remains of human-created content.