In April of this year, the year of our lord two thousand and twenty-four, a YouTube content creator named Kat Blaque was presented with an opportunity by her management firm to appear on reality TV star Travis Kelce's podcast. As part of the preparation, Kat Blaque joined a technical prep call to make sure she could be interviewed for Taylor Swift's boyfriend Travis Kelce's podcast without technical problems or poor audio quality. During this call, Kat Blaque was asked to add a permission to her Facebook business page that would allow the podcast livestream to take place via Facebook. With the call concluded, Kat Blaque went back to work on her content creation and awaited a scheduling call to finalize the details of her appearance on barbecue sauce entrepreneur Travis Kelce's podcast.
Several days later, however, she noticed that her Facebook account was no longer under her control. She had been scammed. The scammers had tricked her management team into believing they were actually from mustache enthusiast Travis Kelce's podcast, and now they controlled her verified Facebook account and its sizable audience. They have been posting AI-generated spam ever since, and Kat Blaque has yet to regain control of her account due to inaction from Meta. As it turns out, it's extremely difficult to recover an account, even the fully verified account of a public figure like a content creator, because Meta lacks both the staff and any legal obligation to intervene. And so AI-generated spam is being served to two hundred thousand Facebook followers from a verified account, which means the Facebook algorithm amplifies those posts and suggests them to even more users. From there, other search and AI algorithms have picked up those posts because of their assumed importance and propagated the content to other platforms. Like a stone tossed into a calm lake, the ripples spread ever wider across the internet.
In 2015, a group of super rich tech bro weirdos founded OpenAI, a research and development collective intent on creating a set of open standards for practical artificial intelligence software. Brimming with altruism, the cabal of fleece vest enthusiasts set out to establish a framework for artificial intelligence that would guide humanity safely into a robot-enhanced future without triggering a Terminator-movie apocalypse. Nine years later - after in-fighting and breakups, CEO Sam Altman being fired by the corporate board and then rehired mere days later, Elon Musk quitting the group and then suing them for not wanting to cosplay as characters from the movie Hackers with him, billions upon billions of dollars of investment capital from nearly every notable tech company in the world, and a stock market boom sequel 25 years in the making - OpenAI has become a foundational piece of the modern internet. Its flagship product, ChatGPT, and the competing products it has inspired from Google and Meta are now wired into nearly everything we touch on the internet. Data centers are being completely redesigned to accommodate the massive processing needs of the large language models that underpin generative AI. AI image generators have exploded across the web and are readily accessible for free. You can have AI music and video generated in seconds with just a few typed prompts. The growth, adoption, and application of these new tools over the last two years has been incredibly fast - so fast that most people probably haven't even noticed how their online experience has changed.
Right now, if you open your Instagram app and start reading the comments on a given Reel or feed post, you will get a generated and curated subset of those comments at the top of your list. The person next to you on the train will likely get a completely different subset. If you open your YouTube app and peruse your list of recommended videos, you will see content selected for your specific interests and designed to get you to smash those like and subscribe buttons. If you use TikTok or Threads, the "for you" sorting will feed you content tuned for maximum engagement based on a server-side profile built by carefully crafted and refined algorithmic logic. This part isn't really new; it has been lurking in the bloodstream of the internet for many years. What is new is that much of that content isn't created by other human users. Exact numbers are next to impossible to find, but some industry estimates claim that 20% of all social media content is now purely non-human, AI-generated content. If you've been paying attention to these sorts of things in the wild, that figure seems like a very low estimate. The actual likelihood that any conversation you have online involves AI bots would shock most users. It's almost as if the internet is no longer alive with human content. You could almost say the internet… has died.
The dead internet theory began showing up on forums a few years ago to describe the displacement of genuine, first-person human content by content generated from AI and LLMs. The term puts a dramatic but accurate label on the digital window of our internet devices. The impetus for the theory is how different the internet "feels" to users old enough to remember the early days of social media. As we click, tap, and scroll through various apps and websites, there is a sense of being corralled and herded in different directions by unseen forces. Each video, image, and comment delivers a payload of intention that is nearly invisible but somehow perceptible. Browsing has become an exhausting mental task of evaluating and intuiting each detail while feeding an ever-growing mistrust of anything, everything, and everyone. And yet, even in that awareness, the engagement continues. The human need for expression, validation, and interaction is inexhaustible. It's a digital Weekend At Bernie's, and we just don't want the party to end, even if the host was murdered by the mob. We just pretend that the internet, much like Bernie Lomax, is still alive and well, even if it has to be propped up by two bumbling, horny accountants at all times. Yes, that's correct, I'm saying that Weekend At Bernie's is an AI allegory on the level of The Matrix and I, Robot.
To take this ridiculous but not inaccurate film interpretation to its logical conclusion, the dead internet, much like Bernie Lomax, will start to stink, and criminals will reanimate it with voodoo to claim whatever sunken treasure they can find in the Virgin Islands before being turned into goats when the voodoo backfires on them, while the rest of us party on a yacht as if none of the events that led to this were totally bonkers.
Postscript:
In the interest of full disclosure, I generate all of the silly pug images for these posts with an online image generator. The text, however, is fully human-generated. How can you trust that I am not a filthy, lying robot? You can't. That's a problem, and we need authenticating protocols for all content. That's the solution I should have pitched in the main body of the post, but I got tired of writing and ran out of steam. So, yeah. Enjoy the rest of your day.
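For the curious, here is a toy sketch of what a content-authentication scheme could look like: the author signs each post with a key only they hold, and readers reject anything whose signature doesn't check out (say, after an account hijacking). This is just an illustration using an HMAC with a made-up secret key; real proposals like C2PA use public-key signatures and signed provenance metadata rather than a shared secret.

```python
import hmac
import hashlib

# Hypothetical author key for illustration only; a real protocol
# would use a public/private key pair, not a shared secret.
SECRET_KEY = b"not-a-real-key"

def sign_post(body: str) -> str:
    """Produce a signature proving the body came from the key holder."""
    return hmac.new(SECRET_KEY, body.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_post(body: str, signature: str) -> bool:
    """Reject content whose signature doesn't match, e.g. from a hijacked account."""
    return hmac.compare_digest(sign_post(body), signature)

post = "The text, however, is fully human-generated."
sig = sign_post(post)
print(verify_post(post, sig))            # True: untampered original
print(verify_post(post + " (AI)", sig))  # False: body was modified
```

The point isn't the crypto details; it's that a platform could surface "signature valid" as a badge, so a verified account that suddenly starts posting unsigned AI spam would be visibly suspect.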