AI, or, Shifty Epistemics and Shaky Ontologies
Fictional girlfriends and the suss dudes who made them possible
It is an irony too fitting that the first thing we did with AI image-generation technology capable of replicating the Great Works of the Ages was to create sexy self-portraits. This was particularly prominent in the New Age space, where just about every person who maintains an online presence posted pictures of themselves looking like hot elves, superheroes, flawless models, erotic anime goddesses—you get the picture. The vibe was of Narcissus staring at his reflection in the pond, lost in the illusory grandeur of material aesthetics.
Similarly, as far as I can tell, the people most excited about LLMs (Large Language Models, e.g., ChatGPT) are the ones most concerned with wealth generation. This is not to say the technology isn’t groundbreaking; LLMs are undeniably powerful tools. I use GPT to clarify different philosophical theories, and I have even toyed with it for prose revision. It’s impressive. (Though, for now, it’s painfully obvious which writers rely on it too heavily.)
After a few months of reflection on the surge of AI technology (and its coverage!), the biggest threat I see here is on epistemic grounds—how we know what we know, and how we determine whether our beliefs are reasonable and reliable. To put my thesis in plain language: an AI (approaching AGI, aka “full AI” and sentience) in the context of an addictive digital economy—amid exponentially eroding institutional trust—is likely to make a large minority of people go insane.
And when I say insane, I’m not talking about “freaking out” or even the fine line between insanity and spiritual awakening. I mean psychosis, self-harm, violence, confusion—all the ingredients of chaos within the consciousness of the collective and, if not monitored and curbed, the stuff of dystopian destruction. The Post-Truth Epistemic Breakdown is already very much upon us. Fox News, CNN, social media, bots, and, yes, Donald Trump, were more than sufficient to start that game and take us through the first quarter. AI, with genuine bad actors and gung-ho business dupes, I fear, will wind down the clock. DeepFakes will reach new levels of seeming authenticity. Videos of terrorist attacks on American cities—or Arab mosques—that never happened will be interpreted as real-world catalysts for violence by hoodwinked factions. And the possibility for manipulation by State agents is profound.
It starts with not knowing what (or how) to believe—and shifts into not knowing what’s real. And I mean this in a literally ontological fashion: citizens will, in their bones, disagree on whether there is (or is not) a War or a Right or a puddle or a miracle. The implications are unfathomably staggering. From epistemology to ontology, from How to What … is the road down which all of us lose the Why.
In America, half the country already believes the prior election was stolen (and some of them stormed the Capitol), while the other half too often derisively believes such citizens to be “deplorable” and otherwise racist ignoramuses. As such, AI is likely to be weaponized to fan the kindling on these extreme ends of all ideologies—and extremism is intrinsically unstable and destructive.
Longtime readers of mine will know that I believe the breakdown in sensemaking and cultural bifurcation from Covid was merely a dress rehearsal for existential threats with far more consequential risk factors. Do I think AGI is that very risk, the thing that will destroy us in a decade? No, I don’t. Is it, generally speaking, in a peak hype cycle in our cultural milieu? Yes, for sure. But I do think that unregulated AI will make it more likely for us to destroy each other, which has always been the predominant threat when God-like technology is paired with a tribal consciousness. Unfortunately, worldwide, we are still stuck at this stage of human development, despite all our progress.
Moreover, I don’t find the men (yes, it’s mostly dudes) behind these AI companies to be trustworthy. Sure, there are some exceptions here—think dharmic-driven LLM engineers who practice mettā, some of them at OpenAI, the company that built ChatGPT. But I can also safely say that many of these other dudes have truly scary worldviews: cyber-libertarians, worshippers of capitalistic individualism and technological progress without the guidance of Soul. I’ve critiqued Sam Altman, the CEO of OpenAI, before in one of my more popular early essays following a viral tweet in which I dunked on him (alas, I was spending way more time on Twitter then, in the parlance of my youth).
In the essay, I attempted to expose the falsehood and dangers of promoting a highly reductionist, rationalist worldview that dismisses humans as a bunch of random neurons strung together (so, we might as well merge with the machines sooner rather than later, a belief Altman has publicly stated). I’ll repeat what I wrote then:
“A living, breathing, conscious human being capable of abstraction and emotional kindness is an emergent property. Its totality cannot be reduced to parts [in the way an LLM can]. To do so denies the wild mystery of emergent phenomena.
Why is it that our universe seems to possess an evolutionary impulse that creates more and more elegantly ordered complexity out of nothing? The universe literally behaves as if it knows a more strategic sequencing of parts will lead to new and advantageous creation. The “You” we are talking about here is one of those creations.”
Even scarier: many of these AI engineers have done the so-called “work” (I know, eye roll)—taking psychedelics, doing therapy, shadow work exercises, etc.—and they are still operating under these highly mechanistic, reductionist conclusions devoid of Spirit. So, please forgive me if I don’t currently have much hope for an eleventh-hour change of mind that will help steer this technology in a more humane direction. Humaneness feels a lot to ask of those who build billionaire bunkers in New Zealand for the perceived inevitable doom of humankind.
In equally scary, but at least far more entertaining news, I saw an advertisement the other day for a fully customizable online AI girlfriend—indeed, I’ve been promised she’ll be the woman of my dreams.
If you want to understand more about why these men are suss, and the risk of global weirding that will come from the AI boom, please read this phenomenal essay from none other than the godfather of psychedelic weirdness himself. I also recommend reading everything written on AI by (another Erik :).
Spotted your piece from FDB’s August list, really enjoyed it. An odd number of parallels between what we both submitted. Zeitgeist.
https://open.substack.com/pub/corsonfinnerty/p/the-age-of-disbelief
The question I’m grappling with now is what to do, how to stay sane, grounded, connected. Any insights?
The preeminent worry I have about A.I. is not that we are in the midst of creating it, but rather the purpose we are producing it for—which, as you laid out, seems predominantly to be the manifestation of free and rightless labor: the Ultimate Capitalist fantasy, where those at the top of the economic food chain no longer have to worry about the burden of those pesky and expensive human rights.
And nothing breeds prejudice quite as potently as enslavement.