On facing the AI revolution
The good, the bad, and how to prepare for what’s coming
In a former life, I was quite passionate about technology. Though never a technologist myself, I was a curious insider with a good bullshit detector who found myself hustling through the hallways of some of the most innovative companies of the past quarter century.
For a dozen years, I took pride in spotting trends that could shift paradigms. Whether helping scale Twitter’s nascent ad platform that pioneered “native” advertising, building cloud startups during the SaaS boom, or producing AI/VR events at VentureBeat, I was plugged in, respected, riding the wave. Well, “respected” might be generous, but people responded to my emails.
Beneath the gleaming surface of my tech career, my life was slowly imploding. While closing seven-figure deals on Market Street, I’d sneak a few blocks away to score all kinds of “performance-enhancing” drugs in the Tenderloin between meetings, desperately attempting to maintain a pristine professional persona by day while succumbing to a shadow world of addiction by night. This was more than a double life; two realities occupied the same body, growing more incompatible with each passing day, a madness building and building and building, until the center couldn’t hold and everything collapsed with spectacular finality.
When I “woke up,” I had to confront not only the demons and wreckage I’d created in my own life, but also the destruction a toxic work culture had burrowed into my psyche. I saw clearly that companies I once thought were forces for good—like Twitter, Facebook, Instagram, Slack (where I briefly worked too), etc.—had actually produced harm. Twitter was once the global town square; now it’s a shitshow. Facebook was supposed to bring people together; now it radicalizes. Instagram promised creative expression; now it manufactures inadequacy. Slack promised workplace efficiency; now the workday never ends.
For the last six years, I’ve used all this—my insider tech knowledge, my calamitous fall and scrambling resurrection—to raise awareness of technology’s risks—namely, its addictive nature, how it becomes a way of avoiding emotions, and the terrifying seduction of “ideological reality tunnels.” I’ve written on numerous occasions about my ethical and spiritual distrust of the leadership decisions at companies like OpenAI, and about AI’s general potential to unleash epistemic havoc. I still believe that. Things could go to hell.
Still, if you had asked me three years ago, when ChatGPT first launched, whether you should prepare for the “AI revolution,” I would have said it was mostly hype and industry froth, promises written in PowerPoint and blessed by venture capital, each one allegedly on the brink of changing everything. So, probably not.
Yet the universe has a way of humbling the certain.
After months of genuinely engaging with these systems—most recently testing ChatGPT o1 and Claude 3.7—I’ve experienced what you might call a conversion. The gap between speculative science fiction and my daily workflows has collapsed with unsettling speed. It feels, at least to me, as fundamental a shift as some of the evangelists promised, though perhaps for reasons they never anticipated.
I’m now convinced that everyone would benefit from grappling with what’s coming, especially those in easily replaceable jobs (which, unless you’re a hand model or a professional cuddler, probably includes you and me). I’ve spent months in deep reflection about what this means for my life and my son’s future. This deliberate pause is partly why I haven’t published here in so long.
I write from the perspective of a layman, someone who works as a coach and somatic therapist, and founder of organizations focused on integral men’s development and psychedelic-assisted addiction recovery. In other words, fields that, at first blush, seem far removed from AI.
There are far more qualified technical thinkers to read; that’s not what I’m trying to be. I’m also not speculating, in this piece, about AGI or ASI (artificial general or superhuman intelligence), which would obviously shift the dynamics of daily life as we know it—because even current LLM (large language model) development will radically alter how we work and live.
What I’m offering is the chance to review, reflect, ponder, and maybe prepare for the shifts coming—for all of us.
While these reflections emerge from my specific lines of work and my strategy for adapting at this inflection point—professionally, creatively, and spiritually—I hope they might illuminate something useful for your own.
Prioritizing in-person offerings
After spending significant time with Claude and ChatGPT, I believe AI therapy will be a very real thing, and it will be quite effective. As in, we’re about to witness the largest shift in mental healthcare since Freud abandoned hypnosis—and it’ll happen in a fraction of the time, with millions of people pouring their hearts out to algorithms and finding genuine relief.
That said, AI cannot replace deep, caring presence. AI cannot replace the resonance that occurs on a cellular and energetic level when two humans breathe the same air. Same room, same space, same vibrations, same nervous systems in silent dialogue. Virtual cannot replace that. Not to mention that what most people secretly need from therapy is never just insight—it is the loving acceptance a parent was supposed to provide but, despite their best efforts, could not. Claude gives no hugs, sheds no tears with you, feels no warmth when your shoulders finally drop from your ears.
There will be a growing need—or, better put, a greater realization of a devastatingly current need—for human presence, for tangible support, for touch. The sort of care that can only happen in real life. I have spent the last few months upgrading the furniture in my home office in the Oakland Hills to ramp up my in-person sessions. Clients who were previously Zoom-bound have unanimously reported more impactful outcomes, with an embodied connection that complements our virtual work and adds a dimension it never reached.
As AI becomes ever “better” at the cognitive aspects of healing, our unique human offering becomes clearer: being fully present with another’s pain without trying to optimize it away. In theory, AI’s efficiency should free up more time for these meaningful in-person connections—more time to just be with each other. Whether that vision holds true or becomes yet another case of technology promising leisure time it never delivers remains to be seen.
What’s certain is that while online courses and learning remain valuable, we’ve reached an unmistakable saturation point. We’re drowning in developmental models and frameworks while starving for the messy, sometimes awkward, always transformative experience of actually applying them with another human unafraid to call us on our shit. The pendulum is swinging back, and people are rediscovering their hunger for embodied, face-to-face connection—and for heat.
To this end, my in-person work will, hopefully, take up more and more of my average day, and I am increasing the frequency of in-person gatherings and my hosted retreats for 2025. I believe this balance of virtual accessibility and physical presence will become increasingly valuable as technology advances—not in spite of AI, but because of it.
How I’m embracing AI tools
I was initially hesitant to use AI in my coaching and somatic therapy practice. It felt like an affront, an offense only the hardcore “scale my business” no-depth style coaches were using. Part of my identity was wrapped up in being a purist—someone who would never dilute the sacred human-to-human container with technology. Now I’m convinced it will be a crucial element, and that the best coaches will leverage AI to augment their practice without surrendering their humanity.
Imagine portals where coaching or therapy session transcripts are loaded and both client and coach can assess what might improve. I’ve started to upload AI-generated transcripts from some of my specific Zoom coaching sessions, asking Claude, “As a Master Jungian and Depth Therapist, what could I have done better?” Or, “How would Joe Hudson, an elite performance coach, have asked more cutting questions?” I’ve been genuinely floored by the insights. Claude has pointed out that I should more directly address defensive identity structures, noted how my questions often contain embedded assumptions, and even counted my excessive filler words. These AI observations are oftentimes, I hate to admit, sharper than the paid therapeutic supervision I receive from mentors—though they serve a fundamentally different purpose than the wisdom that comes from decades of embodied practice.
I’m also embracing these tools to streamline my workflows and solve problems that would have consumed hours of my life pre-AI. When my accountant asked me for the cost basis at which I bought my Bitcoin—because I sold some this year (babies are expensive)—I had no idea, though I knew it was low. (God bless my friend Miles.) So, I uploaded seven years of transaction history to Claude and damn... I think my accountant will be out of a job soon. And he’s a good accountant.
Beyond work, I’m using AI for personal growth, mainly for dream analysis through a Jungian lens—with genuinely superb results. This is where I first glimpsed AI’s insane therapeutic potential: its ability to access and cross-reference virtually every documented symbol across cultural traditions and psychological frameworks. I wake up at 4 AM, record a sleep-slurred voice note about a dream, go back to sleep, and plug the transcript in later when I’ve had my tea. Claude unpacks layers of meaning I’d never access on my own, drawing connections through the collective unconscious that would take years of analysis to surface. Turns out, the bear who charged me in my dream really just wanted to be my friend.
I’ve also created a digital therapeutic companion using prompts that blend depth psychology, somatic frameworks, and spiritual traditions I’ve studied for years. This AI coach helps me navigate leadership challenges, untangle facilitation knots, and illuminate the growth edges I continually bump into. I’ve designed it to nudge me toward more mature, integral development—recognizing both our interconnection and the paradox that ideas can be precious and meaningless simultaneously—while being brutally honest about blindspots I conveniently miss. It’s essentially a programmable therapist without the subtle glance at the clock when our time is up.
At the same time, I’m rejecting AI tools when it comes to my art
Claude is a pretty profound writing editor if you use it correctly. Not as good as some of the incredible editors I’ve been fortunate to work with, but better than most. It becomes easy to rely on it for writing, and I’ve actually noticed my skills as a writer starting to atrophy accordingly. This coincides with raising a 9-month-old boy, but, to be honest, I miss the days when I didn’t have an AI to improve my craft.
I’d read a page from Shantaram each morning for creative inspiration. I’d read Junot Diaz, Dylan Thomas, and many of the stylists that MFA programs have you write essays on as a way to improve your prose. And it worked. I’d absorb Raymond Carver’s short stories for a month, and some of that brash, staccato, hinting style would find its way into my essays. With Claude and a baby, those days are over.
If you spend just a bit of time using these LLMs to write, it becomes shockingly obvious which writers are over-relying on AI for their essays. Among spiritually adjacent writers, the tell is especially noticeable: the flat syntax. Each sentence, each paragraph, the same length. The near-daily publishers with their “Finding Balance” and “The Path Forward” section headers are always the most culpable. Writers whose work I’ve long enjoyed announce a series on a psychospiritual subject, but halfway through the second essay, the droning monotone shoots my curiosity dead.
And that’s totally fine! Frankly, I’ve been surprised how much some of these essays are resonating with folks digitally. Others don’t seem to be bothered by it the way I am. But that’s nothing new! I’m salty about what I enjoy and am glad that it’s working for them. I do wonder how sustainable the method is—eventually, monotony grows tiresome to everyone. Even in pop psych.
I think the writers who really make it in the future will be full of personality and style—my friend is a big inspiration for me in this regard, writing with an I-don’t-give-a-fuck attitude, prose nothing like what an LLM could generate. But, and this is where I grow sad about the future of blogging, even that could change. It probably will.

For my own writing, I’m doing my best not to overuse Claude as my editor and to start working with human editors again. I am also writing more personal letters, never to be published, just for friends or just myself. I hope to write more creatively for paid subscribers, people who have indicated to me that they want to see more of me being me. I think in the age of AI, the unpolished, the esoteric, the raw, and even the messy will come out on top.
Turning off all unnecessary tech with “ping minimalism”
The way I see it, there are two paths forward here. We can lean harder into AI where it genuinely benefits our lives—embracing its efficiencies, its capabilities, its potential to free up our time—or we can step back and practice what Cal Newport famously calls “digital minimalism” with renewed commitment.
I’m increasingly convinced the answer is both/and: deliberately deepening with tech in areas that augment your humanity while ruthlessly cutting technological noise everywhere else.
For me, this means going deep with AI in specific domains that matter—coaching, therapy, creative work, and time-saving tasks—while simultaneously becoming almost monastic about removing technological intrusions that offer marginal value but maximum distraction.
Lately, I’ve come across an even more aggressive iteration of this approach: “ping minimalism.” Instead of just trimming down our apps, we actively revoke technology’s permissions to our attention—unsubscribing, muting, ghosting, and even using AI “shadow selves” as gatekeepers.
I used to weigh in on Trump, the elections, and the broader political landscape, believing I had something valuable to add to the cultural discourse. Now that I’m raising a child, the emotional charge around it all has dropped at least thirtyfold. Call it spiritual bypassing if you like, but I literally do not have time to keep up with the outrageously fast-developing twists of the daily news cycle. While I still value civic engagement, I’ve come to see how our digital ecologies trap us in outrage cycles that ultimately disempower rather than create change.
Through this shift in perspective and prioritization, I’ve realized there’s nothing more important I can do for the collective than raise my boy into a healthy, resilient, conscious man—hopefully one who cares for our planet, though that will be up to him. Stepping back from what everyone calls The World has let me live in the actual world right in front of me, where meaning and impact exist at a different scale and depth.
It’s not just a matter of political fatigue. We’re at a major turning point with legacy media—read The New York Times every day and it’s easy to think the apocalypse is imminent, while anyone who disagrees gets labeled an ignoramus with his head in the sand. AI is speeding everything up, but I’d rather meet that acceleration from a deeper center—one that holds both my personal responsibility and our shared humanity—than scramble in a perpetual state of panic.
That’s why I’m so cautious about contributing to the addictive attention economy: I’ve even stopped reading most Substacks, which is ironic given that I write one myself. If I’m asking for your time, I want it to matter—something that speaks to your soul, not just more noise competing for your already fractured focus.
This, I think, is the paradox of our moment: using sophisticated AI tools to deepen my work while simultaneously creating stronger boundaries against technological intrusion. My version of ping minimalism looks like this:
Deleting all social media, news, and reader apps from my phone
Muting and archiving all Signal and WhatsApp group chats
Unsubscribing from unnecessary email lists
Only checking social media during set downtimes after completing my most important work for the day
Turning off email notifications for all Substack newsletters to avoid that subtle sense of obligation (I read them in the app instead, which offers a better experience)
Designating my iPad as the “entertainment device” for browsing, reading Substack, and YouTube—keeping my work computer strictly for work, and my phone strictly for audio and essential needs
Keeping my phone in permanent Do Not Disturb mode with only family members as exceptions for notifications
Preparing to deactivate or privatize my Instagram, which I’ve kept out of habit rather than value
The less random tech I allow in, the less I want it—and the more I realize I never needed it quite so much to begin with.
The inner revolution to match the outer one
At a certain point with AI, when entire industries are reshaped and many of our roles become obsolete, we’ll be forced to look inward. When the busyness of work recedes—or is automated away—it becomes impossible to avoid life’s deeper questions:
What remains when my job title doesn’t? What is this awareness that witnesses my experience? Who am I, really?
That kind of reckoning can feel terrifying if you’re used to letting your job define you (I know this firsthand). But I think it also unlocks an extraordinary opening: the freedom and time to explore the nature of reality—and discover what we, as conscious beings, can co-create with it. Navigating this inner terrain isn’t something our education prepares us for. We’ll need spiritual resources. Not dogmatic ones. The real kind. And we need them soon, if not immediately.
In my own journey, the most important shift I’ve experienced came through various practices, but primarily meditation, which fundamentally changed my consciousness and how I relate to the world. My daily state of bodymind now compared to where I used to be, living that double life, is frankly unrecognizable. The torment of compulsive thinking, crushing self-doubt, and vicious self-hatred that once defined my inner world, plaguing my waking hours, has largely settled. I still have those thoughts, but I’m no longer at their mercy. It’s cliché to say, I know, but now they’re more like weather patterns passing through the sky of knowing. And the fact that I, once a walking mind-wreck who practically wrote the book on self-sabotage, could climb out of my head tells me anyone can.
It’s often considered taboo to speak publicly of so-called “spiritual attainments”—for good reason—because spiritual vulnerability can slip into pure douchebaggery with alarming speed. But as more and more people share about what’s possible in terms of human potential, I feel increasingly drawn to this inflection point where an inner revolution catches up to the outer one. The truth is, our consciousness holds far more capacity for growth than most of us realize.
Historically, I leaned heavily on ancient lineages—tantric yoga, Theravāda and Vajrayāna Buddhism, Advaita Vedanta—to glean an understanding of this potential. Lately, and much to my surprise, I’ve been drawn to something slightly different: the science of awakening. I’m interested in how we can explain changes in consciousness in a way that’s replicable, systematic, and not just trapped in esoteric language.
Now more than ever, there’s something vital in making this inner exploration secular and approachable, because the AI age will demand new forms of resilience, creativity, emotional fluidity, and developmental maturity from all of us. As AI systems grow increasingly sophisticated, they’ll challenge our uniqueness in domains we’ve long considered exclusively our own. And AI won’t just displace jobs. It will force us to redefine what being human actually means. Without a deeper foundation of identity and purpose, many will face something beyond economic uncertainty—a kind of existential vertigo that no universal basic income could possibly address. It’s as if we’re confronting two revolutions at once: the AI revolution that threatens our sense of identity, and the inner revolution that’s quietly redefining human potential.
We often say “the game is changing,” but in many ways, it already has; most of us have simply been too digitally overwhelmed to notice.
For the last decade, since I clawed my way out of addiction and began my earnest pursuit of spiritual growth, the practices I once explored behind closed doors or on retreats—psychedelics, shadow work, emotional fluidity, awakening states—have now entered Fortune 500 boardrooms and local community centers alike. These tools span all aisles of the political spectrum, from Silicon Valley techno-optimists to racial justice educators deconstructing systemic trauma, sometimes in ways that make me cringe.
Don’t mistake this for a short-lived craze. AI is rapidly transforming raw intelligence from scarce to abundant—from uniquely human to instantly accessible, like calling an Uber. In this new inverted reality, what will truly set people apart is the deeper self-awareness and growth these practices cultivate.
You cannot outsource your capacity for creative courage, your ability to sit with uncertainty, or the authentic resonance that draws others to your vision. Those who flourish won’t rely on algorithms; they’ll have done the harder work of understanding themselves. The very qualities once dismissed as “woo” are becoming the new edge in a world where thinking can be delegated but wisdom cannot. This movement is evolving so rapidly that I think it’s time we graduate beyond dismissive labels like “woo” altogether.
In my experience, the path to developing these essential human capacities lies in what many traditions call the awakening process. The good news: when approached practically and taught judiciously for our times, it offers a good share of what we’ll need, including a profound emotional resilience that isn’t shaken by external change, a creativity that flows from being rather than striving, and a mature perspective that can hold paradox and uncertainty without collapse. I’ve written before about how we’re living through a time when traditional frameworks for making sense of life are dissolving. The AI revolution will only accelerate that process, collapsing our cherished systems of meaning in months rather than decades.
When you’ve recognized your fundamental nature beyond your roles and accomplishments, the question of whether an AI can do your job becomes far less threatening to your core sense of self and worth.
Moreover, this kind of shift in consciousness helps us appreciate differences from a place of wholeness, rather than fear and division. As AI amplifies our tribal tendencies and accelerates the culture wars, the capacity to stand your ground while genuinely appreciating others becomes more essential than ever. Those who have touched their own completeness don’t need anyone else to change just to feel secure—and that may be the most important personal work we can do to guide us all through these collective upheavals.
On the topic of ethics in the AI age, let me just say they won’t emerge from more political grandstanding. They’ll arise from those who genuinely feel our interconnection—who know, at a gut level, that harming another is harming themselves. Our technological power demands a matching leap in wisdom: mature, inclusive yet discerning, able to hold complexity without flattening it into binaries, and rooted in direct, lived experience rather than abstract theory. Without this shift, we’ll keep wielding ever-greater capabilities with the same fractured awareness that got us here.
If you’ve followed my writing these past few years, you’ve probably noticed my consistent return to one central point: the real frontier of development isn’t the next app or platform—it’s us. So I’m doubling down on this focus, without apology or hesitation. This means championing what some call “awakening” or “enlightenment,” even if I prefer Dr. Jeffrey Martin’s more grounded term: Fundamental Wellbeing.
So here’s my question: Are you okay, deep down? If the answer is “not really” or “I’m not sure,” then now might be the perfect moment to explore something new. AI is about to transform the fabric of society, but there’s a parallel revolution already underway—one that points us back to who we really are and how we want to live. If we engage with it seriously and let “it” lead, this inner shift might prove more consequential than any technological leap, because it changes the foundation from which we operate. Everything else follows from there. Everything.
Scaling back while bracing for the weird
I’ve spent a good chunk of my writing life focused on cultural issues, sweeping social change, and big-picture thinking. I once hoped to be part of a revolution in consciousness that shifted collective thinking. I wrote essays trying to illuminate the metacrisis, the meaning crisis, the attention crisis, the climate crisis—crisis upon crisis, naming each horseman of the modern apocalypse while searching for daylight between them.
My ambitions have been right-sized.
What I’d like to do now is much, much, much humbler.
I’ve noticed myself drawing away from the broader collective and focusing on the people I can truly impact—my clients, my friends, my family, and my local community… Angela, the barista at our favorite coffee shop who adores our baby; my neighbor who takes the same 6 a.m. walk with me and fills me in on Oakland politics; the dude I just met at MeloMelo who’s been six months sober and is looking for some support; and our new parent friends we met at Arizmendi who’re every bit as sleep-deprived and clothing-stained as we are.
Maybe it’s just what happens when you have a kid. Maybe it’s what happens when you meditate enough to see through your own ego-driven savior complex (oy vey). Or maybe it’s just the result of social media, think pieces, and 24/7 news hijacking our attention for too long. In any case, I’ve realized the most profound transformations happen face-to-face, heart-to-heart, in the spaces where people can actually see and touch each other.
I’m also not naïve about what’s coming. My years studying existential risk meant I wasn’t shocked by COVID; I felt a strange readiness, even as the world trembled.
Terence McKenna once predicted a future where boundaries dissolve and reality takes on a near-psychedelic quality: “the Transcendental Object at the End of Time,” he called it. With AI evolving at breakneck speed, we might be on the cusp of that bizarre new frontier. Deepfakes, mass misinformation, AI agents that speak and act like humans—the world might start to feel like one big collective trip we never gave “consent” to join.
Should our timeline morph into McKenna’s psychedelic prophecy, I’ll be here with my partner Grace, our son, a handful of close friends, good music, the comfiest clothes I own, and a glorious sunset—because when shit gets weird, genuine human connection and our bond with the living world remain our greatest anchors to what truly matters.

Great essay Alex. As I navigate my relationship with AI it's helpful to hear how you're encountering this digital Other. How you're living. How you're doing family and community. I felt dizzy reading the part about McKenna's prediction—you know, the way an insight connects with some internal hunch still out of awareness and tilts you sideways. The nausea of feeling change.
I'm using AI (GPT, Claude, NotebookLM) for all kinds of things where I need an assistant. Incredible. My PT wasn't helping my knees, so I built a plan with AI that's already working. My hard line is anything that feels connected to my soul. I'm concerned that AI—and technology generally—will inhabit us if we cede those places that make us uniquely human. Heidegger wrote an important essay about relating to technology... the subject of an upcoming piece.
Much love from Colorado.
Alex, this is a beautiful essay on facing the quickly moving AI revolution. I have read through many of your articles, and they have deeply nourished me in this time.
I appreciate the clarity and precision you bring into your relationship with technology, and how you are looking squarely at this reality forming in the human collective.
I noticed that I personally would not want to have AI as a consistent base for dreamworld analysis. I would feel concerned that I would sidestep the sensory organic process of encountering the underworld and the other sentient beings who come through the night, on their own terms.
I have a bias here after years of working my own and others' dreams in aspen groves and deserts, along rivers, and deep in the woods with the Animas Valley Institute. From my experience, when dreams are worked with in a more organic way, the themes, beings, truths, and myths that come through are wildly unique to the individual and the beings in the dream.
I suppose I also feel concerned about the flattening of the dream here through intellect, rather than through a robust depth of sensory perception.
Overall however, I appreciate this essay, and your willingness to share your relationship to technology throughout your lifespan, along with ways that we as people may be attuned to utilizing it in balanced ways.