Stages of (AI) grief
Last week, I found myself sending the raw test results from an MRI to GPT to "translate" for a layperson. Next thing I knew, I had a five-week PT plan, customized workouts to accommodate my injury, and a new mantra, "Your workout is restraint," chastising me for wanting to get back to normal too quickly.
How did I get here, when not long ago, I was deriding my sister for relying on GPT for therapy? I simply couldn't wrap my head around it, nor "give in to it" (denial).
As someone who has built a career around writing, I was inherently resistant to adopting AI. I was watching everyone with a keyboard suddenly produce polished prose. Some might call it democratizing, but I saw it as commoditizing a core skillset (anger).
I launched my own business and took a crash course in all models (because, as you know, I don't RTFM, I just dive in). It was only then that I started to see how incredibly powerful it could be. I stood up a website in hours, set up an LLC in minutes, synthesized months of scattered notes over tea, evaluated accounting software I'd never heard of, and got up to speed on an entire industry's supply chain in an afternoon.
As a solopreneur, AI has lowered the barrier to entry into this world (acceptance & gratitude).
It became part of my everyday. I had it on my desktop, much like any browser. Virtually anything could plug into this widget. I started to think in prompts. And with that, something else happened – I started to lose confidence in the skills I'd acquired over decades of practice.
Eventually, I began to think "AI can do it better" (reliance).
The convergence problem
In seeking more coherence, I had stumbled upon convergence. Instead of feeling omnipotent, I started to feel overwhelmed.
The endless cycle of "what now?" that LLMs return was robbing me of autonomous thought. The way it anticipated my next move before I could even gather my thoughts triggered an instant-gratification impulse in my brain, much like "one-click." And this is all by design, which was worrisome on a whole other level.
I just wanted it to stop. Stop putting thoughts in my mouth. Stop eroding my confidence. Stop feigning collegiality, asking "what would be helpful?" while serving up endless suggestions for more consumption.
Lost in Transl-AI-tion.
I started to think about what would be lost in translation. We've long been in the "look down" era, sacrificing eye contact for the simulated interaction of a smartphone screen. What happens to our brains when an LLM can not just complete our sentences but also script our human interactions, telling us how to conduct ourselves in the wild? We risk being reduced to a box of prompts, our communication skills undermined.
I'm not a Luddite. I'm not one to eschew progress because of fear. And AI will only continue to get more pervasive, so how do we let it complement instead of replace our thoughts? We can't uninvent this technology, and frankly, I don't want to.
But we can be more intentional about how we use it.
Setting boundaries
I'm still learning, but a few personal principles are emerging:
Create a manifesto for how you want to use AI. Just as you train it on your voice and preferences, train it on your boundaries. Tell it: 'I want you to help me research, not write for me. Challenge my thinking, don't just validate it. Show me what I'm missing, don't autocomplete my thoughts.'
Let AI be a tool, not a default setting. An enhancement, not a replacement. If we allow it to start thinking for us, to literally complete our sentences, then we risk atrophying the very capabilities that make us valuable: judgment, nuance, original thought, not thought that's crawled from the internet.
Drawing boundaries is difficult when the technology is designed to be boundaryless—when frictionless integration is the whole point. That's where having clear principles becomes essential—not just for ourselves, but for organizations navigating this shift. That's the harder work—and what I'm exploring next.
Keep flexing your communication muscles.
I see this in its purest form with my one-year-old. She babbles constantly, trying to form words so that she can express her basic needs, wants, and desires in a common language. She relies on me to interpret these noises, using her eyes, facial expressions, and hand gestures to emote, while I take a wild guess at one of the three things she probably wants (eat, play, love).
I see this in my five-year-old on another level. She is able to string together cohesive thoughts and sentences at the dinner table, which is mind-blowing to me, given that less than a year ago, if you asked her how school was, she could barely respond with a sentiment.
We work so hard to learn to communicate when we're young, only to surrender that skill to a chatbot.
We are inadvertently trading agency for efficiency. In our rush to do things faster and less expensively, we put ourselves at risk, along with future generations who will grow up as AI natives.
Let's not ever lose this magic.