I can still feel the tiny jolt of relief that used to arrive when the red squiggle under a word in Microsoft Office disappeared.
It’s almost funny, the way a tool can make a judgment for me and my body relaxes as if something truly difficult has been averted. A word gets fixed. A sentence gets smoothed. The page looks clean again. And I keep moving.
Somewhere along the way, that relief turned into a habit, and the habit turned into a quiet trade. When Microsoft Word introduced spellcheck, I gradually got worse at spelling, to the point where today it is one of my weakest skills. I simply deemed the skill unnecessary because the machine was so good at it. That choice didn’t feel like a choice at the time. It felt like progress. It felt like letting a better system handle a smaller problem.
But I can see now how quickly “small” becomes “gone.”
I don’t mean gone in the dramatic sense—no sudden collapse, no crisis, no moment where the lights flicker and I realize I’ve lost something essential. It’s more like muscle. If it isn’t asked to do work, it doesn’t complain. It just softens. And the softness becomes the new normal.
That’s the place where Friedrich Nietzsche keeps showing up in my mind, even though he lived long before computers, long before anything that could plausibly be called modern AI. He was watching a different kind of softening. He could tell that the old sources of meaning and authority—religion, tradition—were losing their power in the modern world. “God is dead,” he proclaimed, meaning that the idea of a single, ordained truth or fate was dissolving. This wasn’t a celebration but a warning: without the guiding stars of old, humanity might drift into a kind of existential void, a crisis of purpose.
His answer wasn’t to rebuild the old scaffolding. It was to dare something harder. The creation of new values, by humans themselves. He imagined the Übermensch (often translated as “Overman” or “Superman”) as a future figure who would transcend the old human limitations and craft meaning afresh—a being who would say yes to life, embrace the truth of a world without predetermined purpose, and impose their own will upon existence to create something great. The Übermensch was not about mystical powers or genetic superiority; it was about self-mastery and creativity, a symbol of human potential to overcome nihilism by becoming self-determining.

And then, almost as a counterweight, he offered a figure that feels less like a villain and more like a risk. Der letzte Mensch: the Last Man. The Last Man is the opposite of the Übermensch—a soul who seeks comfort over challenge, who prefers ease and safety over striving and risk. “We have invented happiness,” says the Last Man, blinking, as he lives a life of mediocre contentment, devoid of ambition or deeper meaning. Nietzsche feared that in the absence of higher aspirations, society would default to the Last Man—a collective fizzle into comfortable irrelevance.
Think of the Übermensch and der letzte Mensch as a lens for viewing the relationship between AI and humans.
Now consider our algorithmic age through this lens. The world of 2026 would have fascinated and repelled Nietzsche. On one hand, he would see evidence of the will to power—his idea of life’s fundamental drive to grow, to assert, to overcome—reflected in our technologies. Our AIs ceaselessly improve themselves through relentless training, iteration, and optimization. A chess-playing program like AlphaZero teaches itself from scratch in a matter of hours to defeat grandmasters, showing an almost merciless drive to victory. Machine learning models grow from millions to billions to trillions of parameters, each generation more powerful than the last. There’s a brute, inhuman version of “self-overcoming” in that process: the machine doesn’t rest or get complacent; it just keeps getting better, as if fulfilling some bizarre technological imitation of Nietzschean evolution.
Yet AI has no will of its own, no desires or values—it’s executing code. We are the ones who set its goals, at least for now. And Nietzsche might ask: by handing so many decisions to these unthinking algorithms, are we inadvertently becoming his dreaded Last Men?
That sentence hits me differently now because I can map it onto my own little surrender with spelling. I didn’t lose spelling because I became lazy in some moral sense. I lost it because the environment changed. The incentives changed. The machine did the work, and I stopped rehearsing the skill that once made the work possible. And the scariest part was how reasonable it felt.
Research keeps circling this same pattern of cognitive offloading—outsourcing a mental process to a tool—and the results are rarely simple. Some of it looks like augmentation. Some of it looks like atrophy. The difference often seems to depend on whether the tool is used as a scaffold that still asks something of me, or as a solution engine that lets me stay passive.
The same mental atrophy appears in navigation. A study on habitual GPS use reports that people with greater lifetime GPS experience showed worse spatial memory during self-guided navigation, and in a small follow-up, greater GPS use over time was associated with a steeper decline in hippocampal-dependent spatial memory. I can feel the truth of that even without a lab. The moment a route becomes step-by-step instructions, the world turns into a sequence of commands. Turn left. Turn right. Continue. The city stops being a map in my mind and becomes a corridor I move through. [bic.mni.mcgill.ca]

Reading is where I start to feel genuinely uneasy, because reading has always been one of the last places that still demanded patience from me. It’s slow. It resists shortcuts. It makes me sit with difficulty long enough for something to change inside my own thinking.
And yet, even here, the tools are starting to insert themselves as intermediaries. Education research on how students use tools like ChatGPT keeps returning to this fork: quick answers versus deeper engagement, convenience versus learning. A report described many students using generative AI for fast solutions with minimal effort unless guided toward more learning-oriented use. That distinction matters to me because it names something I can feel in my own hands: the difference between using a tool to extend my reach and using it to avoid the reach entirely. [today.usc.edu]
This is where the question I actually care about begins to sharpen.
It isn’t simply whether AI is “good” or “bad.” It’s whether the most capable systems become the Übermensch in practice—an “artificial superintelligence” leaping beyond human limitations—while the average human pattern drifts toward der letzte Mensch, blinking into comfort. It’s a controversial analogy (and Nietzsche himself probably would have shuddered at the thought of a machine Overman), but it speaks to this tantalizing possibility: that AI might become more as we become less.
I can’t ignore how easily “I need help” turns into “I should be replaced.”
A world optimized for comfort and convenience can be a double-edged sword. When Spotify’s algorithm feeds me music that perfectly fits my existing taste, I’m delighted—but I also stop seeking out new music that might challenge or surprise me. When Netflix auto-plays the next episode, I effortlessly slip into another hour of the same show rather than discovering something outside my comfort zone. When social media feeds me news it thinks I’ll agree with, it keeps me engaged—quietly reinforcing my existing worldview and insulating me from anything that might provoke deeper reflection or change. It is as if the algorithm is saying, “Don’t worry about striving or deciding—I’ll keep you fed and entertained. I’ll keep you happy.” Nietzsche’s Last Man would approve; the Übermensch would not.
So which is it going to be? Does AI herald a descent into comfortable complacency and eventual obsolescence, or might it be a ladder to new forms of greatness?
I don’t think this ends with a simple verdict about whether AI is making humanity smarter or dumber. It feels more like a fork that appears over and over again, disguised as convenience. A tool can be a ladder or a couch. Sometimes it’s both in the same day.
And I can’t shake the feeling that the real test isn’t whether an “artificial superintelligence” can become something like an Übermensch.
It’s whether I can notice, in time, when comfort starts blinking back at me as a philosophy.