I’ve caught myself doing it without thinking: reaching for the phone and letting it tell me what the next move should be.
Not in a dramatic, sci‑fi way. In the most ordinary way possible. A turn here. A tap there. A quiet trust in the little arrow on a screen.
One morning it hit me as I was standing in the kitchen, coffee still sputtering through the machine, eyes half-open. The phone had buzzed before I’d even sat up. 7:00 AM. A route suggestion was already waiting, optimized for an 8:30 meeting, complete with traffic I hadn’t seen yet. I hadn’t asked. It had simply decided this was what I needed.
Then the music started—one of those playlists that feels almost rude in how well it matches your mood. As if a machine could read the temperature of my mind. My calendar was ready. My news feed had already been arranged into a particular order. Even the tiny choices—what I might eat, what I might buy—were being gently pre-shaped by recommendations that arrived wearing the disguise of convenience.
It’s hard to complain when it works.
It’s also hard to ignore the other feeling that rides along with it.
A thin, creeping doubt.
How much of what I’m doing is actually mine?
That question has an older echo than any smartphone notification. I keep seeing an image from Greek myth: a single thread glinting in darkness, pulled taut across a loom by three ancient sisters, robed in shadow. The Fates. In the story, they aren’t just watching life unfold; they are weaving it. Every human life is already in the fabric. Triumph. Tragedy. The rise and fall of kings. All of it laid down in advance, pattern by pattern.
No mortal can see their own fate on the thread.
No god can alter it.
Even Zeus—king of the heavens, the one you’d think could override anything—has to bow to what the Three Sisters have spun. The myth doesn’t leave much room for bargaining. In that vision, free will is an illusion. The plot was written before anyone drew breath.
That’s the ancient version of the fear: that we’re living inside a script.

The modern version comes in a pocket-sized rectangle and feels friendly.
Some days it even feels like help.
But the similarity is unsettling. The Fates once held the thread. Now the thread looks like code. Fate used to belong to gods and demons; now it gets described—sometimes casually, sometimes with a little dread—as the domain of algorithms.
I don’t think “algorithm” needs to be mystical. At its core it’s just a method: a set of steps that takes inputs and produces outputs. A procedure. A recipe. The part that changes everything is what today’s algorithms are fed, and what they’re designed to optimize.
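To make that concrete, here’s a toy recipe in Python. None of it is what my phone actually runs; the function name, the numbers, and the ten-minute buffer are all invented for illustration. But it shows what “a set of steps that takes inputs and produces outputs” looks like when you write it down.

```python
def suggest_departure(meeting_min, travel_min, buffer_min=10):
    """A toy 'algorithm': inputs in, one step of arithmetic, output out."""
    # Step 1: work backwards from the meeting time.
    leave_by = meeting_min - travel_min - buffer_min
    # Step 2: hand back the answer. That is the whole recipe.
    return leave_by

# An 8:30 meeting is minute 510 of the day; assume a 35-minute drive:
print(suggest_departure(510, 35))  # 465, i.e. leave by 7:45
```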
A navigation app doesn’t just calculate a route in a vacuum; it draws on sprawling streams of data—traffic patterns, historical congestion, live reports, maybe even the behavior of other drivers right now. So when it nudges me onto a particular street, I turn. Often without knowing why it prefers that route. I trust it has information I don’t.
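Strip away the data streams and the nudge itself is, at its core, something like a shortest-path search over a weighted graph. Here’s a minimal sketch using Dijkstra’s algorithm, with invented streets and travel times standing in for the live feeds a real app would draw on:

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra's shortest path: a bare-bones stand-in for what a
    navigation app does once live traffic is folded into edge weights."""
    # Each queue entry: (minutes so far, current node, path taken).
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, travel_time in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(
                    queue, (minutes + travel_time, neighbor, path + [neighbor])
                )
    return None  # no route found

# Invented street grid; weights are current travel times in minutes.
# A real system would refresh these constantly from traffic feeds.
streets = {
    "home":    [("main_st", 12), ("side_st", 7)],
    "main_st": [("office", 15)],
    "side_st": [("office", 9)],   # lighter traffic right now
}

print(fastest_route(streets, "home", "office"))
# -> (16, ['home', 'side_st', 'office'])
```

Change a single weight, say an accident on side_st, and the “right” answer flips. That’s the whole mechanism behind the nudge: fresher numbers than mine, nothing more occult.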
That trust becomes a habit.
And then it spreads.
Social media algorithms decide which posts I see. Market algorithms influence which stocks get traded and when. Scoring models increasingly shape who gets credit, and on what terms. Some organizations use models to filter job applicants before a human ever sees a name. These aren’t just calculators crunching numbers in the background; they’re shaping the flow of information and opportunity itself.
That’s where the agency question stops being abstract. It’s no longer just about whether my playlist understands my taste. It’s about how the world is being arranged around all of us—quietly, continuously—by systems that learn from patterns at a scale and speed no human can match.
And it keeps accelerating.

In just the past few years the leap has been dizzying. These systems are beginning to write code. They generate art. They draft legal documents. They hold conversations that feel eerily lifelike—so lifelike that the boundary between “tool” and “something else” starts to blur in the mind, even when I know intellectually how the trick works.
A lot of what used to sound like science fiction is now just… Tuesday.
Machines holding conversations. Autonomous cars navigating busy streets. Software that can predict illnesses. Tools that try to parse human emotions. It’s all real now, though still raw and imperfect, still capable of getting things wrong in ways that can be funny or terrifying depending on the stakes.
The speed is what rattles me. Not because I think novelty is automatically dangerous, but because speed makes reflection harder. It shrinks the gap between invention and deployment, between “this seems possible” and “this is shaping the lives of millions.”
That’s probably why, alongside excitement, there’s been a growing undertow of unease. In 2023, a group of researchers and well-known tech figures published an open letter calling for a six-month pause on training the most advanced AI systems. A pause. Just the word sounds almost unrealistic in a world that rewards whoever moves first. But the letter wasn’t written like marketing—it was a flare, a warning that we might be sprinting into unknown dangers.
Around the same time, Dr. Geoffrey Hinton—one of the pioneers of modern AI—resigned from Google so he could speak more openly about his fears that AI could spiral out of control. He tried to describe how strange this moment is with a line that’s both funny and chilling: “It’s as if aliens have landed and people haven’t realized because they speak very good English.”
That’s the thing. Fluency is disarming. When a system speaks in clean sentences, when it mirrors the cadence of a human mind, it can feel like a mind is looking back at you. Even when you know, technically, that what you’re seeing is pattern-matching at scale—statistics stacked into a towering engine—it still lands emotionally as something else.
Then Yoshua Bengio—another towering name in AI research—put a number on the anxiety in a way I can’t shake. He has estimated the chance that AI leads to “catastrophic” outcomes for humanity in the coming decades at something like 1 in 5. One in five is not a casual number when the word “catastrophic” is sitting next to it. It’s the kind of probability that changes how you hold the whole topic in your hands.
When the people who built the foundations of this field sound that alarm, the unease stops feeling like paranoia. It starts feeling like an honest reaction to an honest uncertainty.
And out of that uncertainty, Silicon Valley coined a grim little shorthand: p(doom)—the estimated probability that AI will cause human extinction or an irretrievable catastrophe. It apparently started as a dark joke, the way humans sometimes cope when they’re staring at something too big. But by the mid‑2020s it had become a serious topic of debate. Surveys found AI experts, on average, putting the chance of human extinction from unchecked AI at several percent.
Several percent is small until you remember what it’s a percentage of.
And some people pushed that number into truly surreal territory. One leading AI theorist publicly stated a personal p(doom) above 90%, basically certain that if the current trajectory continues, we’re writing our own ending.

Then, almost violently, the perspective flips. Others—like a chief scientist at one of the biggest tech companies—have scoffed at the apocalypse narrative, putting the probability near zero and arguing that the doomsayers are “less realistic than an asteroid wiping us out.”
I don’t know what to do with a divide like that except sit with it.
Because it’s not a disagreement between outsiders and insiders. It’s a split among people who understand the machinery deeply. Some are profoundly afraid. Some see the fear as overblown, even irresponsible. And that tension bleeds into everything: policy conversations, corporate races, personal choices, the subtle ways people talk about the future over dinner.
So I end up back at the same fork in the road, over and over.
Extraordinary promise on one side.
Apocalyptic dread on the other.
If I let my mind run ahead, I can imagine a future where intelligent machines don’t just assist decisions—they replace them. Not necessarily through a dramatic takeover, but through a thousand tiny substitutions: suggestions that become defaults, defaults that become norms, norms that become constraints. A world where the space for human choice shrinks until it feels like being a character in a story that’s already been written by code.
And then I can imagine a different future, almost in the same breath: where these systems remain tools—powerful, yes, and risky, yes, but still tools. Tools that can free time, extend creativity, widen what a person can build or understand or express. Tools that could, paradoxically, make it easier to be more human—more curious, more inventive—if we keep our hands on the steering wheel.
The hard part is that both futures feel plausible.
That’s why the question that keeps returning for me isn’t “is AI good or bad,” because that’s a lazy frame. The real question feels older and deeper: in a world increasingly run by artificial intelligence, is human agency doomed? Or is this the catalyst for an evolution of what agency even means?
In other words: is this the end of free will, or the beginning of something new?
I can’t answer that. I don’t think anyone can. If I’m going to take the fear seriously, I have to take the details seriously too—philosophy, technology, history, and the hard facts of how these systems actually work. Otherwise it’s too easy to slip into mythology again, swapping robed sisters at a loom for invisible models in a server rack and calling it insight.
So I’m forcing myself to start at the root. The old argument that never really went away, now lit up by screens and machine logic: determinism versus free will.
That debate has been sitting under everything the whole time.
I just couldn’t feel it until my phone started buzzing.