Quiet Quitting and AGI
By Jatin Khosla · February 5, 2026
tl;dr
We're quiet quitting on our own thinking. AI is just the final step. The real AGI isn't machines reaching human-level intelligence - it's humanity quietly stepping down.
The Core Idea
Quiet quitting - doing the minimum, withholding discretionary effort - became a cultural moment a few years ago. But there's a parallel happening that no one's naming. Every time we offload our thinking to AI, we're making the same kind of choice. The quiet quitter says "I won't give extra effort to my employer." The AI offloader says "I won't give extra effort to my own thinking." The only difference is who we think we're shortchanging.
But here's the twist: quiet quitters think they're withholding from the company, when really they're shortchanging themselves. Skills stagnate. The habit of excellence erodes. The company moves on; they're the ones who spent years not growing. And AI offloading? Same destination, different path. We tell ourselves it's just efficiency, but we're letting our own capabilities decay.
I'm sure many of us have seen someone prompt ChatGPT for five minutes to write a two-sentence email. The recipient did the same. Two humans reduced to middlemen between AIs.
A high-agency person wouldn't quiet quit. They see themselves as the author of their outcomes - if they're unhappy, they do something about it. They don't shrink to fit a situation. And they understand something crucial about tools: they should fill skill gaps, not replace existing skills. A calculator for complex math we can't do - that's filling a gap. AI writing an email we're perfectly capable of writing - that's atrophy by choice. High-agency people use AI to extend their reach, to access capabilities they don't have. Low-agency people use AI to avoid thinking.
One is leverage. The other is abdication.
Everyone worries about superintelligence, about machines taking over. But what if the real danger is quieter? Not AI becoming too powerful - us becoming too weak. Not machines outthinking us - us forgetting how to think. The dystopia isn't Terminator. It's a slow, comfortable decline. A billion small choices - each one reasonable in isolation - compounding into something we never chose. The outputs will look fine. The emails get sent. The code ships. Everything appears to work. But underneath, human capability hollows out like a tree rotting from within.
Maybe we've been looking for AGI in the wrong place. We scan the horizon for artificial general intelligence - the moment machines reach human-level thinking. But what if AGI is what's happening to us? Not the machine rising up, but humanity quietly stepping down. The singularity isn't machines waking up. It's us going to sleep.
How We Got Here
We built this world.
The insatiable hunger for growth, for conversion, for engagement created an entire economy that celebrates removing friction. "Frictionless onboarding." "Zero learning curve." "Don't make the user think." We reward product teams for making things effortless. GPS means we don't navigate. Google means we don't remember. Autocomplete means we don't spell. Each step made sense. Each step made things "better." But struggle is how capability is built. We've systematically engineered it out of daily life.
And then there's the narrative around Gen-Z - that they don't put in the hard yards, don't want to go through the grind. But isn't that an outcome of the world they were born into? They didn't choose to grow up with GPS, Google, one-click everything. They were born into a world where struggle had already been engineered out. The muscle of effort was never developed because we never asked them to use it. It's not a character flaw. It's an environment we created.
The same founders who brag about "0.5 second onboarding" turn around and complain that young employees "don't want to figure things out." But we trained them not to. Every app, every service, every interaction taught them: if it's hard, something is broken. Someone will fix it. Just wait for the easier version. We built the world, then blamed them for living in it.
The Trust Gap
And it's not just individuals opting out. There's a trust gap forming between companies and employees. Companies push employees to build AI solutions, but the outcomes don't always translate into real skill growth or job security - sometimes they even reduce the need for the very roles doing the building. Build the tool that replaces you. Employees aren't stupid. They notice.
So they hesitate to fully commit. They disengage. They "vibe code" - going through the motions without meaningful innovation. Not because they're lazy. Because the incentive structure is broken. Why pour yourself into something that might make you redundant?
And entry-level graduates? They don't even get the chance. No opportunities to learn, to struggle, to build capability. Then they get tagged as "not willing to put in the work." But they were never given the work to put in.
It's a mutual quiet quitting spiral. Companies quietly quitting on employees. Employees quietly quitting on companies. Neither side fully investing because neither side trusts the other to reciprocate.
And then we look around and think - the machines are taking over. This is what dystopia looks like. Institutions are collapsing. But maybe it's not the machines. Maybe it's us, quietly stepping back from each other, one broken incentive at a time.
And now consider what we're pouring into AI: billions of dollars, entire power grids, massive data centers, lakes of water for cooling, the best engineering talent on the planet. But the AI most people use day to day amounts to pattern recognition over internet-scale text. And we're outsourcing our thinking to it. We're burning through the planet's resources so we can avoid the one thing that costs nothing: using our own minds.
We've been quiet quitting on ourselves for decades. Companies on employees. Employees on companies. All of us on our own thinking. AI didn't cause this. It just made it easier.
Thanks to Constance Tan K, Shashank Pareta, and Vishal Badrinarayanan for reading the draft and helping make this essay better.