The Question AI Can't Answer
(And Why Your Career Depends On Knowing It)
You used to be dangerous in a room.
Not the loud kind of dangerous. The other dangerous. The kind of dangerous where you’d sit quietly through the first twenty minutes of a meeting, letting the obvious ideas burn off like morning fog, and then say the thing that made everyone’s face change. You had a feel for when data was lying. You could smell a bad decision before anyone finished proposing it.
That was you.
So what the hell happened?
Last Tuesday, you asked ChatGPT to tear apart a competitor’s strategy. It handed you four pages of structured brilliance. SWOT analysis. Market positioning gaps. Three strategic recommendations with implementation timelines. Beautiful stuff.
You forwarded it to your team in eleven minutes and thought to yourself, “Damn…I'm good.”
Somewhere around minute nine, a thought tried to surface in real time. Something about whether you’d actually thought about any of this, or just... read it. You pushed that thought down…way, way down. The analysis was solid. Better than solid. Probably better than what you’d have produced grinding away for three hours.
That’s the point, right? Work smarter?
But here’s the thing about that thought you pushed down: it’s going to keep coming back. It came back when your CFO challenged one of the recommendations, and you found yourself stalling, reaching for your phone like a gunfighter reaching for a pistol that isn’t there anymore. It came back when you realized you couldn’t remember the last insight you had that was actually yours.
That thought is your brain trying to tell you that something’s been stolen. And you’re the one who handed it over.
“I’m faster than ever, and I feel dumber.”
I keep hearing this exact sentence, said differently depending on the conversation, but it’s the same confession. Senior people. Sharp people. People who built careers on being the one who sees what others miss.
They’re experiencing the early symptoms of something I’m calling cognitive outsourcing, and it works exactly like physical atrophy. Stop using a muscle, the muscle disappears. Keep not using it, and you forget you ever had it and start saying things like “I feel dumber.”
Researchers recently tracked three groups writing analytical essays: one group solo, one with search engines, one with AI doing the heavy lifting.
The AI group finished fastest. Reported highest confidence. Understood the least. Remembered almost nothing. And the part that should make you put down your coffee? Many of them couldn’t identify which ideas in their own essays had come from them.
The scientists called it “cognitive debt.” I call it trading your judgment for convenience and not reading the fine print.
Everyone reading this is running an experiment on themselves right now. You don’t get to opt out. The only question is which result you’re engineering.
Door One: AI handles your information processing. Your thinking muscles go soft from disuse. You get faster at producing and worse at evaluating. The judgment that made you valuable becomes a fading memory, like a language you used to speak as a kid.
Door Two: AI handles your information processing. You take the cognitive energy you just freed up and pour it into the thinking AI can’t do. Your judgment gets sharper because you’re spending more time on hard problems, not less.
The difference is one question, and you’re either asking it or you’re not…
“What do I actually think about this?”
I’m not talking about what the output says. Not what the consensus is. Not what a smart person would think. What do you think? And if someone puts a gun to your head and tells you to defend that position with your own reasoning, can you?
That question is becoming the velvet rope between people who will command premium value and people who will spend the next few years discovering exactly how replaceable they’ve become and start claiming they were victims.
AI is scary good at the lower levels of thinking: Commodity Thinking. Remembering. Understanding. Applying known frameworks to familiar problems. It’s getting better at those things faster than any of us are comfortable with, even as we love playing with it.
But there’s a penthouse floor of Premium Thinking that AI can’t reach. It’s where:
The data points one direction, and your gut screams the opposite, and you have to sit & spin in that tension until you figure it out
A solution seems technically perfect and politically suicidal, and someone needs to navigate that mess without blowing everything up
The room shifted, no one’s acknowledged it, and you need to read the tea leaves and call the thing that everyone’s thinking but nobody’s saying
The real problem is hiding behind what's most obvious, and someone needs to reframe the whole damn conversation before everyone wastes another quarter solving the wrong thing
This is judgment, and judgment isn’t more information. Judgment is what you do with information when the stakes are real and the answer isn’t obvious. It’s a muscle, and muscles need resistance to grow.
The people pulling ahead right now aren’t the ones using AI hardest, they’re the ones using AI to clear the runway so they can bring sharper thinking to the moments that matter. They’re trading efficiency on commodity cognition for intensity on the premium stuff that actually pays.
If you saw a glimmer of yourself in the opening of this piece, relax. You’re not destined to be labeled a ‘Copy-Paster.’ You’ve drifted into a pattern that felt like productivity but was actually erosion. It happens to smart people precisely because they’re smart enough to recognize efficiency when they see it. The way back to your Premium Thinking brain is almost stupidly simple.
Before you send, present, or decide based on any AI output, answer three questions:
What’s my take?
Form a position before you read what the machine wrote. Even a half-assed position. The act of intentionally committing to something - anything - keeps the brain firing the good stuff.

How does this break?
AI gives you the best version of one frame. What’s the frame it didn’t consider? What makes this wrong? If you can’t find the weakness, you don’t understand the argument, and you’re a poser.

Would I bet my reputation on this reasoning?
Not the conclusion…the reasoning. If someone dismantled this in front of whoever you’re presenting to, could you rebuild it from scratch without sneaking a look at the original output?
Ninety seconds is all this costs. Ninety seconds to keep the thing that makes you valuable from quietly disappearing.
The World Economic Forum lists analytical thinking - premium thinking by another name - as the number-one skill employers are desperate for. Seventy percent of companies call it essential. Meanwhile, the gap between what companies need and what people can actually deliver is twenty-one points and widening.
That gap isn’t about access to information. Everyone has access to information. A kid with a phone has access to information.
The gap is about what happens after the AI stops talking, and the room goes quiet, and everyone’s looking at you for a decision.
I’ve spent decades training teams at companies whose names you’d recognize on the thinking that transfers across contexts. The need has never been more acute. The question used to be “How do we learn faster?” Now it’s “How do we preserve the capacity to think at all?”
When teams start running these three questions, something shifts faster than expected, and leaders notice it without being able to name it: “People are showing up different. They’re not waiting to be told. They’re bringing positions.”
That’s the brain coming back online.
That thing you pushed down last Tuesday - the flicker about whether you’d actually evaluated anything - that wasn’t paranoia. That was your cognitive immune system sending up a warning flare.
You used to be dangerous in a room.
You can be again.
But only if you start asking the question that AI can’t answer for you: What do I actually think?
Start asking.
Subscribe if this landed. I write about the neuroscience of thinking, influence, and communication - the human skills that AI makes more valuable, not less. One framework a week you can use before your next meeting.