Is AI Making Your Students Dumber? The Science Says It's Complicated

By Nathan Critchett · September 24, 2025

Maya got an A on her history essay about the French Revolution. She typed one prompt into ChatGPT, changed a few words, and went to bed. Her teacher didn't flag it. The essay was coherent, well-structured, factually accurate.

Maya also learned nothing.

This is not a story about cheating. Maya didn't think she was cheating. The problem is deeper than academic integrity. Maya outsourced the one process that would have made her smarter: the struggle to organize her own thoughts.

And the research says that's a much bigger deal than most parents and teachers realize.

The Brain Grows Through Friction. AI Removes It.

Here's the uncomfortable truth from cognitive science: learning requires confusion. Not suffering. Not misery. But the specific discomfort of encountering something your brain can't yet handle.

Jean Piaget called this "equilibration." When a student hits information that doesn't fit their mental model, their brain has to physically reorganize. New synaptic connections form. This process burns glucose. It generates what we experience as frustration, confusion, brain fog.

That feeling IS the learning. Not a side effect of it. The mechanism of it.

Neuroscientist Karl Friston formalized this with the Free Energy Principle: your brain is a prediction machine. When reality violates its predictions, neural entropy spikes. Heart rate rises. Prefrontal cortex demands more fuel. The brain is literally restructuring itself.

Now here's the catch. AI, by design, eliminates this friction.

When Maya asks ChatGPT for the answer, she never hits the prediction error. She never experiences the disequilibrium that forces her brain to restructure. She gets a smooth, coherent output that feels like understanding but is actually consumption. She consumed structure rather than constructing it.

Researchers call this cognitive offloading (outsourcing the thinking process to a device). We all do it in small ways (nobody memorizes phone numbers anymore). But when you offload the process of thinking itself, you don't just save effort. You prevent growth.

We explore the full science behind this in our whitepaper Cognitive Offloading: How AI Is Simultaneously Enhancing and Eroding Student Thinking.

The Data Is Clear, and Alarming

A Harvard Business School study (Dell'Acqua et al., 2023, Harvard Business School Working Paper No. 24-013) tracked 758 BCG consultants using AI. The ones who relied on AI for tasks at the edge of its capabilities (where the AI looked confident but was actually wrong) were 19 percentage points less accurate than those who worked without AI.

Nineteen points. These weren't students. These were elite consultants at one of the most selective firms on earth. They saw confident, polished AI output and stopped questioning the logic. They handed their judgment to the machine.

This is what researchers call the "Jagged Technological Frontier." Inside AI's comfort zone, it boosts performance. Just outside that zone, where the AI sounds right but isn't, it actively degrades human performance. Because people stop thinking critically about outputs that look correct.

If it's happening to BCG consultants, it's happening to your kids.

Two Ways to Use AI: GPS vs. Mirror

Here's where the conversation gets more interesting than "AI bad."

There are two fundamentally different ways a student can interact with AI. The difference between them is the difference between atrophy and growth.

AI as GPS: The student asks for the answer. The AI gives the fastest route to a finished product. The student consumes it. This is how Maya used ChatGPT. It's efficient. It's also the cognitive equivalent of taking an elevator instead of using the stairs, every single day, for every single flight.

AI as Mirror: The student brings their own thinking to the AI and asks it to push back. Challenge assumptions. Ask harder questions. Force the student to defend, revise, and sharpen their reasoning.

Here's what this looks like in practice.

GPS mode: "Explain the causes of the French Revolution in 500 words."

Mirror mode: "I think the French Revolution happened because of inequality. Don't tell me if I'm right. Challenge my assumptions. Ask me questions that force me to figure out what I'm missing."

Same technology. Radically different cognitive outcomes. For the full framework behind designing AI interactions that build thinkers rather than dependents, see our whitepaper The Centaur Classroom: Designing Human-AI Learning That Builds Thinkers, Not Dependents.

In GPS mode, the brain gets no error signal. No error signal means no restructuring. No restructuring means no growth. The student is a passenger.

In Mirror mode, the AI becomes a sparring partner. Every pushback creates a prediction error, exactly the kind of productive struggle that forces the brain to build new connections. The student is the pilot. The AI is the co-pilot who keeps asking, "Are you sure about that heading?"

The Question Parents Should Ask Tonight

You don't need to understand neuroscience or read the BCG study to act on this. You need one question.

When your kid finishes homework that involved AI, ask them:

"Did the AI give you the answer, or did you use it to fight for the answer?"

That's it. That's the dividing line.

If the AI gave them the answer, they consumed someone else's thinking. If they used the AI to challenge, question, and sharpen their own thinking, they just got a workout that most students aren't getting.

A few follow-ups that sharpen the conversation:

  • "What did you think before you asked the AI? What changed?"
  • "Where did the AI push back on your idea? Was it right?"
  • "What part of this is YOUR thinking and what part is the machine's?"

These aren't gotcha questions. They're calibration. They teach your kid to notice the difference between using a tool and being used by a tool.

What Teachers Can Do This Week

You don't need a new curriculum or a committee vote. You need a reframe.

Split your next assignment into two parts. Part one: no AI. Write the rough draft yourself. Think through it. Struggle. Get confused. Good. Part two: now use AI to challenge your draft. Ask it to find the weaknesses. Ask it to argue the other side. Then rewrite.

Grade the revision. Not the polish. The thinking.

When you do this, you're not banning AI or surrendering to it. You're teaching students to build their cognitive muscles first, then use the machine to stress-test those muscles. That's the sequence that the science supports.

So Is AI Making Students Dumber?

No. But the default way students use AI is making them weaker thinkers. Not because the technology is harmful. Because the path of least resistance (asking for answers instead of fighting for them) bypasses the exact process that builds critical thinking.

The brain learns through struggle. When AI eliminates the struggle, it eliminates the learning.

But when AI is designed to increase the struggle (to challenge, question, and push back), it becomes one of the most powerful cognitive training tools ever built. The same technology that can atrophy thinking can accelerate it. The variable isn't the tool. It's how we design the interaction.

The question facing every parent and teacher right now isn't whether to allow AI. It's whether you're going to let AI be the GPS that does the thinking, or the mirror that demands better thinking.

The science says that choice matters more than most people realize.
