
We've Trained 100+ Districts on AI. Here's What We Got Wrong at First.

By Nathan Critchett · December 3, 2025

We sat in the back of an auditorium in the Central Valley and watched a consultant charge $8,000 to read slides about ChatGPT to 200 teachers.

Slide three was a screenshot of a prompt. Slide seven was another screenshot of a prompt. By slide twelve, half the room was checking email. The other half was taking notes they would never look at again.

We know this because we used to do something uncomfortably similar.

The Expensive Mistake Everyone Is Making

There's a landmark study from TNTP called "The Mirage" (TNTP, 2015). The headline finding: the average school district spends $18,000 per teacher per year on professional development. Most teachers don't measurably improve.

Eighteen thousand dollars. Per teacher. Per year.

That number should make you angry. Not because teachers aren't trying. Because the system is spending real money on the wrong model. Most PD is designed to transfer information: here's a new tool, here's a new framework, here's a new mandate. Then everyone goes back to their classroom and teaches the way they always have.

This isn't a teacher problem. It's a design problem.

And when AI entered the picture, the PD industry did what it always does: bolted the new thing onto the old model. One-day workshops. Tool tours. Slide decks with screenshots. "Here's ChatGPT. Here's how to write a prompt. Go forth."

We did it too.

What We Got Wrong

Our earliest AI training sessions were heavy on tools and light on everything that actually matters.

We'd show teachers ten things AI could do. Generate a rubric. Write a parent email. Differentiate a lesson plan. Create a quiz. The demos were impressive. The teachers were interested. The post-session surveys were positive.

And then almost nothing changed.

Teachers went back to their classrooms and didn't use any of it. Not because they forgot. Not because they were resistant. Because knowing what a tool can do is not the same thing as knowing when, why, and whether to use it. And nobody had helped them think through that part.

We were teaching the cockpit. Not how to fly. As we explore in depth in our whitepaper The Complexity Gap: Why Your District's AI Strategy Is Solving the Wrong Problem, tool training fails because it targets the wrong layer of the problem entirely.

The Three Fears Nobody Talks About

When we stopped presenting and started listening, we heard three concerns from teachers. They came up in every district. Every grade level. Every subject.

Fear 1: "I don't know enough about AI." This is the surface fear. It sounds like a knowledge gap, and it's the easiest one to address. Show them the tools. Give them practice time. Check the box.

But this one was never the real problem.

Fear 2: "Students are cheating with AI." This one is louder. It dominates staff meetings and board presentations. It generates policy documents. It feels urgent.

It's still not the real problem.

Fear 3: "My expertise doesn't matter anymore." This is the one that changes everything. This is the fear that keeps experienced teachers awake at 2 AM. The quiet, corrosive worry that twenty years of content knowledge and pedagogical skill just got made irrelevant by a chatbot that can explain photosynthesis better than they can.

This is not a skills gap. This is an identity crisis.

And you cannot solve an identity crisis with a prompt engineering workshop.

The Gap Nobody Measures

Here's what the research on professional development tells us, and what we had to learn the hard way:

Knowledge gain does not equal attitude shift. Attitude shift does not equal practice change.

These are three different things. Most PD measures the first one. Post-session survey: "Did you learn something new?" Yes. Done. Success.

But learning that AI can generate a rubric is knowledge. Believing that AI-assisted rubric design could improve your workflow is an attitude. Actually redesigning how you build rubrics next Tuesday morning with 26 kids walking in. That's practice.

The distance between knowledge and practice is enormous. And almost nobody is designing PD to cross it.

We certainly weren't. Not at first.

What Changed Everything

The breakthrough wasn't a better slide deck. It wasn't a fancier demo. It was a complete inversion of the session structure.

We stopped starting with "Here's what AI can do."

We started, instead, with "Here's a real problem you face. Let's think about it differently."

A real problem. Not a hypothetical. Not "imagine a scenario where..." An actual challenge the teacher in the room was dealing with that week. A student who won't engage. A parent conference that went sideways. An assignment that's supposed to build critical thinking but actually just builds compliance.

Then we'd say: "Okay. Let's use AI as a thinking tool to approach this problem. Not to solve it. To think about it from angles you haven't tried."

The energy in the room changed immediately. Because now the AI wasn't the subject. The teacher's professional judgment was the subject. The AI was just the vehicle.

And here's the part that matters: when teachers used AI to think more deeply about their own practice, they stopped seeing it as a threat to their expertise. They started seeing it as an amplifier. Their twenty years of experience wasn't irrelevant. It was the thing that made the AI interaction productive. A first-year teacher and a veteran asking the same AI the same question get radically different value from the answer, because the veteran knows which parts to trust, which to push back on, and which to ignore.

That's when Fear 3 dissolves. Not through reassurance. Through experience.

Five Principles That Actually Work

After training 100+ districts, failing in the ways described above, and iterating based on what we observed in actual classrooms months later, here's what we know works.

1. Sustained beats single-shot. A one-day workshop is an event. It is not development. Practice changes when teachers have repeated cycles of trying something, reflecting on it, adjusting, and trying again. This takes weeks. Not hours.

2. Customized to workflow beats generic. A high school chemistry teacher and a third-grade reading specialist have almost nothing in common when it comes to AI integration. Stop putting them in the same session. Start with their actual daily workflow and build from there.

3. Hands-on beats lecture. If teachers spend more than 20% of a PD session listening to someone talk, you've already lost. They need to be doing the thing. In the room. With support. Making mistakes that someone can help them learn from in real time.

4. Community beats individual. The teachers who sustain practice change are the ones who have colleagues doing it too. Not a single "AI champion" evangelist that the district appointed. A group of peers who are all trying, all struggling, and all learning from each other. Build cohorts, not heroes.

5. Address identity, not just skill. This is the one everyone skips. Before you teach a single tool, you need to help teachers articulate what they believe their role is, and then show them that AI doesn't diminish that role. It demands more of it. The teacher who sees themselves as a "content deliverer" will always feel threatened by AI. The teacher who sees themselves as a "thinking architect" will see AI as the most powerful building material they've ever had.

The Audit You Should Do This Week

Pull up your district's AI professional development plan. If you don't have one, that's its own answer.

If you do, ask one question: What percentage of the plan is about tools, and what percentage is about thinking?

Tool-focused PD sounds like: "Here's how to use ChatGPT to write lesson plans."

Thinking-focused PD sounds like: "Here's how to evaluate whether an AI-generated lesson plan actually develops the cognitive skills your students need, and what to change when it doesn't."

If your plan is 80% tools and 20% thinking, flip it. The tools change every six months anyway. The thinking is what lasts.

Then ask your teachers one more question. Not in a survey. In a conversation.

"How has AI changed what you believe your role is?"

If they say "it hasn't," you haven't done PD. You've done a product demo.

If they say "I used to think my job was to deliver content, and now I think my job is to develop thinking," that's the shift. That's what $18,000 per teacher per year is supposed to buy.

What We Believe Now

We got it wrong at first because we made the same assumption everyone makes: that the hard part is the technology. It's not. The technology is the easy part. The hard part is helping experienced professionals redefine their own expertise in a way that feels expansive, not diminishing.

That's not a training problem. It's a human development problem. And it requires PD that's designed for humans, not for checkbox compliance. For the full framework behind what works (and why the horizontal-vertical distinction changes everything about PD design), see our whitepaper Horizontal vs. Vertical: Why AI Training for Educators Isn't Working.

We're still learning. But we're learning in the right direction.
