
Horizontal vs. Vertical: Why AI Training for Educators Isn't Working (And What to Do Instead)

By Nathan Critchett · November 12, 2025

A Whitepaper by Edapt


Last month's professional development was someone reading a slide deck about ChatGPT (a scene composited from PD sessions we've observed across California districts). Thirty educators sat in rows. A consultant showed them how to write prompts. There was a Q&A where three teachers asked the same question: "But will this replace us?" The consultant said no. Everyone went back to their classrooms and did exactly what they were doing before.

That professional development session cost the district approximately $8,000 and produced no measurable change in classroom practice.

We know this because we've been in those rooms, more than a hundred of them, across California school systems of every size. And the pattern is the same everywhere: well-intentioned districts invest in AI training that teaches educators about AI without changing how they think about their work, their students, or their own professional identity in an AI world.

The issue is not that the training lacks quality. The issue is that the training is horizontal: it adds new knowledge on top of the same cognitive operating system. What educators actually need is vertical: an upgrade to the operating system itself.

This is the difference between learning about a gym and actually getting stronger.


The Professional Development Crisis

The Data on What Doesn't Work

The research on professional development effectiveness has been damning for decades:

  • A widely cited finding from TNTP's "The Mirage" study (TNTP, 2015) found that despite an average investment of $18,000 per teacher per year in PD, most teachers did not measurably improve their practice. Districts spend billions on training that feels productive but isn't.

  • Single-session workshops, the dominant format for AI PD, have the lowest transfer rate to classroom practice. Teachers report feeling "inspired" immediately after the session, but observational data shows minimal change in their actual instructional decisions.

  • The primary reason: one-shot PD teaches content (what AI is, what it does, which tools exist) but does not create the conditions for practice change. Practice change requires sustained engagement, feedback loops, and, crucially, a shift in how the educator understands their own role.

The AI-Specific Failure Mode

AI training for educators suffers from a unique additional problem: it teaches the technology as a tool to be adopted rather than a shift to be navigated.

When you teach a teacher a new curriculum framework, you're adding to their toolkit. When AI enters the classroom, it changes the nature of the work. The teacher's role is no longer primarily "deliver content and assess recall." Content is free. Recall is automated. The teacher's essential value shifts to designing cognitive experiences, evaluating thinking (not just answers), and building the student's capacity for complexity.

This is not a horizontal shift (learn a new tool). It is a vertical shift (rethink what teaching means). And no amount of prompt engineering tutorials will produce it.

What Teachers Are Actually Feeling

In our work with districts, we hear the same concerns from educators over and over:

  • "I don't know enough about AI to feel confident." (Horizontal anxiety: the feeling of not knowing enough about the tool.)
  • "I'm worried students are using it to cheat." (Surface-level concern masking a deeper one.)
  • "I feel like my subject expertise doesn't matter anymore." (Vertical anxiety: the feeling that the role itself has shifted.)

The third concern is the real one, and it is the one horizontal AI training fails to address. A prompt engineering workshop says nothing about the identity shift educators are actually experiencing.


Current Approaches and Limitations

The "Tool Tour" Workshop

The most common format: a 2-4 hour session introducing educators to AI tools. "Here's ChatGPT. Here's Gemini. Here's Copilot. Here are some prompts." Educators leave with a list of tools and no framework for deciding when, why, or how to use them in ways that enhance rather than undermine learning.

This is like giving a pilot a tour of the cockpit without flight training. They can identify the instruments. They cannot fly the plane.

The "AI Policy" Approach

Many districts have responded to AI by drafting policies: acceptable use guidelines, academic integrity frameworks, AI disclosure requirements. This is necessary governance work, but it is management, not development. Writing rules about AI does not prepare educators to teach in an AI world. It tells them what they can't do. It doesn't build what they can.

The "Enthusiast Champions" Model

Some districts identify a few tech-savvy teachers and deploy them as AI mentors. This creates islands of competence in an ocean of uncertainty. The enthusiasts adopt AI quickly because they were already predisposed to it. The skeptics remain skeptical because the champions can't address their real concern: not "How does this work?" but "What does this mean for my role?"

The enthusiast model also creates a false narrative: that AI integration is an individual adoption problem. It is a systemic transformation problem that requires institutional support.

What All Three Miss

Every one of these approaches treats AI professional development as a knowledge transfer problem: educators lack information about AI, so we give them information.

But the barrier to effective AI integration is not information. It is cognitive complexity. The educator needs to think at a high enough level to:

  1. Evaluate when AI helps versus harms student learning
  2. Design assignments that use AI to increase cognitive demand
  3. Assess thinking processes, not just outputs
  4. Navigate ambiguity when there's no established best practice
  5. Hold the tension between "AI makes things easier" and "easy isn't always better for learning"

These are all Order 12+ cognitive tasks (metasystematic: comparing and coordinating between systems). You cannot teach them with a slide deck. They develop through practice, reflection, and the sustained experience of grappling with genuinely difficult questions.


Horizontal vs. Vertical Development for Educators

Horizontal Development: Adding Apps to the Same OS

Horizontal development adds new knowledge, skills, and tools to an educator's existing mental framework. It expands what they know without changing how complexly they think.

Examples of horizontal AI PD:

  • Learning what ChatGPT, Gemini, and Copilot do
  • Practicing prompt engineering techniques
  • Understanding AI hallucinations and limitations
  • Reviewing AI detection tools
  • Drafting classroom AI use policies

These are not worthless. They provide necessary baseline literacy. But they are insufficient because they leave the educator's cognitive operating system unchanged. An educator who completes horizontal AI PD knows more about AI but has not changed how they design learning experiences, assess student thinking, or understand their own professional identity.

Vertical Development: Upgrading the Operating System

Vertical development changes the structure of how an educator thinks. It moves them from one level of cognitive complexity to a higher one, from being a consumer of established practice to an architect of new practice.

Examples of vertical AI PD:

  • Grappling with the genuine tension between AI efficiency and learning depth
  • Designing an assignment where AI is available but the grading rewards reasoning, not output
  • Evaluating two competing perspectives on AI in education and constructing a synthesis
  • Reflecting on how their own teaching identity has been disrupted, and what it's becoming
  • Practicing the discomfort of not knowing the "right answer" about AI in their subject area

Vertical development is harder. It takes longer. It requires sustained engagement, not a single session. And it creates temporary discomfort, because the educator must let go of the certainty of their previous operating system before the new one has fully formed.

This discomfort is not a failure of the PD. It is the mechanism. In developmental psychology, this is called "The Smash," the phase transition between one level of cognitive complexity and the next (Kegan, 1994). It requires what the Center for Creative Leadership calls "heat experiences" (Petrie, 2014): situations that your current way of thinking cannot resolve, forcing you to restructure.

The Shift in Educator Identity

The most important outcome of vertical PD is not new knowledge. It is a transformed understanding of the educator's role:

Dimension            Before Vertical PD                 After Vertical PD
Self-concept         "I am a content expert"            "I am a thinking architect"
Core skill           Delivering information             Designing cognitive experiences
Assessment focus     Did students learn the content?    Can students think with the content?
Relationship to AI   Threat to be managed               Tool to be orchestrated
Student outcome      Knowledge acquisition              Cognitive development

This is not a subtle shift. It changes every instructional decision. And it cannot be produced by a workshop. It requires an ongoing learning community, sustained practice, and reflection structures that allow the educator to process the discomfort of the transition.


Characteristics of Effective AI Professional Development

Based on our work with over 100 California school systems, here is what we've found moves the needle. Not what sounds good in a proposal. What actually changes classroom practice.

Principle 1: Sustained Over Single-Shot

One session doesn't change practice. Our most effective programs involve monthly engagement: live sessions combined with asynchronous practice, reflection, and peer discussion. The learning unfolds over a semester, not an afternoon.

The monthly cadence matters because vertical development is iterative. An educator encounters a new idea in session, tries it in their classroom, hits a problem, brings the problem back, processes it with peers, refines their approach, tries again. This loop, not the initial presentation, is where the growth happens. Research from the OECD confirms that sustained professional development programs of 50+ hours show significantly larger effects on teacher practice than shorter interventions (OECD, 2019).

Principle 2: Customized to Workflow, Not Generic

The most common complaint about AI PD: "That was interesting, but it doesn't apply to my subject." Effective AI training starts with the educator's actual workflow: the lesson they're planning next week, the assessment they're designing, the administrative task consuming their Saturday.

When a math teacher sees how AI can help them design a problem set that increases cognitive demand for their specific unit on quadratics, not a generic "math example," the learning transfers immediately because it connects to real practice.

Principle 3: Hands-On Over Lecture

Educators need to use AI in structured settings before they can use it effectively in classrooms. Not "here's what it does." Instead: "Here's a real planning challenge you face. Use AI to solve it. Now evaluate what the AI produced. What's good? What's hollow? What would you change? Why?"

This hands-on, evaluative practice builds the metasystematic muscle, the ability to judge AI output rather than accept it, that is the core competency of the AI-age educator.

Principle 4: Community Over Individual

The transition to AI-integrated teaching is uncertain. There is no established playbook. Educators need a community of practice where they can be honest about what's working, what's failing, and what they're afraid of, without judgment.

Our private community creates a space where teachers can say, "I tried using AI for this lesson and it completely bombed. Here's what happened" and receive support, not surveillance. This psychological safety is a prerequisite for the risk-taking that vertical development requires.

Principle 5: Address Identity, Not Just Skill

The deepest resistance to AI integration isn't technical. It's existential. "If AI can explain my subject better than I can, what am I for?"

Effective PD doesn't dodge this question. It walks educators through it, surfacing the anxiety, exploring the tension, and arriving at a new understanding of the educator's irreplaceable value: not as a content delivery system, but as a designer of thinking experiences, a reader of student cognition, and a human presence that no algorithm can replace.


Findings: The Practice Change Gap

What the Research Shows

The most consistent finding in PD research is the "Practice Change Gap," the difference between what teachers learn in training and what they actually do in classrooms. Meta-analyses consistently show:

  • Knowledge gain: Nearly all PD formats produce knowledge gain immediately after training. Teachers leave knowing more about the topic.
  • Attitude shift: Most formats produce positive attitude shifts. Teachers leave feeling more confident about the topic.
  • Practice change: Only sustained, practice-embedded, community-supported PD produces measurable change in what teachers actually do in their classrooms.

For AI PD specifically, the practice change gap is enormous. A 2024 survey found that while the majority of educators report interest in using AI, only a small fraction have meaningfully integrated it into their instructional practice. The gap isn't enthusiasm. It's the bridge between knowing and doing, and that bridge is built through vertical development, not information transfer.

What 100+ Districts Taught Us

In our work across California, we've observed a clear correlation between PD format and AI integration outcomes:

  • One-shot workshop districts: High initial enthusiasm, rapid decay. Three months later, few teachers are using AI differently. The workshop becomes a checkbox on the district's innovation report.

  • Multi-session training districts: Moderate improvement. Teachers develop some AI skills but often use them to automate existing practice rather than transform it. They use AI to write faster, not to think differently.

  • Sustained vertical development districts: Genuine transformation. Teachers redesign assignments. They create new assessment frameworks. They begin asking questions about cognitive development that they didn't ask before. The AI becomes a catalyst for rethinking what learning means, not just a productivity hack.

The difference is not budget or access to technology. The difference is the depth of the cognitive shift the PD produces.

What We Got Wrong at First

Transparency matters: we didn't arrive at this framework on day one. Our earliest training sessions were heavy on AI tools and light on cognitive development. Teachers learned prompt engineering and went home. We noticed the same pattern every district notices: enthusiasm up, practice change flat.

The breakthrough came when we stopped teaching AI and started using AI as the vehicle for teaching thinking. When the session shifted from "Here's what AI can do" to "Here's a real problem you face. Now let's use AI to think about it differently," the practice change metrics shifted dramatically.


Recommendations

Step 1: Audit Your Current PD Model (This Week)

Ask: How much of our AI professional development teaches educators about AI (horizontal) versus changes how they think about their work (vertical)? If more than 80% is tool-focused, you have a horizontal PD model that will not produce the integration you're seeking.

Step 2: Shift from Single Sessions to Sustained Engagement (This Semester)

Replace the one-day AI workshop with a semester-long learning community. Monthly live sessions. Asynchronous practice challenges. A shared space where educators can discuss what they're trying and what they're learning. The investment is comparable, but the return on practice change is dramatically higher.

Step 3: Customize to Real Workflows (Starting Now)

Every AI PD session should start with an educator's real challenge, not a hypothetical. "What are you planning to teach next week? Let's design it together using AI." When the learning connects to tomorrow's lesson, it transfers.

Step 4: Create Space for the Identity Question (Ongoing)

Build time into PD for educators to process the existential shift. "How has AI changed what you believe your role is?" This question, more than any tool demo, unlocks the vertical development that leads to sustainable integration.

Step 5: Measure Practice Change, Not Satisfaction (The Standard)

Stop measuring PD success by post-session surveys ("Did you find this useful?"). Start measuring by classroom observation: three months after training, have instructional decisions changed? Are students doing different cognitive work? Are assessments capturing thinking, not just knowing?

If the PD doesn't change what happens between the teacher and the students, it didn't work, no matter how high the satisfaction scores were.


Conclusion

The AI age demands more from educators than it has ever demanded. Not more knowledge. More complexity. The ability to navigate ambiguity, design for cognition, evaluate AI output with judgment, and hold the creative tension between efficiency and depth.

You don't build that with a slide deck about ChatGPT. You build it by treating professional development as vertical development, an ongoing practice that upgrades the educator's own cognitive architecture so they can design the learning experiences that build the next generation of thinkers.

This is not soft work. It is the hardest, most important investment a district can make. Because in the end, the quality of AI integration in your classrooms will never exceed the quality of thinking in your teachers. Build the teachers first. The technology follows.


Edapt's professional development programs are designed for practice change, not just knowledge gain. Through sustained, workflow-embedded, community-supported training, grounded in the science of vertical development, we help educators make the shift from content delivery to cognitive architecture.

From in-person workshops customized to your team's actual work to an ongoing learning platform with monthly live sessions and a private community, this is what AI training for educators should look like.

edapt.com


References

Kegan, R. (1994). In Over Our Heads: The Mental Demands of Modern Life. Harvard University Press.

OECD. (2019). TALIS 2018 Results (Volume I): Teachers and School Leaders as Lifelong Learners. OECD Publishing.

Petrie, N. (2014). Vertical Leadership Development, Part 1: Developing Leaders for a Complex World. Center for Creative Leadership.

TNTP. (2015). The Mirage: Confronting the Hard Truth About Our Quest for Teacher Development. TNTP.
