Kotter’s Model Was Built for a Different Era — Here’s What the AI Age Demands Instead
Recently, I was meeting with a member of our Charles F. Dolan School of Business Advisory Board—a senior executive at a large financial services firm—when the topic turned to AI. I asked how his organization was approaching adoption. He paused, then said something that stayed with me: “We stopped trying to train everyone and started looking for who was already doing interesting things. We built a system to identify the experimenters—people using AI on their own, often without permission—and made them the center of our strategy instead of the recipients of it.”
It was a simple reframe, but it cut to the heart of something I had been struggling to articulate. His organization wasn’t waiting for a top-down mandate. It was letting curiosity lead, then building institutional infrastructure around the people who were already curious.
That conversation sent me back to a comparison I had been developing between John Kotter’s classical 8-Step Change Model and what I’m calling an AI Adoption Change model. The contrast, once you see it, is hard to unsee.
Kotter’s framework—create urgency, form a guiding coalition, develop and communicate a vision, empower others, generate short-term wins, consolidate gains, anchor change in culture—is elegant, sequential, and top-down by design. It assumes change agents are identified in advance and guided from above.
The AI adoption model works differently. It begins not with manufactured urgency but with organic exploration of potential. Champions emerge not because they’re appointed, but because they’re self-motivated. They experiment freely, fail openly, and lead from the ground up. The key difference: Kotter’s agents are pre-selected and guided; AI adoption agents are self-driven and unconstrained.
This is not a minor tactical distinction. Business schools that miss it will find their AI initiatives stuck in policy documents and pilot programs that never scale.
The Implementation Gap
A 2025 Association to Advance Collegiate Schools of Business (AACSB) survey of 236 deans and 429 faculty found that while over a third of member schools have allocated dedicated AI funding, only 13% mandate AI training for students, 12% for faculty, and 9% for administrators. A February 2026 AACSB report noted that many schools initially responded to generative AI by issuing policies, updating misconduct codes, and deploying detection software—a posture of control rather than cultivation. The AACSB’s own language is telling: deans’ enthusiasm “has not yet been fully embraced by faculty to the same extent.”
That gap is precisely what Kotter’s model, faithfully applied, tends to produce when the change in question is AI adoption. The schools making the most meaningful progress share a different starting point: explore potential before you prescribe solutions. They create psychological safety for experimentation, elevate champions who are already curious, and build communities where both successes and failures are shared openly.
What Getting It Right Looks Like
The University of Colorado’s Leeds School of Business reached a meaningful milestone by fall 2025: AI integrated across all 14 of its core business courses, involving nearly 50 instructors. That outcome wasn’t the result of a mandate—it was a faculty-champion model that began with a small group of early experimenters and expanded outward. By March 2025, Leeds was sharing its framework through AACSB and inviting peer institutions to co-develop pilot projects, embodying a core principle of the AI adoption model: celebrating quick iterations by sharing successes and failures openly.
Northeastern University’s D’Amore-McKim School of Business formalized its experimentation-first philosophy in an October 2025 California Management Review paper. Its AI Strategic Hub (DASH) frames the classroom as an innovation lab, with faculty running A/B comparisons between AI-assisted and traditional approaches before any institution-wide rollout. The paper’s central argument mirrors the AI adoption model precisely—requiring testing before scaling, and co-creating tools alongside end-users rather than delivering solutions from above. In February 2026, the school announced a STEM-designated AI MBA to be launched in September 2026.
The University of Washington’s Foster School of Business took a structural step in the same direction. Beginning in fall 2025, every incoming student must complete a mandatory AI bootcamp covering six learning objectives, including core AI literacy, ethical assessment, and cultivating a lifelong AI learning mindset. Access is universal; the goal is capability, not compliance.
Globally, two new 2025 partnerships signal where the most forward-thinking schools are heading. Frankfurt School of Finance & Management integrated ChatGPT Edu institution-wide through an OpenAI partnership. NEOMA Business School in France gave every new student a license for Mistral AI’s Le Chat assistant—making experimentation a default, not a privilege. Both remove friction from exploration rather than gatekeeping access behind approvals.
The Wharton School’s Human-AI Research center reinforced the stakes in its October 2025 annual report. Chief AI Officer roles now exist in 61% of large organizations, and “people set the pace” of adoption. The implication is direct—organizational readiness, not technology, is now the binding constraint. The most urgent investment for business schools is not in tools, but in the human infrastructure that determines whether tools get used well.
Fairfield University’s Charles F. Dolan School of Business: Values-Driven AI Adoption
Among smaller business schools, the Charles F. Dolan School of Business at Fairfield University stands out—and it is no coincidence that its approach carries a distinctly communal character. Fairfield is a Jesuit university, and the Ignatian tradition is oriented toward discernment, service, and formation of the whole person. The Jesuit concepts of cura personalis—care for the whole person—and magis—the pursuit of the greater good—are not decorative values at Fairfield.
They have shaped how Dolan has approached AI, not merely whether to adopt it. Where other institutions framed AI adoption as an efficiency play, Dolan’s animating question has been more Jesuit in character: how do we use this technology in service of others, and how do we build a community capable of asking that question together?
The journey began with organic momentum, not a mandate. Faculty in the MS in Business Analytics program had been weaving AI principles into the curriculum since 2015. When generative AI arrived, those early explorers had the fluency and confidence to move quickly—and their students followed, producing AI-generated short films, co-authored books, and generative art, demonstrating that the school’s AI vision was being co-created with learners, not handed down to them.
In April 2025, we launched the Dolan AI and Technology Institute, directed by Dr. Jie Tao, with the stated mission of “AI for the greater good”—hosting experts, training local businesses, and advising faculty and students on responsible use. Nobel laureate Myron Scholes participated virtually in the launch. In September 2025, we added an MBA concentration in AI—designed, in the words of MBA Director Dr. Mousumi Bose-Godbole, to “bridge the gap between technical AI knowledge and business acumen” and to develop ethical leadership for an AI-augmented world.
A New Change Architecture
The schools leading on AI adoption share a common architecture. They identify the people already experimenting, give them resources and legitimacy, co-create the vision after initial exploration—not before—and build lateral communities where knowledge flows peer-to-peer.
Kotter was right for a world where change was rare and needed to be mandated. AI is arriving fast, from all directions, and cannot be mandated into relevance. The board member I met with understood this instinctively. The business schools that will lead are those that understand it institutionally—and build their change architecture accordingly.