Notes from a dean leading AI transformation at Sacramento State, with a French passport, a Gen-X mixtape, and zero patience for AI theater
For most of the day I’m a normal dean: triaging budgets, accreditation timelines, and the infinite physics of committee calendars. I’m also the person on campus who gets the forwarded emails. The faculty member warning that AI is “cheating on demand” and demanding that the administration “do something about it.” The admissions staff begging for an AI solution because student questions arrive faster than they can answer them. The well-meaning administrator who wants a “splashy” AI announcement by Friday. Three messages. Three directions. One institution.
Universities aren’t fragile because they’re old. They’re fragile because they run on norms: trust between faculty and students; trust that grades reflect learning; trust that decisions are explainable; trust that scholarship is authentic. AI presses on those norms like a thumb on bruises.
I’m 50, French-American, trained at HEC Paris and UCLA, and I once served as a Navy officer at the French embassy in Portugal. So I’m not shocked by bureaucracy. I am shocked by how quickly a new tool can turn “trust” from background music into the whole plot.
In my experience at Sacramento State, three areas buckle first: assessment and academic integrity, administrative approval processes, and governance and turf wars.
Where AI Breaks Universities

Assessment and Academic Integrity
AI didn’t create cheating. It industrialized it. The old academic integrity regime assumed a detectable boundary between a student’s work and someone else’s. Large language models blur that boundary. A student can brainstorm, paraphrase, translate, rewrite, and polish an AI-produced draft in ways that are hard to classify and even harder to prove.
The reflex response has been to reach for AI detection tools. The problem is that detectors are unreliable in exactly the situations where fairness matters most—especially with mixed human and AI writing. If we try to “police our way back” to 2019, we’ll spend years burning trust in hearings over a probability score, and we’ll still lose.
The alternative is harder up front but healthier long term. We need to redesign assessment so learning becomes visible again. Where stakes are high, use more in-class writing, oral checks, and authentic demonstrations of skill. Use iterative assignments that require drafts, reflection, and “show your work” traces. Use projects tied to local data, lived experiences, or unique constraints that make generic AI output look like it came from nowhere.
This is also where the “AI is the same everywhere” story breaks down. Nursing, creative writing, and accounting do not face the same educational problems.
In nursing programs, a hallucinated answer about medication dosing is not an interesting mistake. It forces hard questions about simulation design, supervision, and how students demonstrate safe clinical reasoning before they ever touch a patient.
In creative writing, voice is the point. The challenge is not whether AI can produce a competent short story, but how a student develops a distinct voice when a machine can generate variations on demand.
In accounting education, AI can generate a polished set of financial statements from a messy dataset. But the real learning objective is whether a student understands the assumptions behind revenue recognition, can detect anomalies, and can defend their reasoning when facts change. Producing a clean answer is not the same as demonstrating judgment.
Each discipline must rethink how it integrates AI based on what it is actually trying to form in students.
And yes, assessment redesign takes resources. Real resources. It requires disciplinary expertise and time: course releases, summer stipends, and the political capital to protect that time from being eaten by the next urgent thing. Buying that time is expensive. When leaders say “redesign” but don’t buy the time for it, they are asking faculty to do surgery between meetings. That’s not fair, and it won’t scale.
Student Services and Administrative Operations
On the administrative side, AI has real promise and real risk. If your campus handles tens of thousands of repetitive questions each semester—such as “What are the deadlines?” or “How do I change majors?”—AI can be a relief valve. But it can also become a confident liar at scale if it isn’t grounded in accurate information and wrapped with a clear handoff to humans.
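What “grounded with a clear handoff” can look like is simpler than the vendor decks suggest. The sketch below is a toy, not our admissions bot: the FAQ entries, the threshold, and the names are invented for illustration. What it demonstrates is the design choice that matters: answer only from an approved knowledge base, and when confidence is low, route to a human instead of guessing.

    # Toy sketch of a grounded FAQ responder with human handoff.
    # Hypothetical: the FAQ entries, threshold, and escalation wording are
    # illustrative placeholders, not a description of any campus system.
    from difflib import SequenceMatcher

    FAQ = {
        "What are the application deadlines?":
            "Fall applications close March 1; spring applications close October 1.",
        "How do I change majors?":
            "Submit a change-of-major form through the Student Service Center.",
    }

    MATCH_THRESHOLD = 0.75  # below this, the bot refuses to guess

    def answer(question: str) -> str:
        """Answer only from the approved FAQ; otherwise hand off to a human."""
        best_q, best_score = None, 0.0
        for known_q in FAQ:
            score = SequenceMatcher(None, question.lower(), known_q.lower()).ratio()
            if score > best_score:
                best_q, best_score = known_q, score
        if best_score >= MATCH_THRESHOLD:
            return FAQ[best_q]
        # The crucial choice: no confident guessing outside the knowledge base.
        return ("I'm not sure, and I won't guess. "
                "I've routed your question to an admissions advisor.")

    print(answer("How do I change majors?"))        # grounded answer
    print(answer("Can I defer my financial aid?"))  # escalates to a human

A production version would replace the string matching with retrieval over indexed campus documents, but the refusal-and-escalation logic is the part worth keeping.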
At Sacramento State, one of our earliest low-risk proposals—a narrow admissions bot focused on routine questions—took far longer to clear than anyone expected. That delay wasn’t about one chatbot. It was what happens when an institution doesn’t have a tiered process for approving AI pilots. Every idea gets treated like a data breach, a legal threat, and a reputational event, all at once.
Security teams are paid to imagine the worst day. Innovators are rewarded—if they are rewarded at all—for imagining the best day. Put those groups together without decision rights and you get stalemate rather than strategy.
This is why many universities are providing supported, institutionally governed AI platforms. The logic is simple. If you don’t provide a supported option, people will use unsupported ones. Shadow AI isn’t a moral failing. It’s what happens when demand outruns governance.
Governance, Turf, and AI Theater
Generative AI is an accelerant on campus politics. Leaders want momentum. Faculty want academic freedom and due process. IT wants security and standardization. Students want clarity, fairness, and a job when they graduate. When those interests aren’t aligned, AI becomes symbolic: an announcement, a committee, a slogan, a strategic priority with no staff. That’s called AI theater.
I’ve watched conversations get hijacked by questions that sound academic but are really about power—such as who “owns” AI. Does it belong to computer science? To business? To everyone? To no one? In a shared governance environment, that question can swallow a year of meetings and produce exactly zero improvement in student learning or staff workload.
One more truth. The best AI pedagogy right now is often invisible. It sits in individual classrooms while the colleague next door reinvents the wheel. That isolation is a governance problem, and a simple, informal exchange mechanism would help: monthly “teaching kitchen” sessions where faculty share one assignment redesign that worked and one that flopped, plus a lightweight repository of prompts and rubrics. Nothing fancy—just making the invisible visible.
Layered on top is the program design argument. Do we create a standalone applied AI degree, or do we treat AI as a skill set to be infused across majors? I hear the same analogy again and again: “Remember when e-commerce was a degree?” The point isn’t that AI is a fad. It’s that rigid program labels can age quickly, while flexible pathways—concentrations, minors, certificates, practica—can evolve with the technology and the labor market.
Students also need to be treated as agents here, not just recipients. They are already using these tools in uneven, sometimes surprising ways. Bringing students into policy development does two useful things: it forces realism, and it builds legitimacy. If we want norms that hold, students should help write them.
A Sacramento State Lesson: Begin With the Student, Not the Tool
Sacramento State has a special advantage and a special temptation. We sit in the capital city of California, with access to public sector problems that matter. We also have a talent for talking ourselves into big innovation narratives without building anything that survives procurement, privacy review, or leadership turnover.
Recently, I had lunch with a civic tech operator who helped modernize the California DMV during the pandemic. He said something that should be printed on the wall of every campus AI committee: data is infrastructure, but people are the point. His approach wasn’t “we have this technological tool, let’s use it.” It was to define the citizen’s experience, collapse redundant processes, then build the architecture around the human being at the center—with the help of technology.
That’s also the correct instinct for universities. Most of our debates begin with tools (“Should we allow ChatGPT?”) or ownership (“Which college hosts the program?”). A better starting point is the need itself. Where are students stuck? Where are faculty overloaded? Where is staff work looping pointlessly? Then pick the smallest intervention that reduces friction without creating new harm.
If you want to kill AI theater, ask one unglamorous question: what will be measurably better, for whom, and how will we know?
The Darker Side We Need to Name
The biggest danger of generative AI isn’t robot rebellion. It’s surveillance creep. AI makes it cheap to monitor work, communication, and performance at granular levels. If universities import corporate surveillance logic—tickets closed per hour, productivity dashboards, and constant activity logging—we will corrode the autonomy and trust that make academic work possible.
Automation of judgment is the second danger. Students are using AI to write. Some educators are using AI to grade. AI can help generate rubrics or draft feedback—which is fine. What is dangerous is outsourcing the evaluative judgment itself. When both sides outsource judgment to machines, the university becomes a hall of mirrors.
The third danger is privacy and data leakage. Student work is sensitive, and so are advising notes, disability accommodations, and the little bits of personal story students share when they trust us. If we push people toward consumer tools with vague terms, we increase the odds that private academic life becomes training data, breach data, or accidental public data.
Then there are legitimacy shocks. One AI mistake in financial aid guidance or graduation advising can erase the goodwill generated by a dozen quiet successes. Higher education is already in a legitimacy fight. We should not set ourselves up for additional headlines.
What We Should Do Next, Without Pretending It’s Easy
At many public universities—including mine—the AI “priority” has been assigned the way universities sometimes assign priorities: with sincere intent, a steering committee, and approximately zero new staff or budget. That reality doesn’t excuse inaction, but it does force discipline. We have to choose pilots that are small enough to govern, useful enough to earn trust, and visible enough to justify the next resource ask.
First, start with a risk-tier framework with named decision makers. Low-risk uses—drafting internal documents, summarizing public policies, tutoring that doesn’t make decisions—should move quickly. Medium-risk uses—student-facing guidance with disclosures and human escalation—should have a clear review path. High-risk uses—anything affecting student standing, employment, or sensitive data—should be slow and strict. If everything requires the same gauntlet, nothing moves and people route around the system.
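For what it’s worth, the skeleton of such a framework fits on a page. The tiers, owners, and turnaround targets below are placeholders I invented for illustration, not Sacramento State policy; the point is that each tier names one decision maker and carries a visible clock.

    # Illustrative sketch of a risk-tier registry for AI pilot approvals.
    # Hypothetical: tier names, owners, and turnaround targets are placeholders,
    # not any institution's actual policy.
    from dataclasses import dataclass

    @dataclass
    class Tier:
        name: str
        examples: list[str]
        decision_maker: str  # a named role, not a standing committee
        review_path: str
        target_days: int     # promised turnaround, so delay is measurable

    TIERS = [
        Tier("low", ["drafting internal documents", "summarizing public policies"],
             "unit supervisor", "self-certify against a published checklist", 5),
        Tier("medium", ["student-facing guidance with disclosure and escalation"],
             "dean or AVP", "privacy and accessibility review", 30),
        Tier("high", ["uses affecting student standing, employment, sensitive data"],
             "provost and CIO jointly", "full security, legal, governance review", 90),
    ]

    def route(proposal: str, tier_name: str) -> str:
        """Return the promised review path and clock for a proposed use."""
        tier = next(t for t in TIERS if t.name == tier_name)
        return (f"'{proposal}' -> {tier.decision_maker}, "
                f"{tier.review_path}, within {tier.target_days} days")

    print(route("admissions FAQ bot pilot", "medium"))

The data structure is trivial; the promise it encodes is not. One named owner per tier and a visible clock are what keep every idea from facing the same gauntlet.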
Second, provide supported tools and training, or accept you will get shadow AI. Equity matters here. When only the students with money can access high-quality tools safely, the institution has quietly built a new digital inequality.
Third, treat assessment redesign as core academic work, and fund it like you mean it. Buy time. Build the exchange mechanisms so faculty can learn from each other. Otherwise, we will keep saying “redesign” while rewarding everyone for avoiding the hard parts.
Fourth, build a portfolio of student pathways rather than one politically volatile flagship degree. AI is both vertical (a specialty for students who understand the technology deeply) and horizontal (a fluency every discipline needs). Both framings are true.
Finally, be honest about place. Sacramento is a government town. Many of our graduates work in public agencies or in companies that serve them. A practical, mission-aligned direction for a capital region university is applied AI for public service: training, practica, and carefully scoped projects that help agencies modernize without breaking trust.
Universities have always been slow for a reason. We exist to preserve knowledge, not chase hype. But slowness becomes a vice when the world changes underneath you. AI is becoming infrastructure in the way the internet became infrastructure. You don’t get to opt out. You get to choose whether the change happens with you steering it or with you clinging to the bumper.
Some days, steering feels like trying to turn a cargo ship with a canoe paddle. I’m exaggerating. A little. But drift is worse. Drift is how you wake up one morning and realize students stopped believing your signals, staff stopped trusting your processes, and faculty stopped thinking the institution can tell the truth about change.
The job now is simple to say and hard to do. Protect trust, preserve learning, modernize operations, and do it without turning the university into a surveillance machine. Bon courage.