Walk into almost any college library today and you’ll find students doing what students have always done—research, writing, cramming before exams. What’s changed is that millions of them now have a new kind of study partner: AI.
The numbers tell a story of near-universal adoption among students. Surveys conducted in 2024 and 2025 consistently find that the vast majority are using AI tools in their studies, with ChatGPT leading the pack by a wide margin. Students are reaching for these tools to do everything from drafting essays and summarizing research papers to brainstorming ideas and decoding complex concepts.
But heavy use doesn’t equal confidence or competence. Many report feeling underprepared for an AI-enabled workplace, and one of their most consistent concerns is accuracy—specifically the risk of receiving incorrect or misleading information.
The challenge isn’t just that students are using tools they don’t fully understand. It’s that the institutions meant to guide them are struggling just as much.
A major new survey released in January 2026 by the American Association of Colleges and Universities (AAC&U) and Elon University’s Imagining the Digital Future Center puts the state of faculty sentiment into sharp relief. The survey of more than 1,000 U.S. faculty found that nearly half view the future impact of generative AI in their fields as more negative than positive, while only one in five see it the other way. This is not a portrait of a profession eagerly embracing a new tool.
Elon University President Connie Book, PhD, whose institution has been among the more proactive in developing AI frameworks for students and faculty, offered a nuanced read of where things stand. “Faculty express deep concerns about AI’s negative impact on learning outcomes, along with longer-term effects of AI systems on young adults’ attention spans and the prospect that these learners could develop an overreliance on AI tools,” Book says. “At the same time, faculty views are not uniformly pessimistic. Significant numbers acknowledge AI’s potential to improve aspects of teaching and learning, including the customization of instruction, efficiency in course preparation, and the quality of assignments and research support.”
The numbers behind faculty skepticism are nonetheless striking. Ninety percent of faculty in the survey said the use of generative AI will diminish students’ critical thinking skills. Seventy-eight percent reported that cheating on their campus has increased since AI tools became widely available, and nearly 75% said they have personally dealt with academic integrity cases involving student use of AI.
Those figures help explain why most faculty said they feel their schools are not prepared to use AI tools effectively. Faculty adoption of AI in their own teaching remains limited and uneven. While a meaningful share have experimented with tools like ChatGPT, most use them minimally, and many have stayed away entirely. A quarter of faculty in the AAC&U survey said they do not use generative AI tools at all, and 33% said they choose not to use them for teaching.
The range of opinion is vast—from outright hostility to cautious optimism—and the profession has yet to arrive at a consensus about what responsible use even looks like. Those who have embraced the technology tend to describe its appeal in practical terms, citing time savings and reduced cognitive load for routine tasks. But comfort with AI as a personal productivity aid is a long way from knowing how to integrate it responsibly into a classroom.
Meanwhile, the AAC&U report found many are deeply uncertain about where the ethical lines are, even for their own use. When presented with scenarios about using AI to grade essays, write portions of research articles, or respond to student emails, survey respondents were nearly evenly split on many of these questions. The ambiguity reflects a genuine lack of shared norms across the profession—a problem that institutional policies have largely failed to address.
“These are not peripheral anxieties; they go to the heart of what higher education exists to cultivate—habits of mind such as critical analysis, reflection, persistence, and judgment,” says AAC&U President Lynn Pasquerella. “Faculty skepticism reflects a principled concern for student learning and for the public purposes of higher education. It also reflects the reality that institutions have often adopted new technologies without sufficient guidance, shared norms, or investment in professional development. GenAI raises crucial questions about assessment and authorship, equity, accessibility, data privacy, and the future of expertise itself. Faculty are right to insist that these questions be addressed deliberately rather than reactively.”
The AAC&U data also brings to the surface a troubling gap between the scale of AI’s arrival on campuses and the readiness of graduates to navigate it responsibly. More than 70% of faculty surveyed said they believe last spring’s graduates were not prepared in their understanding of the ethical issues raised by generative AI systems, and nearly as many said those graduates were not prepared to use AI in the workplace.
The American Association of University Professors’ (AAUP) national survey of its members, released in 2025, adds a governance dimension to this picture. It found that nearly three-quarters of respondents said AI decision-making at their institution is overwhelmingly led by administrators, with little meaningful input from faculty, staff, or students. Faculty are asked to navigate a technological transformation that is largely being handed down to them rather than developed with them. The report argues that institutions need meaningful shared governance mechanisms around technology—something most campuses currently lack.
There are also equity dimensions that tend to get lost in the broader conversation about chatbots and cheating. Research has found that first-generation college students are less likely to feel confident about appropriate uses of AI compared to their peers. The AAC&U survey showed that more than 80% of faculty believe generative AI will widen digital inequities. As AI becomes an increasingly expected competency in professional settings, that gap carries real consequences for students who are already navigating higher education with fewer resources.
A few institutions are leading the way. The University of California San Diego rolled out TritonGPT, an AI support system for its 37,000 employees, through a process that included town halls, pilot programs, and ongoing feedback mechanisms. Arizona State University established a dedicated team to evaluate AI capabilities and facilitate training across the institution. These are not magic solutions, but they represent something that most campuses still lack—a strategy.
The AAC&U survey asked faculty an open-ended question about which human skills colleges should teach in an AI-saturated world. The most dominant response by far was that critical thinking becomes more important, not less, as AI becomes more pervasive. Respondents argued that without skills like skepticism, verification, reasoning, and discernment, AI accelerates misinformation, intellectual passivity, and what the report describes as epistemic collapse.
That framing points to something the data across all of these surveys suggests but rarely states directly. The arrival of AI in higher education is not primarily a technology problem. It is a question about what education is for—and whether institutions are willing to defend that answer with the same urgency the technology demands.