Schools and universities have, generally speaking, responded to the rise of AI with the same instinct: try to catch students using it.
The result has been a wave of AI detection tools rolled out across institutions worldwide, framed as a way to protect academic integrity, but research suggests they are doing the opposite. Studies on widely used AI detectors show a false positive rate of around 61% for non-native English speakers, because detection models are predominantly trained on native English corpora and interpret non-standard grammar and lower lexical complexity as markers of AI authorship. The students most likely to be wrongly accused are those already facing linguistic disadvantage.
The equity problem is serious enough on its own, but several education experts and technology specialists argue that fixing the tools would still leave the fundamental question unanswered: how should teaching and assessment change when AI is already a permanent part of how people work, write and think? Detection assumes the goal is to preserve the old way of doing things. A growing body of evidence and practice suggests the goal should be replacing it.
AI isn’t going away, and students are aware of this. Graduates entering the workforce are expected to use AI for research, writing, analysis and decision-making from their first day on the job. Institutions that respond by policing AI use rather than teaching it are, as several contributors to this piece argue, preparing students for a world that no longer exists.
The better approach, according to researchers, educators and EdTech leaders, is a redesign of assessment itself: moving toward process-based evaluation, oral defence, iterative work and assignments that require students to demonstrate judgment, not just output.
The Detection Trap
The detection-first approach creates a specific kind of harm beyond false positives.
When institutions centre their AI policy on catching misuse, the indirect message to students is that AI is something to hide rather than something to learn. That produces one of two outcomes: students who use AI without guidance, developing no critical relationship with the tool, or students who avoid it entirely to stay within rules that no longer apply once they graduate. Neither approach builds the skills that employers now expect as baseline.
The institutions getting this right are shifting the question from whether AI was used to how it was used and what the student did with it. That means asking students to document their process: not just submitting a finished piece, but showing the decisions behind it.
The calculator analogy runs through several contributions here: in the 1990s, schools did not ban calculators. They redesigned coursework so that the calculator became part of the process and the thinking remained the student’s responsibility. The same logic applies now at an entirely new level.
Leading universities are already moving toward what researchers call AI-resilient assessment: process-led formats where the work itself can’t be outsourced, and where understanding has to be demonstrated in real time.
These formats are hard to outsource to AI and straightforward to observe. We asked education and AI experts what effective integration actually looks like.
Our Experts:
- Doug Hughes, CEO, Codio
- Dr Misha Kouzeh, Professor, USC Annenberg School for Communication and Journalism
- Teresa Fuller, TeresaFuller.AI
- Daniel Burrus, Futurist and Founder, Burrus Research
- Brennan Kolar, Founder, Atlas CPA Index
- Kuljit Dharni, Chief Product Officer, Panopto
- Sabrina H. Williams, Associate Professor, University of South Carolina
- Laurence Minsky, Professor, Columbia College Chicago
- Syed Asif Ali, Founder and Digital Identity Architect, Point Media
Doug Hughes, CEO, Codio
“The issue with AI detectors is not just that they’re inaccurate. It’s that they reflect the wrong mindset. Policing AI use assumes learning happens without these tools, when the reality is they’re already embedded in how people work. Focusing on catching students misses the bigger shift entirely.
“What we’re seeing across universities is that learning is moving more slowly than the technology itself. The institutions that will succeed are not the ones trying to control AI, but the ones redesigning how students engage with it. That means moving away from static assessments and toward applied, hands-on work where students are using AI to solve real problems, not just produce answers.
“Effective integration looks less like banning or detecting AI and more like building it into the learning process. Students should be evaluated on how they use these tools: how they prompt, iterate, validate outputs, and apply them in context. That’s much closer to how work actually happens today. We’re still very early in figuring this out, but the direction is clear: education has to shift from measuring what students know to measuring what they can do with the tools available to them.”
Dr Misha Kouzeh, Professor, USC Annenberg School for Communication and Journalism
“The detector debate is a dead end. Schools are spending energy trying to catch AI use instead of asking why students feel like they need to hide it in the first place. That’s the more interesting question. You can’t audit your way to better learning.
“The universities that are going to matter in ten years are the ones redesigning around a simple premise: if AI can do the task, the task was not worth much to begin with. That’s not a threat to education. It’s a forcing function for better education.
“What I actually see working is when you make the AI part of the submission. Show me your prompts. Show me where you disagreed with the output. Show me how your thinking changed. Suddenly you cannot fake it, because the process is the assignment. The people who thrive with AI are not the ones who use it most. They’re the ones who know when to push back on it. That’s a learnable skill. Chasing cheaters is a distraction. Teaching people to think is the actual job.”
Teresa Fuller, TeresaFuller.AI
“AI detectors are a distraction. They create the illusion of academic integrity while actively undermining equity and learning outcomes. A 61% false positive rate for non-native English speakers is not just a technical flaw. It is a systemic bias problem. But even if detectors were perfect, they would still be the wrong solution.
“Three changes matter. First, assessment must move from output to process: require students to show how they think, with prompt logs, iteration history, and reflection graded alongside final deliverables. Second, assignments must become AI-aware, not AI-resistant: design work that requires judgment, taste, and synthesis. AI can generate answers, but it cannot defend decisions in a live discussion. Third, AI fluency should be a core skill. Prompting, editing, validating and applying outputs are now baseline competencies.
“I now require recorded video submissions for major assignments. Students can use AI to inform their thinking, but they must articulate, defend and synthesise those ideas in their own voice. This mirrors how work actually happens. The institutions that win will not be the ones that catch AI use. They will be the ones that teach students how to outperform with it.”
Daniel Burrus, Futurist and Founder, Burrus Research
“In many cases, AI detectors do more harm than good because they create a false sense of certainty. When a tool can mislabel honest work, especially from non-native English speakers, the cost is not just a wrong result. It damages the trust between students and educators, and that trust matters enormously.
“The larger problem is that the detectors keep institutions stuck in a reactive mindset. The focus is on catching misuse after the work is submitted instead of redesigning the learning model for a world where AI is already here and improving quickly. That is similar to trying to stop the use of calculators by checking who used one, rather than teaching students when and how to use a calculator well.
“AI is not going away, so the real question is not how we police it better. The question we should all be asking is how we raise the value of human judgment and encourage originality and applied thinking.
“Universities and schools need to start by accepting that AI is now part of the learning environment. That means more in-person oral defences, more live problem-solving, more project-based collaborative learning and more real-world application. Rather than grading only the final output, we should be grading the process: the judgment, the questions asked, the sources chosen and the reasoning behind the choices. Students should be encouraged to show where AI helped, where it failed them and how they worked to improve the output.
“The goal should be to move away from preserving yesterday’s classroom and to prepare students for the world they are actually entering.”
Brennan Kolar, Founder, Atlas CPA Index
“AI detectors are doing more harm than good because they punish the wrong people. A 61.3% false positive rate for non-native English speakers means the students most likely to be flagged are the ones who did not use AI at all. The damage to a student’s trust and confidence when they are falsely accused of cheating on a paper they spent two weeks writing cannot be overstated.
“If a student can paste your essay prompt into ChatGPT and get a passing grade, the problem is the prompt. Assessments that ask students to analyse a specific dataset from class, connect course material to something from their own experience, or defend a position in a live conversation with the instructor are all things AI cannot do on a student’s behalf. The assessment has to require something that only a person who was in the room and did the reading can produce.
“The schools getting this right are the ones treating AI the way they treated calculators in the 1990s. You do not ban the tool. You redesign the coursework so the tool becomes part of the process and the thinking is still the student’s responsibility.”
Kuljit Dharni, Chief Product Officer, Panopto
“A responsible approach to AI in higher education should start with well-defined academic problems. AI should be introduced to solve specific challenges: improving access, reducing administrative burden, scaling tutoring support. Technology should support educators, not supplant them. If AI overshadows the teaching relationship, it’s the wrong design.
“AI has a critical role to play when it removes barriers that distract from learning: the friction of navigating content, the difficulty of finding key concepts in lengthy recordings, the burden of repetitive preparation work, or the challenge of supporting increasingly diverse learners at scale. Used intentionally, AI can help institutions deliver more individualised, accessible learning experiences, while giving educators more time to teach, coach and support students.
“By integrating AI into the video and content systems students already rely on, institutions can offer tools that automatically segment, caption, translate and summarise lecture content. For educators, this means less time creating materials and more time available for one-to-one interactions, feedback and higher-order learning experiences: the work that actually drives outcomes.”
Sabrina H. Williams, Associate Professor, University of South Carolina
“AI detectors encourage a culture of suspicion at the very moment schools and universities should be teaching for a world where AI is part of everyday practice. The central question is whether education is helping students develop judgment, agency and the ability to think critically with and beyond these tools.
“A better path is assessment redesign. That means asking students to show their process, explain their choices, critique AI outputs, and document where AI helped, where it fell short, and what they changed. My research shows that AI can support idea generation and expand possibilities, but it can also anchor students’ thinking and weaken their confidence when they lose a sense of ownership over the work.
“Effective AI integration is about teaching students to use it with intention, awareness, reflection and accountability.”
Laurence Minsky, Professor, Columbia College Chicago
“AI detectors do not automatically do harm or good; how they are implemented determines which. Often, the content in question sounds as though it were AI-generated and the detector simply confirms it. But there could be many reasons why the content was flagged: writing-style issues, a learning disability, or something else entirely. Assuming it was AI-generated can cause real harm.
“A better approach is to use the results of the scan as a conversation starter. The next step would then be to identify why the content was flagged as potentially being AI generated.
“How you assess students is another area that can uncover both the use of AI and how well they understand the material. Assessment should be performance-based, designed to reveal a student’s depth of knowledge and reasoning: asking them questions, having them present their findings in front of the class, seeing how well they can discuss the points. All of these quickly show whether a student leaned on AI.
“Instruction should start by showing students how to use the various AI tools effectively and ethically. AI can be a wonderful thought starter, but the facts still need to be confirmed. I ask my students to hand in their queries and results so I can coach them on how to use the tools better. I look for their ability to refine, iterate and curate, because those skills require critical thinking, which is exactly what we want students to develop.
“Finally, AI isn’t going away. Students need to learn to use it in a way that will help them in their future careers, and that’s what I’m trying to do.”
Syed Asif Ali, Founder and Digital Identity Architect, Point Media
“AI detection is not just unreliable. It’s eroding trust. We hired a non-native English writer who was flagged by multiple AI detectors on her first assignment. The work was original. The tools still marked it as AI. That was enough for us to stop relying on detection entirely, not because the tools are imperfect, but because the cost of being wrong is too high.
“Most institutions are trying to catch AI use instead of redesigning how thinking is evaluated. That is a losing approach. You cannot police a tool that is already embedded in how people work. You have to redesign the system around it.
“What works is transparency over detection. We encourage AI use in early stages but require clear annotation of how it was used. It turns the conversation from did you cheat to how did you think, which is a far more useful signal of learning.”
