AI detectors are widely used on college campuses to flag work that may have been generated by large language models, but critics say the software is unreliable and disproportionately targets non-native English speakers. Students have filed lawsuits over emotional distress and punitive actions, while a new class of "humanizer" tools rephrases text to evade detection.
At a Glance
- AI detectors are common but criticized for false positives.
- Students use humanizer tools to avoid detection or prove innocence.
- Universities face lawsuits and calls to rethink automatic penalties.
- Why it matters: The clash between detection and defense tools is reshaping academic integrity policies.
The AI Detection Arms Race
Across the United States, the introduction of generative artificial intelligence has sparked an arms race. Professors now run papers through online detectors that assess whether students used large language models to write their work. Some colleges claim they've caught hundreds of students cheating, yet the detectors have repeatedly been criticized as unreliable and more likely to wrongly flag the writing of non-native English speakers. A growing number of students say their work has been falsely labeled as AI-generated, and several have filed lawsuits against universities over the emotional distress and punishments they say they faced. Faculty and students describe being caught in the middle of an escalating war of AI tools. Some students have turned to a new class of generative AI tools called "humanizers," which scan essays and suggest changes so the text isn't read as having been created by AI.

Humanizers: Tools to Avoid Detection
Humanizer tools scan essays and offer suggestions to make the writing appear more human. Some are free; others cost around $20 a month. Users employ them either to slip past detectors or to prove their innocence when accused of using AI.
Typical functions of humanizers include:
- Altering sentence structure to reduce AI signatures.
- Removing or replacing uncommon words that detectors flag.
- Adding personal anecdotes to increase perceived originality.
Company Responses
In response to the rise of humanizers, companies such as Turnitin and GPTZero have upgraded their AI detection software to catch writing that’s gone through a humanizer. They also launched applications that track keystrokes or browser activity so students can prove authorship. Turnitin’s chief product officer, Annie Chechitelli, told the News Of Fort Worth that the software should prompt a conversation rather than serve as the sole basis for penalties.
“Students now are trying to prove that they’re human, even though they might have never touched AI ever,” said Erin Ramirez, an associate professor of education at California State University, Monterey Bay. “So where are we? We’re just in a spiral that will never end.”
Student Experiences
Brittany Carr received failing grades on three assignments she completed as a long-distance student at Liberty University after an AI detector flagged them. She presented her revision history and handwritten drafts, yet was still required to take a "writing with integrity" class and sign a statement apologizing for using AI. Worried that another cheating accusation could cost her VA financial aid, Carr began running all her material through Grammarly's AI detector, editing until it concluded a human had written the piece.
“How could AI make any of that up?” Carr wrote in a December 5 email. “I spoke about my cancer diagnosis and being depressed and my journey and you believe that is AI?”
“But it does feel like my writing isn’t giving insight into anything – I’m writing just so that I don’t flag those AI detectors,” she said.
After the semester ended, Carr decided to leave Liberty and is uncertain where she will transfer.
Academic Perspectives
Erin Ramirez argues that anyone who relies on a detector has likely never tested it on their own writing. "It's almost like the better the writer you are, the more AI thinks you're AI," she said. "I put my own papers into AI detectors just to check because I don't like to hold students accountable without knowing how the tool works. And it flags me at like 98% every time, and I didn't use AI in any capacity."
Aldan Creo, a graduate student from Spain who studies AI detection, said, “If we write properly, we get accused of being AI – it’s absolutely ridiculous.”
Turnitin's Chechitelli added that the central issue is less about detection than about where the boundary of acceptable use lies. "The most important question is not so much about detection, it's really about where's the line," she said.
GPTZero's CEO, Edward Tian, said that training 3,000 teachers revealed a fragmented understanding of acceptable AI use, a divide that is becoming more pronounced as the number of tools grows.
Eric Wang, vice president of research at Quillbot, warned that fear will persist unless educators move away from automatically deducting points and toward engaging students in dialogue about AI use. "Once that happens, it starts to not matter whether you do or don't sound like AI and instead moves us toward a world asking how are we using this technology but not losing our sense of humanity, our sense of creativity, and our ability to create great things on our own."
Industry and Future
Turnitin issued a software update last August to detect text modified by humanizers and maintains a list of 150 tools that charge up to $50 for a subscription. Joseph Thibault, founder of Cursive, tracked 43 humanizers that had a combined 33.9 million visits in October.
“I think we have to ask students, what level of surveillance are you willing to subject yourself to so that we can actually know that you’re learning?” he said. “There is a new agreement that needs to be made.”
Superhuman, the company behind Grammarly, added Authorship to basic accounts, a feature that tracks which sections of a document were typed, pasted, or AI-generated. Jenny Maxwell, Superhuman's head of education, said the tool will also log Grammarly suggestions, time spent, and session counts. "We're going to keep track of when you are going to Wikipedia," Maxwell said.
In upstate New York, an online petition urging the University at Buffalo to drop its AI detection software gathered more than 1,500 signatures last year. The university stated it has no institution-wide AI rule but requires evidence beyond detector scores for findings of academic dishonesty.
"So it's like, how far do you want to go down the rabbit hole? I'm making myself crazy," said Kelsey Auman, a graduate student involved in the effort.
Academic integrity expert Tricia Bertram Gallant advises against banning AI in unsupervised assessments, warning that trying to prove AI use consumes valuable faculty time, and urges government regulation of both AI and the cheating industry. "We keep turning on what the academic institutions need to do to fix problems that they didn't create," she said.
Conclusion
The battle between AI detectors and humanizer tools is reshaping academic integrity. Open dialogue and balanced policies will be essential as colleges navigate this evolving landscape.

