59% of Teens Say AI Use in School Is Normal — Can AI Detectors Keep Up?
59% of American teenagers say that using AI to cheat is a regular thing at their school. That's from a Pew Research Center survey conducted in the fall of 2025. The teens were describing what they see in their classrooms every day, and based on the numbers, they see a lot more than the adults around them realize.
They're not hiding it. They just know that nobody's really checking.
AI Detection in Schools: 59% of Teens Say Nobody's Checking
Most research on AI in education relies on teachers or administrators for data. Pew Research Center went straight to the people sitting at the desks. In the fall of 2025, the center surveyed 1,458 American teens aged 13 to 17, along with one parent of each participant. The sample was representative by gender, age, race, and household income. Results came out in two reports: the first in December 2025 (a broad look at teen chatbot use), the second in February 2026 (how teens specifically use AI for schoolwork and what they think about cheating).
For context: 30% of parents in the same survey said they weren't even sure whether their child uses chatbots at all. Teens are simply closer to what's happening, because they're in the middle of it every day.
AI-Generated Schoolwork: Who Depends on It and Who Gets Caught
The big picture from the section above is that just over half of teens use AI for schoolwork. But break the numbers down by household income, and the picture shifts.
For a teen from a well-off family, ChatGPT is one resource among many — alongside tutors, prep courses, and parents who can help with homework. For a teen from a low-income family, the chatbot is often the only source of help available. The gap between "AI helps me understand a topic" and "AI does the assignment for me" is massive, but when there's nowhere else to turn, that line blurs fast.
The racial breakdown tells a similar story. Black and Hispanic teens use AI chatbots at higher rates (around 70%, compared to 58% for White teens) and are more likely to say chatbots were helpful for school. That in itself isn't a problem. The problem starts when schools try to catch AI-generated content: according to Common Sense Media, these same groups of teens are two to three times more likely to be falsely flagged by AI detectors.
AI Detection Accuracy: What Schools Pay For vs. What They Get
Schools are spending money on a solution. According to the Center for Democracy and Technology, over 40% of middle and high school teachers already use AI detectors to review student work. Broward County Public Schools in Florida, one of the largest districts in the country with 230,000 students, signed a three-year contract with Turnitin for over $550,000 (per NPR reporting, December 2025).
But buying a tool and solving the problem are two very different things.
Turnitin told NPR that the company considers it more important to avoid falsely accusing students than to catch every case of AI writing. The company's Chief Product Officer told BestColleges the quiet part out loud: Turnitin deliberately lets about 15% of AI-generated content through to keep false positives under 1%. There's logic to that — a false accusation destroys trust between a student and a teacher. But independent research by Mike Perkins at British University Vietnam found that when students edit AI-generated text even lightly, detection rates across major tools fall far below what that 15% figure suggests. It raises a fair question: what exactly are schools paying for?
It looks like schools are buying the feeling of control, not control itself.
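The trade-off Turnitin describes can be made concrete with back-of-envelope arithmetic. The sketch below uses only figures from the reporting above (a ~15% miss rate, false positives under 1%, and Broward County's 230,000 students); the assumptions that each student submits one essay and that 20% of submissions contain AI-generated text are purely illustrative, not from the survey data.

```python
# Back-of-envelope: what Turnitin's stated trade-off means at district scale.
# From the reporting: ~15% of AI text deliberately let through, false
# positives kept under 1%, Broward County Public Schools: ~230,000 students.
# Assumptions (illustrative only): one essay per student, and 20% of
# submissions contain AI-generated text.

students = 230_000
ai_share = 0.20              # hypothetical share of AI-assisted submissions
false_negative_rate = 0.15   # AI papers the detector lets through
false_positive_rate = 0.01   # honest papers wrongly flagged (upper bound)

ai_papers = students * ai_share
honest_papers = students - ai_papers

missed_ai = ai_papers * false_negative_rate        # cheating that sails through
false_flags = honest_papers * false_positive_rate  # honest students accused

print(f"AI papers missed: {missed_ai:,.0f}")          # 6,900
print(f"Honest students falsely flagged: {false_flags:,.0f}")  # 1,840
```

Even with a false-positive rate capped at 1%, the sheer number of honest submissions means the district could still face well over a thousand wrongful flags per round of essays, alongside thousands of missed AI papers.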
AI Detector False Positives: Who Actually Gets Flagged
When a detector misses an AI-written paper, the teacher can assign a redo or run an oral check. A false accusation is harder to fix. It damages trust, and in some cases ends in suspension or a lawsuit.
In December 2025, NPR reported on Ailsa Ostovitz, a 17-year-old in Maryland whose grade was docked after an AI detector flagged her personal essay at 30.76%. Her teacher never asked if she wrote it. It took her mother going to the school before anyone reconsidered. The district later said it doesn't even recommend using these tools.
As the data in the previous section showed, the groups most likely to be falsely flagged are the same ones who already face systemic disadvantages in education. The consequences are reaching courtrooms: a Yale student sued after suspension based on GPTZero results in 2025, a University of Michigan student filed a similar lawsuit in 2026, and Princeton and MIT have advised faculty not to use detectors as the sole basis for integrity violations.
The issue is not with detection as a concept. It is with how most tools are built. A poorly trained model mistakes natural writing patterns for signs of generation, and a bare percentage gives the teacher no way to tell a real violation from an algorithm error. That is how cases like Ailsa's happen, and why the approach needs to change.
AI Text Detection for Schools: Beyond the Percentage Score
All of this creates a vicious cycle. Teens use AI openly, and most of them say AI cheating is routine at their school. Schools spend serious money on detection. But the tools let through the students who actually submit AI-written work, and punish those who wrote their own.
Breaking that cycle requires a tool that doesn't reduce the entire review to a single percentage. One that shows the teacher which specific part of the text raised a flag and why — so the conversation with a student starts with evidence, not an accusation.
The It's AI detector works on that principle: it analyzes text and highlights specific passages with an explanation of what triggered the flag. The teacher doesn't get a black-box percentage — they get a reasoned breakdown they can actually act on.
For schools, this isn't about catching more students. It's about accuracy and fairness toward the ones who did the work.