A commonly used artistic trope that can illustrate both pro-AI and anti-AI sentiment. Modeled and rendered with Blender Cycles 5.0.1.

AI Detectors Are Broken

Students should not be forced to gamble their careers on a broken algorithm. For over 800 years, society has fought for fairness, and individuals have fought to be presumed innocent until proven guilty. Yet, across the country, students are being accused of academic dishonesty not by a human but by a machine, a machine so flawed that even its own creators admitted it had a "low rate of accuracy." It is a digital coin flip. Ever since generative AI arrived, society has had to wonder, "Is this AI?" Educational institutions around the world have convinced themselves that they have found a solution, when in reality there is no way to be 100% certain whether something is AI or not. Institutions use AI detectors to "catch" students who used AI in their work, but these "detectors" are not reliable at all. They do not "detect" anything; they only look for common patterns and estimate how closely a text matches them. That is why they output a percentage rather than a guarantee. All the detectors do is guess; they will never be 100% accurate. Even OpenAI shut down its own detector because of its "low rate of accuracy." For artists, writers, and anyone else who publishes their work, there is a legal way to use AI under the "Human Contribution" exception, but if the only tool available to enforce that rule is broken, then the rule is useless.

First, let us understand how AI-generated content is handled by the law. What most people do not realize is that purely AI-generated content is effectively treated as "public domain." Public domain does not simply mean the content is available to the public; it means no one owns the copyright, so anyone is free to use it. For example, if an individual gives an AI a prompt and then copies the output word for word, the copied text belongs to no one, and the person who prompted it cannot claim it as their own.

Amanda Robert, an accredited social justice worker and a graduate of Saint Louis University, states that "The Copyright Office and the District of Columbia agreed that only works created by human beings are eligible for copyright" (Robert). This supports the claim that one cannot claim pure AI content as one's own. As a result, "any pure AI output is not eligible for copyright protection" (Robert). This is the foundational rule for using AI content.

However, there is a way to claim work that contains AI content: the Human Contribution exception. For example, someone could have an AI generate ideas for an invention, but they cannot simply claim the raw output as their own. The rule is that a person who wants to protect a work created with the assistance of AI must have made a significant creative contribution of their own; the AI must be only an assistant. As the U.S. Copyright Office (2023) explains, the key question is "whether the 'work' is basically one of human authorship, with the computer... merely being an assisting instrument." In other words, pure AI output cannot be claimed, but a human can be the author if their own creative contribution is the main part of the work.

One deeply concerning part of checking for AI content is the AI "detectors" themselves. The name itself is misleading: they do not actually detect anything. These tools work by storing millions of outputs from different AI language models and using them to identify common patterns. They then report a percentage based on how closely a text matches those patterns. They are pattern-matchers, not truth-finders.
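To make the point concrete, here is a deliberately simplified sketch of pattern-match scoring. This is not how any real detector is implemented; the phrase list and scoring rule are invented for illustration. It only shows the structural problem the paragraph above describes: the output is a degree of overlap with stored patterns, expressed as a percentage, never a verdict.

```python
def toy_ai_score(text, ai_patterns):
    """Return the percentage of known 'AI-like' phrases found in the text.

    The result is a guess about pattern overlap, not proof of authorship.
    """
    text_lower = text.lower()
    # Count how many stored patterns appear anywhere in the text.
    hits = sum(1 for phrase in ai_patterns if phrase in text_lower)
    return 100.0 * hits / len(ai_patterns) if ai_patterns else 0.0

# Hypothetical stored patterns; real detectors build theirs from
# millions of model outputs, but the principle is the same.
patterns = ["as an ai language model", "in conclusion", "delve into"]

score = toy_ai_score("In conclusion, we should delve into this topic.", patterns)
print(f"{score:.1f}% match")  # a human can easily write these phrases too
```

Notice that a human who happens to write "in conclusion" raises the score just as an AI would, which is exactly why a high percentage is not evidence of misconduct.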

We would not have to worry about whether something is AI if the detectors worked correctly, but these pattern-matching tools are not accurate, and their own results reveal their lack of certainty: if they were 100% accurate, they would not report a probability. Some detectors do not even give a percentage; instead they say something like, "We are highly confident that this text is human content." A university study found that AI detectors "consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are robustly identified as human-written" (Liang et al.). This shows that AI detectors are not just unreliable; they also mistake text from writers whose first language is not English for AI.

On top of that, even the makers of these detectors admit defeat. OpenAI, the company that created ChatGPT (the industry leader in generative AI), conceded its own tool was a failure and shut it down less than a year after ChatGPT's public release. The company stated, "As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy" (OpenAI). When detectors say, "We are highly confident that this text is human content," notice the word confident. Confidence means "the feeling or belief that one can rely on someone or something." Key words: feeling, belief. That is not 100%. If the company that released the industry-changing generative AI chatbot could not build a reliable detector for its own creation, how can any small third-party website do it?

Relying on any of these detectors puts individuals, universities, and businesses in a position to make false accusations. These tools are just making an educated guess; there is no real certainty. As MIT Sloan warns, "AI detection software is far from foolproof—in fact, it has high error rates and can lead instructors to falsely accuse students of misconduct" (MIT). And if someone does pass off AI output as their own, who can prove it? Exactly: nobody, not with tools like these. This uncertainty makes the entire issue impossible to manage fairly; if someone's own work is flagged as AI by one of these unreliable detectors, that person stands accused, and nobody knows whom to trust.

Basically, while the law allows individuals to claim partially AI-generated content under the Human Contribution exception, the tools used to enforce academic honesty are based on "belief" and "patterns," not certainty. People need to stop relying on AI detectors to determine whether something is AI. That reliance leaves society with an unanswerable question: "Is it AI?" One can never be certain whether something is AI or not, like the text on the screen right now... but everybody can trust the detector when it says it is not. Right?

Works Cited 

“AI Detectors Don’t Work. Here’s What to Do Instead.” MIT Sloan Teaching & Learning Technologies, 2024, mitsloanedtech.mit.edu/ai/teach/ai-detectors-dont-work/.

Liang, Weixin, et al. “GPT Detectors Are Biased against Non-Native English Writers.” arXiv (Cornell University), 5 Apr. 2023, https://doi.org/10.48550/arxiv.2304.02819.

OpenAI. “New AI Classifier for Indicating AI-Written Text.” OpenAI, 2023, openai.com/index/new-ai-classifier-for-indicating-ai-written-text/.

Robert, Amanda. “Art Generated by AI Can’t Be Copyrighted, DC Court Says.” ABA Journal, 25 Aug. 2023, www.abajournal.com/news/article/art-generated-by-ai-cant-be-copyrighted-dc-court-says. Accessed 26 Feb. 2026.

U.S. Copyright Office. “Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence.” U.S. Copyright Office, 13 Mar. 2023, www.copyright.gov/ai/ai_policy_guidance.pdf. Accessed 26 Feb. 2026.