AI Detectors May Never Be Accurate

2023.09.11 01:36 AM By Joshua Taddeo, Principal Consultant

Whether we embrace it or not, Artificial Intelligence (AI) is poised to revolutionize various facets of our society. Tools like ChatGPT are accessible to anyone with an internet connection. As adoption grows, it will be progressively more challenging to distinguish between AI-generated content and human-created material. 

 

Luckily, we have AI content detectors, right?

 

Unfortunately, they're not as effective as advertised. AI checkers are supposed to be the new line of defense against AI cheating – the problem is that their accuracy is questionable, and those imperfections are hurting people. Even OpenAI has acknowledged that AI detectors have not been "proven to reliably distinguish between AI-generated and human-generated content."

 

So, if you're looking to integrate AI detection tools for any purpose, we recommend understanding their limitations before you accuse someone based on a detector's results alone. If you're concerned, it's crucial to develop other ways to verify that writers understood and wrote the material rather than simply accepting the output of these detectors. 

Beware Before You Accuse: AI Detectors May Not Be Accurate

Many of today's popular AI programs, like ChatGPT, are built upon large language models (LLMs). These LLMs undergo extensive training on vast amounts of text data and utilize this knowledge to generate responses. In simpler terms, AI essentially predicts the next word in a sentence based on what it has learned during its training. It may not understand the meaning of specific phrases like "Snow is cold, but fire is hot," but the model recognizes that "hot" often follows "fire is" (though this explanation oversimplifies the process).
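
To make the "next word" idea concrete, here's a deliberately tiny, hypothetical Python sketch. The word pairs and probabilities are invented for illustration only; a real LLM learns statistics over billions of tokens and much longer contexts, but the underlying principle is the same.

```python
import random

# Toy "language model": next-word probabilities learned from training text.
# A real LLM works over tokens, not whole words, and conditions on far more context.
next_word_probs = {
    ("snow", "is"): {"cold": 0.85, "white": 0.10, "falling": 0.05},
    ("fire", "is"): {"hot": 0.90, "dangerous": 0.07, "bright": 0.03},
}

def predict_next(context):
    """Pick the next word by sampling from the learned probabilities."""
    probs = next_word_probs[context]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next(("fire", "is")))  # usually "hot" -- no understanding required
```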

 

AI detectors operate similarly. They are trained on datasets containing text from various sources, including human- and AI-generated content. These detectors search for indicators that suggest text was generated by an LLM, such as repetitive language, word-choice probabilities, and tone. Developers aim to teach AI checkers to recognize these patterns, hoping they can determine whether a human or an AI authored a piece of text. In effect, the detector reviews the text and asks, "Is this the sort of thing a language model would have written?" Based on its algorithms, it then reports a percentage indicating how likely the text is to have been AI-generated.


Generally, these content detectors use metrics like "perplexity" and "burstiness" to evaluate text and determine whether it's likely human- or AI-generated.


  • Perplexity measures how predictable a piece of text is to an AI model, given what it learned during training. Text that closely matches what the model would predict is rated as low perplexity, which detectors treat as a sign of AI generation.

  • Burstiness assesses the variability in sentence length and structure within a text. Human writing tends to be more dynamic in this regard, so low burstiness is another hint that text may be AI-generated.


AI detection tools are designed to identify patterns in text that indicate whether it was generated by artificial intelligence (AI) tools such as ChatGPT. These tools use machine learning and natural language processing to analyze different features of the text, such as how fluently it reads, how frequently certain words appear, or whether there are patterns in the text that suggest AI generated it. Some tools even highlight words based on their level of randomness to help users pinpoint AI-generated text.
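
As a rough illustration of the two metrics described above, here's a minimal Python sketch. It is not how any particular detector works: the unigram model stands in for the large language model a real detector would score text with, and there are no calibrated thresholds here.

```python
import math
import re
from collections import Counter

def burstiness(text):
    """Variation in sentence length (std dev of words per sentence).
    Human writing tends to score higher here than AI output."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / (len(lengths) - 1)
    return math.sqrt(variance)

def pseudo_perplexity(text, reference_counts):
    """Toy perplexity against a unigram model built from reference text.
    Lower values mean the text is more 'predictable' to the model,
    which detectors treat as a hint of AI generation."""
    vocab_size = len(reference_counts) + 1
    total = sum(reference_counts.values())
    words = text.lower().split()
    log_prob = 0.0
    for word in words:
        # Add-one smoothing so unseen words don't zero out the probability.
        p = (reference_counts.get(word, 0) + 1) / (total + vocab_size)
        log_prob += math.log(p)
    return math.exp(-log_prob / max(len(words), 1))

reference = Counter("the quick brown fox jumps over the lazy dog".split())
sample = "The fox jumps over the dog. It is quick! The dog, however, stays lazy."
print(burstiness(sample), pseudo_perplexity(sample, reference))
```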

Almost every AI content detector claims it can reliably distinguish between human and AI text. Those claims sound great on paper, but if you've ever tested AI detection tools with your own writing, you know they can be hit-or-miss at best. More often than not, they will flag a portion of human-written content as AI. That misidentification defeats the tool's entire purpose and puts innocent people at risk of getting into trouble for using AI even when they didn't.


AI content detectors are not infallible and often produce false positives, flagging human-generated content as AI-generated and vice versa. Relying on their assessments can lead to incorrect conclusions and actions.

Inaccuracies with Detection

OpenAI has openly acknowledged that AI detectors may not be accurate. In the FAQ section of its knowledge base, OpenAI discusses its own research and failed attempts at creating accurate AI detectors. Furthermore, ChatGPT can't identify AI-generated content if you prompt it to; it will often make up answers to questions like "Could AI have written this?" In the same article, OpenAI notes that AI content detectors have mislabeled human-written text, such as the Declaration of Independence and Shakespeare's works, as AI-generated.

 

OpenAI introduced an AI detection tool called AI Classifier in January 2023, aiming to help educators identify content created using generative AI tools like ChatGPT. Six months later, OpenAI quietly decommissioned the AI Classifier due to its poor accuracy in distinguishing between human- and AI-generated content.

 

They later explained the decision in a blog post acknowledging that the AI Classifier was "not fully reliable" and provided statistics indicating its limitations, including its struggles with shorter text, mislabeling human-written text as AI-generated, and performing poorly outside of its training data. The firm emphasized its commitment to researching more effective techniques for detecting AI-generated content.

 

The Federal Trade Commission has warned companies against overstating the abilities of any AI tool to detect generated content, cautioning that inaccurate marketing claims can violate consumer protection laws. Even if these tools could accurately distinguish AI-generated content, minor edits make detection easy to evade, and editing is a natural part of the writing process even for people who rely on generative AI models to support copywriting.

Issues with AI Detection

Now that you know they may not be accurate, we'll dive deeper into the various issues with using AI detection. Learning about the issues will help you make a more informed decision about why you should couple AI detection tools with additional investigative methods if you choose to "use and accuse."

Bias Toward Non-Native English Speakers

Researchers have found that AI detectors may be biased against non-native English writers. One study found that detectors wrongly categorized over half of the writing samples from non-native English speakers as AI-generated. The researchers even found a simple hack to bypass GPT detectors: prompting strategies like "elevate the text by employing literary language." If anyone can simply re-prompt AI text in ChatGPT to slip past detectors, AI detection clearly has a long way to go before it can be trusted.

 

The researchers concluded that AI detection tools can unintentionally penalize writers with a more limited linguistic range, highlighting the need for greater fairness in these tools. 

Changing AI Models

AI models have seen rapid progression in recent years. For example, OpenAI's GPT-4, a widely recognized AI language model, is much more sophisticated than GPT-3, its predecessor. GPT-4 can generate highly coherent and contextually relevant text, making it more challenging for detectors designed for earlier models to accurately distinguish between human and AI-generated content.

 

AI content detectors are typically developed with specific AI models in mind. They are trained on data available at the time and designed to recognize patterns and characteristics associated with those models. As newer AI models emerge, content detectors may struggle to adapt because they lack the data and knowledge to understand the nuances of the latest models.

Limited Understanding


AI detectors operate based on predefined rules, patterns, and metrics. They analyze text primarily using statistical models and machine learning algorithms, focusing on factors like word choice, sentence structure, and statistical anomalies. The tools cannot understand the context of the written text, meaning they may misinterpret text that relies heavily on context, idiomatic expressions, or cultural references.

 

Human language is rich with nuances, subtleties, and ambiguities. Words and phrases can have multiple meanings depending on the context, tone, or speaker's intention. AI detectors often struggle to navigate these complexities. They may misinterpret a nuanced argument, miss the underlying tone of a message, or fail to recognize when a word is being used in a specialized or metaphorical sense.

Negative Consequences

False accusations of AI-generated content can have serious consequences, such as academic penalties, damage to one's reputation, or legal actions, particularly in educational and professional settings.

 

Students accused of plagiarism or cheating based on incorrect AI detector assessments may face failing grades, academic probation, suspension, or even expulsion. These penalties can have long-lasting effects on a student's academic record and future opportunities. One professor accused his entire class of using ChatGPT to cheat, jeopardizing their diplomas.

 

False accusations of AI-generated content can also tarnish an individual's personal and professional reputation. In professional settings, allegations of using AI-generated content can harm an individual's credibility and career prospects. Employers and colleagues may question their competence and trustworthiness.

 

False accusations based on AI detector results can sometimes escalate to legal actions. Students or professionals wrongly accused may seek legal remedies to defend their rights and reputations. Legal proceedings can be costly, time-consuming, and emotionally distressing for all parties involved.

Limitations of AI Detectors

While their capabilities are advancing, AI detectors face substantial limitations in reliably distinguishing AI-generated from human-written text. Several key points underscore these challenges.

Human writing styles are incredibly diverse and subjective. People express themselves uniquely, influenced by their backgrounds, experiences, and emotions. This diversity makes it difficult for AI detectors to establish a universal standard for identifying AI-generated content.

 

Also, human-written content relies heavily on contextual interpretation. The meaning of a sentence changes based on previous statements or the context of a conversation, making it challenging for detectors to discern intent accurately. AI text detectors often lack sufficient contextual information to produce accurate results.

 

AI companies often do not maintain comprehensive open records of everything generated on their platforms. This lack of transparency poses a significant obstacle to AI detectors, as they lack access to extensive reference data. Companies have valid security and privacy concerns that prevent them from keeping open records. Storing vast amounts of user-generated content presents risks related to data breaches and privacy violations.

 

One proposed solution is implementing some form of watermarks or metadata that remain attached to text even when shared or edited. However, this approach faces practical challenges. Removing metadata is possible, and tracking changes in edited content can be complex, especially with substantial revisions.
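
To see why attached metadata is fragile, consider a minimal, hypothetical sketch (not any vendor's actual scheme): a provider signs the exact text it generates, and any later edit, however small, breaks the link between the text and its tag.

```python
import hashlib
import hmac

# Hypothetical signing key a provider might hold; purely for illustration.
SECRET_KEY = b"provider-signing-key"

def sign_text(text):
    """Produce a provenance tag bound to the exact wording of the text."""
    return hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_text(text, tag):
    """Check whether the text still matches the tag it shipped with."""
    return hmac.compare_digest(sign_text(text), tag)

original = "This paragraph was produced by a generative model."
tag = sign_text(original)

edited = original.replace("generative model", "person")
print(verify_text(original, tag))  # True:  untouched text still verifies
print(verify_text(edited, tag))    # False: one small edit and the provenance is gone
```

The fragility also works in reverse: strip the tag entirely and nothing in the text itself proves where it came from, which is one reason detection can't lean on metadata alone.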

 

Traditional tracking mechanisms, like version control systems, may not directly apply to textual content that undergoes frequent modifications. When multiple authors collaborate on a document or when an AI tool assists in content generation, maintaining an accurate history of revisions becomes challenging. Using metadata and watermarks raises concerns about user privacy and data protection. Storing information about a text's origin and editing history may conflict with privacy regulations, requiring careful handling of user data to ensure compliance.


Complexity of Human Writing and AI Interaction

It's important to understand that AI tools are more like assistants than complete content generators. They play several roles in the writing process, working alongside human writers to enhance content creation's overall quality and efficiency.

 

Here are the best ways to use AI tools like ChatGPT for content generation (a short code sketch of a couple of these follows the list):

 

  • Idea Generation: AI tools can help spark creative ideas. They can suggest topics, headlines, or angles for a piece of writing, providing valuable inspiration to human writers.

  • Expanding Sections: Sometimes, a writer may have a general idea but need help fleshing out specific sections. AI tools can assist by generating additional content, filling in gaps, or offering suggestions to make the text more comprehensive.

  • Research Assistance: Research is a crucial part of writing. AI tools can quickly gather information from various sources and present it to writers, saving them time and effort in the research process.

  • Editing Support: AI tools can help writers refine their work. They can spot grammar and spelling errors, offer style recommendations, and suggest improvements to sentence structure.

  • Generate Outlines: Utilize AI to assist in structuring your content by generating outlines or suggesting subtopics, helping you organize your thoughts and create a well-structured piece.

  • Write Titles and Meta Descriptions: Leverage AI's capabilities to provide suggestions for attention-grabbing headlines and informative meta descriptions based on your topic, enhancing your content's visibility and appeal in search results.

  • Summarize Blog Posts: Use AI to condense lengthy blog posts into concise summaries, making it easier for readers to understand the main points and key takeaways at a glance.

  • Aid Keyword Research: Harness AI to identify relevant keywords and provide insights into search volume and competition, aiding in optimizing your content for search engines. That said, it won't have real-time data on keyword volume and competition, so it's best used to surface additional keyword variations.

  • Compose Ad Copy: Rely on AI to generate persuasive ad copy, including headlines, taglines, and complete ad scripts, enabling you to create compelling advertising campaigns that resonate with your target audience. Just keep in mind that purely AI-generated output generally can't be copyrighted without material human edits, so reserve the most essential work for human writers if you need to hold the copyright.
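
As a rough sketch of what a couple of the items above look like in practice, here's a short example using the OpenAI Python SDK (v1-style client). The model name and prompts are placeholders; adjust them to your account and SDK version, and expect a human writer to rework whatever comes back.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_outline(topic):
    """Ask the model for a draft outline a human writer will then rework."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whichever model your plan includes
        messages=[
            {"role": "system", "content": "You are a helpful content-marketing assistant."},
            {"role": "user", "content": f"Draft a blog post outline with 5-7 sections on: {topic}"},
        ],
    )
    return response.choices[0].message.content

def summarize_post(post_text):
    """Condense a long draft into a short summary for review or meta descriptions."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Summarize this post in 3 bullet points:\n\n{post_text}"}],
    )
    return response.choices[0].message.content

print(generate_outline("why AI content detectors are unreliable"))
```

Note that both helpers only draft material; the editing, fact-checking, and final voice still come from the writer.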

 

It's important to highlight that AI tools aren't a replacement for human creativity and judgment. They require human interaction and often benefit from alterations made by the writer. This collaborative approach ensures that the final piece of writing reflects the writer's unique voice, style, and perspective. AI tools serve as valuable writing assistants, making the writing process more efficient and effective. They're tools writers can use to their advantage, harnessing their capabilities to produce better content while maintaining the essential human touch in the final work.

 

If you're a business looking to scale your content marketing and thought leadership efforts, you don't have to worry about AI detection. However, be aware that multiple lawsuits against OpenAI over copyright infringement could also put your organization in jeopardy, depending on the cases' outcomes. 

 

In regard to SEO, Google has stated that AI-generated content is not against its guidelines and will not hurt SEO if used correctly. Google ranks content that demonstrates E-E-A-T (experience, expertise, authoritativeness, and trustworthiness); it doesn't care whether humans or AI produced it, as long as it's accurate and contextually applicable.

 

Their stance is pretty straightforward – you should never use AI to generate content to manipulate ranking in the search results, as it violates their spam policies. As long as AI generates helpful content that adds to the user experience, Google has no problem with AI-generated texts. Even big companies like Buzzfeed are openly using AI writing tools to help scale their brand.

 

Instead of trying to fight against the AI revolution, we recommend embracing it. The faster you can hop onboard, the quicker you can use it for all types of business activities as long as you understand the current legal landscape and accept the liability risks for your usage. 

Future Prospects and Considerations

As we look to the future, several key considerations and prospects emerge regarding AI detection technology and its role in distinguishing between AI-generated and human-written content.

 

The field of AI detection is likely to see significant improvements. Researchers and developers are actively working on enhancing AI's ability to understand context and subtle nuances in writing. This could lead to more accurate and reliable AI detectors, but there are currently no guarantees. Furthermore, companies are exploring new forms of watermarking to "tag" content as AI-written.

 

Using AI detectors judiciously is essential, recognizing that they are tools rather than infallible arbiters. Human judgment remains indispensable, especially in cases where the distinction between AI and human content is subtle or complex.

The AI Revolution is Underway

The AI revolution is happening all around us, and history has shown that what's more efficient nearly always wins out. Some groups, such as educators and businesses, might want to ban AI completely, but that will be almost impossible given open-source tools, a lack of regulation, and the near impossibility of enforcing bans when AI detectors likely aren't accurate enough to hold up in court. So, what's the best approach?

 

Well, it's important to remember that AI content detectors aren't always accurate. The key takeaway is that you should be wary of treating any AI content detector as fact. Instead, develop other methods of testing the writer's knowledge of the subject and the likelihood of their involvement in the writing by asking content-specific questions when there's doubt.

 

Ultimately, it's about finding the right balance between AI and humans in this changing landscape. We should allow open usage but never completely rely on AI for things that benefit from human input, like unique perspectives, accurate subject matter material, or creative endeavors. By incorporating AI into our workflows, we can make the most of AI's benefits while keeping the human touch in our content processes.