ChatGPT detector is 98% to 100% accurate | Inquirer Technology

ChatGPT detector spots AI-generated papers with surprising accuracy

08:00 AM November 09, 2023

University of Kansas researchers created a ChatGPT detector for AI-generated chemistry papers. Heather Desaire, one of the study’s co-authors, explained she and her team created a tool that focuses on a specific type of paper to prioritize accuracy. As a result, her software can identify AI-made chemistry papers with a whopping 98% to 100% accuracy.

Schools have struggled to stop students from submitting AI-generated papers because foolproof detection tools did not exist. Fortunately, Desaire's software may help chemistry professors ensure their students don't submit ChatGPT-written papers as their own. Soon, schools may need different detectors for other subjects to ensure academic honesty.

This article will discuss how University of Kansas researchers created an effective detection tool for AI-produced chemistry papers. Later, I will cover the other developments in AI detection.

How does this new ChatGPT detector work?

Heather Desaire’s team focused on a single type of academic paper, chemistry manuscripts, instead of building an all-purpose tool. Here’s how they created their ChatGPT detector:

  1. They trained their detector on the introductory sections of papers from ten American Chemical Society (ACS) journals. Desaire explained her team chose introductions because these sections are easy for ChatGPT to write when it has access to background literature.
  2. She and her team used 100 published introductions as the human-written training text.
  3. Then they asked ChatGPT-3.5 to write 200 introductions in ACS style, providing only the paper titles for the first 100 and only the abstracts for the remaining 100.
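A detector of this kind classifies a passage from measurable properties of its writing style. The sketch below is a minimal illustration of that idea; the published tool's actual features, model, and thresholds are not reproduced here, so every feature name and cutoff in this code is an illustrative assumption, not the team's method:

```python
# Toy feature-based AI-text detector, loosely inspired by the approach
# described above. The features and the hand-picked threshold are
# illustrative assumptions only; the real tool learns from training data.

def extract_features(text: str) -> dict:
    """Compute simple stylometric features of a text passage."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    mean_len = sum(lengths) / len(lengths)
    # Variation in sentence length: human writing tends to vary more.
    variance = sum((n - mean_len) ** 2 for n in lengths) / len(lengths)
    return {
        "mean_sentence_len": mean_len,
        "sentence_len_variance": variance,
        "question_marks": text.count("?"),
        "parentheses": text.count("("),
    }

def classify(text: str, variance_threshold: float = 30.0) -> str:
    """Label a passage using a single hand-picked feature (illustrative)."""
    feats = extract_features(text)
    return "human" if feats["sentence_len_variance"] > variance_threshold else "ai"
```

With the hypothetical cutoff above, a passage of uniformly sized sentences is labeled "ai" while one mixing very short and very long sentences is labeled "human"; a real detector would combine many such features and learn the cutoffs from labeled examples rather than hand-pick them.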

ChatGPT-3.5 is the current free version of OpenAI’s popular chatbot. The researchers tested their new tool by submitting human-written introductions and AI-written ones based on the same journals.

As a result, the tool identified ChatGPT-3.5-written sections based on titles with 100% accuracy. On the other hand, it had a slightly lower 98% accuracy for ChatGPT-made introductions based on abstracts.

The team’s new ChatGPT detector also worked on text written by ChatGPT-4, the latest version of OpenAI’s chatbot. In contrast, the AI detector ZeroGPT identified AI-produced introductions with only 35% to 65% accuracy.

OpenAI’s own AI classifier also paled in comparison with the University of Kansas program, identifying AI-written text with only 10% to 55% accuracy.

The detector also worked on journals it wasn’t trained on and caught AI text written specifically to fool detectors. However, its major flaw is that it sometimes mistakes genuine articles from university newspapers for AI-generated content.

Debora Weber-Wulff, a computer scientist who studies academic plagiarism, said the study fascinated her. However, she warns it is not “a magic software solution to a social problem.”

What are the emerging trends in AI detection?

Perhaps the most surprising trend in AI detection is that bots can now beat “Prove You’re Human” tests more reliably than people. CAPTCHA, short for “Completely Automated Public Turing Test to Tell Computers and Humans Apart,” is the familiar challenge that asks you to check a box, identify objects in images, or solve puzzles when logging in.

University of California, Irvine researcher Gene Tsudik tested CAPTCHA’s effectiveness by having both people and AI bots solve the tests. Surprisingly, the bots answered more accurately than the people.

The study’s co-author, Andrew Searles, told New Scientist, “In general, as a concept, CAPTCHA has not met the security goal, and currently is more an inconvenience for less determined attackers.” He argued that security needs “more dynamic approaches using behavioral analysis.”

AI content detection has been improving, but mostly for other media types. For example, Google’s SynthID adds an invisible watermark to AI-generated images so that detection tools can identify them.
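SynthID's actual watermarking algorithm is proprietary and far more robust to edits; the toy sketch below only illustrates the general concept of an invisible image watermark, using the classic least-significant-bit technique on raw pixel values:

```python
# Toy invisible watermark: hide bits in pixels' least significant bits.
# This is NOT SynthID's method (which is proprietary); it only shows the
# basic idea of embedding machine-readable marks without visible change.

def embed_watermark(pixels: list[int], bits: list[int]) -> list[int]:
    """Write one watermark bit into the LSB of each leading pixel value."""
    out = pixels.copy()
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # changes each pixel by at most 1
    return out

def read_watermark(pixels: list[int], n_bits: int) -> list[int]:
    """Recover the first n_bits watermark bits from pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]
```

Because each pixel value changes by at most 1, the mark is invisible to the eye but trivially readable by software; production schemes like SynthID spread the signal across the image so it survives cropping and compression, which this sketch does not.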

Conclusion

University of Kansas researchers created a ChatGPT detector for chemistry papers. By focusing on a single subject, their tool achieves 98% to 100% accuracy.

“Most of the field of text analysis wants a really general detector that will work on anything,” said co-author Heather Desaire. That’s why she and her team were “really going after accuracy” by focusing on chemistry papers.

Learn more about this ChatGPT content detection study on its ScienceDirect webpage. Check out more digital tips and trends at Inquirer Tech.

TOPICS: AI, ChatGPT, interesting topics, Trending

© Copyright 1997-2024 INQUIRER.net | All Rights Reserved
