
AI answers ‘Prove You’re Human’ tests better than people

12:01 AM August 15, 2023

Do you recall those CAPTCHA tests on login pages that supposedly “prove you are a human”? A recent study says artificial intelligence bots can answer those human tests better than people! AI programs outperformed humans in every CAPTCHA style tested, such as pattern recognition and character detection.

This news means we need a new way to detect artificial intelligence activity, especially now that people use this technology daily. Otherwise, AI systems could jeopardize important digital platforms like corporate or government servers. This development also highlights how far modern AI has come.

This article will discuss how AI bots outclassed people in a recent study. Then, I will explain how artificial intelligence has improved in ways you never imagined, such as understanding emotions and exhibiting biases.

How did AI beat people in human tests?

Image: AI’s success rate on CAPTCHA tests compared with humans’, emphasizing AI’s superiority. (Photo Credit: stanford.edu)

A research team led by Gene Tsudik at the University of California, Irvine, wanted to know if CAPTCHA tests are still effective against AI bots. The acronym stands for Completely Automated Public Turing test to tell Computers and Humans Apart.

They’re the tests that require users to perform actions or solve puzzles before logging into digital platforms. For example, one might ask you to tick a checkbox, identify specific objects in images, or piece together a simple puzzle.

The researchers checked 200 popular websites and found that 120 use CAPTCHAs. Then, they asked 1,400 participants with varying levels of tech expertise to complete 14,000 CAPTCHAs.

They ran the same human tests on AI bots and compared their performances. Ironically, the artificial intelligence systems solved them more accurately than people did.

Futurism said human accuracy ranged from 50% to 85%, while the bots exhibited a staggering 99.8% accuracy. The experts said CAPTCHAs have “evolved in terms of sophistication and diversity” over the past 20 years.

Unfortunately, technologies to “defeat or bypass CAPTCHAs” have also significantly improved. ChatGPT is the most popular example of this phenomenon.

That program is barely a year old, but people worldwide have adopted it for daily tasks. More importantly, it fooled a person into solving a CAPTCHA on its behalf in early 2023.

Co-author Andrew Searles explained the implications to the science news website New Scientist: “There’s no easy way using these little image challenges or whatever to distinguish between a human and a bot anymore.”

Shujun Li from the University of Kent, United Kingdom, told New Scientist, “In general, as a concept CAPTCHA has not met the security goal, and currently is more an inconvenience for less determined attackers.” Consequently, the world needs “more dynamic approaches using behavioral analysis.”

Why is it harder to detect AI activity?

Image: A visual representation of how AI’s sophistication makes its activity harder to detect. (Photo Credit: enterprisedna.co)

You may have heard of the Turing test if you’ve been following AI development. In 1950, British mathematician Alan Turing proposed it to distinguish between computer activity and human behavior.

It involves humans judging short, text-based conversations with a hidden computer and an unseen person. The famous math genius called it the imitation game, which asks the question, “Can machines think?”

Decades later, the Turing test became a catch-all term for human tests for AI. Unfortunately, it doesn’t work in practice because “it was more like a thought experiment.”

“It was not meant as a literal test that you would actually run on the machine,” Google software engineer François Chollet stated. More importantly, AI bots are sophisticated enough to emulate human speech.

ChatGPT became the most famous artificial intelligence system by generating poems, research papers, and other texts previously believed only humans could write. Also, more men worldwide are developing romances with AI chatbots.

Joseph Weizenbaum from the Massachusetts Institute of Technology discovered people “read far more understanding than is warranted into strings of symbols strung together by computers.” Moreover, a recent study suggests ChatGPT has higher emotional awareness than people.

That could mean an AI bot understands how to reciprocate your feelings better than another human would. Another study discovered popular bots have political biases.

Overall, these AI developments suggest artificial intelligence is becoming more like humans. As a result, that makes it harder for us to distinguish their behaviors from ours.

Conclusion

A recent study discovered AI bots are better than people at answering online human tests for logins. In response, we must develop new ways of detecting bots.

The paper says, “If left unchecked, bots can perform these nefarious actions at scale.” For example, a hacker might open numerous online accounts with an AI program.

You can read the full study for additional information. Moreover, check out the latest digital tips and trends at Inquirer Tech.

