Peer-reviewed article retracted for AI images

08:35 AM February 22, 2024

College graduates may remember writing thesis papers for scientific journals.

Most begrudgingly performed the research and organized their findings into papers to finish their courses, never to do so again. Yet we rely on these long, complicated texts to guide our actions and future research. Unfortunately, AI is putting the credibility of scientific research at risk.

Frontiers in Cell and Developmental Biology recently published an article featuring gibberish descriptions of anatomically incorrect mammalian reproductive organs. Moreover, an illustration featuring an unrealistically well-endowed rat went viral among scientists on X. In response, the publication retracted the article.


This article will discuss the emerging risks of AI in peer-reviewed publications across various fields and explain how AI-generated content could endanger scientific research if left unchecked.


Why was the peer-reviewed article retracted?


Researchers mocked the paper titled “Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway” after noticing numerous errors. 

Frontiers in Cell and Developmental Biology removed the research paper, but VentureBeat preserved a copy and posted it in one of its articles. At first glance, the content seems well written.

However, a closer look reveals many misspelled words and gibberish like “zxpens” and “protemns” instead of “proteins.” 

The erroneous peer-reviewed article went viral online because of its outrageous “rat” illustration, which shows the critter with testes and a penis exceeding its body size. In response, the publication stated on its verified X account: 

“We thank the readers for their scrutiny of our articles: when we get it wrong, the crowdsourcing dynamic of open science means that community feedback helps us to quickly correct the record.”

“Following publication, concerns were raised regarding the nature of its AI-generated figures. The article does not meet the standards of editorial and scientific rigor for Frontiers in Cell and Developmental Biology; therefore, the article has been retracted.”


The Chief Executive Editor of Frontiers approved the retraction. The publication also thanked “the concerned readers who contacted us regarding the published article.”

How can AI endanger scientific research?


Imagine how destructive AI-generated content could be for scientific research. More researchers would likely submit AI-made papers to cut corners and publish faster. 

Others may spread misinformation on reputable platforms by sneaking in convincing, AI-generated falsehoods.

Sandra Wachter, an Oxford Internet Institute researcher who specializes in artificial intelligence, said she felt “very worried.” She said, “We’re now in a situation where the experts are not able to determine what’s true or not.”

The AI researcher fears we might “lose the middleman that we desperately need to guide us through complicated topics.”

Irene Solaiman, a Hugging Face researcher who studies AI’s social impact, worries that people may come to depend on large language models for scientific thinking.

AI uses past information to create content. As a result, it might limit scientific findings that challenge old methods or propose new ones. 

READ: AI Papers Made With ChatGPT Fool Scientists

Arvind Narayanan, a computer scientist at Princeton University, has a different outlook on using AI for scientific journals.

He admits the issues but believes the scientific community should turn its attention to something other than researchers using AI bots.


Instead, we must resolve the “perverse incentives that lead to this behavior.” In other words, we must determine why scientists use AI chatbots in this manner so we can discourage them from doing so.

TOPICS: Artificial Intelligence, technology

