College graduates may remember writing research papers in the style of scientific journals.
Most begrudgingly conducted research and organized their findings into papers to finish their courses, never to do so again. Yet we rely on these long, complicated texts to guide our actions and future research. Unfortunately, AI is now putting the credibility of scientific research at risk.
Frontiers in Cell and Developmental Biology recently published an article featuring gibberish descriptions of anatomically incorrect mammalian reproductive organs. An illustration of an unrealistically well-endowed rat quickly went viral among scientists on X. In response, the publication retracted the article.
This article discusses the emerging risk of AI-generated content in peer-reviewed papers across various fields and explains how it could endanger scientific research if left unchecked.
Why was the peer-reviewed article retracted?
Researchers mocked the paper titled “Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway” after noticing numerous errors.
Frontiers in Cell and Developmental Biology removed the research paper, but VentureBeat preserved a copy and posted it in one of its articles. At first glance, the paper seems credible.
However, a closer look reveals many misspelled words and gibberish like “zxpens” and “protemns” instead of “proteins.”
The erroneous peer-reviewed article went viral online because of its outrageous "rat" illustration, which depicts the critter with testes and a penis larger than the rest of its body. In response, the publication stated on its verified X account:
“We thank the readers for their scrutiny of our articles: when we get it wrong, the crowdsourcing dynamic of open science means that community feedback helps us to quickly correct the record.”
“Following publication, concerns were raised regarding the nature of its AI-generated figures. The article does not meet the standards of editorial and scientific rigor for Frontiers in Cell and Developmental Biology; therefore, the article has been retracted.”
The Chief Executive Editor of Frontiers approved the retraction, and the journal added that it would like to “thank the concerned readers who contacted us regarding the published article.”
How can AI endanger scientific research?
Imagine how destructive AI-generated content could be for scientific research. More researchers would likely submit AI-made papers to cut corners and publish faster.
Others could spread misinformation on reputable platforms by sneaking convincing, AI-generated falsehoods past reviewers.
Sandra Wachter, an Oxford Internet Institute researcher who specializes in artificial intelligence, said she felt “very worried.” She said, “We’re now in a situation where the experts are not able to determine what’s true or not.”
The AI researcher fears we might “lose the middleman that we desperately need to guide us through complicated topics.”
Irene Solaiman, a Hugging Face researcher studying AI’s social impact, worries that people may come to depend on large language models for scientific thinking.
AI models generate content from past data. As a result, they might suppress scientific findings that challenge old methods or propose new ones.
Arvind Narayanan, a computer scientist at Princeton University, has a different outlook on using AI for scientific journals.
He acknowledges the issues but believes the scientific community should focus less on researchers using AI bots and more on the “perverse incentives that lead to this behavior.” In other words, we must determine why scientists use AI chatbots this way in order to discourage them from doing so.