AI nonsense found in over 100 research papers
If you finished college, you probably remember writing research papers. You performed an experiment or project and then detailed your findings in a document with a hundred pages or more.
Most remember them as a hassle everyone must endure to graduate. However, they have larger real-world significance.
IT firm Nogor Solutions Limited said on its Medium page that “research papers can inform policy decisions, influence industry practices, and contribute to technological advancements.”
However, a recent study reveals that over 100 research papers contain telltale signs of ChatGPT authorship. Many reportedly contain gibberish despite undergoing peer review.
How AI nonsense spread into research papers
UK news outlet Daily Mail reported on findings from tech journalism site 404 Media. It found that 115 papers in Google Scholar, Google’s search tool for academic papers, contained the phrase, “As of my last knowledge update.”
At the time of writing, 186 results appear in searches. The tech news site reported that the dates in the papers with this phrase corresponded with ChatGPT knowledge updates.
In other words, the dates in the papers matched the points when the AI chatbot last updated its knowledge of topics such as:
- Spinal injuries
- Battery technologies
- Rural medicine
- Bacterial infections
- Cryptocurrency
- Children’s well-being
- Artificial intelligence
Some papers used the phrase to explain the problems with using ChatGPT for research. However, Daily Mail says many are barely intelligible, such as “Global Education Iducation and International Education Advocacy.”
Kolina Koltai, a member of the open-source research group Bellingcat, posted an example of AI nonsense in one academic paper.
Her example shows ChatGPT’s chirpy reply left verbatim in the paper: “Certainly, here is a possible introduction for your topic.”
Daily Mail reports that academic researchers use ChatGPT to write research papers because of immense pressure from their universities to publish.
Scientists are more likely to land new jobs and promotions by publishing frequently. Hence, the phrase “publish or perish” became a common shorthand for this pressure.
Conventional academic review takes months or years because other scientists will check papers and request multiple revisions.
On the other hand, “paper mills” accept nearly any submission as long as the author pays the publishing fee.
For instance, the International Journal of New Media Studies allegedly published two different papers containing the phrase “As of my last knowledge update.” Yet the journal claims to conduct peer review.
What are the risks of AI nonsense in papers?
Another Inquirer Tech article warns that ChatGPT lets researchers cut corners, churning out AI-made papers and publishing them faster.
Worse, some may sneak convincing, AI-generated falsehoods onto reputable platforms, spreading misinformation.
Sandra Wachter, an Oxford Internet Institute researcher who specializes in artificial intelligence, said she felt “very worried.” She said, “We’re now in a situation where the experts are not able to determine what’s true or not.”
The AI researcher fears we might “lose the middleman that we desperately need to guide us through complicated topics.”