
Navigating the AI wave: Promises and pitfalls of ChatGPT in journalism

By: - Managing Editor / @DMaliwanagINQ
/ 06:07 PM August 03, 2023

Illustration depicting the concept of GPT-5, OpenAI's advanced language model.

Photo Credit: technopixel.org

Eight years ago, we were first introduced to artificial intelligence (AI) as a tool for journalism. The concept was met with skepticism, and for good reason. Could an algorithm truly replicate the nuanced skills of a seasoned journalist, honed over years of professional experience? The technology was young and inexperienced, and we were prideful.

Back then, the answer was a resounding no.

Then came November 2022, and with it, the unveiling of OpenAI’s ChatGPT. The media landscape trembled. Here was an AI tool that exhibited an unprecedented level of natural language understanding and generation.

But while its application garnered significant interest across newsrooms worldwide, media outlets approached the use of this AI tool with measured caution, fully aware of its implications for content creation and audience communication. Even we, the initial skeptics, began exploring it with thoughtful diligence.

INQUIRER.net assembled a working group composed of editors, social media specialists, site traffic and technology experts, marketing executives, and human resource officials. Our task: test the waters.

Our experience with ChatGPT was nothing short of revelatory.

The AI’s efficiency in writing formulaic stories, such as weather reports or earthquake alerts, was startling. Tasks that traditionally take journalists at least 15 minutes, such as producing a breaking news item, were accomplished in less than a minute.

And the capabilities didn’t stop there: ChatGPT also proved adept at simplifying technical papers, providing summaries, generating story ideas, formulating interview questions, suggesting different SEO-friendly headlines, and even translating major Filipino languages into English with impressive accuracy.
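
To give a concrete sense of the workflow, a reporter feeds the model verified facts and a tightly worded instruction, and a draft comes back in seconds. Below is a minimal sketch using OpenAI’s Python library; the model name, prompt wording, and sample earthquake facts are illustrative assumptions, not our actual newsroom setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Structured facts a reporter might hand to the model (illustrative, not real data)
facts = (
    "Magnitude 4.8 earthquake, depth 10 km, epicenter 12 km southeast of Davao City, "
    "felt at Intensity IV, no reported damage or casualties, per Phivolcs bulletin."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model name is an assumption for illustration
    messages=[
        {
            "role": "system",
            "content": (
                "You are a newsroom assistant. Using only the facts provided, "
                "write a short, factual breaking-news item, then list three "
                "SEO-friendly headline options. Do not invent quotes or details."
            ),
        },
        {"role": "user", "content": facts},
    ],
)

# Print the drafted story and headline options for the editor to review
print(response.choices[0].message.content)
```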

However, the rosy picture I paint comes with distinct caveats.

Despite its notable capabilities, ChatGPT is not omniscient. Its data cutoff in September 2021 means it occasionally dispenses stale information. It also has a tendency to generate fictional direct quotes or background details—an alarming trait that needs cautious handling.

In April, I used ChatGPT to write a story about an earthquake in Davao City. It quickly produced an article with a quote attributed to “Davao City Mayor” Sara Duterte, who was then, and remains, the Vice President of the Philippines, not the city’s mayor. Did she really issue a statement? She didn’t.

This instance underscores ChatGPT’s occasional shortcomings in providing up-to-date and accurate information, which, in turn, undermines its overall credibility as a research aide.

Legal issues, too, loom large. As ChatGPT draws from its training data, the possibility of inadvertently including copyrighted material presents a concerning legal grey area.

At the same time, from a journalistic perspective, AI lacks the ability to discern the freshest and best angle for a story. It cannot carry out essential tasks like interviewing sources and attending press conferences. And how can you expect it to rub elbows with reliable sources who can provide you with scoops?

Our jobs are safe.

But the moment AI starts dialing the Senate President for a reaction, it’s time for us journalists to put our pens down, close our laptops, and consider joining a circus as tightrope walkers.

The unique skills a journalist brings—emotional intelligence, intuition, empathy, and authentic human conversation—are irreplaceable. AI, for all its advancements, cannot mimic these.

Most importantly, the ethical implications surrounding AI use in journalism must be addressed. How do we handle situations where AI inadvertently creates defamatory content or disseminates misinformation? What guidelines can ensure responsible AI use without curtailing its potential?

In this exciting AI era, we must tread carefully. While the power of tools like ChatGPT is undeniable, they are not infallible. We must continually refine our understanding of AI’s capabilities and limitations, and remember that even in this technologically advanced world, the essence of journalism remains resolutely human.

(Edited by artificial intelligence and reviewed by a human editor.)

TAGS: Artificial Intelligence
