Google AI tool for journalists under development
According to The New York Times, Google has been developing an AI assistant for journalists. Contrary to popular belief, the tech firm’s goal is not to remove humans from this essential service. Instead, Google wants to significantly enhance writer productivity and efficiency, letting them focus on more crucial tasks.
Many people have been warning about artificial intelligence taking people’s jobs, including Sam Altman, CEO of OpenAI, the company behind ChatGPT. Yet many people have been using AI tools to make daily tasks easier and faster. After all, technological progress stops for no one, so everyone must adapt.
This article will discuss the known details about the Google AI tool for journalists. Then, I will explain the potential benefits and risks of applying artificial intelligence to news reporting.
What do we know about the Google AI tool?
In its report, the New York Times cited anonymous sources saying Google demonstrated a journalism tool called Genesis to news executives. These included leaders from the NYT, Washington Post, and News Corp, the owner of the Wall Street Journal.
The report cited two execs who “said it seemed to take for granted the effort that went into producing accurate and artful news stories.” Another saw it as a personal helper or assistant. The Verge asked Google spokesperson Jenn Crider about the rumored tool, and here’s what she said:
“In partnership with news publishers, especially smaller publishers, we’re in the earliest stages of exploring ideas to potentially provide AI-enabled tools to help journalists with their work. For instance, AI-enabled tools could assist journalists with options for headlines or different writing styles.”
“Our goal is to give journalists the choice of using these emerging technologies in a way that enhances their work and productivity, just like we’re making assistive tools available for people in Gmail and Google Docs.”
“Quite simply, these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating, and fact-checking their articles,” Crider noted.
What are the potential risks of this Google AI tool?
Many people are scared of artificial intelligence. Some fear AI will take over numerous jobs, while others imagine sci-fi dystopias like “The Terminator.”
Journalists also worry about these tools, but the greater harm may fall on the public. A serious problem with AI programs is their tendency to cite wrong data or “hallucinate” sources.
ChatGPT and similar tools may seem smart, but they don’t “understand” your prompts the way a person does. Instead, they generate answers by predicting the words most likely to follow your prompt, based on patterns in their training data.
In other words, they share the information most likely to fit your request. The problem is that AI programs will confidently claim their answers are true, even when they are not.
Imagine if news platforms started publishing erroneous reports. Readers around the world would quickly notice the misinformation, and those outlets would eventually lose credibility.
Believe it or not, people have already run into this problem with AI tools. For example, Mashable reported on a lawyer who cited AI-fabricated cases and got in trouble with the court.
AI tools may soon lack quality training data as they become more complex. Epoch AI researcher Pablo Villalobos predicts programs will run out of quality reading material by 2027.
If that happens, progress on more advanced AI capabilities may stall within a couple of years. You may ask, “Isn’t the Google AI tool safe because it will always have data from the internet?”
Ilia Shumailov and his team published a paper discussing “model collapse,” a risk that grows as more people post AI-generated content online. As the Internet fills with AI media, AI programs will run out of original, human-made content for training.
Eventually, they will learn from AI-generated content instead. As a result, they would perform worse over time, producing more erroneous and repetitive responses.
What are the potential benefits?
Journalists can already see the effects of artificial intelligence coming, so some have adopted the technology. You might have been reading work by writers who use AI tools all along!
Most people spend much of their lives online, sharing information in digital spaces. Consequently, journalists constantly scan the web for potentially trending news.
Doing this manually takes time, but several AI programs automatically surface trends from various sources. As a result, writers don’t have to comb through pages one by one.
For example, Google News Initiative says Reuters uses News Tracer and Lynx Insight. The former quickly flags potential breaking news stories.
The latter scans trends and key facts in large datasets to provide additional research to journalists. GNI also said The Washington Post, Bloomberg, and The Associated Press have been deploying AI programs to produce news stories at scale automatically.
Google told CNN its latest project would not replace journalists nor their “essential role in reporting, creating, and fact-checking articles.” AI tools can still make mistakes, so humans must verify their results.
If released, the Google AI tool could assist journalists by handling repetitive tasks, letting these professionals focus on more important ones.
For example, I wrote another article about the Philippines’ Cybercrime Investigation and Coordination Center. I went to their office and gathered information firsthand to understand how the CICC protects the country from online criminals — legwork no AI tool could have done for me.
Many jobs still require humans, but workers must adapt to artificial intelligence to get them. Learn more about the latest digital tips and trends at Inquirer Tech.