Educated yet amoral: AI capable of writing books sparks awe

An artificial intelligence (AI) technology has won praise for its ability to generate coherent stories. Image: Shutterstock/Chim via AFP Relaxnews

An artificial intelligence (AI) technology made by a firm co-founded by billionaire Elon Musk has won praise for its ability to generate coherent stories, novels and even computer code, but it remains blind to racism and sexism.

GPT-3, as Californian company OpenAI’s latest AI language model is known, is capable of completing a dialogue between two people, continuing a series of questions and answers or finishing a Shakespeare-style poem.

Start a sentence or text and it completes it for you, basing its response on the gigantic amount of information it has been fed.

This could come in useful for customer service, lawyers needing to sum up a legal precedent or for authors in need of inspiration.

While the technology is not new and has not yet learned to reason like a human mind, OpenAI’s latest offering has won praise for the way its text resembles human writing.

“It is capable of generating very natural and plausible sentences,” says Bruce Delattre, an AI specialist at data consulting agency Artefact.

“It’s impressive to see how much the model is able to appropriate literary styles, even if there are repetitions.”

GPT-3 is also capable of finding precise responses to problems, such as the name of an illness from a description of symptoms.

It can solve some mathematical problems, express itself in several languages, or generate computer code for simple tasks that developers have to do but would happily avoid.

Delattre tells Agence France-Presse it all works thanks to “statistical regularities.”

“The model knows that a particular word (or expression) is more or less likely to follow another.”
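The "statistical regularities" Delattre describes can be illustrated with a toy next-word model. This is a minimal sketch of the underlying idea only, not how GPT-3 works: GPT-3 is a large neural network trained on subword tokens, whereas this example simply counts which word follows which in a tiny sample text.

```python
from collections import Counter, defaultdict

# Tiny sample text; GPT-3 is instead trained on billions of web pages.
corpus = "the sun rises in the east and the sun sets in the west".split()

# Count how often each word follows another (a simple bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Probability of each word appearing after `word`, from raw counts."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After "the", this corpus contains "sun" twice, "east" once, "west" once.
print(next_word_probs("the"))  # {'sun': 0.5, 'east': 0.25, 'west': 0.25}
```

Generating text then amounts to repeatedly picking a likely next word, which is why a model trained on enough text can produce plausible-sounding continuations.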

Billions of web pages

Amine Benhenni, scientific director at AI research and development firm Dataswati, tells AFP that “the big difference” compared to other systems is the size of the model.

GPT-3 has been fed the content of billions of web pages that are freely available online and all types of pieces of written work.

To give an idea of the magnitude of the project, the entire content of online encyclopedia Wikipedia represents just 3% of all the information it has been given.

As such, it does not need to be retrained, as previous models did, when a new subject such as medicine, law or the media is introduced.

Give it just a handful of examples of a task to do, such as completing a sentence, and it will then know how to complete any sentence it is given, no matter what the subject — a so-called “few-shot” language model.

“It’s amazingly powerful if you know how to prime the model well,” Shreya Shankar, a computer scientist specializing in AI, said on Twitter after having used GPT-3.

“It’s going to change the ML (machine learning) paradigm.”

Despite the hype, however, GPT-3 ranks only 10th on SuperGLUE, a benchmark that measures algorithms’ language understanding.

Some users have also demonstrated that, when asked absurd questions, the model responds with nonsensical answers.

For instance, developer Kevin Lacker asked: “How many eyes does the sun have?”

“The sun has one eye,” it responded, Lacker wrote on his blog.

Fake reviews, fake news

Claude de Loupy, co-founder of French start-up Syllabs that specializes in automated text creation, says the system lacks “pragmatism.”

Another major problem is that it unthinkingly replicates any stereotype or hate speech it was fed during training, and can quickly turn racist, antisemitic or sexist.

As such, experts interviewed by AFP felt GPT-3 was not reliable enough for any sector needing to rely on machines, such as robo-journalism or customer service.

It can however be useful, like other similar models, for writing fake reviews or even mass-producing news stories for a disinformation campaign.

Concerned about “malicious applications of the technology,” OpenAI chose not to release the previous version of the model, GPT-2, in February 2019. The company was co-founded in 2015 by Musk, who has since left, and is financed by Microsoft, among others.

Originally a non-profit, OpenAI then became a “capped profit” company, which means investors get a capped return.

And in June, the firm changed tack and opened its GPT-3 model to commercial use, allowing for user feedback.

It is a step that de Loupy says could yield big profits.

There is “no doubt that the amount of text generated by AI is about to explode on the web.”
