Meet Dall-E, the text-to-image generator

Image: avocado-inspired Dall-E creation, courtesy of OpenAI via ETX Studio

Dall-E, named in homage to Salvador Dalí and Pixar's WALL-E, is an artificial intelligence program from the American research lab OpenAI, co-founded by Elon Musk. The program has been trained to generate images from text captions.

This neural network is capable of translating absurd concepts into images and of conceiving objects that don't exist. It could prove a major boon for the future of design, potentially exceeding the limits of our imagination.

If your text caption mentions flowers, Dall-E will create pictures of flowers. These flowers may be inspired by reality, but they won't necessarily be copies. In fact, the program produces its own graphic worlds.

Dall-E makes use of the GPT-3 language model, whose full version has 175 billion parameters, making it the largest language model to date and easily outstripping the 17 billion parameters of the previous record-holder, Microsoft's Turing-NLG. Dall-E draws on a 12-billion-parameter version of GPT-3 to generate images from text descriptions.
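The research model described here was not publicly accessible at the time of writing, but the text-to-image workflow the article describes is easy to picture in code. Below is a minimal, illustrative sketch using the Python client for OpenAI's later public Images API; the model name, the prompt and the very availability of such an endpoint are assumptions that go beyond the original piece.

```python
# Illustrative sketch only: the research model described in this article was not
# publicly available. This uses OpenAI's later Images API via the official
# "openai" Python package (v1.x); the model name and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A plain text caption is the only input, as in the article's flower example.
response = client.images.generate(
    model="dall-e-3",  # assumed later public model, not the 2021 research version
    prompt="an armchair in the shape of an avocado",
    n=1,
    size="1024x1024",
)

# Each generated image comes back as a URL (or, optionally, base64 data).
print(response.data[0].url)
```

In a sketch like this, the prompt plays the role of the article's "text caption": change the words and the returned image changes with them.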

Dall-E could provoke a major shake-up for professionals in the creative industries. Its potential is so enormous that it is also raising questions. What becomes of illustrators, designers and other kinds of artists, for example? A content-generating model can offer multiple proposals in a flash, leaving humans unable to keep pace.

A tool to be honed

Conscious of the potential consequences of Dall-E, OpenAI does not envisage making the product commercially available for the time being. On the one hand, the program isn't yet ready. On the other, even though artificial intelligence offers a formidable lever for amplifying progress, it is still often viewed as dangerous. Trained on human-produced data from the internet, data that is itself potentially biased, artificial intelligence inevitably inherits those biases in its algorithms.

More precisely, each time the model is used, it draws on the information encoded in its billions of parameters. Forms of bias therefore emerge as a structural reflection of our society, such as discrimination based on skin color, gender, nationality or religion. These essential ethical issues will need to be addressed before unleashing this kind of artificial intelligence on the creative industries.

For now, OpenAI hopes to continue developing the program which, it admits, is not yet sufficiently honed. And it's true that while some of the images generated by the AI are seriously impressive, others still need work. Note that OpenAI is set to reveal more information about the program in an upcoming research paper. JB
