AI photo editor turns Asian MIT student white to make her look “more professional”
An AI photo editor shocked an Asian MIT student by turning her photo into that of a white woman. Rona Wang has been experimenting with AI prompts and image generators, but she never expected Playground AI to make her look Caucasian. Still, she shrugged off the software’s error, saying the AI was not inherently racist.
ChatGPT and numerous other artificial intelligence tools are available worldwide for free, so they should serve users properly by respecting their cultures and beliefs. Understanding Rona Wang’s unusual experience with an AI photo editor will help us see how this technology could impact our daily lives.
This article will elaborate on this eye-opening issue with an AI photo editor. Later, I will discuss the other challenges AI tools must overcome to serve global users better.
Why did the AI make that error?
was trying to get a linkedin profile photo with AI editing & this is what it gave me 🤨 pic.twitter.com/AZgWbhTs8Q
— Rona Wang (@ronawang) July 14, 2023
Business Insider confirmed Rona Wang to be a 24-year-old Asian American student at the Massachusetts Institute of Technology. She is completing her graduate program in math and computer science.
On July 14, 2023, she posted images on X with the caption, “Was trying to get a LinkedIn profile photo with AI editing & this is what it gave me.” The first picture shows Wang in a red MIT sweatshirt.
She had uploaded that photo to Playground AI with the prompt: “Give the girl from the original photo a professional LinkedIn profile photo.” The second image shows that the AI program changed her appearance to look more Caucasian.
It gave her dimples, blue eyes, thinner lips, and a lighter complexion. “My initial reaction upon seeing the result was amusement,” Wang told Insider.
Still, she expressed relief that her post sparked discussions about bias in machine learning programs. Wang said, “However, I’m glad to see that this has catalyzed a larger conversation around AI bias and who is or isn’t included in this new wave of technology.”
The tech student said, “Racial bias is a recurring issue in AI tools.” Such errors have discouraged her from using AI programs for now.
“I haven’t gotten usable results from AI photo generators or editors yet,” Wang stated. “I’ll have to go without a new LinkedIn profile photo for now!”
However, she told The Boston Globe that she worries about such bias in higher-stakes situations. For example, what if a company uses an AI tool to select the most “professional” candidates and it favors white-looking applicants?
“I definitely think it’s a problem. I hope people who are making software are aware of these biases and thinking about ways to mitigate them.”
What is the problem with AI photo editors and other tools?
Rona Wang was correct in saying AI programs are prone to racial bias and other kinds of discrimination. She was also right not to blame the tools themselves.
Contrary to popular belief, AI tools don’t think like humans yet. They don’t have specific attitudes toward or against people; they behave the way their developers built and trained them to behave.
Understanding this issue requires basic knowledge of how modern AI works. ChatGPT and other generative artificial intelligence tools rely on large language models (LLMs); image generators pair similar text-understanding components with image models.
LLMs are trained on billions of words from different languages. Each word or phrase is represented as an embedding, a long list of numbers that places it in a high-dimensional space, and the model follows algorithms over these embeddings to determine the relationships among words.
Algorithms are rules computers follow when executing commands. Meanwhile, embeddings measure the “relatedness of text strings,” depending on the use case (see the sketch after this list):
- Search: Embeddings rank queries by relevance.
- Clustering: Embeddings group text strings by similarity.
- Classification: These embeddings classify text strings by their most similar label.
- Recommendations: They recommend related text strings.
- Anomaly detection: Embeddings identify words with minimal relatedness.
- Diversity measurement: Embeddings analyze how similarities spread among multiple words.
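To make “relatedness” concrete, here is a minimal Python sketch of cosine similarity, a standard measure of how related two embeddings are. The three-dimensional vectors and the words attached to them are invented for illustration; real embeddings have hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Return the cosine of the angle between two vectors: close to 1.0
    means highly related, close to 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for three words (made up for this example).
embeddings = {
    "doctor": [0.9, 0.1, 0.3],
    "nurse":  [0.8, 0.2, 0.4],
    "banana": [0.1, 0.9, 0.1],
}

print(cosine_similarity(embeddings["doctor"], embeddings["nurse"]))   # ~0.98, related
print(cosine_similarity(embeddings["doctor"], embeddings["banana"]))  # ~0.24, unrelated
```

The closer the score is to 1.0, the more related the model considers the two strings, which is what powers the search, clustering, and recommendation use cases above.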
The problem lies in how developers train their AI models. If an AI photo editor is trained mostly on images of Caucasian people, it will be more likely to generate white-looking results.
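As a loose analogy (not a description of how Playground AI actually works), the toy sketch below shows how a model that mirrors a skewed training distribution reproduces that skew in its outputs. The 90/10 split is a made-up number for illustration only.

```python
import random

random.seed(42)

# Hypothetical training set: 90 images labeled "white", 10 labeled "asian".
training_labels = ["white"] * 90 + ["asian"] * 10

# A stand-in "generator" that simply samples from the training distribution.
outputs = [random.choice(training_labels) for _ in range(1000)]

# Roughly 90% of generated faces reflect the overrepresented group.
print(outputs.count("white") / len(outputs))
```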
Conclusion
An AI photo editor mistakenly turned an Asian student into a white woman to make her look “more professional.” Fortunately, Rona Wang did not hold it against the AI program.
Instead, she was glad her experience made more people aware of the biases affecting the AI tools we use daily. Playground AI founder Suhail Doshi also responded to her post.
He said, “The models aren’t instructable like that, so it’ll pick any generic thing based on the prompt. Fwiw (for what it’s worth), we’re quite displeased with this and hope to solve it.” Check out more digital tips and trends at Inquirer Tech.