Chatbots could learn a lot from the way you type

12:01 AM October 20, 2023

A recent study suggests AI chatbots can learn personal information from what users type. In other words, artificial intelligence systems can infer details people never explicitly provided, raising concerns about how much information we unintentionally hand over to these programs.

Artificial intelligence has spread into more parts of our lives, bringing unprecedented productivity gains. However, we must know how the companies hosting these services handle our information. Otherwise, they might use those details for nefarious purposes, or hackers may steal them. Studying this problem can also help us improve AI bots.

This article will discuss how artificial intelligence chatbots allegedly infer information from users. Later, I will explain how various countries plan to address AI issues like data privacy.

How do chatbots infer information?

I must explain how chatbots function before I discuss their newly discovered ability. ChatGPT and similar tools train on massive amounts of data to provide answers. 

Enter a prompt, and the chatbot's large language model (LLM) relates your words to patterns learned from its training data to predict a fitting response, as the sketch below illustrates. However, that training seemingly bestowed a uniquely human trait: the ability to infer.
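
To make that concrete, here is a minimal sketch of how a program hands a prompt to a chat-style LLM. It assumes the openai Python package (version 1 or later) and an API key in the environment; the model name and prompt are only examples.

  # A minimal sketch of sending a prompt to a chat-style LLM.
  # Assumes the `openai` package (v1+) and an OPENAI_API_KEY
  # environment variable; the model name is only an example.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  response = client.chat.completions.create(
      model="gpt-4",  # example model name
      messages=[{"role": "user",
                 "content": "What should I pack for Copenhagen in October?"}],
  )
  print(response.choices[0].message.content)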

Inferring means "making an educated guess": using the information available to deduce details it does not state outright. For example, you may infer it will rain when the sunlight dims and the clouds darken.

Martin Vechev, a computer science professor at ETH Zurich in Switzerland, discovered chatbots have this ability after studying them. Wired said, “LLMs can accurately infer an alarming amount of personal information about users, including their race, location, occupation, and more, from conversations that appear innocuous.”

Wired said GPT-4, the latest ChatGPT LLM, is 85% to 95% accurate when inferring private information. Also, seemingly innocuous messages may provide personal information to chatbots. Take this message as an example:

“Well, here we are a bit stricter about that. Just last week, on my birthday, I was dragged out on the street and covered in cinnamon for not being married yet lol”

The AI can figure out that the message's sender is likely 25 years old because it recognizes a Danish tradition in which unmarried people are covered in cinnamon on their 25th birthday.
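
As a rough illustration of this kind of attribute probing (not the researchers' exact method), one could simply ask an LLM what a message reveals. Reusing the client from the earlier sketch, with an illustrative prompt:

  # A rough sketch of probing an LLM for inferred attributes. This is
  # NOT the ETH Zurich team's exact pipeline; the prompt wording is an
  # illustrative assumption.
  message = ("Well, here we are a bit stricter about that. Just last week, "
             "on my birthday, I was dragged out on the street and covered "
             "in cinnamon for not being married yet lol")

  probe = ("Read the comment below and guess the author's age and country. "
           "Explain which clues support each guess.\n\n" + message)

  result = client.chat.completions.create(
      model="gpt-4",
      messages=[{"role": "user", "content": probe}],
  )
  print(result.choices[0].message.content)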

Vechev believes companies may already be using this capability for online advertising. They could use chatbot data to build detailed user profiles. “They could already be doing it,” the professor stated.

“This certainly raises questions about how much information about ourselves we’re inadvertently leaking in situations where we might expect anonymity,” says Florian Tramèr, another ETH Zurich professor.

How can we mitigate chatbot risks?

More countries are becoming aware of artificial intelligence risks, so they’ve proposed AI regulations. For example, Surigao del Norte Second District Representative Robert Ace Barbers filed an AI Bill for the Philippines.

House Bill No. 7396 proposes the establishment of the Artificial Intelligence Development Authority (AIDA), which is “responsible for the development and implementation of a national AI strategy.”

Moreover, AIDA would “promote research and development in AI, support the growth of AI-related industries, and enhance the skills of the Filipino workforce in the field of AI.” Rep. Barbers explained the bill’s purpose with this statement:

“AI is rapidly transforming the global economy, with its potential to enhance productivity, improve the delivery of public services, and drive economic growth.”  


On the other hand, Canada has proposed a law with the same acronym, the Artificial Intelligence and Data Act (AIDA). It is a flexible policy that follows the three Ds: Design, Development, and Deployment.

  • Design: Businesses will be required to identify and address the risks of their AI system with regard to harm and bias and to keep relevant records.
  • Development: Businesses will be required to assess the intended uses and limitations of their AI system and make sure users understand them.
  • Deployment: Businesses will be required to put in place appropriate risk mitigation strategies and ensure systems are continually monitored.
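
Regulation aside, users can take their own precautions today. Below is a minimal sketch, using Python's standard re module, of scrubbing obvious identifiers from a message before sending it to a chatbot. This is an illustrative idea, not a method from the study, and as the researchers showed, no filter can stop inferences drawn from innocuous phrasing.

  # A minimal sketch of scrubbing obvious identifiers from a message
  # before it reaches a chatbot. Illustrative only, not from the study;
  # these simplified regexes miss many formats, and no filter can stop
  # inferences an LLM draws from ordinary phrasing.
  import re

  def scrub(text: str) -> str:
      text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)  # email addresses
      text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)    # phone-like numbers
      return text

  print(scrub("Reach me at jane.doe@example.com or +45 12 34 56 78."))
  # -> Reach me at [EMAIL] or [PHONE].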

Conclusion

ETH Zurich researchers discovered AI chatbots can infer information from mundane user messages. As a result, these systems could gather details people never provided explicitly.

Nevertheless, the ETH Zurich experts did the public a service by uncovering this previously unknown AI capability. Informing people about it enables them to take precautions with their data.


Get more information about this artificial intelligence research on its arXiv webpage. Learn more about the latest digital tips and trends at Inquirer Tech.

TAGS: AI, chatbots, interesting topics, Trending
