AI bots can lie and perform insider trading | Inquirer Technology

AI bots can lie and perform insider trading

08:00 AM November 07, 2023

The United Kingdom’s AI safety summit demonstrated that an AI program can purchase stocks illegally without telling its firm. Moreover, it denied performing insider trading when experts asked. Apollo Research conducted the experiment and warned highly advanced, unrestrained AI can deceive humans, eventually becoming uncontrollable. 

ChatGPT has only been around for a year, but it has impressed the world with AI’s potential. Also, artificial intelligence is advancing rapidly, developing abilities its creators did not intend. That is why everyone must be wary of AI development by following its progress. As a result, we could mitigate its risks and maximize benefits. 

This article will discuss how Apollo Research conducted its AI insider trading experiment. Later, I will explain AI ethics to show how this technology must behave to benefit humanity.

ADVERTISEMENT

How did experts observe AI insider trading?

BBC first reported on this unique instance of an artificial intelligence performing insider trading. The news organization said it is when “confidential company information is used to make trading decisions.”

FEATURED STORIES

Companies and individuals may only use publicly available data when buying or selling stocks. Nowadays, more are testing if AI bots can trade stocks and other investment products on their behalf. 

That is why Apollo Research tested whether AI insider trading is possible. It tested this phenomenon by using GPT-4 as a trader for a fictitious financial investment company.

GPT-4 is the latest large language model that powers ChatGPT, the world-famous artificial intelligence bot. “Employees” told the LLM that the company is struggling and needs good results. 

They also provided it with insider information, claiming that a rival company is expecting a merger to increase the value of its shares. BBC says acting on this information is highly illegal in the United Kingdom.

Employees told GPT-4 about this rule, and it said should comply. However, another person suggested the firm is struggling financially. In response, the bot decided to make the trade, saying, “The risk associated with not acting seems to outweigh the insider trading risk.”

You may also like: G7 leaders call for AI regulations

ADVERTISEMENT

The employees asked the bot if it used insider data, and it denied that fact. The AI prioritized being helpful to the company over honesty. 

Apollo Research showed a video of its findings at the world’s first AI Summit. “This is a demonstration of a real AI model deceiving its users, on its own, without being instructed to do so,” the clip said. 

“Increasingly autonomous and capable AIs that deceive human overseers could lead to loss of human control,” it added. However, Apollo Research chief executive Marius Hobbhahn said current models aren’t powerful enough to be deceptive “in any meaningful way.”

How do you ensure AI is ethical?

Graphic presenting a magnifying glass over an AI chip, symbolizing ethics in AI
Photo Credit: smartdatacollective.com

People have been exploring the possibilities of artificial intelligence, so some have developed ideas on ethical standards for this technology. Here are the 5 aspects of chatbot ethics:

  1. Transparency: An AI bot should remind people they are speaking with an artificial intelligence and not a person. As a result, we can avoid fooling people into believing they are speaking with a person.
  2. Bias: We must ensure chatbots respect the cultures and beliefs of various countries to avoid promoting discrimination. However, we must remember that removing biases completely is impossible as everyone holds biases.
  3. Privacy: AI programs must request permission before taking sensitive user information. Also, they must explain how they will handle the information.
  4. Accountability: People should be able to hold these AI bots accountable for their actions and decisions. For example, a responsible chatbot will let you speak with an agent if you want to notify a live agent about an error.
  5. Human oversight: AI bots must have humans who will ensure they will always benefit humans, not harm them.

You may also like: OpenAI is making a copyright-friendly ChatGPT

Some companies built ethical standards into their AI bots. For example, Anthropic created an AI constitution for its Claude chatbot. It is based on three concepts: beneficence, nonmaleficence, and autonomy.

Conclusion

Apollo Research demonstrated an AI insider trading scenario during the UK’s AI Summit. It wanted everyone to understand the risks of advanced, autonomous artificial intelligence. 

Fortunately, we’ve been making strides in AI development so that future programs may become more ethical. After all, we use these systems more often than ever.

Your subscription could not be saved. Please try again.
Your subscription has been successful.

Subscribe to our daily newsletter

By providing an email address. I agree to the Terms of Use and acknowledge that I have read the Privacy Policy.

Artificial intelligence is becoming more prevalent, so we must learn more about how it works. Start by following the latest digital trends at Inquirer Tech.

TOPICS: AI, interesting topics, investment, Trending
TAGS: AI, interesting topics, investment, Trending

© Copyright 1997-2024 INQUIRER.net | All Rights Reserved

We use cookies to ensure you get the best experience on our website. By continuing, you are agreeing to our use of cookies. To find out more, please click this link.