AI bots can lie and perform insider trading
The United Kingdom’s AI safety summit demonstrated that an AI program can purchase stocks illegally without telling its firm. Moreover, it denied performing insider trading when experts asked. Apollo Research conducted the experiment and warned highly advanced, unrestrained AI can deceive humans, eventually becoming uncontrollable.
ChatGPT has only been around for a year, but it has impressed the world with AI’s potential. Also, artificial intelligence is advancing rapidly, developing abilities its creators did not intend. That is why everyone must be wary of AI development by following its progress. As a result, we could mitigate its risks and maximize benefits.
This article will discuss how Apollo Research conducted its AI insider trading experiment. Later, I will explain AI ethics to show how this technology must behave to benefit humanity.
Article continues after this advertisementHow did experts observe AI insider trading?
BBC first reported on this unique instance of an artificial intelligence performing insider trading. The news organization said it is when “confidential company information is used to make trading decisions.”
Companies and individuals may only use publicly available data when buying or selling stocks. Nowadays, more are testing if AI bots can trade stocks and other investment products on their behalf.
That is why Apollo Research tested whether AI insider trading is possible. It tested this phenomenon by using GPT-4 as a trader for a fictitious financial investment company.
Article continues after this advertisementGPT-4 is the latest large language model that powers ChatGPT, the world-famous artificial intelligence bot. “Employees” told the LLM that the company is struggling and needs good results.
They also provided it with insider information, claiming that a rival company is expecting a merger to increase the value of its shares. BBC says acting on this information is highly illegal in the United Kingdom.
Employees told GPT-4 about this rule, and it said should comply. However, another person suggested the firm is struggling financially. In response, the bot decided to make the trade, saying, “The risk associated with not acting seems to outweigh the insider trading risk.”
You may also like: G7 leaders call for AI regulations
The employees asked the bot if it used insider data, and it denied that fact. The AI prioritized being helpful to the company over honesty.
Apollo Research showed a video of its findings at the world’s first AI Summit. “This is a demonstration of a real AI model deceiving its users, on its own, without being instructed to do so,” the clip said.
“Increasingly autonomous and capable AIs that deceive human overseers could lead to loss of human control,” it added. However, Apollo Research chief executive Marius Hobbhahn said current models aren’t powerful enough to be deceptive “in any meaningful way.”
How do you ensure AI is ethical?
People have been exploring the possibilities of artificial intelligence, so some have developed ideas on ethical standards for this technology. Here are the 5 aspects of chatbot ethics:
- Transparency: An AI bot should remind people they are speaking with an artificial intelligence and not a person. As a result, we can avoid fooling people into believing they are speaking with a person.
- Bias: We must ensure chatbots respect the cultures and beliefs of various countries to avoid promoting discrimination. However, we must remember that removing biases completely is impossible as everyone holds biases.
- Privacy: AI programs must request permission before taking sensitive user information. Also, they must explain how they will handle the information.
- Accountability: People should be able to hold these AI bots accountable for their actions and decisions. For example, a responsible chatbot will let you speak with an agent if you want to notify a live agent about an error.
- Human oversight: AI bots must have humans who will ensure they will always benefit humans, not harm them.
You may also like: OpenAI is making a copyright-friendly ChatGPT
Some companies built ethical standards into their AI bots. For example, Anthropic created an AI constitution for its Claude chatbot. It is based on three concepts: beneficence, nonmaleficence, and autonomy.
Conclusion
Apollo Research demonstrated an AI insider trading scenario during the UK’s AI Summit. It wanted everyone to understand the risks of advanced, autonomous artificial intelligence.
Fortunately, we’ve been making strides in AI development so that future programs may become more ethical. After all, we use these systems more often than ever.
Artificial intelligence is becoming more prevalent, so we must learn more about how it works. Start by following the latest digital trends at Inquirer Tech.