British officials say AI chatbots could carry cyber risks | Inquirer Technology

British officials say AI chatbots could carry cyber risks

/ 07:27 AM August 31, 2023

British officials are warning organizations about integrating AI-driven chatbots into their businesses

FILE PHOTO: AI (Artificial Intelligence) letters are placed on computer motherboard in this illustration taken, June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

British officials are warning organizations about integrating artificial intelligence-driven chatbots into their businesses, saying that research has increasingly shown that they can be tricked into performing harmful tasks.

In a pair of blog posts published Wednesday, Britain’s National Cyber Security Center (NCSC) said that experts had not yet got to grips with the potential security problems tied to algorithms that can generate human-sounding interactions – dubbed large language models, or LLMs.

Article continues after this advertisement

The AI-powered tools are seeing early use as chatbots that some envision displacing not just internet searches but also customer service work and sales calls.

FEATURED STORIES

READ: AI hallucination problem: Chatbots sometimes make things up

The NCSC said that could carry risks, particularly if such models were plugged into other elements organization’s business processes. Academics and researchers have repeatedly found ways to subvert chatbots by feeding them rogue commands or fooling them into circumventing their own built-in guardrails.

Article continues after this advertisement

For example, an AI-powered chatbot deployed by a bank might be tricked into making an unauthorized transaction if a hacker structured their query just right.

Article continues after this advertisement

“Organizations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta,” the NCSC said in one its blog posts, referring to experimental software releases.

Article continues after this advertisement

“They might not let that product be involved in making transactions on the customer’s behalf, and hopefully wouldn’t fully trust it. Similar caution should apply to LLMs.”

READ: ChatGPT turns to business as popularity wanes

Article continues after this advertisement

Authorities across the world are grappling with the rise of LLMs, such as OpenAI’s ChatGPT, which businesses are incorporating into a wide range of services, including sales and customer care. The security implications of AI are also still coming into focus, with authorities in the U.S. and Canada saying they have seen hackers embrace the technology.

A recent Reuters/Ipsos poll found many corporate employees were using tools like ChatGPT to help with basic tasks, such as drafting emails, summarizing documents and doing preliminary research.

Some 10% of those polled said their bosses explicitly banned external AI tools, while a quarter did not know if their company permitted use of the technology.

READ: Google, one of AI’s biggest backers, warns own staff about chatbots

Oseloka Obiora, chief technology officer at cybersecurity firm RiverSafe, said the race to integrate AI into business practices would have “disastrous consequences” if business leaders failed to introduce the necessary checks.

“Instead of jumping into bed with the latest AI trends, senior executives should think again,” he said. “Assess the benefits and risks as well as implementing the necessary cyber protection to ensure the organization is safe from harm.”

Your subscription could not be saved. Please try again.
Your subscription has been successful.

Subscribe to our daily newsletter

By providing an email address. I agree to the Terms of Use and acknowledge that I have read the Privacy Policy.

READ: Navigating the ethical minefield: AI and Chatbot responsibilities

TOPICS: AI, Business, chatbots, Cybersecurity
TAGS: AI, Business, chatbots, Cybersecurity

Your subscription could not be saved. Please try again.
Your subscription has been successful.

Subscribe to our newsletter!

By providing an email address. I agree to the Terms of Use and acknowledge that I have read the Privacy Policy.

© Copyright 1997-2024 INQUIRER.net | All Rights Reserved

This is an information message

We use cookies to enhance your experience. By continuing, you agree to our use of cookies. Learn more here.