
Google robot constitution keeps humans safe

08:00 AM January 09, 2024

Google DeepMind recently announced three new AI systems to improve how robots perform tasks. AutoRT is the most interesting because it provides a Robot Constitution, a set of rules that prevents machines from harming humans. The remaining two, SARA-RT and RT-Trajectory, facilitate robot training.

We’ve been integrating more robots into society, but many still fear a Terminator-esque future of killer machines. Ushering in a new technological age requires more people to accept bots, so we must assure the public that they will not harm humans. Fortunately, one of the largest tech companies has stepped forward with not one solution but three.

This article will discuss how AutoRT, SARA-RT, and RT-Trajectory function. Then, I will share more ways we are training artificial intelligence.


How does the Google robot constitution work?


AutoRT features safety guardrails, including a Robot Constitution: a set of safety-focused prompts that robots must follow when selecting tasks. DeepMind says it took inspiration from Isaac Asimov’s Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The Brookings Institution notes that Asimov later added a “Zeroth Law” superior to the others: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” Consequently, AutoRT has further safety rules.

For example, it prevents robots from attempting tasks involving humans, animals, electrical appliances, or sharp objects. Beyond these prompt-level rules, Google recommends programmed precautions for additional security.
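To make the idea concrete, here is a minimal sketch of constitution-style task filtering, assuming a simple keyword check. AutoRT actually encodes such rules as prompts for a large language model; the function names and rule list below are illustrative, not DeepMind’s code.

```python
# A minimal sketch of constitution-style task filtering, not DeepMind's
# actual AutoRT code. Rule texts and names here are illustrative.

# Categories the article says AutoRT avoids: humans, animals,
# electrical appliances, and sharp objects.
FORBIDDEN_KEYWORDS = [
    "human", "person", "animal", "pet",
    "outlet", "appliance", "knife", "scissors", "blade",
]

def violates_constitution(task_description: str) -> bool:
    """Return True if a proposed task mentions a forbidden category."""
    text = task_description.lower()
    return any(keyword in text for keyword in FORBIDDEN_KEYWORDS)

def select_safe_tasks(candidate_tasks: list[str]) -> list[str]:
    """Keep only the tasks that pass the constitution check."""
    return [t for t in candidate_tasks if not violates_constitution(t)]

tasks = [
    "pick up the sponge from the counter",
    "hand the knife to the person at the table",  # rejected: knife, person
    "place the apple in the bowl",
]
print(select_safe_tasks(tasks))
# ['pick up the sponge from the counter', 'place the apple in the bowl']
```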


For example, you could program robots to stop automatically if their joints experience excessive force, preventing accidents that could harm humans. A human supervisor may also keep active robots within their line of sight and hold a deactivation switch in case of catastrophic failures.
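A toy version of that force-threshold safeguard might look like the following, assuming a hypothetical `robot` object with joint-force readings, a kill switch, and an emergency-stop method; real controllers run such checks continuously and at high frequency.

```python
# A toy illustration of the force-threshold safeguard described above.
# The `robot` interface is a hypothetical stand-in for a real controller.
MAX_JOINT_FORCE_NEWTONS = 40.0  # illustrative threshold, not a real spec

def force_limit_exceeded(joint_forces: list[float]) -> bool:
    """Return True if any joint exceeds the force limit."""
    return any(force > MAX_JOINT_FORCE_NEWTONS for force in joint_forces)

def control_loop(robot) -> None:
    """Run the task, halting immediately on excessive force or kill switch."""
    while robot.is_active():
        if force_limit_exceeded(robot.read_joint_forces()) or robot.kill_switch_pressed():
            robot.emergency_stop()  # halt all motion immediately
            break
        robot.step()  # otherwise, continue the current task
```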




Besides the Robot Constitution, Google created SARA-RT and RT-Trajectory. The former keeps robots fast and efficient as they learn more tasks; otherwise, they could slow down or stall, unable to process the instructions they accumulate over the years. RT-Trajectory, on the other hand, enables bots to learn by watching humans.


For example, a robot can learn how to clean your room by watching you put trash into a bin. RT-Trajectory can also learn from uploaded videos.
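The gist of RT-Trajectory can be sketched as reducing a demonstration to the 2D path a gripper traces, then feeding that path to the robot’s policy as an extra conditioning input. Everything below (`Frame`, `trajectory_sketch`, the policy signature) is a hypothetical stand-in for the learned components the real system uses.

```python
# A rough sketch of the RT-Trajectory idea: turn a demonstration into a
# 2D trajectory, then condition the policy on it. All names here are
# illustrative placeholders, not the real system's components.
from dataclasses import dataclass

@dataclass
class Frame:
    gripper_xy: tuple[float, float]  # gripper position detected in the image

def trajectory_sketch(demo_frames: list[Frame]) -> list[tuple[float, float]]:
    """Reduce a demonstration video to the 2D path the gripper traced."""
    return [frame.gripper_xy for frame in demo_frames]

def act(policy, camera_image, sketch):
    """Condition the policy on the current image plus the sketch."""
    return policy(camera_image, sketch)  # hypothetical policy signature

frames = [Frame((0.2, 0.8)), Frame((0.4, 0.6)), Frame((0.6, 0.4))]
print(trajectory_sketch(frames))  # [(0.2, 0.8), (0.4, 0.6), (0.6, 0.4)]
```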

How do we train AI nowadays?


Google’s Robot Constitution governs the AI inside bots, but we use artificial intelligence for many other purposes, so we need effective ways of training it.

Fortunately, scientists have found a way to make AI generalize as well as, or better than, people. Before covering that method, let’s review how generative AI tools work. ChatGPT and similar tools rely on large language models trained on enormous amounts of text.

The model matches the words in a prompt against patterns it has learned and combines them into coherent, relevant answers. For example, given the word “jump,” it can produce phrases like “jump twice” or “jump around right twice.”

However, such a program may fail when it encounters an unknown word like “spin.” Retraining the entire LLM to handle that one word is a painstaking, expensive process.
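A toy interpreter makes the problem concrete: with a fixed vocabulary, known words compose fine, but an unseen word like “spin” simply has no entry. This is an illustrative sketch of the limitation, not how an actual LLM is implemented.

```python
# A toy compositional interpreter in the spirit of the "jump twice"
# example. A fixed vocabulary breaks on a new word like "spin".
PRIMITIVES = {"jump": "JUMP", "walk": "WALK", "run": "RUN"}

def interpret(command: str) -> list[str]:
    """Expand a command into action tokens using known words and rules."""
    words = command.split()
    action = PRIMITIVES.get(words[0])
    if action is None:
        raise KeyError(f"unknown word: {words[0]!r}")
    repeat = 2 if "twice" in words else 1  # the "twice" composition rule
    return [action] * repeat

print(interpret("jump twice"))  # ['JUMP', 'JUMP']
try:
    interpret("spin twice")     # "spin" is not in the vocabulary
except KeyError as err:
    print(err)                  # "unknown word: 'spin'"
```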

That is why scientists created a new way of training artificial intelligence: Meta-learning for Compositionality (MLC). It trains the AI to apply familiar rules to newly learned words.

It also gives the model feedback on whether it followed the rules correctly. The researchers used the following steps to test their training method (a schematic sketch follows the list):


  1. They asked human volunteers to interpret words using a set of composition rules.
  2. Then, they recorded the errors the humans made.
  3. Afterward, they trained the AI to learn as it completed tasks, rather than from a static data set, as the conventional method does.
  4. The experts compared AI and human performance, applying the recorded human error patterns to their artificial intelligence.
  5. The AI program produced answers almost identical to those from the human volunteers.
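Here is a schematic of that episode-based training loop, paraphrasing the study’s description: each episode introduces a “new” word through a few study examples, then asks the model to compose it with familiar rules. The `model` object and its methods are placeholders, not the researchers’ actual implementation.

```python
# A schematic of episode-based meta-learning in the spirit of MLC.
# `model.answer` and `model.update` are hypothetical placeholder APIs.
import random

def expand(word: str, command: str) -> list[str]:
    """Ground-truth expansion: apply the 'twice' rule to any primitive."""
    action = word.upper()
    return [action] * 2 if command.endswith("twice") else [action]

def make_episode():
    """Introduce a new word via study examples, then query a composition."""
    word = random.choice(["spin", "hop", "crawl"])  # a "new" word
    study = [(word, expand(word, word))]            # study examples
    query = f"{word} twice"                         # held-out composition
    return study, query, expand(word, query)

def train(model, episodes: int = 10_000) -> None:
    for _ in range(episodes):
        study, query, target = make_episode()
        prediction = model.answer(query, context=study)  # hypothetical API
        model.update(prediction, target)                 # corrective feedback
```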

“For 35 years, researchers in cognitive science, artificial intelligence, linguistics, and philosophy have been debating whether neural networks can achieve human-like systematic generalization,” New York University scientist Brenden Lake said. 

“We have shown, for the first time, that a generic neural network can mimic or exceed human systematic generalization in a head-to-head comparison.”

Conclusion

Google created a system that provides a Robot Constitution for bots worldwide. It is open source, so it could help more people use robots safely.

The search engine company also made two other systems, SARA-RT and RT-Trajectory, that facilitate robot training. Soon, teaching your robot assistant may be easier than training your dog!

Learn more about this open-source Robot Constitution on DeepMind’s webpage. Check out the latest digital tips and trends at Inquirer Tech.
