
Stanford, Meta help make AI move like humans

08:48 AM December 12, 2023

Stanford and Meta’s Facebook AI Research (FAIR) lab have created a groundbreaking AI that can generate natural, synchronized motions between virtual humans and objects based on text commands. Describe what you want the system, called CHOIS, to do, and it will animate the action. As a result, it could facilitate the creation of human-like artificial intelligence.

Making AI do what you want is one of the most difficult hurdles in creating advanced, human-like systems. We have largely figured it out for text-based output from ChatGPT and similar tools. However, physical movements like walking or picking up objects are surprisingly harder to generate. Fortunately, CHOIS might streamline that process, helping to produce more advanced AI.

This article will discuss how Stanford and Meta’s new AI program works. Later, I will elaborate on another system that facilitates robot training.


How does the Stanford AI work?

Stanford and FAIR labs collaborated to create CHOIS (Controllable Human-Object Interaction Synthesis). It uses the latest conditional diffusion model techniques to produce precise interactions like “lift the table above your head, walk, and put the table down.”


VentureBeat defines a conditional diffusion model as a generative AI model that can simulate detailed sequences of motions. For example, let’s say you asked CHOIS to move a lamp closer to a sofa.

In response, the AI will create a realistic animation of a human avatar picking up the lamp and placing it near the sofa. Another unique aspect of the Stanford AI is its use of sparse object waypoints and language descriptions to guide animations.

Waypoints mark where an object must move, and CHOIS fills in motion that is physically plausible and aligns with the text command. More importantly, CHOIS can correlate language with spatial and physical actions.
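
To make that pipeline concrete, here is a minimal, hypothetical Python sketch of how a conditional diffusion model refines random noise into a motion sequence guided by a text prompt and sparse waypoints. Every name in it (encode_text, denoise_step) is invented for illustration; CHOIS’s real denoiser is a trained neural network, not the hand-tuned update shown here.

```python
# Hypothetical sketch, not the actual CHOIS code: a toy conditional
# diffusion sampler that refines noise into motion guided by waypoints.
import numpy as np

rng = np.random.default_rng(0)

T_FRAMES = 120   # animation length in frames
DIM = 6          # toy state per frame: human xyz + object xyz
N_STEPS = 50     # reverse-diffusion steps

def encode_text(prompt: str) -> np.ndarray:
    """Stand-in for a real language encoder (e.g., CLIP). It hashes the
    prompt into a fixed vector so the example stays self-contained."""
    local = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return local.standard_normal(16)

def denoise_step(motion, t, text_emb, waypoints):
    """Toy denoiser: nudges the constrained entries of the noisy motion
    toward the waypoints. A trained model would instead predict the noise
    with a network conditioned on t, text_emb, and the waypoints."""
    target = np.zeros_like(motion)
    mask = np.zeros_like(motion)
    for frame, obj_xyz in waypoints:        # sparse object waypoints
        target[frame, 3:] = obj_xyz         # constrain object channels only
        mask[frame, 3:] = 1.0
    pull = 0.2 * mask * (target - motion)   # drift toward the constraints
    noise = rng.standard_normal(motion.shape) * 0.05 * (t / N_STEPS)
    return motion + pull + noise            # noise anneals as t shrinks

def sample_motion(prompt, waypoints):
    text_emb = encode_text(prompt)
    motion = rng.standard_normal((T_FRAMES, DIM))  # start from pure noise
    for t in reversed(range(N_STEPS)):             # iterative refinement
        motion = denoise_step(motion, t, text_emb, waypoints)
    return motion

# "Move the lamp closer to the sofa": the object should end near (2, 0, 1).
motion = sample_motion(
    "move the lamp closer to the sofa",
    waypoints=[(0, np.array([0.0, 0.0, 0.0])),
               (T_FRAMES - 1, np.array([2.0, 0.0, 1.0]))],
)
print(motion.shape, motion[-1, 3:])  # final object position near the waypoint
```

The key idea the sketch preserves is iterative refinement: the sampler starts from noise and repeatedly denoises it while the waypoints and the text embedding steer the result.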

Most may not immediately grasp the Stanford AI’s significance, so let’s discuss how conventional training works. Let’s say we’re programming a household helper robot similar to Tesla’s Optimus.

It all starts with simulating scenarios with digital objects and people. After all, you need to “translate” the robot AI’s environment into something it can understand.



Making those simulations can take months because you must program every action the bot needs to perform. For example, you must specify how it should hold a ball or a pair of chopsticks, or it may break household appliances.

All that simulation work delays the artificial intelligence you originally intended to build. Thanks to CHOIS, animators could simply tell their AI what to do.

Training could also become more dynamic and adaptable because you could watch your AI perform tasks in real time and tell it to do things differently when it fails.

Another recent robot training innovation


Stanford and Meta aren’t the only ones creating robot training systems. For example, MIT created one that helps robots recognize real-life objects. It all started when graduate student Andi Peng tried to make her robot pick up her mug.

It could pick up white mugs without fail. However, it struggled to recognize mugs of other colors, including Peng’s own, which featured the “Tim the Beaver” mascot.

She said most robot engineers would go back to the drawing board without understanding why their machine failed. “Right now, the way we train these robots, when they fail, we don’t really know why,” Peng stated. 

“So you would just throw up your hands and say, ‘OK, I guess we have to start over.’ A critical component that is missing from this system is enabling the robot to demonstrate why it is failing so the user can give it feedback,” she added. 


Consequently, Peng and her colleagues created a framework that lets humans teach a robot quickly with minimal effort. It uses an algorithm that describes what must change for the robot to perform a task successfully. 

To use our earlier example, the robot only recognizes white mugs. The algorithm identifies the factors behind the failure, such as color, and shows them to the user so they can refine the robot. It is similar to how humans recognize dogs regardless of breed.
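
As an illustration of that idea, here is a small, hypothetical Python sketch of a counterfactual search: vary features of the failed input until the task succeeds, then report the smallest change. The feature names and the toy perception function are invented for this example; MIT’s actual framework operates on learned visual features rather than hand-coded ones.

```python
# Hypothetical sketch, not MIT's actual code: a brute-force counterfactual
# search over invented features of a failed grasp.
from itertools import combinations

def robot_recognizes(mug: dict) -> bool:
    """Toy stand-in for the robot's perception model, which was only
    trained on plain white mugs."""
    return mug["color"] == "white" and mug["pattern"] == "plain"

def counterfactual(failed: dict, working: dict):
    """Return the smallest set of features of the failed input that,
    if changed to the working input's values, flips the outcome."""
    features = list(failed)
    for size in range(1, len(features) + 1):
        for subset in combinations(features, size):
            candidate = dict(failed, **{f: working[f] for f in subset})
            if robot_recognizes(candidate):
                return subset  # minimal change that makes the task succeed
    return None

pengs_mug = {"color": "maroon", "pattern": "Tim the Beaver", "shape": "mug"}
white_mug = {"color": "white", "pattern": "plain", "shape": "mug"}

print(counterfactual(pengs_mug, white_mug))  # -> ('color', 'pattern')
```

The search reports color and pattern, not shape, as the blocking factors, so the user knows to add colored and patterned mugs to the training data instead of starting over.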

Like the MIT training framework, our brains use mental structures called schemas. Our minds recognize the traits that define a dog instead of memorizing every breed.

Conclusion

Stanford and Meta recently created an AI system that makes it easier to train robots. It simulates realistic interactions between virtual humans and objects, teaching an AI to move the way its designers intend.

That is why the research team believes it could help create advanced AI systems that simulate continuous human behaviors in 3D environments. As a result, we are a step closer to creating humanoid helper robots!


Learn more about the CHOIS AI system on its arXiv webpage. Moreover, check out the latest digital tips and trends at Inquirer Tech.


