
Google AI teaches robots to imitate behaviors of dogs

Bipasha Mandal


In a new blog post, Google shared details of how its AI research is giving robots better agility. The company explained that its researchers have developed an AI system that learns from the motions of real animals and imitates them to achieve more agile movement. The researchers believe the approach could take legged robots to the next level and make them more useful in the real world.

The framework takes motion-capture data from an animal and uses reinforcement learning to train a control policy. By supplying different reference motions, the researchers taught a four-legged Unitree Laikago robot to perform a range of skills, from fast walking to hops and turns.
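
To give a sense of how imitating a reference motion can be framed as a reinforcement-learning problem, here is a minimal Python sketch. It is not Google's actual code: the function name, joint values, and reward weight are illustrative assumptions. It only shows the general idea of rewarding the policy for matching a time-aligned frame of the recorded dog motion.

import numpy as np

# Illustrative sketch: reward the robot for matching the reference pose
# taken from a motion-capture clip of a real dog.
def imitation_reward(robot_joint_angles, reference_joint_angles, weight=2.0):
    """Reward is higher the closer the robot's pose is to the reference pose."""
    pose_error = np.sum((robot_joint_angles - reference_joint_angles) ** 2)
    return np.exp(-weight * pose_error)

# Hypothetical joint angles for one simulation step; in practice this reward
# would be fed to a standard reinforcement-learning algorithm.
robot_pose = np.array([0.10, -0.42, 0.73, 0.05, -0.38, 0.70])
reference_pose = np.array([0.12, -0.40, 0.75, 0.04, -0.36, 0.72])
print(imitation_reward(robot_pose, reference_pose))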

The researchers recorded data from real dogs performing these activities and then trained a simulated robot, using roughly 200 million samples, to imitate the captured motions.

To make the policy work on real hardware in real time, the researchers used a technique that randomizes the dynamics of the simulation, with the sampled dynamics values mapped into a numerical representation by an encoder. When the policy was transferred to the real robot, the encoder was removed and a set of latent variables was adjusted directly, which allowed the robot to adapt and execute the skills.
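
As a rough illustration of that adaptation idea, the sketch below (again illustrative, with made-up parameter names and ranges rather than the paper's implementation) randomizes a few simulation parameters, encodes them into a small latent vector during training, and, at deployment, drops the encoder and instead tries candidate latent vectors directly.

import numpy as np

def sample_randomized_dynamics(rng):
    """Randomize a few hypothetical simulation parameters per training episode."""
    return {
        "mass_scale": rng.uniform(0.8, 1.2),
        "friction": rng.uniform(0.4, 1.0),
        "motor_strength": rng.uniform(0.9, 1.1),
    }

def encode_dynamics(dynamics):
    """Stand-in encoder: map the dynamics parameters to a small latent vector."""
    values = np.array(list(dynamics.values()))
    return np.tanh(values - 1.0)

rng = np.random.default_rng(0)

# Training in simulation: the policy is conditioned on the latent each episode.
dynamics = sample_randomized_dynamics(rng)
latent = encode_dynamics(dynamics)

# Deployment on hardware: no encoder; candidate latents are tried directly and
# the one that yields the best real-world behavior is kept.
candidate_latents = [latent + rng.normal(scale=0.1, size=latent.shape) for _ in range(5)]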

The real-world robot learned to imitate various motions from a dog and performed a hop-turn. “We show that by leveraging reference motion data, a single learning-based approach is able to automatically synthesize controllers for a diverse repertoire [of] behaviors for legged robots,” the coauthors wrote in the paper. “By incorporating sample efficient domain adaptation techniques into the training process, our system is able to learn adaptive policies in the simulation that can then be quickly adapted for real-world deployment.”

However, the control policy is not entirely foolproof; it could not learn more dynamic behaviors, such as large jumps and runs.
