Video: Brainy US humanoid robots walk like humans with smooth heel strikes

Figure 02 robots now exhibit human-like movements with heel strikes, toe-offs, and synchronized arm swings.

Figure uses its end-to-end neural network, trained with reinforcement learning (RL), for humanoid locomotion.

Figure/YouTube

US robotics firm Figure has made significant progress in developing a natural humanoid walking motion using reinforcement learning.

A new video released by the firm showcases its humanoid robots walking with a more fluid, natural motion, replacing the usual stiff, mechanical gait.

According to the firm, Figure 02 uses reinforcement learning (RL) in a physics simulator, simulating years of data in hours. Domain randomization enables seamless, zero-shot transfer from simulation to real-world walking.

“We have presented a natural walking controller learned purely in simulation using end-to-end reinforcement learning. This enables the fleet of Figure robots to quickly learn robust, proprioceptive locomotion strategies and enables rapid engineering iteration cycles,” said Figure in a statement.

In February, the firm introduced Helix, a Vision-Language-Action (VLA) model combining perception, language understanding, and control to tackle key robotics challenges.

Humanoids walk naturally

Figure relied on RL, an AI approach in which a controller learns through trial and error guided by a reward signal, to give its humanoid robot, Figure 02, a human-like gait.
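Figure has not released its training code, but the trial-and-error loop that RL relies on can be sketched with a toy example: a single-parameter "policy" is nudged toward actions that earn higher reward. Everything here (the task, numbers, and update rule) is illustrative, not Figure's actual method:

```python
import random

def rl_sketch(episodes=5000, lr=0.05, seed=0):
    """Toy RL loop: learn a scalar action whose reward peaks at 1.0.

    Stand-in for locomotion training: the controller tries an action,
    observes a reward, and shifts its parameter toward higher reward.
    """
    rng = random.Random(seed)
    theta = 0.0  # the "policy": a single learnable parameter
    for _ in range(episodes):
        noise = rng.gauss(0.0, 0.1)            # exploration
        action = theta + noise
        reward = -(action - 1.0) ** 2          # reward peaks at action == 1.0
        baseline = -(theta - 1.0) ** 2         # reward without exploration
        theta += lr * (reward - baseline) * noise  # REINFORCE-style update
    return theta
```

After training, `theta` lands close to the optimal action of 1.0; a real locomotion policy works the same way, only with millions of parameters and a physics simulator supplying the rewards.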

The company trained its RL controller in a high-fidelity, GPU-accelerated physics simulation, simulating years of data within hours. Thousands of virtual humanoids, each with different physical parameters, were run in parallel.

This simulation exposed the robots to a variety of real-world scenarios, including diverse terrains, actuator dynamics, and challenges like slips, trips, and shoves.
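The domain randomization described above, where each simulated humanoid gets its own friction, masses, motor strengths, and random shoves, can be sketched as below. The parameter names and ranges are hypothetical stand-ins, not Figure's actual values:

```python
import random

def sample_domains(n_envs, seed=0):
    """Sketch of domain randomization: each virtual humanoid is given its
    own randomized physics, so a single policy must cope with all of them.
    Parameter names and ranges are illustrative, not Figure's."""
    rng = random.Random(seed)
    domains = []
    for _ in range(n_envs):
        domains.append({
            "ground_friction": rng.uniform(0.4, 1.2),       # icy to grippy
            "body_mass_scale": rng.uniform(0.9, 1.1),       # +/-10% mass error
            "motor_strength_scale": rng.uniform(0.8, 1.2),  # actuator variation
            "push_force_newtons": rng.uniform(0.0, 50.0),   # random shoves
        })
    return domains
```

Because no single simulated body matches reality exactly, a policy that survives thousands of these randomized variants tends to treat the real robot as just one more variant, which is what makes the zero-shot transfer possible.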

Figure says this extensive training environment yielded a single neural network policy that manages the robots' movements.

A key advantage of Figure’s approach is its ability to transfer this trained policy directly from simulation to real-world robots without additional tuning, a process known as “zero-shot” transfer. This seamless transition ensures robust, human-like walking performance across various environments.
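As a rough illustration of what "zero-shot" transfer means in practice, the sketch below runs the exact same trained weights in "simulation" and on the "robot," with no retuning step in between. The tiny linear policy and its weights are invented for the example:

```python
def policy(observation, weights):
    """A tiny linear policy: outputs = weights @ observation (illustrative)."""
    return [sum(w * o for w, o in zip(row, observation)) for row in weights]

# Zero-shot transfer sketch: the weights learned in simulation are the
# ones executed on hardware -- there is no per-robot fine-tuning pass.
trained_weights = [[0.5, -0.2], [0.1, 0.3]]  # stand-in for learned values

sim_action = policy([1.0, 2.0], trained_weights)
robot_action = policy([1.0, 2.0], trained_weights)  # same network, same weights
```

The point of the sketch is only that the artifact deployed to hardware is byte-for-byte the policy that came out of the simulator; robustness to the sim-to-real gap comes from the domain randomization during training, not from any post-hoc calibration.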

According to Figure, leveraging RL-driven training has significantly reduced development cycles while enhancing the robot's adaptability and reliability.

AI robots advance

Figure 02 robots can now demonstrate human-like movements, including heel strikes, toe-offs, and synchronized arm swings. According to the company, these improvements came from its RL training process, which rewards robots for mimicking human walking trajectories while optimizing for velocity tracking, energy efficiency, and robustness.
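Figure has not published its reward function, but a multi-term reward of the kind described, combining imitation of human trajectories, velocity tracking, energy efficiency, and robustness, might be sketched like this, with invented weights:

```python
def walking_reward(pose_error, vel_error, energy, fell):
    """Hedged sketch of a multi-term locomotion reward. The terms mirror
    the article's description; the weights are illustrative, not Figure's."""
    reward = 0.0
    reward += 2.0 * (1.0 / (1.0 + pose_error))  # imitate human trajectories
    reward += 1.0 * (1.0 / (1.0 + vel_error))   # track commanded velocity
    reward -= 0.01 * energy                     # penalize wasted effort
    if fell:
        reward -= 10.0                          # robustness: falling is costly
    return reward
```

Each training step scores the simulated robot with a reward of this shape, so gaits that look human, hit the commanded speed, use little energy, and stay upright are the ones reinforced.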

In the video, Figure AI showcases ten Figure 02 robots operating on the same RL neural network without modifications. This uniformity highlights the scalability of its technology, offering the potential to deploy thousands of robots without manual adjustments.

Looking ahead, 2025 is projected as a pivotal year for Figure AI as it begins production, increases robot shipments, and advances in home robotics.

The firm is positioning itself as a strong competitor in the humanoid robotics sector, alongside Tesla's Optimus, Agility Robotics' Digit, and Chinese companies like UBTech Robotics and Unitree Robotics. Further expanding its capabilities, Figure AI recently introduced Helix, a Vision-Language-Action (VLA) model that enables robots to execute complex tasks using natural language instructions.

This AI advancement allows humanoid robots to interpret and respond to real-time commands, manage unexpected objects, and collaborate effectively. By integrating Helix with its reinforcement learning advancements, Figure AI aims to push the boundaries of robotics innovation.

ABOUT THE EDITOR

Jijo Malayil

Jijo is an automotive and business journalist based in India. Armed with a BA in History (Honors) from St. Stephen's College, Delhi University, and a PG diploma in Journalism from the Indian Institute of Mass Communication, Delhi, he has worked for news agencies, national newspapers, and automotive magazines. In his spare time, he likes to go off-roading, engage in political discourse, travel, and teach languages.