
This week, research teams from Amazon FAR Labs, MIT, UC Berkeley, Stanford University, and Carnegie Mellon University announced "OmniRetarget", an interaction-preserving data generation engine. The technology enables the Unitree G1 humanoid robot to perform complex action sequences using proprioception alone, without relying on vision or LiDAR. In the demonstration video, the robot not only climbs onto a table using a chair as a stepping stone, but also performs a parkour-style roll to cushion the impact on landing, showcasing impressive motor skills.
The core innovation of OmniRetarget is its interaction-mesh technique, which models and preserves the spatial and contact relationships among the robot, the terrain, and the objects it manipulates. By strictly enforcing kinematic constraints while minimizing the deformation difference between the human's and the robot's interaction meshes, the system produces physically plausible trajectories. The team validated the approach on multiple datasets, generating more than 9 hours of high-quality trajectory data; in motion feasibility and contact stability, it substantially outperforms conventional retargeting methods.
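The interaction-mesh idea can be sketched in a few lines. A common way to encode the spatial relationships among bodies is through Laplacian coordinates: each vertex is described by its offset from the centroid of its neighbors, and retargeting then minimizes the change in these offsets between the human-scene mesh and the robot-scene mesh. The function names, mesh construction, and energy form below are illustrative assumptions, not the authors' actual implementation:

```python
# Minimal sketch of interaction-mesh preservation via Laplacian coordinates.
# All names and the mesh structure here are assumptions for illustration.

def laplacian_coords(points, neighbors):
    """For each vertex, the offset from the centroid of its neighbors.
    These offsets encode local spatial relationships between bodies."""
    coords = []
    for i, p in enumerate(points):
        nbrs = neighbors[i]
        cx = sum(points[j][0] for j in nbrs) / len(nbrs)
        cy = sum(points[j][1] for j in nbrs) / len(nbrs)
        cz = sum(points[j][2] for j in nbrs) / len(nbrs)
        coords.append((p[0] - cx, p[1] - cy, p[2] - cz))
    return coords

def deformation_energy(human_pts, robot_pts, neighbors):
    """Sum of squared differences between the Laplacian coordinates of the
    human-scene mesh and the retargeted robot-scene mesh. Driving this
    toward zero (subject to the robot's kinematic constraints) preserves
    the spatial and contact relationships the source motion expressed."""
    lh = laplacian_coords(human_pts, neighbors)
    lr = laplacian_coords(robot_pts, neighbors)
    return sum((a - b) ** 2
               for la, lb in zip(lh, lr)
               for a, b in zip(la, lb))
```

One useful property of this formulation is that the energy is invariant to rigid translation of the whole scene, so it penalizes only genuine distortions of the robot-object-terrain relationship.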
Building on this result, the robot can learn complex action sequences up to 30 seconds long with just five reward terms and simple environment randomization. Beyond the demonstrated table climbing and rolling, the system also supports eight different styles of object manipulation. The team notes that this purely proprioceptive control strategy is especially valuable in extreme environments where vision sensors fail, paving the way for practical deployment of humanoid robots in scenarios such as disaster rescue and exploration.
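A compact reward of this kind typically combines a handful of exponentiated tracking errors. The article does not list the five terms, so the ones below (body-position tracking, joint tracking, root velocity, action smoothness, contact matching) and their weights are a representative assumption, not OmniRetarget's actual reward:

```python
import math

# Illustrative five-term tracking reward; the specific terms and weights
# are assumptions, not the reward used by the OmniRetarget team.
def tracking_reward(err_body_pos, err_joint_pos, err_root_vel,
                    action_rate, contact_mismatch,
                    weights=(0.3, 0.3, 0.2, 0.1, 0.1)):
    """Each term maps a non-negative error into (0, 1] via exp(-error),
    so perfect tracking of every quantity gives the maximum reward."""
    terms = (
        math.exp(-err_body_pos),      # track reference body positions
        math.exp(-err_joint_pos),     # track reference joint angles
        math.exp(-err_root_vel),      # track root (pelvis) velocity
        math.exp(-action_rate),       # discourage jerky actions
        math.exp(-contact_mismatch),  # match reference contact states
    )
    return sum(w * t for w, t in zip(weights, terms))
```

The exponentiated form keeps every term bounded and smooth, which is one reason such rewards need so few hand-tuned parameters compared with long lists of shaped penalties.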