The work addresses a long-standing problem in robot control: accurate body models are needed but are often difficult to obtain for new platforms or after hardware changes. Earlier self-modeling approaches typically relied on high-precision sensors such as depth cameras and LiDAR, or produced low-quality models that failed to capture how separate links and joints form an articulated structure.
The new approach, called NeRFSim, represents the robot as a set of parts, each described by its own Neural Radiance Field (NeRF) network. The system segments the robot into rigid components, such as arm links or grippers, and learns how each part moves in response to joint commands, allowing it to infer the overall kinematic chain.
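The article does not give implementation details, but the idea of one radiance field per rigid part can be sketched in a few lines of PyTorch. Everything below is illustrative: the class names, the joint-angle-conditioned pose head, and the max/softmax compositing are assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class PartNeRF(nn.Module):
    """One small NeRF MLP per rigid part: maps a 3D point expressed in
    the part's canonical frame to a density and an RGB color."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),          # (density, r, g, b)
        )

    def forward(self, x):
        out = self.mlp(x)
        sigma = torch.relu(out[..., :1])   # non-negative density
        rgb = torch.sigmoid(out[..., 1:])  # colors in [0, 1]
        return sigma, rgb

class PartBasedRobotModel(nn.Module):
    """Composite model: world-space query points are mapped into each
    part's canonical frame by a joint-angle-dependent rigid transform,
    and the per-part fields are composited."""
    def __init__(self, num_parts, num_joints=7):
        super().__init__()
        self.parts = nn.ModuleList(PartNeRF() for _ in range(num_parts))
        # Hypothetical pose head: predicts a rigid transform per part
        # (3x3 rotation + translation) from the joint-angle vector. A
        # real implementation would constrain the rotation to SO(3),
        # e.g. via a 6D rotation parameterization.
        self.pose_head = nn.Linear(num_joints, num_parts * 12)

    def forward(self, points, joint_angles):
        T = self.pose_head(joint_angles).view(-1, 12)
        sigmas, rgbs = [], []
        for i, part in enumerate(self.parts):
            R, t = T[i, :9].view(3, 3), T[i, 9:]
            local = (points - t) @ R       # world -> part frame
            s, c = part(local)
            sigmas.append(s)
            rgbs.append(c)
        sigma = torch.stack(sigmas).max(dim=0).values
        # Blend colors by whichever part dominates at each point.
        w = torch.softmax(torch.stack(sigmas), dim=0)
        rgb = (w * torch.stack(rgbs)).sum(dim=0)
        return sigma, rgb
```

Factoring the robot into per-part fields is what lets the model tie geometry to joint commands: each part's network stays fixed while only its rigid transform changes with the configuration.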
To train the self-model, the robot moves its joints through a range of configurations while being observed by a camera from multiple viewpoints. The framework renders predicted images of the robot for each pose and compares them with the real camera images, adjusting the part-based NeRFs until the synthesized views match the observations.
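A plausible shape for that training loop, assuming a standard NeRF-style volume renderer, is sketched below; `render_fn` and the dataset format are placeholders, since the report does not specify them.

```python
import torch

def photometric_loss(model, render_fn, camera, joint_angles, observed_rgb):
    """Self-supervised objective: render the part-based model from the
    camera viewpoint at the commanded joint configuration, then penalize
    pixel-wise disagreement with the real image. `render_fn` stands in
    for a standard NeRF volume renderer (ray sampling + alpha
    compositing); its exact form here is an assumption."""
    predicted_rgb = render_fn(model, camera, joint_angles)
    return torch.mean((predicted_rgb - observed_rgb) ** 2)

def train(model, render_fn, dataset, epochs=10, lr=1e-4):
    """Iterate over logged (camera, joint command, image) triples
    gathered from multiple viewpoints; no human labels are required."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for camera, joint_angles, observed_rgb in dataset:
            loss = photometric_loss(model, render_fn, camera,
                                    joint_angles, observed_rgb)
            opt.zero_grad()
            loss.backward()
            opt.step()
```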
This procedure runs without human labeling, relying instead on self-supervision from the consistency between predicted and observed images. The method is designed to work with inexpensive RGB cameras, lowering the hardware requirements for self-modeling.
In experiments with a 7-degree-of-freedom robotic arm, NeRFSim produced dense 3D reconstructions of the robot that approached the quality of models built with depth-sensing hardware. The learned representation captured both the geometry of individual links and their relative motion, and it outperformed other image-only approaches that did not explicitly model parts.
The researchers then tested whether the self model could support downstream control tasks. Using only the learned morphology and kinematic information, the robot computed joint configurations to place its end effector at specified positions in space, addressing the inverse kinematics problem.
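One common way to solve inverse kinematics with a learned, differentiable model is gradient descent on the joint angles. The sketch below assumes a hypothetical `fk_fn` that maps joint angles to an end-effector position via the self-model; this is an illustration of the general technique, not necessarily how the authors implemented it.

```python
import torch

def solve_ik(fk_fn, q_init, target_pos, steps=200, lr=0.05):
    """Gradient-based inverse kinematics against a learned model.
    `fk_fn(q)` is a hypothetical differentiable forward-kinematics
    function derived from the self-model, mapping joint angles q to
    the end-effector position. Descend on the squared reaching error."""
    q = q_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([q], lr=lr)
    for _ in range(steps):
        err = torch.sum((fk_fn(q) - target_pos) ** 2)
        opt.zero_grad()
        err.backward()
        opt.step()
    return q.detach()
```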
Results showed that the self-learned model enabled accurate pose prediction and trajectory generation for reaching tasks. The study indicates that robots can derive practical control models from visual self-observation instead of relying on hand-built kinematic descriptions.
The authors describe this work as an initial step toward giving robots a form of self-awareness rooted in their physical structure. "Our approach can be seen as a preliminary step toward robot self-awareness," says the researcher. "We hope our research paves the way for the future development of general-purpose robots."
Research Report: Leveraging Part-Based NeRF for Robot Self-Modeling and Control
Related Links
Sun Yat-sen University