docs/source/overview/imitation-learning/teleop_imitation.rst

G1 humanoid robot performing locomanipulation with navigation capabilities.

The locomanipulation dataset generation process takes the previously generated manipulation dataset and creates scenarios where the robot must navigate from one location to another while performing manipulation tasks. This creates a more complex dataset that includes both locomotion and manipulation behaviors.

To generate the locomanipulation dataset, use the following command:

.. code:: bash

    ./isaaclab.sh -p \
    scripts/imitation_learning/locomanipulation_sdg/generate_data.py \
    --device cpu \
    --kit_args="--enable isaacsim.replicator.mobility_gen" \
    --task="Isaac-G1-SteeringWheel-Locomanipulation" \
    --dataset ./datasets/generated_dataset_g1_locomanip.hdf5 \
    --num_runs 1 \
    --lift_step 70 \
    --navigate_step 120 \
    --enable_pinocchio \
    --output_file ./datasets/generated_dataset_g1_locomanipulation_sdg.hdf5 \
    --enable_cameras

.. note::

The input dataset (``--dataset``) should be the manipulation dataset generated in the previous step. You can specify any output filename using the ``--output_file`` parameter.

The key parameters for locomanipulation dataset generation are:

* ``--lift_step 70``: Number of steps for the lifting phase of the manipulation task. This should mark the point immediately after the robot has grasped the object.
* ``--navigate_step 120``: Number of steps for the navigation phase between locations. This should mark the point where the robot has lifted the object and is ready to walk.
* ``--output_file``: Name of the output dataset file

For example, if the grasp in your source demonstrations completes around step 60 and the object is clear of the support surface by step 110, you would pass ``--lift_step 60`` and ``--navigate_step 110``.

This process creates a dataset where the robot performs the manipulation task at different locations, requiring it to navigate between points while maintaining the learned manipulation behaviors. The resulting dataset can be used to train policies that combine both locomotion and manipulation capabilities.
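
If you want to sanity-check the generated file before training on it, the standard HDF5 command-line tools can list its contents. The snippet below is a minimal sketch, assuming the ``hdf5-tools`` package (which provides ``h5ls``) is installed and that the dataset follows the robomimic-style layout Isaac Lab uses, with one ``data/demo_*`` group per successful run:

.. code:: bash

    # List the dataset hierarchy; each successful run should appear
    # as a separate demo group (data/demo_0, data/demo_1, ...).
    h5ls -r ./datasets/generated_dataset_g1_locomanipulation_sdg.hdf5 | head -n 20
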
To visualize the navigation trajectories contained in the generated dataset, run the plotting script:

.. code:: bash

    ./isaaclab.sh -p \
    scripts/imitation_learning/locomanipulation_sdg/plot_navigation_trajectory.py \
    --input_file datasets/generated_dataset_g1_locomanipulation_sdg.hdf5 \
    --output_dir /PATH/TO/DESIRED_OUTPUT_DIR

The data generated from this locomanipulation pipeline can also be used to finetune an imitation learning policy using GR00T N1.5. To do this,
you can convert the generated dataset to the LeRobot format expected by GR00T N1.5, and then run the finetuning script provided
in the GR00T N1.5 repository.
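
A rough sketch of that workflow follows. The script names and flags below are illustrative assumptions rather than confirmed interfaces from Isaac Lab or GR00T N1.5; consult the GR00T N1.5 repository for the actual conversion and finetuning entry points.

.. code:: bash

    # 1. Convert the generated HDF5 dataset to the LeRobot format expected
    #    by GR00T N1.5 (hypothetical converter script and flags).
    python convert_hdf5_to_lerobot.py \
        --input ./datasets/generated_dataset_g1_locomanipulation_sdg.hdf5 \
        --output ./datasets/g1_locomanipulation_lerobot

    # 2. Finetune GR00T N1.5 on the converted dataset (illustrative
    #    invocation; flag names may differ in your version of the repository).
    python scripts/gr00t_finetune.py \
        --dataset-path ./datasets/g1_locomanipulation_lerobot \
        --output-dir ./checkpoints/g1_locomanipulation_gr00t

An example closed-loop rollout of a policy finetuned this way is shown in the video below: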

.. figure:: https://download.isaacsim.omniverse.nvidia.com/isaaclab/images/locomanipulation_sdg_disjoint_nav_groot_policy_4x.gif
    :width: 100%
    :align: center
    :alt: Simulation rollout of GR00T N1.5 policy finetuned for locomanipulation
    :figclass: align-center

    Simulation rollout of GR00T N1.5 policy finetuned for locomanipulation.

The policy shown above uses the camera image, hand poses, hand joint positions, object pose, and base goal pose as inputs.
The model outputs the target base velocity, hand poses, and hand joint positions for the next several timesteps.


Demo 3: Visuomotor Policy for a Humanoid Robot
scripts/imitation_learning/disjoint_navigation/README.md

This file was deleted.