Replies: 2 comments 1 reply
Thank you for posting this question. It is a great topic for our Discussions section. I'll move the post so the team and others can follow up.
1 reply
Hi @Bggopal, we've now released a workflow tutorial using Cosmos to augment the dataset and train an imitation learning agent here.
0 replies
To reduce the simulation-to-real gap, I have used Cosmos to generate multiple photorealistic videos. My goal is to train the Franka arm with these generated videos.
For RL training, the expected format is the robomimic dataset format (Overview — robomimic 0.5 documentation). The post-training technique is described here: https://developer.nvidia.com/blog/building-a-synthetic-motion-generation-pipeline-for-humanoid-robot-learning/#post-training_in_isaac_lab_using_imitation_learning. To fill the sim2real gap, they use the Cosmos-generated videos to train the arm, but how to do this is not explained.
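For reference, my understanding of the robomimic layout they expect is roughly the following. This is only a minimal sketch written with h5py: the trajectory length, action dimension, and observation keys are hypothetical, and I am omitting parts such as `env_args`, `states`, `next_obs`, and filter masks that the robomimic 0.5 documentation describes.

```python
# Minimal sketch of the robomimic HDF5 layout written with h5py.
# Trajectory length, action dimension, and observation keys are hypothetical;
# env_args, states, next_obs, and filter masks are omitted for brevity.
import h5py
import numpy as np

T = 100  # hypothetical trajectory length

with h5py.File("demo_dataset.hdf5", "w") as f:
    data = f.create_group("data")
    demo = data.create_group("demo_0")

    # Per-step observations: an image key plus low-dimensional robot state.
    obs = demo.create_group("obs")
    obs.create_dataset("agentview_image",
                       data=np.zeros((T, 84, 84, 3), dtype=np.uint8))
    obs.create_dataset("robot0_eef_pos",
                       data=np.zeros((T, 3), dtype=np.float32))

    # Per-step actions, rewards, and done flags recorded with the demo.
    demo.create_dataset("actions", data=np.zeros((T, 7), dtype=np.float32))
    demo.create_dataset("rewards", data=np.zeros((T,), dtype=np.float64))
    demo.create_dataset("dones", data=np.zeros((T,), dtype=np.int64))

    demo.attrs["num_samples"] = T
    data.attrs["total"] = T
```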
Is it possible to extract the information (actions, rewards, etc.) from the frames of the Cosmos-generated videos, or should the agent be trained with the RGB/depth images themselves?
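One approach I am considering is to keep the actions and rewards recorded in simulation and replace only the image observations with the Cosmos-generated frames. Again, this is only a sketch: it assumes the demonstrations were originally collected in simulation so per-step actions/rewards already exist in the HDF5 file, and the file names and observation key are hypothetical.

```python
# Sketch: keep the actions/rewards recorded in simulation and swap only the
# RGB observations with frames decoded from a Cosmos-generated video.
# File names and the observation key are hypothetical; decoding the mp4
# requires a video backend such as imageio-ffmpeg.
import h5py
import numpy as np
import imageio.v3 as iio

with h5py.File("demo_dataset.hdf5", "r+") as f:
    demo = f["data/demo_0"]
    num_steps = demo["actions"].shape[0]

    # Decode the generated video into an array of frames (T, H, W, 3), uint8.
    frames = iio.imread("cosmos_demo_0.mp4")

    # The generated video must stay aligned 1:1 with the recorded steps.
    assert frames.shape[0] == num_steps, "frame count must match trajectory length"

    # Overwrite only the visual observation; actions/rewards stay untouched.
    del demo["obs/agentview_image"]
    demo["obs"].create_dataset("agentview_image", data=frames.astype(np.uint8))
```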
I would be grateful for any help with this.
Regards
Bggopal