On this page we'll cover how to deploy your trained policy, specifically on the Hello Robot Stretch. The first half gives general instructions for operating the Stretch robot, followed by the specifics of how to load and run your policy.
After powering on the robot, home it by running `stretch_robot_home.py`. When you are done using the robot, shut it down cleanly with `sudo shutdown now`.
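For example, assuming you work on the robot over SSH (the `hello-robot` user and `ROBOT_IP` placeholder follow the `rsync` example later on this page), a typical session could look like:

```bash
# Connect to the Stretch over the local network
ssh hello-robot@ROBOT_IP

# Home/calibrate the joints after powering the robot on
stretch_robot_home.py

# ... teleoperate, collect data, or run a policy ...

# Shut the onboard computer down cleanly when finished
sudo shutdown now
```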
To run your policy, you will need the `min-stretch` repository: `git clone https://github.com/NYU-robot-learning/min-stretch.git`. Then, from the root of the repository, run `./run_robot.sh`.
This will open a tmux window with two panes: system Python on the left and the mamba environment activated on the right.
In the left (system Python) pane, start the robot server with `python3 start_server.py`. In the right (mamba) pane, run the policy with `python run.py`. To load a specific set of trained weights, pass the checkpoint path as an override: `python run.py model_weight_pth=YOUR_MODEL_PATH`.
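If you're curious how `run_robot.sh` might set up this layout, here is a minimal sketch using tmux; the session name, environment name, and exact commands are assumptions, and the real script may differ:

```bash
#!/usr/bin/env bash
# Minimal sketch of a tmux launcher: one window, two side-by-side panes,
# system Python on the left and a mamba environment activated on the right.
SESSION=robot          # session name: assumption, not taken from the repo
ENV_NAME=min-stretch   # environment name: assumption, not taken from the repo

tmux new-session -d -s "$SESSION"    # left pane: plain system shell
tmux split-window -h -t "$SESSION"   # create a second pane to the right
tmux send-keys -t "$SESSION:0.1" "mamba activate $ENV_NAME" C-m
tmux select-pane -t "$SESSION:0.0"   # leave the cursor in the left pane
tmux attach -t "$SESSION"
```

With a layout like this, you would then type the server and policy commands above into the left and right panes respectively.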
To copy your trained checkpoint onto the robot, use `rsync`:

```bash
rsync -avmP --include='*/' --include='checkpoint.pt' --exclude='*' imitation-in-homes/checkpoints/2025-02-20/bag_pick_up_checkpoints_folder hello-robot@ROBOT_IP:/home/hello-robot/min-stretch/imitation-in-homes/2025-02-20
```

Run this from the `min-stretch` repo on the machine where your policy was trained (likely Greene). You will also need to replace `ROBOT_IP` with the robot's local IP address, as Tailscale is not supported on Greene. Thanks to the include/exclude filters, this `rsync` command will copy only the latest checkpoint (`checkpoint.pt`) to the robot.
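Putting the pieces together, a generic version of the transfer and run might look like the following; the date, run folder, and the exact location of `checkpoint.pt` inside the copied folder are placeholders, so adjust them to match your own training run:

```bash
# On the training machine (e.g. Greene), from the min-stretch repo:
# the filters make rsync copy only files named checkpoint.pt.
rsync -avmP \
  --include='*/' --include='checkpoint.pt' --exclude='*' \
  imitation-in-homes/checkpoints/DATE/YOUR_RUN_FOLDER \
  hello-robot@ROBOT_IP:/home/hello-robot/min-stretch/imitation-in-homes/DATE

# On the robot (run `hostname -I` there to find its local IP), point run.py
# at the copied weights, assuming checkpoint.pt sits at the top of the folder.
python run.py model_weight_pth=/home/hello-robot/min-stretch/imitation-in-homes/DATE/YOUR_RUN_FOLDER/checkpoint.pt
```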
Once everything is running, open `http://ROBOT_IP:7860` in your browser. The rest of the instructions for running the policy should be written at the top of the UI.