ROS 2 wrappers for AnyGrasp detection and tracking.
The devcontainer is based on the pytorch/pytorch:2.10.0-cuda12.6-cudnn9-devel image and provides
- PyTorch 2.10
- CUDA 12.6
- cuDNN 9
- ROS 2 Jazzy (the base container is Ubuntu 24.04)
- chenxi-wang/MinkowskiEngine
- CollaborativeRoboticsLab/graspnetAPI
- graspnet/anygrasp_sdk
To obtain a stable feature ID for the AnyGrasp license, we use Docker's built-in bridge network and a fixed MAC address. For the dev container, this is expressed by the following config; change the given MAC address as required.
"runArgs": [
"--network=bridge",
"--mac-address=02:42:de:ad:be:ef"
]Install VSCode and add the DevContainer addon.
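The MAC address only needs to be a stable, locally administered unicast address. If you want a unique value per machine, a minimal sketch for generating one is below (the leading `0x02` octet sets the locally-administered bit and clears the multicast bit, matching the `02:42:...` convention Docker uses). Note this is an illustration, not part of the repo — generate the address once and hard-code it in `runArgs`, since the AnyGrasp license is tied to it:

```python
import random

def random_local_mac() -> str:
    """Generate a locally administered, unicast MAC address.

    The fixed first octet 0x02 has the locally-administered bit set
    and the multicast (least-significant) bit clear.
    """
    octets = [0x02] + [random.randint(0x00, 0xFF) for _ in range(5)]
    return ":".join(f"{o:02x}" for o in octets)

if __name__ == "__main__":
    # Prints a random 02:xx:xx:xx:xx:xx address to paste into runArgs.
    print(random_local_mac())
```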
Clone this repo and open it in VS Code. VS Code should detect the devcontainer automatically; if not, press Shift+Ctrl+P to open the command palette and select the "Dev Containers: Rebuild and Reopen in Container" option.
Once the container is built, run the license_checker tool from anygrasp_sdk and apply for the license following the steps from here.
The following command runs the license_checker within the dev container:

```
/dependencies/anygrasp_sdk/license_registration/license_checker -f
```

Once you fill in the form and receive the license zip file, unzip it and copy it to the /license folder within the cloned repo (not inside the container). The devcontainer has been configured to mount the license folder into the following location in the container:
```
/dependencies/precompiled/license
```
To check the license, run the following command:

```
/dependencies/anygrasp_sdk/license_registration/license_checker -c /dependencies/precompiled/license/licenseCfg.json
```

Copy the detection and tracking model weights into the weights/detection and weights/tracking folders respectively. These will be mounted into the following folders inside the container:
```
/dependencies/precompiled/weights/detection
/dependencies/precompiled/weights/tracking
```

Both mounts are needed to run the ros2 packages.
This can also be done alongside the prior Adding License step.
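Before launching, it can help to confirm that the weight files actually landed in the mounted folders. A small sketch using the mount paths above (the checking logic itself is an illustration, not part of the package):

```python
from pathlib import Path

# Mount points inside the container, as configured by the devcontainer.
WEIGHT_DIRS = [
    Path("/dependencies/precompiled/weights/detection"),
    Path("/dependencies/precompiled/weights/tracking"),
]

def missing_weight_dirs(dirs=WEIGHT_DIRS):
    """Return the directories that are absent or contain no files."""
    return [d for d in dirs if not d.is_dir() or not any(d.iterdir())]

if __name__ == "__main__":
    missing = missing_weight_dirs()
    if missing:
        print("Missing or empty:", ", ".join(str(d) for d in missing))
    else:
        print("All weight folders populated.")
```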
Try running grasp_detection/demo.py and grasp_tracking/demo.py to confirm the processing pipeline is working.
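If you want a single smoke test for both demos, a helper like the following works (the demo paths are relative to the anygrasp_sdk checkout; adjust them to where the SDK lives in your container):

```python
import subprocess
import sys

def run_demo(script: str) -> bool:
    """Run a demo script with the current interpreter; True on clean exit."""
    result = subprocess.run(
        [sys.executable, script], capture_output=True, text=True
    )
    if result.returncode != 0:
        print(f"{script} failed:\n{result.stderr}")
    return result.returncode == 0

if __name__ == "__main__":
    # Paths assume you run this from the anygrasp_sdk root.
    for script in ("grasp_detection/demo.py", "grasp_tracking/demo.py"):
        print(script, "ok" if run_demo(script) else "failed")
```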
Use the following command to start the AnyGrasp detection system:

```
ros2 launch anygrasp_ros detection.launch.py
```

Use the following command to start the AnyGrasp tracking system:

```
ros2 launch anygrasp_ros tracking.launch.py
```

The nodes expose these services:
- /anygrasp/detection using anygrasp_msgs/srv/GetGrasps
- /anygrasp/tracking using anygrasp_msgs/srv/GetGraspsTracked
Each service takes a count in the request. Detection returns geometry_msgs/PoseStamped[]; tracking returns int64[] ids aligned with geometry_msgs/PoseStamped[], and accepts input_ids as a list to select specific tracked grasps or [] to update the active set. Each stamped pose copies the source pointcloud header, so the frame is explicit for downstream motion planning.
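Since the tracking response's ids and poses are index-aligned, a client will typically zip them into an id-to-pose mapping before planning. A minimal sketch of that convention with plain Python stand-ins (a real client would read the `ids` and `poses` arrays from the GetGraspsTracked response; those field names are assumptions here):

```python
def grasps_by_id(ids, poses):
    """Pair index-aligned id and pose arrays into an id -> pose mapping."""
    if len(ids) != len(poses):
        raise ValueError("ids and poses must be index-aligned")
    return dict(zip(ids, poses))

# Hypothetical response contents: two tracked grasps. The strings stand
# in for geometry_msgs/PoseStamped messages.
tracked = grasps_by_id([7, 12], ["pose_of_grasp_7", "pose_of_grasp_12"])
print(tracked[7])  # look up a specific tracked grasp by its stable id
```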
Both nodes publish RViz grasp markers as visualization_msgs/MarkerArray:
- Detection markers: /anygrasp/detection_markers
- Tracking markers: /anygrasp/tracking_markers
Add either topic as a MarkerArray display in RViz to inspect grasp poses and IDs in 3D.