diff --git a/README.md b/README.md
index 54dd88d1d51813e6f4dba1b3f422409ed544755d..ae92d864350cb17960eee6c718f8cce616bb56ad 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,79 @@
 # Motion Grammar Example
-For documentation, please see https://jastadd.pages.st.inf.tu-dresden.de/motion-grammar-example
\ No newline at end of file
+For documentation, please see https://jastadd.pages.st.inf.tu-dresden.de/motion-grammar-example
+
+# Running the Demo
+
+## Repositories to clone into the following directories (including submodules)
+
+- web-interface: git@git-st.inf.tu-dresden.de:ceti/ros-internal/technical-tests/ccf_web_interface.git (branch mg-demo)
+- motion-grammar-controller: git@git-st.inf.tu-dresden.de:jastadd/motion-grammar-example.git
+- workspace: git@git-st.inf.tu-dresden.de:ceti/ros-internal/ccf/ccf_workspace.git
+
+## Manual
+
+- Start the robot
+  - to start the robot, disconnect it from power and reconnect it after ~20 s
+  - connect to the robot via Ethernet using a static IP configuration; the robot is reachable at `172.31.1.13`, `172.31.1.15`, or `172.31.1.17`
+    - for the network settings, see the [wiki](https://git-st.inf.tu-dresden.de/stgroup/stwiki/-/wikis/Projects/Ceti/Panda/Panda-Robot-Configuration-Data)
+  - log in to Panda Desk in the browser at [172.31.1.13/](https://172.31.1.13/), [172.31.1.15/](https://172.31.1.15/), or [172.31.1.17/](https://172.31.1.17/)
+    - user `st`, password `st-cobot-1`
+    - accept the security warning (if necessary)
+  - unlock the joints
+    - if the robot is in pack position, unfold it (this is the only option anyway)
+  - activate FCI mode
+
+- Plug in the webcam
+  - this setup has only been tested with the Trust webcam
+  - the webcam must be [calibrated](http://wiki.ros.org/camera_calibration); for the Trust model, do the following:
+    - create the file `~/.ros/camera_info/head_camera.yaml` with the following content:
+      ```yaml
+      image_width: 2560
+      image_height: 1440
+      camera_name: head_camera
+      camera_matrix:
+        rows: 3
+        cols: 3
+        data: [1813.222796360344, 0, 1261.930645515087, 0, 1817.396430837666, 719.5808597823312, 0, 0, 1]
+      distortion_model: plumb_bob
+      distortion_coefficients:
+        rows: 1
+        cols: 5
+        data: [0.08032097481606813, -0.1159678601922493, 0.005085141441063858, -0.0007346072111796328, 0]
+      rectification_matrix:
+        rows: 3
+        cols: 3
+        data: [1, 0, 0, 0, 1, 0, 0, 0, 1]
+      projection_matrix:
+        rows: 3
+        cols: 4
+        data: [1834.692138671875, 0, 1258.756588840144, 0, 0, 1843.024536132812, 725.2023077170961, 0, 0, 0, 1, 0]
+      ```
+    - figure out the device name of the camera with `v4l2-ctl --list-devices`; if the list shows two devices for the Trust camera, it is most likely the first one. You can verify with `v4l2-ctl -d /dev/video2 --all` (adjust the device path), which should show a long list of configuration options
+
+- Make sure an MQTT broker is running; a minimal way to check and start one is sketched below
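+
+  The demo does not prescribe a particular broker or host, so the following is only a sketch, assuming a local Mosquitto installation; adapt as needed:
+
+  ```bash
+  # check whether something is already listening on the default MQTT port (1883)
+  ss -ltn | grep 1883 || echo "no broker listening on port 1883"
+
+  # if not, start a local Mosquitto broker (alternatively: run `mosquitto -v` in the foreground)
+  sudo systemctl start mosquitto
+
+  # optional smoke test: subscribe to all topics in a second terminal
+  mosquitto_sub -h localhost -t '#' -v
+  ```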
+
+- Start ROS (in the `workspace` directory)
+  - in every console:
+    - make sure the code is compiled (`catkin build`)
+    - make sure the ROS environment is sourced (`source devel/setup.bash`)
+  - load the robot controller:
+    `roslaunch ccf_immersive_sorting robot-cell_ceti-table-1-emptyworld-grey.launch robot_ip:=<ROBOT-IP>`
+  - load the tag detection:
+    `roslaunch ccf_immersive_sorting tag_detection.launch device:=/dev/video0` (or another video device, most likely `/dev/video2` if you connected the webcam after startup)
+  - load the virtual scene controller:
+    `roslaunch ccf_immersive_sorting virtual_scene_provider.launch`
+  - to adapt the scene, edit `objects.yaml`, then run the accompanying bash script to regenerate the JSON from the YAML file
+
+- Start the control web interface (in the `web-interface` directory)
+  - *once*, install all requirements as described in https://git-st.inf.tu-dresden.de/ceti/ros-internal/technical-tests/ccf_web_interface/-/blob/master/README.md?ref_type=heads
+  - run `node index.js`
+  - open the interface in the browser at http://localhost:3000/
+  - switch to *Selection Mode*
+
+- Start the motion grammar (in the `motion-grammar-controller` directory)
+  - run `./gradlew run`
+
+- Start the motion grammar web interface (in the `motion-grammar-controller/dshboard` directory)
+  - if running for the first time, build the interface with `npm build`
+  - run `npm start`
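+
+## Optional: scripted startup
+
+The per-terminal steps above can be wrapped in a small launcher. The script below is only a convenience sketch, not part of the documented setup: it assumes the three repositories are checked out side by side under `~/mg-demo`, that `tmux` is installed, that the robot and MQTT broker are already prepared as described above, and that the robot IP and video device placeholders are replaced for your setup.
+
+```bash
+#!/usr/bin/env bash
+# Hypothetical helper script; the base path, robot IP, and video device are assumptions.
+set -e
+BASE=~/mg-demo          # parent directory of workspace/, web-interface/, motion-grammar-controller/
+ROBOT_IP=172.31.1.13    # use the IP of your robot
+VIDEO_DEV=/dev/video0   # or /dev/video2, see `v4l2-ctl --list-devices`
+
+# the workspace must have been built once beforehand (catkin build, see above)
+ROS="cd $BASE/workspace && source devel/setup.bash"
+
+tmux new-session -d -s mg-demo -n robot
+tmux send-keys -t mg-demo:robot "$ROS && roslaunch ccf_immersive_sorting robot-cell_ceti-table-1-emptyworld-grey.launch robot_ip:=$ROBOT_IP" C-m
+
+tmux new-window -t mg-demo -n tags
+tmux send-keys -t mg-demo:tags "$ROS && roslaunch ccf_immersive_sorting tag_detection.launch device:=$VIDEO_DEV" C-m
+
+tmux new-window -t mg-demo -n scene
+tmux send-keys -t mg-demo:scene "$ROS && roslaunch ccf_immersive_sorting virtual_scene_provider.launch" C-m
+
+tmux new-window -t mg-demo -n web
+tmux send-keys -t mg-demo:web "cd $BASE/web-interface && node index.js" C-m
+
+tmux new-window -t mg-demo -n grammar
+tmux send-keys -t mg-demo:grammar "cd $BASE/motion-grammar-controller && ./gradlew run" C-m
+
+tmux new-window -t mg-demo -n dashboard
+tmux send-keys -t mg-demo:dashboard "cd $BASE/motion-grammar-controller/dshboard && npm start" C-m
+
+tmux attach -t mg-demo
+```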