- to start the robot, disconnect it from power and reconnect it after ~20 s
- connect to the robot via Ethernet using a static IP configuration; the robot is reachable at `172.31.1.13`, `172.31.1.15`, or `172.31.1.17`
- for the settings, see the [wiki](https://git-st.inf.tu-dresden.de/stgroup/stwiki/-/wikis/Projects/Ceti/Panda/Panda-Robot-Configuration-Data) (an example setup follows below)
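A possible one-off setup with NetworkManager's `nmcli`; the interface name `enp3s0`, the connection name, and the chosen host address are assumptions, adjust them to your machine:

```bash
# list interfaces to find the Ethernet device (e.g. enp3s0)
ip link

# give your machine a static address in the robot's subnet
# (any free 172.31.1.x address other than the robot's own works)
sudo nmcli connection add type ethernet ifname enp3s0 con-name panda-static \
    ipv4.method manual ipv4.addresses 172.31.1.100/24
sudo nmcli connection up panda-static

# verify the robot is reachable
ping -c 3 172.31.1.13
```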
- Log in to Panda Desk in the browser at [172.31.1.13/](https://172.31.1.13/) or [172.31.1.15/](https://172.31.1.15/) or [172.31.1.17/](https://172.31.1.17/)
- using user `st` and password `st-cobot-1`
- accept security risks (if necessary)
- Unlock the joints
- if the robot is in pack position, unfold it (this is the only option anyway)
- Activate FCI Mode
- Plug in the webcam
- this has only been tested with the Trust webcam
- The webcam must be [calibrated](http://wiki.ros.org/camera_calibration); an example invocation and a placeholder calibration file follow this list. For the Trust model, do the following:
- create a calibration file under `~/.ros/camera_info` containing the output of the calibration
- figure out the device name of the camera with `v4l2-ctl --list-devices`; if the list shows two devices for the Trust camera, it is most likely the first one
- you can verify with `v4l2-ctl -d /dev/video2 --all`, which should show a lot of configuration options
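For the calibration itself, a typical invocation of the ROS `camera_calibration` tool looks like this; the checkerboard size, square size, and topic names are assumptions, adjust them to your setup:

```bash
rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.025 \
    image:=/usb_cam/image_raw camera:=/usb_cam
```

The calibration file follows the standard ROS camera info YAML format. A placeholder sketch, assuming the default camera name `head_camera`; every number below must be replaced with the output of your own calibration:

```bash
# placeholder calibration file -- replace all values with your calibration output
mkdir -p ~/.ros/camera_info
cat > ~/.ros/camera_info/head_camera.yaml <<'EOF'
image_width: 640
image_height: 480
camera_name: head_camera
camera_matrix:
  rows: 3
  cols: 3
  data: [600.0, 0.0, 320.0, 0.0, 600.0, 240.0, 0.0, 0.0, 1.0]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.0, 0.0, 0.0, 0.0, 0.0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
projection_matrix:
  rows: 3
  cols: 4
  data: [600.0, 0.0, 320.0, 0.0, 0.0, 600.0, 240.0, 0.0, 0.0, 0.0, 1.0, 0.0]
EOF
```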
- Make sure an MQTT server is running (a quick check follows below)
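A quick way to check, assuming Mosquitto is the broker in use (any MQTT broker works):

```bash
# start the broker if it is not already running
sudo systemctl start mosquitto

# verify it accepts connections (subscribes to a test topic; Ctrl+C to stop)
mosquitto_sub -h localhost -t 'test/#' -v
```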
- start ROS (in the `workspace` directory)
- in every console:
- make sure code is compiled (`catkin build`)
- make sure ROS environment is loaded (`source devel/setup.bash`)
- `roslaunch ccf_immersive_sorting tag_detection.launch device:=/dev/video0` (or another video device, most likely `/dev/video2` if you connected the webcam after startup)
- to adapt the scene, edit `objects.yaml`, then run the accompanying bash script to regenerate the JSON from the YAML file
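Putting the ROS steps together, a fresh console session might look like this (the `workspace` path depends on where the repository is checked out):

```bash
cd workspace
catkin build                 # make sure the code is compiled
source devel/setup.bash      # load the ROS environment
roslaunch ccf_immersive_sorting tag_detection.launch device:=/dev/video0
```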
- start the control web interface (in the `web-interface` directory)
- *once:* install all requirements as described in the [README](https://git-st.inf.tu-dresden.de/ceti/ros-internal/technical-tests/ccf_web_interface/-/blob/master/README.md?ref_type=heads) (see the sketch after this list)
- run `node index.js`
- open the interface in the browser at http://localhost:3000/
- switch to *Selection Mode*
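A first-time setup might look like this, assuming the requirements from the linked README boil down to a standard `npm install`:

```bash
cd web-interface
npm install        # once, per the linked README
node index.js      # serves the interface on http://localhost:3000/
```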
- start the motion grammar (in the `motion-grammar-controller` directory)
- `./gradlew run`
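From a fresh console (path relative to the repository root):

```bash
cd motion-grammar-controller
./gradlew run
```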
- start the motion grammar web interface (in the `motion-grammar-controller/dashboard` directory)
- if running for the first time, build the interface with `npm run build` (a full first-time setup sketch follows below)
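A first-time setup sketch, assuming a standard npm project (the exact scripts are defined in the dashboard's `package.json`):

```bash
cd motion-grammar-controller/dashboard
npm install      # once, fetch dependencies
npm run build    # build the interface
```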