On Saturday, Max generated a fully synthetic dataset and trained a CV model to detect the buoy. In addition to the four buoy glyphs, the model also detects the entire buoy, a class I asked Max to add on Saturday after realizing it would be useful for sonar testing.

On Sunday, we tested the new CV model in the pool, and it worked very well. It detected all four buoy glyphs and the entire buoy with high accuracy and confidence, matching the performance of last year's model, which was trained on a mix of real and synthetic images.

We initially had quite a few false positives, especially for the whole buoy, but these were easily eliminated by raising the confidence threshold to 0.75. We also noticed that the buoy's reflection on the surface of the water produced some false positives. Overall, the model performed quite well, and we identified some steps we could take to make it even more robust.
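
As a rough sketch of this kind of post-filtering, the snippet below keeps only detections above the 0.75 confidence threshold and drops detections near the top of the frame as likely surface reflections. The Detection record and the 0.1 reflection cutoff are illustrative assumptions, not our actual message types or tuned values.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.75  # the value we settled on at the pool

@dataclass
class Detection:
    label: str         # e.g. "buoy_whole" or one of the four glyph classes
    confidence: float  # model confidence in [0, 1]
    ymin: float        # normalized top edge of the bounding box (0 = top of frame)

def filter_detections(detections: list[Detection]) -> list[Detection]:
    """Keep confident detections and drop likely surface reflections."""
    kept = []
    for det in detections:
        if det.confidence < CONFIDENCE_THRESHOLD:
            continue  # below the threshold that eliminated most false positives
        if det.ymin < 0.1:
            continue  # near the top of the frame: likely a surface reflection
        kept.append(det)
    return kept
```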

We then tested the sonar. I fixed a few minor bugs before going to the pool. At the pool, we updated depthai_spatial_detection.py to use buoy_whole as the sonar target, so the sonar scanned the entire buoy. The distances the sonar reported to the buoy were very accurate; we confirmed them with a measuring tape. However, the normal angles were very inaccurate and fluctuated wildly: the buoy was perpendicular to the robot for the duration of the test, so the sonar should have returned normal angles of approximately zero, yet it reported angles varying between approximately -30° and +30°. This could be partly because the width of the sonar scan exceeded the width of the buoy. We recorded a ROS bag of the sonar testing for future analysis.
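
To see why a scan wider than the buoy could corrupt the normal angle, it helps to consider how such an angle might be computed. We don't know the exact algorithm our sonar processing uses; the least-squares line fit below is an illustrative assumption.

```python
import math
import numpy as np

def normal_angle_deg(bearings_deg: np.ndarray, ranges_m: np.ndarray) -> float:
    """Estimate the target's surface normal angle from one sonar sweep.

    bearings_deg: beam bearings relative to the robot's forward axis.
    ranges_m: measured range along each beam.
    """
    theta = np.radians(bearings_deg)
    x = ranges_m * np.cos(theta)  # forward component of each echo point
    y = ranges_m * np.sin(theta)  # lateral component of each echo point
    # Fit a line x = a*y + b through the echo points. For a flat surface
    # facing the robot head-on, the slope a is ~0, so the angle is ~0.
    a, _b = np.polyfit(y, x, 1)
    return math.degrees(math.atan(a))
```

Beams that miss the buoy return background ranges, which would skew a fit like this and could explain the wild fluctuation; the ROS bag should let us check whether discarding off-target beams stabilizes the angle.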

Partway through our testing, the sonar crashed. We made many attempts to get it working again, but it would always error out immediately on startup. We even restarted the robot a few times, but to no avail.

We then checked that controls and task planning worked reasonably well in preparation for testing task planning's move-to-CV-object function, and we retuned the Z PID. However, when we commanded the robot to submerge and then move forward, task planning inexplicably commanded the robot to surface, even though we had intended for it to maintain depth. We tried this several times and got the same result. We spent the remainder of the pool test debugging the issue but couldn't solve it before the pool closed, so we never got the opportunity to test the move-to-CV-object function.
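
We haven't found the root cause, but one hypothesis worth checking is a frame or sign mix-up in how the desired pose is constructed. The sketch below shows the invariant we expected from a local forward move, using illustrative types rather than our actual interfaces: the commanded pose should copy the current depth, and since +z is up in ROS's standard body frame (REP 103), any stray positive z offset would command the robot to surface.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float  # forward (m)
    y: float  # left (m)
    z: float  # up (m), per ROS's REP 103 body-frame convention

def move_forward_holding_depth(current: Pose, distance: float) -> Pose:
    """Build a desired pose `distance` meters ahead at the current depth."""
    return Pose(
        x=current.x + distance,
        y=current.y,
        # Hold depth by copying the current z. Since +z is up, any stray
        # positive z offset here would command the robot to surface, which
        # matches the behavior we saw.
        z=current.z,
    )
```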

Throughout the pool test, we noticed that the thruster covers sealed with wood glue absorbed a lot of water, while the covers sealed with nail polish absorbed very little.

Positives

  • The CV model trained on a fully synthetic dataset performs just as well as last year's model trained on real and synthetic images. This suggests we will not need to collect and label real images for CV this year. No CV sweatshop!
  • Sonar reports accurate distances to the buoy.
  • Z PID has been retuned given the latest changes to controls.
  • Nail polish is better at waterproofing than wood glue.

Issues

  • The CV dataset would benefit from the addition of null (background) images, as well as training images containing objects that look similar to, but are distinct from, the competition objects. This would make the model more robust and reduce false positives; see the sketch after this list.
  • Sonar’s normal angle is very inaccurate.
  • Sonar randomly crashes and cannot be restarted, even if the robot itself is restarted.
  • Task planning doesn't provide an accurate desired pose to controls when calling move_to_pos_local. I did some debugging on land after the pool test but haven't found a solution.
  • Wood-glued thruster covers need to be replaced with nail-polished ones.
  • Other 3D prints also need to be coated with nail polish.
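
On the null images: assuming we keep a YOLO-style training layout (an assumption about our pipeline, not something we've settled), background images are added alongside empty label files so the model learns what not to detect. A minimal sketch with hypothetical paths:

```python
import shutil
from pathlib import Path

IMAGES_DIR = Path("dataset/images/train")  # hypothetical dataset layout
LABELS_DIR = Path("dataset/labels/train")

def add_null_images(null_images: list[Path]) -> None:
    """Add background images to the training set with empty label files."""
    IMAGES_DIR.mkdir(parents=True, exist_ok=True)
    LABELS_DIR.mkdir(parents=True, exist_ok=True)
    for img in null_images:
        shutil.copy(img, IMAGES_DIR / img.name)
        # An empty .txt label file marks the image as containing no objects.
        (LABELS_DIR / img.with_suffix(".txt").name).touch()
```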