Pool Test 07/20/2024

Today we resolved the thruster issue and made more progress towards the gate task.

Max and I recently improved our detection rate on the DepthAI camera (with color correction enabled) to 17Hz and also reduced the latency. This was the result of two optimizations (both sketched in code below the list):

  • Limiting output sizes and queue sizes of nodes so they consumed less RAM and discarded older frames.
  • Removing the stereo vision part of the DepthAI pipeline, as it was unnecessary given that we are able to compute the XYZ coordinates of an object using its actual width and the camera’s parameters.
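A minimal sketch of what those two changes look like in a pipeline, assuming the standard `depthai` Python API; the blob path and stream name are placeholders, not our actual configuration:

```python
import depthai as dai

# Pipeline with only the color camera and the detection network;
# no StereoDepth node, since we compute XYZ from the object's known width.
pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(416, 416)   # keep the frames sent to the NN small
cam.setInterleaved(False)

nn = pipeline.create(dai.node.YoloDetectionNetwork)
nn.setBlobPath("gate_model.blob")  # placeholder path
nn.input.setQueueSize(1)           # don't let old frames pile up on-device
nn.input.setBlocking(False)
cam.preview.link(nn.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    # Small, non-blocking host queue: stale detections are discarded instead
    # of accumulating in RAM, which is what cut the latency.
    q = device.getOutputQueue("detections", maxSize=1, blocking=False)
    while True:
        msg = q.tryGet()
        if msg is not None:
            pass  # handle detections here
```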

At the pool, we first tested our gate model on the DepthAI camera. While everything worked as intended, we decided to perform CV on the full FOV of the camera. By default, DepthAI uses a cropped version of the image the camera captures as the input to a neural network, since that results in a faster detection rate. However, that also limits the FOV passed into the neural network. We want to utilize the full FOV so the robot can see the whole gate, even when it gets up close to it.

We changed the pipeline to take the original 12MP image, resize it down to 416 x 416 (our model’s input size), and then pass it into the neural network. This meant the image that we performed inference on was distorted, but this didn’t prove to be much of an issue.
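Roughly what that change looks like, assuming an ImageManip node does the resize between the camera’s full-resolution ISP output and the network (node setup and model path here are illustrative, not copied from our code):

```python
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP)

# Resize the full-FOV ISP frame down to the model's 416x416 input.
# Ignoring the aspect ratio is what distorts the image, but it keeps the
# whole FOV instead of the default center crop.
manip = pipeline.create(dai.node.ImageManip)
manip.initialConfig.setResize(416, 416)
manip.initialConfig.setKeepAspectRatio(False)
manip.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888p)
manip.setMaxOutputFrameSize(416 * 416 * 3)

nn = pipeline.create(dai.node.YoloDetectionNetwork)
nn.setBlobPath("gate_model.blob")  # placeholder path
nn.setConfidenceThreshold(0.9)     # the threshold we raised at the pool

cam.isp.link(manip.inputImage)
manip.out.link(nn.input)
```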

With the added steps, the detection rate slowed to 10Hz, but this was still fast enough to perform the gate task. We increased the confidence threshold to 0.9. We were able to reliably detect the red image, as well as the whole gate. We rarely got detections for the gate tick and the blue image. Sometimes it would mistake the blue image, or the reflection of the red image, for the red image. However, this was mostly solved by only keeping the highest confidence detection for a given class. It does seem that the color correction isn’t changing the colors in the frame as much as it should; the image still has a strong blue haze compared to the mono camera.
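The duplicate suppression itself is simple; a sketch of the per-frame filtering, assuming DepthAI-style detection objects with `label` and `confidence` fields:

```python
def best_per_class(detections):
    """Keep only the highest-confidence detection for each class label.

    This suppresses duplicates such as the red image's reflection, or the
    blue image being misclassified as the red one, since only the strongest
    detection of each class survives.
    """
    best = {}
    for det in detections:
        if det.label not in best or det.confidence > best[det.label].confidence:
            best[det.label] = det
    return list(best.values())
```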

We then switched our focus to the barrel roll and coin flip. We started off by running thrusters over pure serial (no ROS) but with the new Arduino Nano (instead of the Nano Every). This resulted in the robot being unable to complete the barrel roll, as the bottom thrusters didn’t have enough power. Also, the corner thrusters spun up during this, even though they shouldn’t have been spinning at all, which caused the robot to yaw.

We also observed that the robot’s current draw spiked during the barrel roll. The voltage would drop by 1V, sometimes even more. This is not a huge issue when the battery is fully charged, but if the battery is close to being discharged (normal voltage around 15V), the drop in voltage is dangerous, so we should avoid testing the barrel roll with a low battery. We should also avoid doing two barrel rolls in one continuous motion. Instead, it is better to have a pause between the two barrel rolls so the voltage doesn’t stay too low for too long.

We then switched the code to use ROS, while still keeping the Nano. This resulted in completely normal behavior. The robot was able to complete the barrel roll just as it had in our past tests. Thus, we concluded that the cause of the thruster issues is the serial code. While we weren’t able to identify any specific part of the code that was problematic, we decided to stick with ROS, at least until we have more insight into what the problem might be. Hopefully, switching to the Nano also solved the ROS disconnect issues, in which case we don’t even need to switch to serial at all.

With the thruster issue solved, we shifted our focus back to testing the barrel roll. We found that with an initial submerge of -0.7 meters, the robot would break the surface on the second barrel roll, as it would come up a bit between the two barrel rolls. We solved this by simply decreasing the time between the two barrel rolls, so the robot would complete both before it got too high. After completing the second one, the robot would get close to the surface, but would submerge again to its original depth before it actually broke the surface.

We then combined the barrel roll with the coin flip. We would submerge, correct yaw, barrel roll, and correct yaw again. We found that this led to the robot’s final yaw being slightly off from its initial yaw. Thus, the robot wouldn’t see the entirety of the gate and it would be difficult for it to go through the gate. Therefore, we decided to instead perform the barrel roll after we go through the gate. That way, the robot would be able to go through the gate without issue, then barrel roll; the yaw after this wouldn’t matter much, since the robot would have to align itself with a path marker anyway to face the buoy. The order we settled on is sketched below.
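For reference, a sketch of that sequence using hypothetical helper names (these are stand-ins, not our actual controls API):

```python
# Placeholder helpers standing in for our real controls code.
def submerge(depth_m): ...
def correct_yaw(): ...
def move_through_gate(): ...
def barrel_roll(): ...
def align_with_path_marker(): ...

def gate_sequence():
    """Roll *after* the gate, so the whole gate stays in view on approach
    and any post-roll yaw error doesn't matter."""
    submerge(-0.7)            # coin flip: dive to the initial depth
    correct_yaw()             # face the gate after the coin flip
    move_through_gate()
    barrel_roll()             # yaw drift here is fine...
    align_with_path_marker()  # ...the path marker realigns us toward the buoy
```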

We then shifted our focus back to CV, with the goal of accurately computing the XYZ coordinates of the red gate image. We got accurate values for the Y and Z coordinates, but the X coordinate was consistently half of what it should be. While we weren’t able to find the exact cause of this, we suspect we misinterpreted the information provided by the manufacturer regarding the focal length and/or sensor size of the camera, meaning those values are incorrect.
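A sketch of the computation under the pinhole model, assuming X is the forward distance and Y/Z are the lateral offsets (function and parameter names here are illustrative). One thing that supports the focal-length theory: the focal length cancels out of the Y and Z formulas below but scales X directly, so a focal-length-in-pixels value that is off by a factor of two would halve X while leaving Y and Z correct, which is exactly what we saw.

```python
def object_xyz(bbox_px, real_width_m, f_px, cx_px, cy_px):
    """Estimate an object's camera-frame position from its known real width.

    bbox_px: (xmin, ymin, xmax, ymax) of the detection, in pixels.
    f_px:    focal length in pixels; cx_px, cy_px: principal point.
    Convention assumed here: X forward, Y right, Z down.
    """
    xmin, ymin, xmax, ymax = bbox_px
    w_px = xmax - xmin                # apparent width in pixels
    u = (xmin + xmax) / 2             # bounding-box center
    v = (ymin + ymax) / 2

    x = f_px * real_width_m / w_px    # forward distance (pinhole model)
    y = (u - cx_px) * x / f_px        # right offset; f_px cancels out here
    z = (v - cy_px) * x / f_px        # down offset; f_px cancels out here
    return x, y, z
```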

Nevertheless, we simply multiplied the X coordinate by two to get a reasonably accurate value that we could use to test the gate task. In our final run of the test, we did the coin flip and moved through the gate. We didn’t do the barrel roll because the battery was too low. See the video below. I have uploaded a bag file of this run to our Google Drive and Foxglove accounts.

This video has no audio. The robot started 20ft from the gate, facing 90 degrees away from it, and moved through it.