CS Updates: So, When’s the Next Test?

Welcome to the first CS blog post of the semester! The team has been hard at work finishing up remaining items from last semester and has taken on some additional cleanup and refactoring of the code base.

Due to winter weather, the team is coming off a long break from pool testing. As a result, a lot of code from last semester and early this semester is still in the testing phase. We hope to finish many of the items currently in testing by the end of the month. Without further delay, here is a peek at some of the projects currently in the pipeline.

General Computer Science

Sonar

Despite limited testing time at our previous two pool tests, sonar is in the final stages of integration into the codebase. Testing revealed accurate and consistent sonar readings against the pool wall, with success against objects like the buoy as well. At our next pool test, we hope to finish testing the tasks written for sonar, mainly the code that aligns the robot with the wall. While this task existed previously, testing at last year's competition revealed that the robot would not consistently align with the wall. We hope that this year, with accurate sonar readings, aligning with the wall will improve our positional accuracy in the pool.

Timeline: This code is currently awaiting pool testing.

IVC Task Planning Cleanup

At the last competition, we implemented some inter-vehicle communication (IVC) between Crush and Oogway to ensure that they would not collide while attempting the gate task. However, this code was not robot-agnostic and included redundant, confusing logic. At the beginning of the semester, we began cleaning up the organization of our IVC code in the corresponding task, mainly by renaming functions and removing code written only for competition. We hope to modularize the code so that it can be used in any task, from blocking robot movement to simple acknowledgements.
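As a rough illustration of what a robot-agnostic, task-agnostic IVC layer could look like (hypothetical names; not our actual implementation), a message type carrying the sender and intent lets any task subscribe only to the messages it cares about:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a sender field replaces hard-coded robot names,
# and a small handler registry lets any task react to any intent
# (e.g. blocking movement or sending a simple acknowledgement).

@dataclass
class IVCMessage:
    sender: str          # "oogway", "crush", or any future robot
    intent: str          # e.g. "claim_gate", "ack", "release"
    payload: dict = field(default_factory=dict)

class IVCDispatcher:
    def __init__(self):
        self._handlers = {}

    def on(self, intent, handler):
        """Register a callback for one intent; tasks add their own."""
        self._handlers.setdefault(intent, []).append(handler)

    def dispatch(self, msg: IVCMessage):
        """Run every handler registered for this message's intent."""
        return [h(msg) for h in self._handlers.get(msg.intent, [])]

dispatcher = IVCDispatcher()
dispatcher.on("claim_gate", lambda m: f"{m.sender} holds the gate")
results = dispatcher.dispatch(IVCMessage(sender="oogway", intent="claim_gate"))
```

Because handlers are registered per intent rather than per robot, the same dispatcher works for any vehicle pair.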

Timeline: This code is currently awaiting pool testing and is roadblocked by Crush’s unavailability for pool testing.

Crush Marker Dropper Servo

We also have a smaller-scale project adapting Oogway's existing servo code to work with the new marker dropper servo being installed on Crush. After the initial draft, testing stalled due to several electrical and mechanical issues with Crush in January. When we finally had a chance to test at the beginning of February, we discovered a CS issue in the ROS service code that sets the servo's position: calls to the service do not actually move the servo, so debugging this is our next step.

Timeline: End of February

Crush 8 Thrusters

With Crush’s upgrade to eight thrusters, we made minor adjustments to raise the supported thruster count from six to eight. These were mostly configuration changes and updates to the wrench matrix.
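For readers unfamiliar with wrench matrices: each column maps one thruster's output to the net force and torque (the wrench) it produces on the robot, so adding thrusters mostly means adding columns. A minimal sketch with made-up geometry (not Crush's real thruster layout):

```python
import numpy as np

# Hypothetical 8-thruster wrench matrix. Rows are the six wrench
# components [Fx, Fy, Fz, Tx, Ty, Tz]; each column is one thruster.
# Going from 6 to 8 thrusters adds two columns. Values are made up.
wrench_matrix = np.array([
    # t1    t2    t3    t4    t5    t6    t7    t8
    [ 0.7,  0.7, -0.7, -0.7,  0.0,  0.0,  0.0,  0.0],  # Fx
    [ 0.7, -0.7,  0.7, -0.7,  0.0,  0.0,  0.0,  0.0],  # Fy
    [ 0.0,  0.0,  0.0,  0.0,  1.0,  1.0,  1.0,  1.0],  # Fz
    [ 0.0,  0.0,  0.0,  0.0,  0.3, -0.3,  0.3, -0.3],  # Tx
    [ 0.0,  0.0,  0.0,  0.0,  0.3,  0.3, -0.3, -0.3],  # Ty
    [ 0.3, -0.3, -0.3,  0.3,  0.0,  0.0,  0.0,  0.0],  # Tz
])

def allocate(desired_wrench):
    """Per-thruster outputs that achieve the desired wrench.

    With 8 thrusters and 6 wrench components the system is
    underdetermined, so the Moore-Penrose pseudoinverse gives the
    minimum-norm (lowest total effort) solution.
    """
    return np.linalg.pinv(wrench_matrix) @ desired_wrench

thrusts = allocate(np.array([0.0, 0.0, 10.0, 0.0, 0.0, 0.0]))  # pure heave
```

With two extra columns, the allocation step itself is unchanged; only the matrix (and the matching configuration) grows.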

Timeline: This code is currently awaiting pool testing and is roadblocked by Crush’s unavailability for pool testing.

Computer Vision (CV)

Yaw to CV Object

Last semester, we worked to rewrite an important function, Yaw to CV Object. This function has the robot execute a search pattern until it finds an object through the computer vision package, then centers that object in the frame. We frequently use this function while navigating to an object in the pool. Previously, the code had the robot rotate in steps and then re-check the object's position before adjusting again, which led to overcorrections, low accuracy, and slow adjustments. Our new code continuously updates the PID setpoint based on the real-time CV angle, rather than commanding a full rotation to the detected offset. Unfortunately, testing this function has proven a challenge due to our limited pool time.
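To illustrate the difference, here is a toy sketch (hypothetical gains and names, not our actual control code) of the setpoint-update approach: each control tick folds the latest CV angle into the yaw setpoint, instead of rotating by the full detected offset and only then re-checking.

```python
# Illustrative sketch of continuously updating the yaw PID setpoint
# from the real-time CV angle. All values here are made up.

def update_yaw_setpoint(current_yaw, cv_angle_offset):
    """New setpoint: current heading plus the object's real-time
    angular offset in the camera frame (radians)."""
    return current_yaw + cv_angle_offset

def simulate(object_bearing=0.5, kp=0.4, steps=50):
    """Toy loop: a P-only controller tracks the setpoint while the
    CV offset shrinks as the robot turns toward the object."""
    yaw = 0.0
    for _ in range(steps):
        cv_offset = object_bearing - yaw          # what CV would report
        setpoint = update_yaw_setpoint(yaw, cv_offset)
        yaw += kp * (setpoint - yaw)              # one controller tick
    return yaw

final_yaw = simulate()
```

Because the setpoint tracks the live detection, any overshoot is corrected on the very next tick rather than after a full stop-and-re-check cycle.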

Timeline: This code is currently awaiting pool testing.

HSV Refactoring

Over the past year, the CS team has continuously been rewriting and improving our HSV code, which is used to detect solid-colored objects in the water. Last year, we cut down the amount of repeated code and files for each object by writing a single interface that all HSV nodes can implement and reuse. This semester, we are migrating the final files to the new framework and resolving some issues from the previous refactoring.
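At its core, HSV detection thresholds each pixel's hue, saturation, and value against per-object bounds. A minimal numpy sketch with hypothetical bounds (a real pipeline, e.g. with OpenCV, would also blur, clean up the mask, and extract contours):

```python
import numpy as np

# Hypothetical bounds for a blue-ish object; each HSV object in the
# shared interface would supply its own lower/upper pair.
LOWER = np.array([100, 150, 50])
UPPER = np.array([130, 255, 255])

def hsv_mask(hsv_image):
    """Boolean mask of pixels whose H, S, and V all fall
    within [LOWER, UPPER]."""
    return np.all((hsv_image >= LOWER) & (hsv_image <= UPPER), axis=-1)

# Tiny 1x2 "image": one in-range pixel, one out-of-range pixel.
img = np.array([[[115, 200, 120], [30, 200, 120]]])
mask = hsv_mask(img)
```

Since only the bounds differ per object, a single shared interface can own the masking and contour logic while each node supplies its color range.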

Timeline: This code is currently awaiting pool testing.

DepthAI Refactoring

Our current DepthAI project is working to modularize arbitrary constants found in some models, such as the torpedo banner. At competition, we experimentally tuned scaling values to ensure accurate detections, as the torpedo task requires high accuracy to shoot. However, this left scaling constants scattered across several locations; they need to be consolidated into existing configuration files and placed where they make intuitive sense.

Timeline: End of February

Jetson Nano Characterization and YOLO

Last semester, the team worked on setting up YOLO models to run on the Jetson Nano found in Crush. This semester, we had planned to continue this project in order to eliminate the DepthAI cameras found on Oogway for future robots. However, we ran into some setup/networking issues on the Jetson Nano, and the project has largely stalled for the beginning of the semester. In order to make an informed decision for not only Crush but also future robot development, this project will be reprioritized this month. Stay tuned for more updates on this crucial project as we overhaul our current detections pipeline to move away from DepthAI.

Timeline: TBD

Foxglove GUI

Foxglove Agent

The Foxglove Agent allows us to upload data recordings (i.e., bag files) to the Foxglove cloud for remote access. This will ease local testing as well as the management of recording-related files. The agent has been set up and tested on Oogway and will be coming to Crush soon.

Timeline: End of February

DepthAI and Mono Camera Bounding Box Rendering

This project aims to reduce computational load on the robot by having Foxglove, our GUI, render bounding boxes instead of the onboard computer. Foxglove already receives the raw images from the camera, as well as detailed bounding box data from any CV node performing detections. Using this data, it can render bounding boxes, along with some additional data, directly over the camera feeds. This will also keep our annotations sharp when zooming in or out on images in the GUI.
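Since Foxglove draws the overlays itself, a detection node only needs to publish geometry rather than pixels. As a hypothetical sketch, converting a detection's bounding box into the corner point list an image-annotation message would carry:

```python
# Hypothetical sketch: turning a bounding box into the ordered corner
# points a GUI-side image annotation (such as a closed outline) can
# render directly over the camera stream.

def box_to_points(x_min, y_min, x_max, y_max):
    """Corners of an axis-aligned box, ordered for a closed outline."""
    return [
        (x_min, y_min),
        (x_max, y_min),
        (x_max, y_max),
        (x_min, y_max),
    ]

# A detection node would publish these points (plus label and
# confidence) instead of drawing them into the image itself.
outline = box_to_points(10, 20, 110, 220)
```

Because the annotation is vector data rather than baked-in pixels, the GUI can re-render it crisply at any zoom level.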

Timeline: This code is currently awaiting pool testing.

Final Takeaways

With so much code awaiting testing, and with teamtime #1 moved from January to March, the CS team has unfortunately been left waiting for new tasks and code to write. However, with pool testing resuming, we hope to continue improving the code base and preparing for competition in July. Stay tuned for the next blog!