
Official Website Of Team 14496 Roboctopi

Code

Mineral Detection/TensorFlow

For mineral detection we used the example TensorFlow Object Detection module provided with the FTC SDK. However, it needed to be heavily modified for TensorFlow object detection to work properly on our robot.

 

First, since our robot could only see two minerals, we modified the code so that it did not require three minerals to be seen to determine the mineral position. The provided sample reports minerals as left, center, or right, and only continues if three minerals are detected and one of the returned values is gold. Our modified logic uses only two of these returned values. It checks the two values to determine whether they are valid and whether either is gold, which identifies the gold mineral in the left or center position (our robot sees the left and center minerals on the playfield). If both values return silver, it sets the position of the sample to right. Additionally, if only one or zero minerals are detected, the code enters a failsafe mode and assumes the mineral is in the center position after 8 seconds.



Here’s a sample of the code with the modified logic ↓
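(The snippet below is a minimal sketch of the two-mineral logic, adapted from the FTC ConceptTensorFlowObjectDetection sample. Names such as GoldPosition and goldPosition are illustrative placeholders, not necessarily the exact identifiers in our repository.)

```java
// Inside our LinearOpMode loop; tfod is an initialized TFObjectDetector.
List<Recognition> updatedRecognitions = tfod.getUpdatedRecognitions();
if (updatedRecognitions != null && updatedRecognitions.size() == 2) {
    int goldMineralX = -1;
    int silverMineral1X = -1;
    int silverMineral2X = -1;
    for (Recognition recognition : updatedRecognitions) {
        if (recognition.getLabel().equals(LABEL_GOLD_MINERAL)) {
            goldMineralX = (int) recognition.getLeft();
        } else if (silverMineral1X == -1) {
            silverMineral1X = (int) recognition.getLeft();
        } else {
            silverMineral2X = (int) recognition.getLeft();
        }
    }
    // The camera only sees the left and center minerals on the playfield.
    if (goldMineralX != -1 && silverMineral1X != -1) {
        // Gold is one of the two visible minerals; its x position
        // tells us whether it is the left or the center sample.
        goldPosition = (goldMineralX < silverMineral1X)
                ? GoldPosition.LEFT : GoldPosition.CENTER;
    } else if (silverMineral1X != -1 && silverMineral2X != -1) {
        // Both visible minerals are silver, so gold must be on the right.
        goldPosition = GoldPosition.RIGHT;
    }
}
```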


We also added a vision filter that rejects detections above a certain height in the camera view, because the TensorFlow model was detecting minerals in the crater. Our vision filter excludes minerals in the crater, debris on the playfield above the mineral targets, and invalid objects outside the playfield but still visible to the camera. We needed this digital blindfold so that the TensorFlow detection would only use minerals in a valid position on the playfield and ignore all other false positives. We did that by adding recognition.getTop() > 600 to the code that sets the mineral X variables, so that any detection above the known mineral positions is not considered valid. Vision was tested extensively with various debris, minerals, and team members wearing white shoes behind the valid mineral positions, both to try to confuse the TensorFlow algorithm and to determine the best value for the filter height. After testing, the filter was successful more than 99% of the time in both good and exceptionally poor lighting conditions. Here’s a sample of that code ↓
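(A minimal sketch of the height filter, assuming the same detection loop as above; the 600-pixel threshold is the value we settled on in testing. With the image origin at the top-left corner, a larger getTop() value means the detection sits lower in the frame, so anything above the sample row is rejected.)

```java
for (Recognition recognition : updatedRecognitions) {
    // Digital blindfold: only accept detections low enough in the frame
    // to be on the sample row. Crater minerals, debris, and background
    // objects appear higher in the image and are skipped.
    if (recognition.getTop() > 600) {
        if (recognition.getLabel().equals(LABEL_GOLD_MINERAL)) {
            goldMineralX = (int) recognition.getLeft();
        } else if (silverMineral1X == -1) {
            silverMineral1X = (int) recognition.getLeft();
        } else {
            silverMineral2X = (int) recognition.getLeft();
        }
    }
}
```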

After that, we modified the TensorFlow code to work in bad lighting and reduce failures by changing the minimumConfidence parameter. We tested mineral detection confidence values under many different lighting conditions (including very poor lighting), with various debris and minerals scattered on the playfield within view of the camera, to verify that correct identification and filtering of invalid targets worked with at least a 99% success rate.

Here’s a sample of the code ↓
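(A sketch of the TFObjectDetector initialization with a tuned confidence threshold, based on the Rover Ruckus-era FTC SDK. The 0.75 value shown here is illustrative, not necessarily the exact value we shipped.)

```java
int tfodMonitorViewId = hardwareMap.appContext.getResources().getIdentifier(
        "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
TFObjectDetector.Parameters tfodParameters =
        new TFObjectDetector.Parameters(tfodMonitorViewId);
// Raise the confidence floor so marginal, badly lit detections are
// dropped instead of being reported as minerals.
tfodParameters.minimumConfidence = 0.75;
tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParameters, vuforia);
tfod.loadModelFromAsset(TFOD_MODEL_ASSET, LABEL_GOLD_MINERAL, LABEL_SILVER_MINERAL);
```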

 

Finally, we added a failsafe so that if TensorFlow could not determine the mineral position, our autonomous software would still complete all goals. The failsafe checks whether the runtime has exceeded eight thousand milliseconds without a valid mineral position having been determined; if so, it sets the gold to the center position and performs all tasks for that position. Our TensorFlow vision has worked so well that we have never triggered failsafe mode at any of our tournaments, but it has been tested extensively to ensure that even after the 8 seconds elapse we still complete all autonomous tasks, including descending and detaching, sampling, claiming, and parking in the crater, with ample time to spare and a success rate greater than 99%.

 

Most of the failsafe code is contained within this sample ↓
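(A minimal sketch of the failsafe, assuming runtime is an ElapsedTime reset when the opmode starts; GoldPosition.UNKNOWN and the scanForMinerals() helper are illustrative placeholders for our detection loop.)

```java
while (opModeIsActive() && goldPosition == GoldPosition.UNKNOWN) {
    scanForMinerals();  // runs the TensorFlow detection loop shown above
    if (runtime.milliseconds() > 8000) {
        // TensorFlow never settled on an answer: assume center so the
        // rest of the autonomous routine still completes every task.
        goldPosition = GoldPosition.CENTER;
    }
}
```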

 

We used GitHub to manage and track all code changes throughout the season. Our autonomous software achieves all of the goals, including detaching from the lander, mineral sampling, depot claiming, and parking in the crater, from both the Crater position and the Depot position. This was extremely difficult given the immense time constraints. Since we are a rookie team, we didn’t have a basic chassis to build off of, and we had very few parts. Software had to program all of the autonomous tasks while hardware was still updating the design. In addition, hardware changes had to be made to accommodate the autonomous software, such as changing the location and angle of the phone mount to improve vision only days before our first meet. Despite this, we managed to build an extremely complex and consistent system that achieves every objective for a maximum autonomous score of 80.