My friend once asked me to help him with an autonomous RC car race, where we’d go to a warehouse and train the car around the track.
After a few weekends of iterating and improving the model/training set, we were convinced we’d win the next race… only to lose almost immediately by crossing the lines on the track.
We did some inverse model explanation work which quickly showed that our car was paying attention to the overhead skylights more than the actual tracks. Unlike the other weekends, it was a foggy day!
A quick hack to cut out the top 50% of each training image brought our car back to its prior reliability.
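For anyone curious what that hack amounts to in practice, here's a minimal sketch (not the actual code from the project; it assumes frames arrive as H×W×C numpy arrays and that the same crop is applied at training time and at inference time):

```python
import numpy as np

def crop_top_half(image: np.ndarray) -> np.ndarray:
    """Drop the top 50% of the frame so overhead lights never reach the model."""
    height = image.shape[0]
    return image[height // 2:, :]
```

Since the skylights only ever appear in the upper half of the camera view, cropping them out forces the model back onto the track markings.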
In college, I had to make a line follower car that could drive over a line drawn with segments of black tape. While most folks just used a micro-controller to write their control loop, I made a set of op-amp circuits to compute the power to each wheel in analog.
It took a while to get good resistor values, but the night before the project was due I managed to get it working really well. So that was cool, and I got to catch up on sleep. Well, the funny thing was that the next day we tested our line followers in a big lecture hall, and the courses were not directly under the lights. My beautiful analog line follower started following its own shadow!
Of course, the more diligent students had already tested their cars under these conditions and found the need to put a little hood over the photoresistors so that they only saw the light reflected from the car's own LED.
Another good trick here is to take two measurements: once with a light on your bot aimed at the line turned on, and once with it off. Taking the difference between the two readings fairly reliably rejects ambient lighting.
Of course getting that to work with all-analog hardware is left as an exercise to the reader.
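In digital form it's just a synchronized subtraction. A minimal MicroPython sketch of the idea (the board, pin numbers, and settle delays are placeholder assumptions, e.g. a Raspberry Pi Pico with the photoresistor divider on ADC pin 26 and the illumination LED on GPIO 15):

```python
from machine import ADC, Pin
import time

light_sensor = ADC(Pin(26))   # photoresistor voltage divider (pin is an assumption)
line_led = Pin(15, Pin.OUT)   # LED aimed at the tape (pin is an assumption)

def read_line_reflectance():
    """Measure with the LED off, then on; the difference is mostly our own light."""
    line_led.off()
    time.sleep_ms(2)                     # let the sensor settle
    ambient = light_sensor.read_u16()    # skylights, room lights, shadows...

    line_led.on()
    time.sleep_ms(2)
    lit = light_sensor.read_u16()        # ambient plus the reflection of our LED

    line_led.off()
    return lit - ambient                 # ambient term cancels, to first order
```

Black tape reflects little of the LED's light, so a small difference means tape and a large difference means bare floor, regardless of how the room happens to be lit.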
I’m going to bet that hard-earned lesson has positively affected future endeavours. Did it ever come to mind later, when you were working on something and the memory nudged you towards searching for additional edge cases or taking “what ifs” more seriously?
> We did some inverse model explanation work which quickly showed that our car was paying attention to the overhead skylights more than the actual tracks.
It must've been posted here before: this is exactly the strategy Andy Sloane used to localize his car.