Compass Trials and Tribulations

I finally got back to working on my first robotic vehicle, and it’s been one step forward, two steps back, with one of the back steps being self-inflicted.  My first go at a robotic vehicle used only wheel encoders for dead reckoning, which worked all right for measuring distance traveled but suffered from the well-known lack of both precision and accuracy in heading.  So for determining heading, I decided to add an electronic compass, specifically the Devantech CMPS10 tilt-compensated compass.  I don’t really need the tilt compensation, since I’m running over flat ground and flooring, but I figured I may want to reuse the compass later on another project, and it’s inexpensive for a tilt-compensated compass.

So, I added the compass and ran a simple test program.  The first issue: there was clearly interference from the metal, motors, and/or electronics.  Mounting the compass on an aluminum mast seems to have cleared that up.  One problem down.

Next I modified my code and went into a several-hour debugging nightmare.  As is often the case, the bug is obvious in hindsight, with the symptoms pointing straight to it.  My robot ran forward the set distance, then, when it should have turned left, it spun right in endless circles, bringing to mind the Tommy Roe song “Dizzy” for those of us of a certain age. While debugging, I noticed that the simple compass-reading test program worked fine, but when I loaded the full robot code, the bearing jumped in large discrete increments of about 22 degrees.  Curious, and obviously a clue, but I couldn’t figure out what it meant.  Only after a couple of hours of staring at code and trying small incremental changes did I notice the problem. Where I should have typed lowerByte = compass.read(), I had instead typed lowerByte - compass.read().  The higher-precision result from the compass is sent in two bytes, and since I was never actually setting the lowerByte value, only the upper byte ever changed, producing the large discrete jumps.  One self-inflicted problem solved.
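For reference, here’s roughly what a correct two-byte read looks like.  This is a stripped-down illustrative sketch, not my actual robot code (which wraps the compass in its own class), and it assumes the I2C address (0x60) and 16-bit bearing registers (2 and 3) given in the CMPS10 documentation:

    #include <Wire.h>

    const uint8_t CMPS10_ADDR = 0x60;  // 7-bit I2C address from the CMPS10 docs
    const uint8_t BEARING_REG = 2;     // register 2 = high byte, register 3 = low byte

    // Returns the bearing in tenths of a degree (0-3599), or -1 on a failed read.
    int readBearing() {
      Wire.beginTransmission(CMPS10_ADDR);
      Wire.write(BEARING_REG);              // point at the 16-bit bearing registers
      Wire.endTransmission();

      Wire.requestFrom(CMPS10_ADDR, (uint8_t)2);
      if (Wire.available() < 2) return -1;

      uint8_t upperByte = Wire.read();
      uint8_t lowerByte = Wire.read();      // the assignment my "-" typo silently skipped
      return (upperByte << 8) | lowerByte;  // combine the two bytes
    }

    void setup() {
      Wire.begin();
      Serial.begin(9600);
    }

    void loop() {
      int bearing = readBearing();
      if (bearing >= 0) Serial.println(bearing / 10.0);  // print bearing in degrees
      delay(100);
    }

With the low byte never updated, the bearing can only change when the high byte does, which is exactly the kind of coarse stepping I was seeing.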

Why did the robot turn right instead of left? Either the code was erroneously jumping into the middle of the obstacle avoidance routine, where it tries turning right to avoid obstacles, or something else was going on.  This was relatively easy to isolate: when the robot is hooked up to the PC with debugging Serial.print statements on, it dumps its current state and parameters each time through the loop, so I quickly saw it wasn’t a bad state change.  The problem was self-inflicted wound number two: a sign error.   A clockwise turn is positive in terms of compass bearing, but it’s a negative turn in my code’s coordinate system (e.g., turning right from 90 degrees to 0 degrees).  So, flip the sign and I’m in business.  An easy one.
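The fix boils down to computing the turn with one consistent sign convention.  A minimal sketch of that calculation (illustrative only, with positive meaning a clockwise/right turn, matching compass bearings):

    // Signed difference between target and current compass bearings, in degrees.
    // Result is wrapped into (-180, 180]: positive means turn clockwise (right),
    // negative means turn counterclockwise (left).
    float bearingError(float targetBearing, float currentBearing) {
      float error = targetBearing - currentBearing;
      while (error > 180.0)   error -= 360.0;  // wrap so the robot takes the short way around
      while (error <= -180.0) error += 360.0;
      return error;
    }

For example, at a bearing of 90 (east) with a target of 0 (north), this returns -90, a 90-degree left turn; getting that sign backwards is why the robot turned right instead of left.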

Now it drives straight to the first waypoint and turns in the proper direction, but not to the correct heading.  There’s a lag in the compass reading that I need to account for.  At least that’s not a silly mistake, and I know the source of the problem.
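I haven’t implemented the fix yet, but the obvious first cut is to stop the turn a bit early to allow for the lagging reading.  Something along these lines, reusing the bearingError() helper from above, where LAG_DEGREES is a made-up tuning constant I’d have to calibrate and the motor/compass helper functions are placeholders for my robot’s own routines:

    // Hypothetical lag compensation: stop the turn early, since the compass
    // reading trails the robot's true heading while it is rotating.
    const float LAG_DEGREES = 10.0;        // tuning constant, needs calibration
    const float HEADING_TOLERANCE = 2.0;   // "close enough" band, in degrees

    void turnToBearing(float targetBearing) {
      float error = bearingError(targetBearing, readBearingDegrees());
      if (error > 0) startTurningRight(); else startTurningLeft();

      // Keep turning until within tolerance of the target, less an
      // allowance for the lag in the compass reading.
      while (fabs(error) > HEADING_TOLERANCE + LAG_DEGREES) {
        error = bearingError(targetBearing, readBearingDegrees());
      }
      stopTurning();
    }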

Here’s a picture of my initial compass test (without the robot), and then MARV-1 with the compass mounted on the mast:

Tilt Compensated Compass Test displaying bearing

 

MARV-1 robotic vehicle with electronic compass on mast

Recommended: Udacity’s CS373: Programming a Robotic Vehicle

I haven’t posted recently because I haven’t had time to mess with my robots.  Instead, my free time has been taken up with learning more theory, via Udacity’s free 7-week class.  There will be a new session starting next month, and I highly recommend that you check it out.

The class gives a broad but hands-on introduction to key robotics concepts and algorithms.  It covered localization, filtering (Monte Carlo, Kalman, and particle filters), pathfinding (an intro to A*, dynamic programming, etc.), PID (Proportional-Integral-Derivative) control, and something called Graph SLAM (Simultaneous Localization and Mapping). If you’re already well-versed in one or more of these, you probably won’t learn anything new on that subject, but if you have only a passing familiarity, or none at all, the course is great.
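To give a flavor of the material, here’s a bare-bones PID controller of the sort the course has you build for steering.  This is my own illustrative sketch, not course code (the course work is in Python), and the gains in the usage comment are arbitrary placeholders:

    // Minimal PID controller: output = Kp*error + Ki*integral + Kd*derivative.
    // In the course's steering example, "error" is the car's cross-track error.
    struct PID {
      float kp, ki, kd;
      float integral;
      float prevError;

      PID(float p, float i, float d) : kp(p), ki(i), kd(d), integral(0.0f), prevError(0.0f) {}

      float update(float error, float dt) {
        integral += error * dt;                        // accumulate the I term
        float derivative = (error - prevError) / dt;   // rate of change for the D term
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
      }
    };

    // Example use (gains are placeholders, not tuned values):
    //   PID steering(0.2f, 0.004f, 3.0f);
    //   float steerAngle = -steering.update(crossTrackError, dt);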

The format of Udacity’s courses is what really stands out: short 5-10 minute videos, each with a question or short programming assignment at the end.  It’s a slightly higher-tech Khan Academy: mostly an electronic whiteboard and pen, plus some videos from Google’s autonomous vehicle and the DARPA Challenge.  The programming is done in Python and submitted directly from the web page (although I recommend an IDE for the weekly homework programming).

Beyond the robotics course itself, I think this is the start of a path to the future of college education.  It’s much like newspapers and the print Encyclopaedia Britannica: they have valuable features that the online experience can’t duplicate, but the cost differential is just too great to sustain the old model.  When you can offer a college-level course to thousands of students at once, online, and crowd-source support to partially make up for the lack of direct, one-on-one help, it’s hard to imagine that, in 10-30 years, this won’t be the future of a college degree.  Now, it’s not there yet.  This was a beta run, and the numerous glitches and automated-grading problems made that abundantly clear.  In addition, there’s a lot to work out, especially for non-tech courses.  But compared to $10K-$50K per year for a residential college degree?  I think I may have seen the future of education.

UPDATE 2/8/2013: I’m more convinced than ever that some sort of blended mix of low-cost online courses and a reduced “residency requirement” will be at least one model for future degrees.  Private tuition costs have been rising faster than healthcare costs, while college credit is already becoming available for some online courses.  The University of California system has partnered with Udacity to offer a couple of lower-division and remedial courses for credit, online, for $150.  And now the American Council on Education (ACE) has approved five Coursera courses for “credit equivalency.”  Personally, I loved the college experience, but at today’s prices it’s becoming unaffordable for too many and/or imposing a huge debt burden.

Two items of interest: Legos in a research lab and DARPA Challenge video

Make Magazine’s blog had an interesting item on how researchers in the U.K. automated a long, repetitive process using Lego Mindstorms NXT robots. The researchers are working on artificial bone, and building up the material involves repeatedly dunking the substrate in various liquids in a particular order.  Rather than buy expensive equipment dedicated to the task, they put some Mindstorms robots together and programmed them to do it.

A second item of interest comes from Udacity’s Programming a Robotic Car course.  The course is in beta (as are all their courses) and still has some rough edges (e.g., in the automated grading of programming homework), but it is fascinating and well-taught.  I find it to be a good value for the time committed, and the courses are free.  In any event, there’s an interesting video showing a visualization of one of the cars in the DARPA Challenge detecting the barriers in the maze it’s in and using its A* pathfinding algorithm to determine its path.  The video also shows the car dealing with a complete roadblock and parking itself.