Taking a Coursera Course: Control of Mobile Robots

I’m in week 2 of a 7-week online Coursera course titled Control of Mobile Robots, taught by Professor Magnus Egerstedt of Georgia Tech. It’s an intro to control theory as applied to mobile robots. The first week and a half has covered much of what I’ve been messing around with, so I’m holding my breath on my ability to keep up as the course moves on to new material, especially as other activities compete for my time.  The course consists of a series of video lectures and weekly quizzes.  While not required for the course, there’s also a set of optional hands-on programming assignments that use a robot simulator running in Matlab.  Matlab isn’t free, but you can buy a student version for $99; being enrolled in an online course qualifies you for the student pricing (you just need to email them proof of enrollment).  I’ve always been curious about Matlab, so I’m checking it out.

The seven weeks will cover:

  1. Introduction to Control
  2. Mobile Robots
  3. Linear Systems
  4. Control Design
  5. Hybrid Systems
  6. The Navigation Problem
  7. Putting It All Together

The style is different from that of the Udacity classes I’ve taken, and that takes a bit of getting used to.  The videos are short lectures showing the professor, with occasional clips to illustrate points and some formulas.  Udacity courses, on the other hand, are like Khan Academy videos in that they show a hand drawing on a whiteboard rather than the professor’s face.  In addition, Udacity breaks its videos into very short segments with questions in between.  I think I prefer the more interactive Q&A style, even if it’s overused on Udacity.  However, the Coursera videos are only about 6-10 minutes each, with 8 or 9 videos each week, so they aren’t so long that you get bored.

Week one provided an intro to PID control.  Week two introduces differential drive robots, odometry using wheel encoders, sensors, and behaviors such as driving to a goal and avoiding obstacles.  I don’t know when the course will be offered again, but from what I’ve seen so far, I recommend it.

Structs and functions when using the Arduino IDE

As anyone reading this blog probably knows, the Arduino IDE simplifies a number of aspects of programming for an embedded environment and hides some of the underlying C/C++ machinery. This can make life a lot easier, but it can also cause problems, especially when you step out to do more complex things. I got bitten by one of those earlier today. Since I eventually found a post describing the workaround, I thought I’d share it here.

In my robot code, I’ve defined a struct called coord that holds two doubles, which are the x and y coordinates for whatever I need (e.g., the position of the robot, the next waypoint, etc.).
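For reference, the struct is about as simple as it gets; something along these lines (the field names here are just illustrative):

    // coord holds an (x, y) position in whatever units the robot is working in
    struct coord {
      double x;
      double y;
    };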

Today, I wanted to compute the distance from the current position of the vehicle to the ray defined by the previous and next waypoints, so that the error could be fed into a PID controller. I figured it would be easy to pass the parameters as coord types. BUT, this turns out to be trickier than it should be with the Arduino: the IDE auto-generates function prototypes near the top of the sketch, ahead of your type definitions, so functions that take a struct as a parameter won’t compile unless the struct is defined in a .h file. A workaround is documented by Alexander Brevig on the Arduino Playground: Struct Resource for Arduino.
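To sketch what that looks like in practice (the file and function names here are my own illustration, not from Brevig’s post): put the struct in its own header, include it from the sketch, and functions taking coord parameters compile fine. The cross-track error itself is just the standard signed perpendicular-distance formula.

    // robot_types.h -- keeping the struct in its own header sidesteps the
    // Arduino IDE's prototype-ordering problem
    #ifndef ROBOT_TYPES_H
    #define ROBOT_TYPES_H

    struct coord {
      double x;
      double y;
    };

    #endif

    // In the sketch:
    #include <math.h>
    #include "robot_types.h"

    // Signed perpendicular distance from point c to the line through prev and next.
    // The sign tells you which side of the path the robot is on, which is exactly
    // what you want as the error term for a steering PID.
    double crossTrackError(const coord &prev, const coord &next, const coord &c) {
      double dx = next.x - prev.x;
      double dy = next.y - prev.y;
      double len = sqrt(dx * dx + dy * dy);
      if (len == 0.0) return 0.0;   // degenerate case: the two waypoints coincide
      return (dx * (c.y - prev.y) - dy * (c.x - prev.x)) / len;
    }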

A Simple Low-Pass Filter (Concluded)

The last post covered the concept of the Exponentially Weighted Moving Average Filter and illustrated how it worked on a theoretical example, both with and without noise.  To wrap up, I want to include an actual set of data from the Devantech CMPS10 Tilt Compensated Compass on my current robot.  Although, as the name says, it has tilt compensation, I’m using the raw magnetometer output, as the robot is running on flat floors, and rotations and accelerations are liable to introduce more error than the practically non-existent tilt.
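As a quick recap, the filter update is just filtered = α·raw + (1 − α)·filtered. A minimal Arduino-style version might look like the following (not my exact code; rawHeading would be the value read from the compass each time through the loop):

    const double alpha = 0.33;        // smoothing factor: closer to 1 means less smoothing
    double filteredHeading = 0.0;
    bool headingInitialized = false;

    double filterHeading(double rawHeading) {
      if (!headingInitialized) {      // seed the filter with the first reading
        filteredHeading = rawHeading;
        headingInitialized = true;
      } else {
        filteredHeading = alpha * rawHeading + (1.0 - alpha) * filteredHeading;
      }
      return filteredHeading;
    }

A sketch like this ignores the wrap-around at 0°/360°, which would matter if the heading crosses north mid-run.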

In my code, I’ve set α to a fairly high value of 0.33.  Here’s a plot of both the raw and filtered output for a case where the robot stayed fixed, then was manually rotated rather quickly to a new position:

As you can see from the plot, the filtered response, as expected, lags the raw response after the turn.  However, the raw output overshoots (I’m not sure why), so the lag actually results in the filtered output more closely matching reality, even right after the turn.  It’s not possible to see the effects of the filter on the noise from this plot, so this second graph shows just the raw and filtered data before the rotation, with the scale blown up:

The filter’s ability to attenuate the noise is clearly evident.  The raw output bounces around by roughly +/- 4.6 degrees, while the filtered output stays within the much narrower range of +/- 1.5 degrees.