Lidar Part 3: Improvised 3D Scanning with Neato XV-11 Lidar

Lidar setup and 3D scan

It’s been over two years since I first wrote about lidar units, and at the time I wrote that the final part would be a look at the Neato XV-11 that I had purchased on eBay. That got delayed by several years, first by an initially bad unit (but a great response from the seller to correct the problem!), then by higher-priority projects and life in general, but I’m finally ready to report. Besides playing around with the unit, I added some enhancements to the display software available from Surreal, who make the controller unit I used, and mounted the lidar on a pan-tilt system so I could do 3D scans.

Equipment:

Construction

Construction is straightforward. I mounted the lidar to a plastic base plate using standoffs (since the drive motor protrudes underneath the unit), mounted the base plate to the pan-tilt system, and mounted that to a project box I had lying around.

XV-11 lidar mounted on a servo-controlled pan/tilt system for 3D scans

The XV Lidar Controller version I used is built around a Teensy 2.0 and uses a PID loop to monitor and control the lidar’s rotation speed via PWM. In addition, it reads the data stream coming off the lidar unit and makes the information available through a USB port.

The servos are both connected to the servo controller, which also uses a USB interface. The figure below shows the setup:

Labeled picture of the setup

Software

I didn’t touch the firmware for the lidar controller; the source code is available through links at Surreal’s site. There are several very similar versions of Python visualization code that take the output from the lidar controller and display it using VPython. They all seem to derive from code by Nicolas Saugnier (Xevel), and I started with one of those variants. The main changes I made were 1) adding the ability to do a 3D scan by sweeping a pan/tilt mount through various preset pan and tilt angles, and 2) adding the ability to capture and store the point cloud data for future use. In addition, I wrote a routine to open and display the captured data using a similar interface. Smaller changes included implementing several additional user controls and moving from the thread module to the threading module.
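The sweep logic itself is simple: step through every preset pose, wait for the servos to settle, and collect one revolution of lidar data at each pose. A minimal sketch of that loop follows; the angle presets are placeholders, and `move_to` and `capture_scan` are hypothetical callbacks standing in for the real servo and lidar code.

```python
import time

# Placeholder sweep angles in degrees; the actual presets live in the
# scanning script itself.
PAN_ANGLES = [-60, -30, 0, 30, 60]
TILT_ANGLES = [-20, 0, 20]

def sweep(move_to, capture_scan, settle=0.0):
    """Step the pan/tilt mount through every preset pose and gather points.

    move_to(pan, tilt) and capture_scan() are hypothetical callbacks: the
    first positions the servos, the second records one 360-degree lidar
    revolution and returns its points.
    """
    cloud = []
    for tilt in TILT_ANGLES:
        for pan in PAN_ANGLES:
            move_to(pan, tilt)
            time.sleep(settle)  # let the servos finish moving before scanning
            cloud.extend(capture_scan())
    return cloud
```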

The pan-tilt setup does not rotate the lidar around its own center point. Therefore, to transform coordinates from the lidar frame of reference to the original non-moving frame, you have to do both a coordinate rotation and an angle-dependent translation of the origin. This is handled by the rotation.py routine using a rotation matrix and offset adjustments.
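The actual rotation.py isn’t reproduced here, but the idea can be sketched in a few lines. This sketch assumes the lidar’s center sits a fixed arm length above the tilt axis (the `ARM` value and the pan-about-z, tilt-about-y axis convention are illustrative assumptions, not the real mount geometry):

```python
import numpy as np

# Assumed distance from the tilt axis to the lidar's optical center (mm).
# Because of this arm, tilting the unit also moves the lidar's origin.
ARM = 50.0

def rotation_matrix(pan, tilt):
    """Rotation from the lidar frame into the fixed base frame.

    pan  -- rotation about the vertical z axis, radians
    tilt -- rotation about the (panned) horizontal y axis, radians
    """
    cp, sp = np.cos(pan), np.sin(pan)
    ct, st = np.cos(tilt), np.sin(tilt)
    rz = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])   # pan
    ry = np.array([[ct, 0, st], [0, 1, 0], [-st, 0, ct]])   # tilt
    return rz @ ry

def lidar_to_base(point, pan, tilt):
    """Rotate a lidar-frame point, then add the angle-dependent origin offset."""
    r = rotation_matrix(pan, tilt)
    offset = r @ np.array([0.0, 0.0, ARM])  # where the lidar center ends up
    return r @ np.asarray(point, dtype=float) + offset
```

Note that the offset itself depends on both angles: with the unit tilted 90 degrees, even the lidar’s own origin maps to a point displaced horizontally from the base frame’s origin.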

The servos are controlled through a servo controller; the controlling software (altMaestro.py) is an enhanced version of the Python control software available through Pololu, originally developed by Juhapekka Piiroinen and Brian Wu. My version corrects some comments that were inconsistent with the actual implementation, fixes bugs in the set_speed routine, and adds is_moving as an API call to check whether each individual servo is moving.

The point cloud data is stored in a simple csv file with column headings. Each row has the x, y, and z coordinates, as well as the intensity value for the returned data point (provided by the XV-11), and a flag that is set to 1 if the XV-11 has declared that the intensity is lower than would normally be expected given the range.
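A round-trip of that file format is only a few lines of Python. The column names below are inferred from the description, so the actual headers in the repository’s sample files may differ:

```python
import csv
import io

# Column layout inferred from the article; actual header names may differ.
FIELDS = ["x", "y", "z", "intensity", "warning"]

def write_points(fileobj, points):
    """Write (x, y, z, intensity, warning) rows as a headed CSV."""
    writer = csv.writer(fileobj)
    writer.writerow(FIELDS)
    writer.writerows(points)

def read_points(fileobj):
    """Read the CSV back into a list of (x, y, z, intensity, warning) tuples."""
    return [(float(r["x"]), float(r["y"]), float(r["z"]),
             int(r["intensity"]), int(r["warning"]))
            for r in csv.DictReader(fileobj)]
```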

When displaying the results, either during a scan or from a file, the user can select to color code the display by intensity, by height of the point (the z value), or by neither. In the latter case, points with the warning flag set are shown in red. In addition, as in the original software, the user can toggle showing lines from the lidar out to each data point and also an outer line connecting the points.
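Color coding of this kind usually comes down to normalizing the value (intensity or z) into a 0–1 range and mapping it onto a gradient of the (r, g, b) tuples VPython accepts. A simple red-to-green gradient, purely as an illustration rather than the tool’s actual palette:

```python
def color_for(value, lo, hi):
    """Map a value in [lo, hi] onto a red-to-green gradient.

    Returns an (r, g, b) tuple with components in 0-1, the form VPython
    color attributes expect. Values outside the range are clamped.
    """
    t = min(max((value - lo) / float(hi - lo), 0.0), 1.0)
    return (1.0 - t, t, 0.0)
```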

The software, along with some sample point cloud files, can be found on my Neato-XV-Lidar-Tools repository on GitHub.

A Note on VPython Versions

The original visualization code was written for Python 2.x and for VPython 6 or earlier. After some deliberation, I decided not to update it. Post version 6, VPython internals have been entirely redone, with some minor changes to how things are coded. However, VPython 7 currently has issues with Spyder, which I use as my development environment, while VPython 6 won’t run in Python 3.x and never will. It shouldn’t be a heavy lift to convert the code, but note that if you update it to run under Python 3 you’ll also need to update to VPython 7 or later, while updating VPython alone may create issues depending upon your development environment. So it’s best to make both updates at the same time.

Sample Results

This first scan is a 2D scan from the floor of my kitchen, with annotation added. It clearly shows the walls and cabinets, as well as the doorways. Note that the second doorway from the bottom leads to a stairway down to the basement. Clearly, either a 3D scan or an additional sensor would be needed to keep a robot using the lidar from falling down the stairs, which the lidar sees only as an opening at its level!

2D Lidar scan of my kitchen

As mentioned above, one option is to display lines from the lidar unit out to the point data. This is shown in the annotated scan below:

2D Scan showing lines to each data point

The display options also allow you to color code the data points by either their intensity or their height off the ground. Examples of the same scene using these two options are shown below. In the intensity scan, you can see that nearby objects, as a general rule, show green (highest intensity); however, the brown leather of my theater seats does not reflect well, so they appear orange even though they aren’t very far from the lidar unit.

3D scan, with colors indicating height

3D scan with colors depicting intensity of the return

Even after calibrating the pan and tilt angles, the alignment is not perfect. This is most clearly seen by rotating the view to a top-down perspective and noting that the lines for vertical surfaces do not all overlap on the display. The 3D results weren’t as good as I’d hoped, but it certainly works as a proof of concept. The 2D results are very good given the price of the unit, and I could envision modifying the code to, for example, rapidly capture snapshots for use in training a machine learning program.

Potential Enhancements

One clear shortcoming in the current implementation is the need to carefully calibrate the servo command values and the angles. This takes some trial and error, and is not 100% repeatable. In addition, one has to take into account the fact that as the unit tilts, the central origin point of the lidar moves, and where it moves to is also a function of the pan angle. One of the effects of this setup is that unlike an expensive multi-laser scanning unit, each 360 degree scan is an arc from low to high to low, rather than covering a fixed elevation angle from horizontal. This makes the output harder to interpret visually. The 3D scanning kit from Sweep takes a different approach, rotating the lidar unit 90 degrees, so that it scans vertically rather than horizontally, and then uses a single stepper motor to rotate the unit through 360 degrees. Both the use of a single rotation axis and the use of a stepper motor rather than a servo likely increase the precision.

With either 2D or 3D scanning, the lidar can be used indoors as a sensor for mobile robots (that’s what it was designed for originally, after all). There’s an interesting blog post about using this lidar unit for Simultaneous Localization and Mapping (SLAM). I may try mounting the lidar and a Raspberry Pi on a mobile robot and give that a try.

A Bit More on Project Yorick

I’ve been extremely gratified by the interest in Project Yorick, so I thought I’d share a bit more. First up is a St. Patrick’s Day greeting:

And here’s a video of Yorick without the project box, as he was being developed:

Last Halloween, I accidentally applied too much voltage to the servos and burned out the eyes servo. Luckily, I could take the skull apart and replace it. Here’s a picture from the successful brain surgery:

Yorick’s Brain Surgery


The Yorick Project

I like to decorate for Halloween, including various talking skeletons that I’ve set up over the years. For Christmas 2015, my wife gave me a great 3-axis talking skull with moving eyes so I could upgrade one of the skeletons from a simple moving-jaw skull. Then a friend suggested that there had to be other applications for the rest of the year. That got me thinking, and when I saw the Alexa Billy Bass I knew what I had to do; the Yorick project was born. I’m pretty happy with the result:

Now, if you put this project together from scratch it’s pretty expensive, due to the cost of the 3-axis talking skull; but if you’re looking to repurpose one you already have, or a similar device, you may want to develop a similar project. The key elements are the talking skull; a Raspberry Pi running the AlexaPi software, which turns the Pi into an Alexa client device; the audio servo controller, which turns the output sound into servo commands for the jaw; and the servo controller, which drives the nod, turn, tilt, and eye servos of the skull.

Block diagram for Yorick

Bench testing the set-up

The AlexaPi software provides output to two GPIO pins, intended to light up LEDs as the Pi hears the wakeup word, listens to the input, gets the response from the Amazon Alexa service, and then speaks the response. All the directions for AlexaPi are on the creator’s GitHub site. For this project, I also linked the same pins to input pins on the Maestro servo controller. The Maestro I used allows pins to be used as outputs (primarily for servos, but also for other purposes) or as analog inputs; other models also have digital input pins. By reading the status of the input pins, the controller knows which state to be in, as there is a separate routine of head motions for each of inactive, wake, listen, get response, and speak response.
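The same pin states can also be read from the host side over the Maestro’s serial interface, using the documented Pololu compact-protocol Get Position command (0x90, which for an input channel returns the input reading rather than a pulse width). The state decode below mirrors the logic of the Maestro script at the end of this article; the threshold value and which pin means what are assumptions for illustration:

```python
def get_position(port, channel):
    """Pololu compact-protocol Get Position (0x90).

    The Maestro replies with two bytes, low byte first; for an input
    channel the value reflects the input reading, not a servo target.
    """
    port.write(bytes([0x90, channel]))
    low, high = port.read(2)
    return low + 256 * high

def alexapi_state(pin_a, pin_b, threshold=155):
    """Decode the two AlexaPi status pins into a state name.

    Threshold and pin semantics are assumptions modeled on the Maestro
    script in this article, not documented behavior.
    """
    if pin_a > threshold and pin_b > threshold:
        return "thinking"   # both high: getting the reply from Amazon
    if pin_a > threshold:
        return "speaking"   # speaking the reply
    if pin_b > threshold:
        return "listening"  # listening to the query
    return "resting"
```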

The servo sequences are developed using the GUI-based controller software provided by Pololu, and then custom control commands are added using their stack-based scripting language. A partial script is included at the end of this article: the short first section is the control code I wrote, while the rest (not shown) consists of the automatically generated subroutines based on the sequences I defined with their GUI software.

The skull motions for each state are predefined and fixed (the routines are looped as needed for the typically longer-lasting states: get response and speak response). The one key tip is to slow down the servos in order to look more realistic. The Maestro controller software lets you limit the speed and acceleration of each servo; with the exception of the jaw servo, which had to respond quickly to changing audio, I set both the speed and acceleration values to 20.
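Those same limits can also be applied at runtime over the Maestro’s serial interface, using the documented compact-protocol Set Speed (0x87) and Set Acceleration (0x89) commands, each of which takes its value as two 7-bit bytes. A sketch, with the jaw’s channel number being an assumption:

```python
def set_speed(port, channel, speed):
    """Pololu compact-protocol Set Speed (0x87); value sent as two 7-bit bytes."""
    port.write(bytes([0x87, channel, speed & 0x7F, (speed >> 7) & 0x7F]))

def set_acceleration(port, channel, accel):
    """Pololu compact-protocol Set Acceleration (0x89); same payload format."""
    port.write(bytes([0x89, channel, accel & 0x7F, (accel >> 7) & 0x7F]))

JAW_CHANNEL = 0  # assumed channel assignment for the jaw servo

def tame_servos(port, channels, speed=20, accel=20):
    """Slow every servo except the jaw, which must track the audio."""
    for ch in channels:
        if ch == JAW_CHANNEL:
            continue  # leave the jaw at full speed
        set_speed(port, ch, speed)
        set_acceleration(port, ch, accel)
```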

The audio servo driver board converts the audio output by the Pi into servo commands for the jaw, while also passing the audio through to the powered speakers. Others have developed their own software to drive motor-based (rather than servo-based) devices such as Billy Bass and Teddy Ruxpin, based on the amplitude of the sound. I’m sure the same could be done to drive the jaw servo by extracting the volume of the sound, but I already had an audio servo driver board that otherwise sits unused except at Halloween, so I used that. (Update: I’ve since developed my own Raspberry Pi based audio servo controller, and there’s also the Jawsduino project that uses an Arduino.)
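The software route really comes down to volume extraction: compute the loudness of each audio chunk and map it onto a jaw position. A rough sketch, where the closed/open pulse widths and full-scale level are made-up numbers that would need tuning to the actual servo and jaw linkage:

```python
import math
import struct

def rms(frames):
    """RMS amplitude of a chunk of signed 16-bit little-endian mono PCM."""
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def jaw_target(frames, closed=4000, open_=8000, full_scale=10000.0):
    """Map chunk loudness onto a servo target in quarter-microseconds.

    The closed/open pulse widths and full_scale value are placeholder
    numbers; real values depend on the servo and the jaw linkage.
    """
    level = min(rms(frames) / full_scale, 1.0)
    return int(closed + level * (open_ - closed))
```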

3-axis talking skull with moving eyes

Raspberry Pi with Kinobo USB microphone

Audio servo controller board

Mini Maestro Servo Controller board


I put it all in a project box, and covered the threaded rod supporting the skull with a short length of PVC pipe, painted black, to produce the final result:

Rather messy (mainly with the 5 servo cables), but it all fits in the box.

Hardware Components

  • 3-axis talking skull with moving eyes. Mine is from audioservocontroller.com
  • Powered speakers – The Raspberry Pi puts out very little volume from the audio jack, so you want amplified speakers, whether battery powered or from line current. I used an old pair of inexpensive computer speakers
  • Raspberry Pi 3 – Other models with WiFi should also work
  • Kinobo – USB 2.0 Mini Microphone – You need a microphone for input, and I was quite happy with the performance of this one, considering it only cost $5.25!
  • Pololu Mini Maestro Servo Controller – I used the 12-channel version; larger versions should also work.
  • Audio servo driver board – I used an ST-200 board from Cowlacious that I had for a Halloween project. That model has been replaced by the newer and improved ST-400 model, which should work fine. audioservocontroller.com also sells a similar board
  • Misc., including project box, LEDs (optional), resistors (optional, if LEDs used), breadboard (optional, if LEDs used), PVC pipe, and jumper wires.

Software Components

  • AlexaPi open source client software for Amazon’s Alexa service
  • Pololu’s Maestro Controller Software – I used the Windows version, but they have versions for Linux as well.
  • Custom script for the Maestro servo controller to control the skull. An excerpt to give you a feel for the scripting language is posted below. The full script I used is posted as a gist on GitHub: https://gist.github.com/ViennaMike/b714fc2c9b7eaaaaff952175536a4942


begin     # Loop until input 9 goes high
  9 get_position # get the value of the red trigger, 0-255
  155 greater_than # test whether it is greater than 155 
  if      # Wakeup word detected, run wake movement sequence
    wake     
    750 delay
    1 # flag for getting out of loop
    begin
      dup
      while
      9 get_position
      10 get_position
      plus
      300 greater_than
      if     # If after wake, both inputs high, then getting reply
        think
      else
        9 get_position
        155 greater_than
        if     # if just input on 9 high, then speaking reply
          answer
        else
          10 get_position
          155 greater_than
          if   # if just input on 10 high, then listening to query
            listen
          else     # when both inputs at zero, back to rest mode
            Rest
            1 minus
          endif
        endif
      endif 
    repeat
  endif
repeat

### Sequence subroutines: ###
# Sequence subroutines not included, for brevity