Yorick Becomes a Mimic: Wireless 3-Axis Skull Control via Motion Capture

Introduction

The sensor cap and the skull it wirelessly controls.

A few years back I presented my Yorick Alexa project at a local Hack & Tell meeting, where someone asked if I could hook up a microphone to provide the voice rather than Alexa. That is a pretty trivial change (much easier than getting Alexa interfaced in the first place), but it sparked the idea of capturing a person’s head motions for the skull in real time, rather than using pre-canned, repeated routines. This Mimic project was the result.

This project uses two Raspberry Pi’s, communicating over WiFi using XMLRPC, along with an Inertial Measurement Unit (IMU) sensor and a Maestro Servo Controller to have a 3-axis skull wirelessly mimic the head movements of the operator, who is wearing a baseball cap with the sensor Pi and IMU unit mounted on the brim. (A shoutout to Greg G on Haunt Forum for the ball cap mount idea).

Side note: My first job out of college was as a systems engineer on the Shuttle Mission Simulator, including the simulation code for the rate gyro assemblies and the on-board accelerometers. There, I first heard of this weird thing called a “quaternion.” Now, over 40 years later, here I am using them in a hobby project!

Overview

Sensor unit mounted on a baseball cap

The project has two units: the sensor unit and the controller unit. The sensor unit consists of a Raspberry Pi Zero W and an Adafruit BNO055 9-degrees-of-freedom IMU board. The controller unit consists of another Pi Zero W and a Pololu Maestro Servo Controller. The two communicate with each other using XMLRPC running over WiFi; the sensor unit acts as the server and the controller unit acts as the client.

I mounted the sensor Pi and IMU board on the brim of a baseball cap, so it will track my head movements.

The code for the project is open source, of course, and posted on GitHub.

How it Works

The IMU board has a triaxial accelerometer, a triaxial gyroscope, and a triaxial magnetometer. The magnetometer provides orientation relative to magnetic north, but it is not used in this project’s IMU-only mode, since all we want is orientation relative to the starting position (which should be looking straight ahead and level). That leaves the accelerometer and gyroscope. The secret sauce in this board is its high-speed ARM Cortex-M0 processor, which takes the raw data from all the sensors, fuses it, and outputs the calculated orientation in real time.

The orientation can be provided as Euler angles (think roll, pitch, and yaw rotations about the board’s x, y, and z axes), as the x, y, and z components of the gravitational force vector, or as a quaternion (which defines the direction of a single rotation axis and the amount of rotation about that axis). Unfortunately, the chip has some flaws in its Euler angle calculations, so it’s safer to use the quaternion output. For this project, the aviation convention is used for the angles (x axis pointing forward, y axis pointing right, and z axis pointing down). Note, however, that some of the servos move in the opposite direction, so you’ll see some sign changes in the function that converts the angles to actual servo commands.

To move the skull, though, we want the Euler angles, which map directly onto the tilt, nod, and turn servos. Fortunately, the math to convert the quaternion into the proper Euler angles is easy to find.
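
For reference, here is a minimal sketch of that conversion in Python, using the standard aerospace (z-y-x Tait-Bryan) formulas and assuming the quaternion arrives in (w, x, y, z) order; it illustrates the math rather than reproducing the exact function in my code:

    import math

    def quaternion_to_euler(w, x, y, z):
        # Roll: rotation about the x (forward) axis
        roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
        # Pitch: rotation about the y (right) axis; clamp to dodge rounding just past +/-1
        pitch = math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x))))
        # Yaw: rotation about the z (down) axis
        yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
        return roll, pitch, yaw    # radians; wrap in math.degrees() if you prefer degrees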

In this project, the sensor Pi is set up to be an XMLRPC server which, when receiving the appropriate XMLRPC request, queries the board for the current orientation, provided as a quaternion, and wirelessly sends this to the controller Pi that made the request.

The controller Pi makes this request 50 times per second. Once it has the quaternion, it converts it to the three Euler angles, and then converts each angle into the appropriate servo command to pass on to the maestro.py module that communicates with the Maestro servo controller. In addition, there is a subroutine that sends commands to move the eyes servo in a pre-determined, but relatively random, fashion.
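
The eye routine depends on the particular servo setup, but the idea is simple enough to sketch; the channel number, pulse range, and the set_eye_target callable below are placeholders rather than what my code actually uses:

    import random
    import time

    EYE_CHANNEL = 3                  # placeholder: Maestro channel driving the eyes
    EYE_MIN, EYE_MAX = 4000, 8000    # placeholder target range, in quarter-microseconds

    def wander_eyes(set_eye_target):
        # Send the eye servo to a random position, hold it for a moment, repeat.
        while True:
            set_eye_target(EYE_CHANNEL, random.randint(EYE_MIN, EYE_MAX))
            time.sleep(random.uniform(0.5, 2.5))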

This video shows the system in action. The goal is not to get perfectly synchronized motion between the operator and the skull, as the servos will always introduce significant delay. Rather, the intent is to generate spontaneous and realistic movements by capturing the motion of the operator, who normally will be out of sight.

Software Prerequisites

The sensor software uses CircuitPython, so Adafruit’s Blinka library must be installed to support CircuitPython on a Pi. In addition, the adafruit_bno055 library is needed to interface with the board. Be sure to use the CircuitPython version rather than the earlier, deprecated version.

Hardware

Closeup of the BNO055 sensor and Pi Zero sensor unit

This project requires two Raspberry Pi’s with WiFi (I used Pi Zero W’s), an Adafruit BNO055 IMU board, and a Pololu Maestro Servo Controller to control the motions of a 3-axis skull. You also need some jumper wires and a USB OTG cable. The OTG cable is used to connect the controller Pi to the Maestro Servo Controller.

Sensor

The sensor unit captures the operator’s head movements. The sensor software is a single program, imu_rpc_server.py. It is configured to use a UART interface to the BNO055 board. You can use an i2c interface instead by changing a few lines at the top of the program; however, all Raspberry Pi’s have a hardware issue with i2c clock stretching, and the BNO055 uses clock stretching. This gave me extremely unreliable results when I was developing the software, and I put the entire project aside for months until I came back and learned about the hardware issue. There is a workaround if you do want to use i2c: slow the i2c clock down so that clock stretching will rarely (hopefully never) be needed. You can read more about the issue here: https://www.mcgurrin.info/robots/723/.
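
For what it’s worth, those “few lines at the top” come down to picking one of the two initializations Adafruit documents for the library; something like the following, where the serial device and i2c setup are the standard Raspberry Pi defaults rather than anything specific to my code:

    import serial
    import adafruit_bno055

    # UART interface (what this project uses): BNO055 wired to the Pi's serial pins
    uart = serial.Serial("/dev/serial0", baudrate=115200)
    sensor = adafruit_bno055.BNO055_UART(uart)

    # i2c alternative (subject to the Pi clock-stretching issue described above):
    # import board
    # i2c = board.I2C()
    # sensor = adafruit_bno055.BNO055_I2C(i2c)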

Normally, when the Pi kernel boots up, it puts a login terminal on the serial port. You’ll need to turn this off if using the UART interface. To do so, run the raspi-config tool, go to Interface Options, then Serial Port, disable the login shell over serial, and leave the serial port hardware enabled. Then reboot the Pi. If you later need to re-enable the login shell, just follow the same procedure. To wire up the sensor board and the Pi using the UART interface:

  • Connect BNO055 Vin to Raspberry Pi 3.3V power
  • Connect BNO055 GND to Raspberry Pi ground
  • Connect BNO055 SDA (now UART TX) to Raspberry Pi RXD pin
  • Connect BNO055 SCL (now UART RX) to Raspberry Pi TXD pin
  • Connect BNO055 PS1 to BNO055 Vin / Raspberry Pi 3.3V power

You can test that everything is working by running the simpletestuart.py program. It will print out the temperature of the board, the individual sensor readings, and the integrated Euler angle and quaternion values, as well as the calibration status and the axis map, updating every 5 seconds. Don’t worry if the magnetometer readings show as None: this project uses relative orientation, so the software turns the magnetometer off. For more information on the board and its various outputs and settings, see the datasheet.

The imu_rpc_server.py program is similar to the test program, but it only reads the quaternion values (the Euler angles are unreliable on this sensor). It also starts an XMLRPC server that responds to read_sensor requests by returning the quaternion for the current orientation. Once started, the program runs continuously until it is forced closed.
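
The server side only needs Python’s standard library on top of the sensor setup shown earlier. Here is a minimal sketch of the idea, with the port number and the IMU-only mode setting being my assumptions rather than necessarily what imu_rpc_server.py does:

    import serial
    import adafruit_bno055
    from xmlrpc.server import SimpleXMLRPCServer

    uart = serial.Serial("/dev/serial0", baudrate=115200)
    sensor = adafruit_bno055.BNO055_UART(uart)
    sensor.mode = adafruit_bno055.IMUPLUS_MODE    # sensor fusion without the magnetometer

    def read_sensor():
        # Return the current orientation as a (w, x, y, z) quaternion.
        q = sensor.quaternion
        # The sensor can briefly return None while settling; substitute the identity quaternion.
        return list(q) if q and None not in q else [1.0, 0.0, 0.0, 0.0]

    server = SimpleXMLRPCServer(("0.0.0.0", 8000))   # port 8000 is an assumption
    server.register_function(read_sensor)
    server.serve_forever()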

Controller

The controller software runs on a different Pi and consists of two programs: main_controller.py and maestro.py. main_controller.py initializes the servo controller and sets limits on the speed and acceleration of the servos. I found this cuts down on servo noise, provides smoother motion, and keeps the head from whipping around when turning.

The controller queries the sensor Pi via XMLRPC 50 times per second to get the orientation as a quaternion. It converts the quaternion into Euler angles, converts those angles into servo commands, and sends the commands to the Maestro board. It also generates a sequence of eye movements and sends those to the Maestro as well.
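
Pulled together, the main loop is conceptually only a few lines. The sketch below assumes the hostname, port, neutral position, and scale factor shown (they are illustrative, not the values in main_controller.py), the quaternion_to_euler function from the earlier sketch, and the common maestro.py interface in which Controller.setTarget takes a channel and a target in quarter-microseconds:

    import math
    import time
    import xmlrpc.client

    import maestro    # the Maestro helper module used by the project

    proxy = xmlrpc.client.ServerProxy("http://sensor-pi.local:8000")   # address assumed
    servo = maestro.Controller()

    CENTER = 6000              # quarter-microseconds (1500 us), assumed neutral position
    QUARTER_US_PER_DEG = 40    # illustrative scale factor, not a calibrated value

    def angle_to_target(angle_rad, sign=1):
        # Map an angle in radians to a Maestro target centered on the neutral position.
        return int(CENTER + sign * QUARTER_US_PER_DEG * math.degrees(angle_rad))

    while True:
        w, x, y, z = proxy.read_sensor()
        roll, pitch, yaw = quaternion_to_euler(w, x, y, z)   # from the earlier sketch
        servo.setTarget(0, angle_to_target(roll))            # tilt
        servo.setTarget(1, angle_to_target(pitch, sign=-1))  # nod (sign flipped for this servo)
        servo.setTarget(2, angle_to_target(yaw))             # turn
        time.sleep(0.02)                                     # roughly 50 requests per second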

ManServoTest can be used as you set things up to make sure your controller Pi is talking to the Maestro correctly.

Use

Set up the Pi’s to connect to your local WiFi network and install CircuitPython on the sensor Pi (it’s not needed for the controller). Then install the appropriate modules on each one. Connect the sensor Pi to the BNO055 and connect the controller Pi to the Maestro Servo Controller using a USB OTG cable. And, of course, connect the appropriate pins on the servo controller to the servos controlling your skull. Roll or tilt is channel 0, pitch or nod is channel 1, yaw or pan is channel 2, and random eye movements are sent out on channel 3.

Once everything is hooked up, you must start imu_rpc_server.py on the sensor Pi first, so that the XMLRPC server is up and listening. Then launch main_controller.py on the controller Pi and you’re off and running.

Opportunities for Expansion

It would be very straightforward to take the sensor software and use it to capture and record head motions to a file. Then a different program could read back the file and send out the previously captured motion commands.
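
A hedged sketch of such a recorder, reusing the existing XMLRPC server and simply time-stamping each quaternion into a CSV file (the address, filename, duration, and rate are arbitrary choices for illustration):

    import csv
    import time
    import xmlrpc.client

    proxy = xmlrpc.client.ServerProxy("http://sensor-pi.local:8000")   # address assumed

    with open("head_motion.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "w", "x", "y", "z"])
        start = time.time()
        while time.time() - start < 30.0:      # record a 30-second take
            writer.writerow([round(time.time() - start, 3)] + list(proxy.read_sensor()))
            time.sleep(0.02)                   # about 50 samples per second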

A somewhat more complex undertaking would be to capture the motions as described above, but then translate them into the scripting language used on the Maestro servo controller, so that they could be played back without a Pi or other computer. Sending out 50 commands per second, however, would quickly fill the limited memory of the Maestro. Instead, one would want to pre-process the motion file so that only key commands are kept (analogous to keyframe animation) and only those key commands translated into the Maestro’s scripting language.
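
One simple way to do that thinning is to keep a sample only when it has moved meaningfully away from the last kept sample. A rough sketch follows; the threshold and the frame format are assumptions for illustration:

    def keep_keyframes(frames, threshold_deg=3.0):
        # frames: list of (time, roll, pitch, yaw) tuples, angles in degrees.
        # Keep a frame only if some angle moved more than threshold_deg since the last kept frame.
        if not frames:
            return []
        kept = [frames[0]]
        for frame in frames[1:]:
            if any(abs(a - b) > threshold_deg for a, b in zip(frame[1:], kept[-1][1:])):
                kept.append(frame)
        return kept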

Lidar Part 3: Improvised 3D Scanning with Neato XV-11 Lidar

Lidar setup and 3D scan

It’s been over two years since I first wrote about lidar units, and at the time I said the final part would be a look at the Neato XV-11 that I had purchased off of Ebay. That got delayed by several years, first by an initially bad unit (but a great response from the seller to correct the problem!), then by higher-priority projects and life intervening, but I’m finally ready to report. Besides playing around with the unit, I added some enhancements to the display software available from Surreal, who makes the controller unit I used, and mounted the lidar on a pan-tilt system so I could do 3D scans.

Equipment:

  • Neato XV-11 lidar unit (purchased off of Ebay)
  • XV Lidar Controller from Surreal (built around a Teensy 2.0)
  • Servo-driven pan-tilt mount
  • Pololu Maestro servo controller (drives the pan and tilt servos)
  • Plastic base plate, standoffs, and a project box for mounting
  • A computer running the Python/VPython visualization software

Construction

Construction is really straightforward. I mounted the lidar to a plastic base plate using some standoffs (since the drive motor sticks out underneath the unit). Then I mounted the base plate to the pan-tilt system and mounted that to a project box I had lying around.

XV-11 lidar mounted on a servo-controlled pan/tilt system for 3D scans

The XV Lidar Controller version I used is built around a Teensy 2.0 and uses a PID loop to monitor and control the lidar’s rotation speed via PWM. In addition, it reads the data coming off the lidar unit and makes the information available through a USB port.

The servos are both connected to the servo controller, which also uses a USB interface. The figure below shows the setup:

Labeled picture of the setup

Software

I didn’t touch the firmware for the lidar controller; the source code is available through links at Surreal’s site. There are several very similar versions of visualization code in Python that take the output from the lidar controller and display it using VPython; they all seem to derive from code by Nicolas Saugnier (Xevel). I started with one of those variants. The main changes I made were 1) adding the ability to do a 3D scan by sweeping a pan/tilt mount through various preset pan and tilt angles, and 2) adding the ability to capture and store the point cloud data for future use. In addition, I wrote a routine to open and display the captured data using a similar interface. Smaller changes include several additional user controls and a move from the thread module to the threading module.
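
The sweep itself is conceptually just two nested loops over preset angles. In the sketch below, the angle lists, the settle delay, and the set_pan_tilt and capture_revolution helpers are placeholders standing in for what the real code does:

    import time

    PAN_ANGLES = [-60, -30, 0, 30, 60]     # placeholder preset pan angles, degrees
    TILT_ANGLES = [-20, -10, 0, 10, 20]    # placeholder preset tilt angles, degrees

    def sweep(set_pan_tilt, capture_revolution):
        # Step the mount through each pan/tilt pair and collect one scan at each pose.
        points = []
        for tilt in TILT_ANGLES:
            for pan in PAN_ANGLES:
                set_pan_tilt(pan, tilt)      # command the two servos
                time.sleep(0.5)              # let the mount settle before scanning
                points.extend(capture_revolution(pan, tilt))   # one 360-degree lidar revolution
        return points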

The pan-tilt setup does not rotate the lidar about its own center point. Therefore, to transform coordinates from the lidar’s frame of reference to the original, non-moving frame, you have to apply both a coordinate rotation and an angle-dependent translation of the origin. This is handled by the rotation.py routine using a rotation matrix and offset adjustments.
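
In other words, each point is rotated by the tilt and pan angles and shifted by the (angle-dependent) position of the lidar origin. Here is a sketch of the idea using numpy, assuming the lidar origin sits at a fixed offset from the tilt axis; rotation.py uses offsets calibrated for my particular mount rather than the placeholder value below:

    import numpy as np

    LIDAR_OFFSET = np.array([0.03, 0.0, 0.05])   # placeholder: lidar origin relative to tilt axis, meters

    def to_world(point_lidar, pan_deg, tilt_deg):
        # Rotate a lidar-frame point by tilt then pan, including the origin offset.
        pan, tilt = np.radians(pan_deg), np.radians(tilt_deg)
        # Tilt: rotation about the y axis of the mount
        r_tilt = np.array([[ np.cos(tilt), 0, np.sin(tilt)],
                           [ 0,            1, 0           ],
                           [-np.sin(tilt), 0, np.cos(tilt)]])
        # Pan: rotation about the vertical z axis
        r_pan = np.array([[np.cos(pan), -np.sin(pan), 0],
                          [np.sin(pan),  np.cos(pan), 0],
                          [0,            0,           1]])
        # The offset is applied in the tilting frame, so it rotates along with the lidar.
        return r_pan @ (r_tilt @ (np.asarray(point_lidar) + LIDAR_OFFSET))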

The servos are controlled through a servo controller, with the controlling software (altMaestro.py) being an enhanced version of the Python control software available through Pololu, originally developed by Juhapekka Piiroinen and Brian Wu. My version corrects some comments that were inconsistent with the actual implementation, fixes bugs in the set_speed routine, and adds an “is_moving” call to the API so you can check whether each individual servo is still moving.

The point cloud data is stored in a simple csv file with column headings. Each row has the x, y, and z coordinates, as well as the intensity value for the returned data point (provided by the XV-11), and a flag that is set to 1 if the XV-11 has declared that the intensity is lower than would normally be expected given the range.
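
For anyone who wants to generate or consume compatible files, the format amounts to the following; the column names are taken from the description above, and the helper shown is illustrative rather than the exact code in the repository:

    import csv

    def save_point_cloud(path, points):
        # points: iterable of (x, y, z, intensity, warning) tuples
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["x", "y", "z", "intensity", "warning"])
            writer.writerows(points)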

When displaying the results, either during a scan or from a file, the user can select to color code the display by intensity, by height of the point (the z value), or by neither. In the latter case, points with the warning flag set are shown in red. In addition, as in the original software, the user can toggle showing lines from the lidar out to each data point and also an outer line connecting the points.

The software, along with some sample point cloud files, can be found on my Neato-XV-Lidar-Tools repository on GitHub.

A Note on VPython Versions

The original visualization code was written for Python 2.x and for VPython 2.6 or earlier. After some deliberation, I decided not to update this. Post version 6, VPython internals have been entirely redone, with some minor changes to how things are coded. However, VPython 7 currently has issues with Spyder, which I use as my development environment, while VPython 6 won’t run under Python 3.x, and never will. It shouldn’t be a heavy lift to convert the code, but note that if you update it to run under Python 3 you’ll also need to update to VPython 7 or later, while updating VPython alone may create issues depending on your development environment. So it’s best to make both updates at the same time.

Sample Results

This first scan is a 2D scan from the floor of my kitchen, with annotation added. It clearly shows the walls and cabinets, as well as the doorways. Note that the second doorway from the bottom leads to a stairway to the basement. Clearly, either a 3D scan or an additional sensor would be needed to keep a robot using this lidar from falling down the stairs, which the lidar just sees as an opening at its level!

2D Lidar scan of my kitchen

As mentioned above, one option is to display lines from the lidar unit out to each data point. This is shown in the annotated scan below:

2D Scan showing lines to each data point

The display options also allow you to color code the data points based on either their intensity or their height off the ground. Examples of the same scene using these two options are shown below. In the intensity scan, you can see that nearby objects, as a general rule, show green (highest intensity); however, the brown leather of my theater seats does not reflect well, so the seats appear orange even though they aren’t very far from the lidar unit.

3D scan, with colors indicating height

3D scan with colors depicting intensity of the return

Even after calibrating the pan and tilt angles, the alignment is not perfect. This is most clearly seen by rotating the view to look top-down and noting that the lines for vertical surfaces do not all overlap on the display. The 3D results weren’t as good as I’d hoped, but it certainly works as a proof of concept. The 2D results are very good given the price of the unit, and I could envision modifying the code to, for example, rapidly capture snapshots for training a machine learning program.

Potential Enhancements

One clear shortcoming in the current implementation is the need to carefully calibrate the servo command values and the angles. This takes some trial and error, and is not 100% repeatable. In addition, one has to take into account the fact that as the unit tilts, the central origin point of the lidar moves, and where it moves to is also a function of the pan angle. One of the effects of this setup is that unlike an expensive multi-laser scanning unit, each 360 degree scan is an arc from low to high to low, rather than covering a fixed elevation angle from horizontal. This makes the output harder to interpret visually. The 3D scanning kit from Sweep takes a different approach, rotating the lidar unit 90 degrees, so that it scans vertically rather than horizontally, and then uses a single stepper motor to rotate the unit through 360 degrees. Both the use of a single rotation axis and the use of a stepper motor rather than a servo likely increase the precision.

With either 2D or 3D scanning, the lidar can be used indoors as a sensor for mobile robots (as that’s what it was used for originally, after all). There’s an interesting blog post about using this lidar unit for Simultaneous Location And Mapping (SLAM). I may try mounting the lidar and a Raspberry Pi on a mobile robot and give that a try.

Hobbyist Lidar of note

When I wrote my previous posts on lidar units, the cheapest 360-degree scanning unit suitable for outdoor use cost over $1,000. While both Velodyne and Quanergy have promised sub-$1,000 solid-state automotive-grade lidar units (at least if bought in automotive industry quantities) within the next 12 months, in the meantime hobbyists now have the Scanse Sweep. This unit uses the previously reported-on LIDAR-Lite as its core lidar unit. The specs state that it has a range of 40 meters and a resolution of 1 cm, and that because it uses coded signals, it works outdoors as well as indoors. The price, as of today, is $349. This is higher than its original launch price, likely driven by the fact that the cost of LIDAR-Lite units has gone up since the sensor came back into production under its new owner, Garmin. The LIDAR-Lite v3 itself has a list price of $149.59. While certainly not an automotive-grade sensor, the Scanse Sweep looks like a good product for outdoor as well as indoor projects, including scale vehicle projects, at an affordable price.