Wireless Microphone Using Two Raspberry Pi’s (Updated 5/4/2021)

I’m in the process of integrating wireless audio input into my Yorick the Mimic project. This involves adding microphone input and wireless transmission to the sensor cap and then integrating the movement controller with a modified version of Chatter Pi that takes the transmitted audio as an input.

In order to get started, I first put together bare-bones transmitter and receiver programs. I’m using Python, along with PyAudio (which I also used in Chatter Pi), to process the audio on both ends. I’m using UDP to send the data packets containing the audio. I saw some examples using TCP, but it seemed to me that UDP was better suited to real-time audio. If anyone knows more about which is the better approach, please post a comment.

PyAudio

The code runs fine, but generates a continuing stream of

ALSA lib pcm.c:8424:(snd_pcm_recover) underrun occurred

warning messages. This doesn’t interfere with the program’s operations, but if anyone knows why I’m getting them and/or how to eliminate the warnings, I’d appreciate your letting me know.

PyAudio has two modes: a blocking mode, where each call to pyaudio.Stream.write() or pyaudio.Stream.read() blocks until all the given/requested frames have been played/recorded, and a non-blocking mode, where a callback function is launched in a separate thread so that processing can continue in the calling program; the thread ends when the current chunk of audio has been processed. The gist with my code uses the non-blocking mode (via the callback function) for the transmitter, along with two different versions of the receiver, one using blocking mode and one using non-blocking mode. When using the non-blocking mode, be careful that the callback function does not include anything really time consuming, like file reading and writing. If it does, it can’t finish before the next chunk of audio is ready and you get clipping or worse problems.
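To illustrate the callback approach, here’s a minimal sketch of a callback-based UDP transmitter. It is not the exact xmit program from the gist; the chunk size, sample rate, and receiver address are placeholders.

```python
# Minimal sketch of a callback-based UDP audio transmitter (not the exact
# xmit program). Chunk size, rate, and the receiver address are placeholders.
import socket
import time
import pyaudio

CHUNK = 1024                          # frames per buffer
RATE = 44100                          # sample rate in Hz
RECEIVER = ("192.168.1.50", 12345)    # hypothetical receiver IP and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
pa = pyaudio.PyAudio()

def callback(in_data, frame_count, time_info, status):
    # Keep this fast: just forward the raw frames and return immediately.
    sock.sendto(in_data, RECEIVER)
    return (None, pyaudio.paContinue)

stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 input=True, frames_per_buffer=CHUNK,
                 stream_callback=callback)

stream.start_stream()
try:
    while stream.is_active():
        time.sleep(0.1)               # audio is handled in the callback thread
except KeyboardInterrupt:
    pass
finally:
    stream.stop_stream()
    stream.close()
    pa.terminate()
```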

As is well known, the audio jack output on a Pi produces low-volume, poor-quality audio. A USB speaker works much better. However, at least for the speaker I’m using, I need to use alsamixer to control the volume. If I adjust the volume with the speaker icon in the GUI, it produces no sound unless set to the maximum volume. Again, if anyone knows why and how to fix this, please add a comment.

By design, both the xmit and rcv programs run forever once started. There’s one other feature beyond the bare bones basics. The receive program has a Boolean variable named EFFECTS. If set to True, the Sox library is used to deepen the pitch and add a bit of reverb before sending the audio to the speaker.
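For anyone curious how an EFFECTS pass like that can be wired up, here’s one hedged approach: pipe the received raw audio through the SoX command-line tool and let it play the result on the default output device. The specific pitch and reverb settings below are illustrative, not necessarily how my receiver invokes SoX.

```python
# One possible way to realize the EFFECTS option: feed the incoming raw audio
# into a long-running sox process that lowers the pitch, adds mild reverb, and
# plays the result on the default audio device ("-d"). Effect values are
# illustrative only.
import subprocess

RATE = 44100

sox = subprocess.Popen(
    ["sox",
     "-t", "raw", "-r", str(RATE), "-e", "signed-integer", "-b", "16", "-c", "1", "-",
     "-d",                 # play to the default output device
     "pitch", "-300",      # lower pitch by roughly 3 semitones
     "reverb", "30"],      # a bit of reverb
    stdin=subprocess.PIPE,
)

def play_with_effects(chunk):
    """Feed one chunk of received raw 16-bit mono audio into the sox pipeline."""
    sox.stdin.write(chunk)
```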

Hopefully this project will help others with similar needs.

Yorick Becomes a Mimic: Wireless 3-Axis Skull Control via Motion Capture

Introduction

The sensor cap and the skull it wirelessly controls.

A few years back I presented my Yorick Alexa project at a local Hack & Tell meeting. At the meeting, someone asked if I could hook up a microphone to provide the voice, rather than Alexa. That is a pretty trivial change (much easier than getting Alexa interfaced in the first place), but from that I had the idea of capturing a person’s head motions for the skull in real-time, rather than using pre-canned, repeated routines. This Mimic project was the result.

This project uses two Raspberry Pi’s, communicating over WiFi using XMLRPC, along with an Inertial Measurement Unit (IMU) sensor and a Maestro Servo Controller to have a 3-axis skull wirelessly mimic the head movements of the operator, who is wearing a baseball cap with the sensor Pi and IMU unit mounted on the brim. (A shoutout to Greg G on Haunt Forum for the ball cap mount idea).

Side note: My first job out of college was as a systems engineer on the Shuttle Mission Simulator, including the simulation code for the rate gyro assemblies and the on-board accelerometers. There, I first heard of this weird thing called a “quaternion.” Now, over 40 years later, here I am using them in a hobby project!

Overview

Sensor unit mounted on a baseball cap

The project has two units: the sensor unit and the controller unit. The Sensor unit consists of a Raspberry Pi Zero W and an Adafruit BNO055 9-Degrees of Freedom IMU board. The controller unit consists of another Pi Zero W and a Pololu Maestro Servo Controller. The two communicate with each other using XMLRPC running over WiFi. The Sensor unit acts as the server and the Controller unit acts as the client.

I mounted the sensor Pi and IMU board on the brim of a baseball cap, so it will track my head movements.

The code for the project is open source, of course, and posted on GitHub.

How it Works

The IMU board has a triaxial accelerometer, a triaxial gyroscope, and a triaxial magnetometer. The magnetometer provides orientation relative to magnetic north and is not used in the IMU-only mode used in this project, since only orientation relative to the starting position (which should be looking straight forward and level) is needed; the accelerometer and gyroscope provide the data for that. The secret sauce in this board is that it includes a high-speed ARM Cortex-M0 based processor that takes in the raw data from all the sensors, fuses it, and outputs the calculated orientation in real time.

The orientation is provided as Euler angles (think roll, pitch, and yaw rotations about the board’s x, y, and z axes), as the x, y, and z components of the gravitational force vector, or as a quaternion (which defines the direction of a single rotation axis and the amount of rotation about that axis). Unfortunately, the chip has some flaws in calculating Euler angles, so it’s safer to use the quaternion output. For this project, the aviation convention is used for angles (x axis pointing forward, y axis pointing right, and z axis down). Note, however, that some of the servos move in the opposite direction, so you’ll see some sign changes in the function that converts the angles to actual servo commands.

For moving the skull, though, we want the Euler angles, which command the tilt, nod, and turn servos of the skull. Fortunately, the math to convert the quaternion into the corresponding Euler angles is easy to find and implement.
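For reference, the standard conversion looks roughly like this (aviation roll/pitch/yaw convention; the servo-specific sign flips mentioned above are handled elsewhere):

```python
# Quaternion (w, x, y, z) to Euler angles in radians, aerospace convention:
# roll about x, pitch about y, yaw about z.
import math

def quaternion_to_euler(w, x, y, z):
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))

    # Clamp to handle numerical error near the +/-90 degree pitch singularity.
    sinp = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))
    pitch = math.asin(sinp)

    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw
```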

In this project, the sensor Pi is set up to be an XMLRPC server which, when receiving the appropriate XMLRPC request, queries the board for the current orientation, provided as a quaternion, and wirelessly sends this to the controller Pi that made the request.
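A stripped-down sketch of that server side looks something like the following. The port number and the serial device name are assumptions for illustration; the real imu_rpc_server.py has more setup.

```python
# Stripped-down sketch of the sensor-side XMLRPC server. The port and serial
# device are assumptions; imu_rpc_server.py does more than this.
from xmlrpc.server import SimpleXMLRPCServer
import serial
import adafruit_bno055

uart = serial.Serial("/dev/serial0", 115200)
sensor = adafruit_bno055.BNO055_UART(uart)

def read_sensor():
    # Return the current orientation quaternion (w, x, y, z) as a plain list
    # so XMLRPC can marshal it.
    return [float(v) for v in sensor.quaternion]

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(read_sensor, "read_sensor")
server.serve_forever()
```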

The controller Pi makes this request 50 times per second. Once it has the quaternion, it converts it to the three Euler angles, and then converts each angle into the appropriate servo command to pass on to the maestro.py module that communicates with the Maestro servo controller. In addition, there is a subroutine that sends commands to move the eyes servo in a pre-determined, but relatively random, fashion.
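The controller side amounts to a polling loop along these lines. The server address, channel assignments, and the scaling from angles to Maestro targets are placeholders, not the exact values in main_controller.py.

```python
# Sketch of the controller-side polling loop. Server address, channel numbers,
# and the radians-to-Maestro-target scaling are placeholders.
import math
import time
import xmlrpc.client

import maestro   # the maestro.py module used by the project

proxy = xmlrpc.client.ServerProxy("http://sensor-pi.local:8000")
servo = maestro.Controller()        # opens the Maestro's USB serial interface

CENTER = 6000                       # 1500 us in quarter-microsecond units
SCALE = 2000 / math.radians(45)     # ~45 degrees of head motion maps to +/-500 us

def angle_to_target(angle_rad):
    return int(CENTER + SCALE * angle_rad)

while True:
    w, x, y, z = proxy.read_sensor()
    roll, pitch, yaw = quaternion_to_euler(w, x, y, z)  # from the sketch above
    servo.setTarget(0, angle_to_target(roll))    # tilt
    servo.setTarget(1, angle_to_target(pitch))   # nod
    servo.setTarget(2, angle_to_target(yaw))     # turn
    time.sleep(0.02)                             # ~50 requests per second
```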

This video [video link to be inserted] shows the system in action. The goal is not to get perfectly synchronized motion between the operator and the skull, as the servos will always introduce significant delay. Rather, the intent is to generate spontaneous and realistic movements by capturing the motion of the operator, who normally will be out of sight.

Software Prerequisites

The sensor software uses CircuitPython, so Adafruit’s Blinka library must be installed to support CircuitPython on a Pi. In addition, the adafruit_bno055 library is needed to interface with the board. Be sure to use the CircuitPython version rather than the earlier version.

Hardware

Closeup of the BNO055 sensor and Pi Zero sensor unit

This project requires two Raspberry Pi’s with WiFi (I used Pi Zero W’s), an Adafruit BNO055 IMU board, and a Pololu Maestro Servo Controller to control the motions of a 3-axis skull. You also need some jumper wires and a USB OTG cable. The OTG cable is used to connect the controller Pi to the Maestro Servo Controller.

Sensor

The sensor unit captures the operator’s head movements. The sensor software is just one program, imu_rpc_server.py. It is configured to use a UART interface to the BNO055 board. One can use an I2C interface instead by changing a few lines at the top of the program; however, all Raspberry Pi’s have a hardware issue with I2C clock stretching, and the sensor board uses clock stretching. This gave extremely unreliable results when I was developing this software, and I put the entire project aside for months until I came back and found out about the hardware issue. There is a workaround if you want to use I2C: slow the I2C clock speed down so that clock stretching will rarely (hopefully never) be needed. You can read more about the issue here: https://www.mcgurrin.info/robots/723/.

Normally when the Pi kernel boots up, it will put a login terminal on the serial port. You’ll need to turn this off if using the UART interface. To do so, run the raspi-config tool, go to Interface Options, then Serial Port, disable the login shell over the serial connection, and leave the serial port hardware enabled. Then reboot the Pi. If you later need to re-enable it, just follow the same procedure. To wire up the sensor board and the Pi using the UART interface:

  • Connect BNO055 Vin to Raspberry Pi 3.3V power
  • Connect BNO055 GND to Raspberry Pi ground
  • Connect BNO055 SDA (now UART TX) to Raspberry Pi RXD pin
  • Connect BNO055 SCL (now UART RX) to Raspberry Pi TXD pin
  • Connect BNO055 PS1 to BNO055 Vin / Raspberry Pi 3.3V power

You can test that everything is working by running the simpletestuart.py program. It will print out the temperature of the board, the individual sensor parameters, and the integrated Euler angle and Quaternion values, as well as the calibration status and the axis map. This will update every 5 seconds. Don’t worry that the magnetometer readings will be None. This project uses relative orientation and therefore the software turns off the magnetometer. For more information on the board and the various outputs and settings, see the datasheet.

The imu_rpc_server.py program is similar to the test program, but it only reads the quaternion values (the Euler angles are unreliable on this sensor). It also starts an XMLRPC server that will respond to read_sensor requests by returning the quaternion value for the current orientation. The program, once started, runs continuously until forced closed.

Controller

The controller software runs on a different Pi, and consists of two programs: main_controller.py and maestro.py. Main_controller.py initializes the servo controller and sets limits on the speed and acceleration for the servos. I found this cuts down on the noise of the servos, provides smoother motion, and keeps the head from whipping around when turning.

The controller Pi queries the sensor Pi via XMLRPC ten times per second to get the position as a quaternion. It converts the quaternion into Euler angles, converts the Euler angles into servo commands, and sends the servo commands to the Maestro board. It also generates a sequence of eye movements and sends that to the Maestro as well.

ManServoTest can be used as you set things up to make sure your controller Pi is talking to the Maestro correctly.

Use

Set up the Pi’s to connect to your local WiFi network and install CircuitPython on the sensor Pi (it’s not needed for the controller). Then install the appropriate modules on each one. Connect the sensor Pi to the BNO055 and connect the controller Pi to the Maestro Servo Controller using a USB OTG cable. And, of course, connect the appropriate pins on the servo controller to the servos controlling your skull. Roll or tilt is channel 0, pitch or nod is channel 1, yaw or pan is channel 2, and random eye movements are sent out on channel 3.

Once everything is hooked up, you must start imu_rpc_server.py on the sensor Pi first, so that it starts the XMLRPC server. Then launch main_controller.py on the controller Pi and you’re off and running.

Opportunities for Expansion

It would be very straightforward to take the sensor software and use it to capture and record head motions to a file. Then a different program could read back the file and send out the previously captured motion commands.

A somewhat more complex undertaking would be to capture the motions as described above, but then translate them into the scripting language used on the Maestro servo controller, so that they could be played back without a Pi or other computer. Sending out 50 commands per second, however, would quickly fill the limited memory of the Maestro. Instead, one would want to pre-process the motion file so that only key commands are kept (analogous to keyframe animation) and only those key commands translated to the Maestro’s scripting language.

Further UPDATED for Halloween 2021! ChatterPi: a Raspberry Pi Audio Servo Controller for Talking Skulls

Introduction

ChatterPi is a software package that turns a Raspberry Pi into an audio servo controller. In other words, the Pi outputs commands to control a servo based on the volume of the audio input. The input can be either stored audio files (in either mono or stereo .wav format) or from an external source, such as a microphone or line level input. One of the uses is to drive animatronic props, such as a skull or a talking bird.

[This post has been updated to reflect new features added in the latest version]

Background: A Brief History of Talking Skull Control

A common prop that still makes a good impact is a talking object, whether a skull or animal. Some lower cost commercial props use a motor and spring. Another approach is to pre-program a complete sequence to match the vocals, but this is very time consuming, and if you want to change the vocals, or even just edit them slightly, you need to reprogram the entire sequence. For that reason, the use of an audio servo controller to drive a servomotor controlling the jaw is a very popular approach. There are several variations. One of the earliest used hardware to detect when the audio exceeded a threshold, and then began moving the jaw towards a fully open position; when the audio went below the threshold, it would begin closing the jaw. “Scary Terry” Simmons may have been the first to develop an electronic hardware board to do this, and Cowlacious Designs has continued to improve and sell commercial versions, with many added features such as a built-in audio player, various triggering options, and the ability to control LEDs as eyes.

Later, someone named Mike (no relation) combined an Arduino with a hardware volume level board to produce the Jawduino. This went from having just 2 levels to 4. The original project just took audio in and controlled the servo, but others added extensions to play stored mp3 files and/or randomly move additional servos (for example, http://batbuddy.org/resources/Halloweenstuff/TalkingSkull.php).

A few years ago, Steve Bjork from Haunt Hackers combined dedicated hardware with a Propeller microcontroller to increase the number of levels to almost 256 and also to filter out low and high frequencies that don’t tend to result in jaw movement for spoken sound. The result is the Wee Little Talker. This commercial board also has an onboard mp3 player, can be triggered externally, can control LED ‘eyes,’ and adds a wide array of features including a voice feedback menu system.

It occurred to me that with current single board computer capabilities and powerful software libraries, it should be possible to incorporate most of the best features of all of these into a single, software-based system running on a Raspberry Pi. The result is ChatterPi. ChatterPi was developed from scratch using the Python language, but ideas for capabilities and features were freely borrowed from previous audio servo controller projects.

Features

ChatterPi is designed to be extremely powerful and flexible without requiring the user to modify any of the code (although advanced users can certainly do that as well). Table 1 compares the current capabilities of ChatterPi with other audio servo controllers. The full range of capabilities and options is described in the Operations subsection.

Table 1. Feature comparison of ChatterPi and other audio servo controllers

Table comparing features of Chatter Pi to other audio servo controllers

Demonstration Video

This video shows ChatterPi in action, using both a saved .wav file and microphone input. ChatterPi is controlling the jaw movement. The other skull movements are pre-programmed as a script running on a Pololu Maestro Servo Controller.

Using ChatterPi

This section describes how to set up and install both the hardware and software for ChatterPi, and how to use it.

Hardware

ChatterPi was developed and tested on a Pi 3 A+ and a Pi Zero W. It should work on any Pi. [Originally it ran too slowly on a Pi Zero. A section of code for analyzing audio volume used list processing and a loop. This code was replaced using Numpy, and it’s now fast enough to work on a Pi Zero.]

In addition to the Raspberry Pi, you’ll need a USB sound card. This is needed for several reasons. First, if you plan to use an external sound source, you need a way to get audio into your Pi. Second, besides not producing very good sound, the audio out connector may share timing with the Pulse Width Modulation (PWM) code that is needed to drive the servo, creating conflicts. Use of an inexpensive USB sound card solves both issues. I’ve used one from Adafruit that sells for less than $5 and works well (see https://www.adafruit.com/product/1475). You will need TRS (standard stereo) plugs or an adapter to go into the headphone and microphone jacks on the sound card. The card does not work with a TRRS (combination microphone / stereo headphone) plug. You only need a microphone or other external sound source if you wish to use one. Otherwise you can use audio .wav files that you save on the Raspberry Pi. You still need the USB sound card for audio output, however.

That’s all you need for the audio servo controller. Of course, you’ll need a power supply and a servo that you want to control, such as a servo-equipped talking skull, and a passive infrared sensor (PIR) if you want to trigger your prop using one. I used this one (https://www.parallax.com/product/555-28027) from Parallax for development, as I had a spare one already. ChatterPi can also be set to trigger off of a repeating timer or to just turn on and run if you don’t want to use an external sensor.

Figure 1 shows a test bench setup for testing operation. The Red LED is attached to the “TRIGGER_OUT” pin for testing purposes. It can be moved or another LED and resistor attached to the “EYES_PIN” to test that feature. The TRIGGER_OUT pin goes high for 0.5 seconds when the controller is triggered. This can be used to trigger another prop or controller. The EYES_PIN stays high for as long as the audio plays.

Figure 1. A layout for fully testing the ChatterPi

The default pin selections (which can be changed in the config.ini file) are:

  • Jaw servo: 18
  • PIR input trigger: 23
  • Trigger out: 16
  • Eyes: 25

Figure 2 is a photograph of my test setup. The placement of the wiring on the breadboard is slightly different because I was using it to test a variety of items and also a 3-wire servo controller wire, but the schematic connections are identical.

picture of the test setup

Figure 2. A picture of the test setup used for development

Software Overview

Knowledge or understanding of the software code is not required to operate or use the ChatterPi. The ChatterPi package consists of eight Python 3 modules and one configuration file, as shown in Figure 3.

The configuration file, config.ini, holds all of the user selectable parameters, including which pins are used for which functions, whether the audio source is the microphone input or stored .wav files, which servo control mode should be used, and the servo threshold levels. The config.py program simply reads these values and makes them available in memory during runtime.
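In other words, config.py boils down to something like this (the section and key names below are illustrative; the real config.ini defines its own):

```python
# Sketch of what config.py does: read config.ini with configparser and expose
# the values as module-level variables. Section and key names here are
# illustrative placeholders.
import configparser

parser = configparser.ConfigParser()
parser.read("config.ini")

SOURCE = parser.get("AUDIO", "SOURCE", fallback="FILES")   # FILES or MICROPHONE
STYLE = parser.getint("SERVO", "STYLE", fallback=1)        # servo control mode
JAW_PIN = parser.getint("PINS", "JAW_SERVO", fallback=18)
PIR_PIN = parser.getint("PINS", "PIR", fallback=23)
```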

The main.py program essentially just loads the configuration parameters on start-up and calls control.py. The functions in control.py are not folded into main.py in order to avoid a submodule having to import the main program, which can be problematic.

Most of the processing occurs in the control.py and audio.py modules. The control.py program handles most of the triggering (either a timer, an external trigger such as a PIR, or immediately upon startup), with the method specified in the config.ini file. It uses the GPIO Zero and PiGPIO libraries to monitor the triggering sensor and send output to the output trigger and LED pins. PiGPIO is used as the GPIO layer underneath GPIO Zero because it uses DMA control for the Pulse Width Modulation (PWM) used to control the servo. Some other libraries, including the default one used by GPIO Zero, use software PWM, which is adequate for tasks such as controlling the brightness of LEDs, but not precise enough for servo control.
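Selecting PiGPIO as the pin factory under GPIO Zero takes only a couple of lines. This is a sketch using the default pin numbers listed earlier; control.py reads the actual numbers from config.ini, and whether it uses these exact GPIO Zero classes is an assumption.

```python
# Selecting the PiGPIO pin factory so servo PWM is generated by DMA rather
# than software timing. Pin numbers are the defaults listed above; the real
# code reads them from config.ini. Requires the pigpiod daemon to be running.
from gpiozero import AngularServo, LED, MotionSensor, Device
from gpiozero.pins.pigpio import PiGPIOFactory

Device.pin_factory = PiGPIOFactory()

jaw = AngularServo(18)        # jaw servo
pir = MotionSensor(23)        # PIR input trigger
trigger_out = LED(16)         # TRIGGER_OUT pin
eyes = LED(25)                # LED "eyes"
```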

Unless the triggering mode is START, the program enters an infinite loop waiting for either a timer to expire (TIMER mode) or the external trigger to fire (PIR mode). Simple wait functions meet the requirements; during development, interrupt-driven approaches interfered with the audio output, probably due to timing conflicts. In TIMER mode, the timer is restarted after either the audio file finishes playing (if the source is FILES) or after a configurable pre-set time (if the source is MICROPHONE).

When triggered, an event handler is called that, depending upon the settings, sets off the TRIGGER_OUT to trigger another prop or device and turns on the LED eyes or other low power device. Then, if the audio source is FILES, it will call tracks.py, which will select the next .wav file to be played and call audio.py, passing the name of the .wav file to be played. If the audio source is MICROPHONE, audio.py is called without passing a file name.  When the call to audio.py returns, the event handler turns the LED eyes off and returns.

Audio playback, audio analysis, and servo control are all performed by the audio.py module. It defines one class, AUDIO. When the audio.play function is called, it checks on whether the audio source is MICROPHONE or FILES and opens a PyAudio stream appropriately. The stream call runs in a separate thread (this is automatically handled by PyAudio). For each chunk of the input stream, a callback function is called. This callback function is where the audio stream volume is analyzed. The average volume for each chunk is calculated, and the servo is commanded to the appropriate position based on that average volume and the threshold levels that the user has specified in the config file. The wave library is used to read the wave files from storage, and the struct library is used to help deconstruct the wave data to calculate volume and to help to separately analyze the left and right channels for stereo files. The number of levels, the specific thresholds, and whether a bandpass filter is applied before calculating the volume is based on the STYLE setting set by the user in the config file. In addition to the official documentation, I found a slide presentation, Introduction to PyAudio, by Jean Cruypenynck to be very helpful.
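The heart of that callback amounts to something like the sketch below. The threshold values and the jaw-position helper are placeholders standing in for the config-driven logic in audio.py.

```python
# Sketch of the per-chunk volume analysis in the audio callback. Thresholds
# and the jaw-position helper are placeholders for the config-driven logic.
import numpy as np
import pyaudio

THRESHOLDS = [500, 2000, 8000]         # placeholder volume thresholds
JAW_POSITIONS = [0.0, 0.3, 0.6, 1.0]   # closed ... fully open

def set_jaw(position):
    """Placeholder for the call that actually commands the jaw servo."""
    pass

def callback(in_data, frame_count, time_info, status):
    samples = np.frombuffer(in_data, dtype=np.int16)
    volume = np.abs(samples).mean()              # average volume of this chunk

    level = sum(volume > t for t in THRESHOLDS)  # 0..3 depending on thresholds
    set_jaw(JAW_POSITIONS[level])

    return (in_data, pyaudio.paContinue)         # pass the chunk on for playback
```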

If the STYLE is set to 2, then bandpassFilter.py is called to process the digital audio stream and return a modified stream with the bandpass filter applied. The program is very short and simple. It uses two functions from the scipy signal processing library to filter out audio input below 500 Hz and over 2500 Hz. No bandpass filter is applied for STYLE 0 or STYLE 1.
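With scipy.signal, such a filter only takes a few lines. The filter order below, and the specific choice of a Butterworth design, are assumptions for this sketch rather than a description of bandpassFilter.py’s exact code.

```python
# A 500-2500 Hz bandpass built with scipy.signal. The order and the use of
# butter + lfilter are assumptions for this sketch.
import numpy as np
from scipy.signal import butter, lfilter

def bandpass(samples, rate=44100, low=500.0, high=2500.0, order=5):
    b, a = butter(order, [low, high], btype="band", fs=rate)
    return lfilter(b, a, samples)

# Example: filter one chunk of 16-bit samples before computing its volume.
chunk = np.zeros(1024, dtype=np.int16)
filtered = bandpass(chunk.astype(np.float64))
```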

When AMBIENT is set to ON, the ambient playback function in audio.py must also monitor for triggering events (either the timer or sensor), since it needs to interrupt itself and pass control back to control.py when such an event occurs.

The config.ini file can either be edited directly or through a GUI program named controlPanel.py. If the servo or controller subset of parameters are changed during execution, the changes will be reflected the next time a vocal track is triggered. Other changes will not take effect until after ChatterPi is stopped and then restarted.

maxVol.py is a utility program that can be launched from the control panel. It reads and analyzes each wave file in either the vocals or ambient subdirectories and writes them back with the volume levels increased to the maximum possible without clipping or distortion.
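Peak normalization of a .wav file can be done along these lines (a sketch of the idea, not maxVol.py’s exact code; it assumes 16-bit samples):

```python
# Sketch of peak-normalizing a 16-bit .wav file, along the lines of maxVol.py.
import wave
import numpy as np

def normalize_wav(path):
    with wave.open(path, "rb") as wf:
        params = wf.getparams()
        samples = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)

    peak = np.abs(samples).max()
    if peak == 0:
        return  # silent file, nothing to do
    scaled = (samples.astype(np.float64) * (32767 / peak)).astype(np.int16)

    with wave.open(path, "wb") as wf:
        wf.setparams(params)
        wf.writeframes(scaled.tobytes())
```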

Screenshot of Chatter Pi Configuration Control Panel

Software Installation, Setup, and Operations

See the User’s Manual on GitHub for complete instructions. UPDATE: In addition to the source code, there’s now a Pi SD card image file available for download, so you won’t have to do any installation of dependencies or the source code; just install the image on a micro SD card, load it up in your Pi, and you’re ready to go! See the link in the README file at GitHub – ViennaMike/ChatterPi.

Project Roadmap

This version, 0.9, includes all the features currently planned for ChatterPi. That said, there are a few additional features that might be added at a later time (or by anyone who cares to add them to this open source project):

  • The ability to use .mp3 files. Simply playing MP3 files on a Raspberry Pi is easy, but they must be processed in real-time as a stream to drive the servo controller.
  • Add drop down pick lists for many of the options in the control panel, and allow lower case values to be entered, auto-correcting to upper case.
  • Add the ability to start and stop the execution of ChatterPi from the control panel.

Wrap Up

The code is open source and published on GitHub (https://github.com/ViennaMike/ChatterPi), and I would welcome anyone who wanted to work on adding any of these advanced features.

To report a bug, make a suggestion, or ask a question, please go to the GitHub repository for the project (https://github.com/ViennaMike/ChatterPi) and open an Issue. To do so, first click on the issues tab, and then use the green “New Issue” button. It’s a good idea to first browse through or search other reported issues to see if someone has already reported the same issue or asked the same question. You can then add comments or suggestions to existing issues, rather than opening a new, duplicative issue.