Raspberry Pis, I2C Clock Stretching, and a Note on UART Interfaces with CircuitPython

My IMU project in development, with the BNO055 IMU board connected to a Pi Zero W, where the problems with I2C data corruption popped up.

I ran into a problem with interfacing Adafruit’s BNO055 breakout board with a Raspberry Pi Zero W, and this led me down a rabbit hole or three regarding the Pi’s implementation of I2C and using CircuitPython on a Pi.

There are pieces of this information spread across the internet, but I figured I’d try to summarize it in one place in the hope it will help others.

I2C Clock Stretching Problems with Raspberry Pis

Many sensors and other devices use I2C to communicate with microcontrollers or microcomputers, such as an Arduino or Raspberry Pi. Like many, but not all, sensor devices, the BNO055 uses “clock stretching.” Clock stretching occurs when the clock line (SCL) is held low after a byte is received (or sent), indicating that the device is not yet ready to process more data. The primary that is communicating with it may not finish transmitting the current bit, but must wait until the clock line actually goes high. Unfortunately, the chip that handles I2C communications on Raspberry Pi boards has a flaw and does not handle clock stretching properly. In my case, this was causing erroneous readouts. Because the flaw is in the hardware, it can’t be fixed via a software update, but there are workarounds. As of April 2021, some reports say this has been fixed on Raspberry Pi 4 boards, while other anecdotal reports say it’s still a problem even on these newest boards. If you have any experience with I2C clock stretching on Pi 4s, leave a comment on what you’ve found.

While you don’t need the details in order to work around the problem, you can read a lot more about the bug at Raspberry Pi I2C clock-stretching bug.

Workarounds

There are three basic ways to work around this issue: 1) lower the clock speed on the I2C bus so that the device won’t need to clock stretch, 2) use software I2C rather than the built-in hardware with the flaw, or 3) if supported by your device, use another type of interface, such as SPI or UART.

Lowering the Clock Speed

First, of course, you want to be sure that you’ve enabled I2C communications on your Pi. You can read how to do so here: Enable I2C Interface on the Raspberry Pi.

Next you want to go into the config file and slow down the clock rate so that it’s less likely that clock stretching will be needed. A typical default speed on a Pi is 100,000 Hz (100 kHz), and multiple sources on the net suggest slowing it down to 10,000 Hz (10 kHz). You can do that by adding

dtparam=i2c_arm_baudrate=10000

to your config.txt file. If you need help on how to do that, see this: I2C Clock Stretching. You’ll need to reboot for the change to take effect. If this doesn’t work, you can try slowing it down even further.
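After rebooting, you can sanity-check the bus by reading a known register from your device. Here’s a minimal sketch using the smbus2 library, assuming a BNO055 at its default address of 0x28 (its chip ID register, 0x00, should read back 0xA0):

import smbus2

# Bus 1 is the default hardware I2C bus on most Pis
with smbus2.SMBus(1) as bus:
    chip_id = bus.read_byte_data(0x28, 0x00)  # BNO055 chip ID register
print(hex(chip_id))  # expect 0xa0 for a BNO055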

Software I2C

Raspberry Pi’s operating system includes support for software I2C, which can be enabled with a single line in the config.txt file. Just add

dtoverlay=i2c-gpio,bus=3

This will create an I2C bus called /dev/i2c-3. SDA will be on GPIO23 and SCL will be on GPIO24, which are pins 16 and 18 on the GPIO header, respectively. For more information, see Configuring Software I2C on the Raspberry Pi.
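If you’re using CircuitPython libraries on the Pi (see the next section), you can point them at the new bus with Adafruit’s adafruit_extended_bus library. A minimal sketch, again assuming a BNO055:

import adafruit_bno055
from adafruit_extended_bus import ExtendedI2C as I2C

# Use bus 3 (/dev/i2c-3), the software I2C bus created above
i2c = I2C(3)
sensor = adafruit_bno055.BNO055_I2C(i2c)
print(sensor.temperature)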

Other Interfaces

Of course, if the device you are interfacing with supports other communication links, such as SPI or a UART, you may want to consider using one of those rather than the I2C interface.

UART Interfaces with CircuitPython and the Raspberry Pi

CircuitPython has become a very popular choice for writing code for microcontrollers, and it can also be used on Linux boards, such as Raspberry Pis, to take advantage of the large and growing set of device libraries found in CircuitPython. Several of the libraries provide UART interfaces for boards supporting that interface. However, the instantiation of the interface differs between Linux boards (including Raspberry Pis) and other boards without an operating system. For example, you may see code like this:

# Use these lines for I2C
# i2c = busio.I2C(board.SCL, board.SDA)
# sensor = adafruit_bno055.BNO055_I2C(i2c)

# Use these lines for UART, except on Linux devices
uart = busio.UART(board.TX, board.RX)
sensor = adafruit_bno055.BNO055_UART(uart)

This works fine for non-Linux boards, but on Linux the CircuitPython busio library is not used for UART interfaces. Instead, you need to import the Python serial library (pyserial), and, assuming you have wired directly to the UART pins, the uart = line should be replaced with something like:

uart = serial.Serial("/dev/ttyS0", baudrate=9600, timeout=1000)

You can also use an external serial-to-USB converter to link up your peripheral device and then use a USB port on your Pi. You can find more details at Adafruit’s CircuitPython on Linux and Raspberry Pi UART / Serial page.
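Putting it together for a BNO055 wired directly to the Pi’s UART pins, here’s a sketch of a complete reading loop. Note that the BNO055’s UART interface runs at 115200 baud, and your device path may be /dev/ttyS0 or /dev/serial0 depending on your Pi model and configuration:

import time

import serial
import adafruit_bno055

uart = serial.Serial("/dev/serial0", baudrate=115200, timeout=1)
sensor = adafruit_bno055.BNO055_UART(uart)

while True:
    # Euler angles: (heading, roll, pitch) in degrees
    print(sensor.euler)
    time.sleep(1)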

Conclusion

If you’re planning a Pi project involving I2C, or have had problems with one in the past, I hope that this collection of information helps you sort things out and implement the best option for your project.

Please share any experiences you’ve had with I2C and Raspberry Pis in the comments.

Further UPDATED for Halloween 2021! ChatterPi: a Raspberry Pi Audio Servo Controller for Talking Skulls

Introduction

ChatterPi is a software package that turns a Raspberry Pi into an audio servo controller. In other words, the Pi outputs commands to control a servo based on the volume of the audio input. The input can be either stored audio files (in either mono or stereo .wav format) or from an external source, such as a microphone or line level input. One of the uses is to drive animatronic props, such as a skull or a talking bird.

[This post has been updated to reflect new features added in the latest version]

Background: A Brief History of Talking Skull Control

A common prop that still makes a good impact is a talking object, whether a skull or an animal. Some lower-cost commercial props use a motor and spring. Another approach is to pre-program a complete sequence to match the vocals, but this is very time consuming, and if you want to change the vocals, or even just edit them slightly, you need to reprogram the entire sequence. For that reason, the use of an audio servo controller to drive a servomotor controlling the jaw is a very popular approach. There are several variations. One of the earliest used hardware to detect when the audio exceeded a threshold, then began moving the jaw toward a fully open position; when the audio went below the threshold, it would begin closing the jaw. “Scary Terry” Simmons may have been the first to develop an electronic hardware board to do this, and Cowlacious Designs has continued to improve and sell commercial versions, with many added features such as a built-in audio player, various triggering options, and the ability to control LEDs as eyes.

Later, someone named Mike (no relation) combined an Arduino with a hardware volume-level board to produce the Jawduino. This went from just 2 levels to 4. The original project just took audio in and controlled the servo, but others added extensions to play stored mp3 files and/or randomly move additional servos (for example, http://batbuddy.org/resources/Halloweenstuff/TalkingSkull.php).

A few years ago, Steve Bjork from Haunt Hackers combined dedicated hardware with a Propeller microcontroller to increase the number of levels to almost 256 and to filter out the low and high frequencies that don’t tend to produce jaw movement in spoken sound. The result is the Wee Little Talker. This commercial board also has an onboard mp3 player, can be triggered externally, controls LED “eyes,” and adds a wide array of features including a voice-feedback menu system.

It occurred to me that with current single board computer capabilities and powerful software libraries, it should be possible to incorporate most of the best features of all of these into a single, software-based system running on a Raspberry Pi. The result is ChatterPi. ChatterPi was developed from scratch using the Python language, but ideas for capabilities and features were freely borrowed from previous audio servo controller projects.

Features

ChatterPi is designed to be extremely powerful and flexible without requiring the user to modify any of the code (although advanced users can certainly do that as well). Table 1 compares the current capabilities of ChatterPi with other audio servo controllers. The full range of capabilities and options is described in the Operations subsection.

Table 1. Feature comparison of ChatterPi and other audio servo controllers

Demonstration Video

This video shows ChatterPi in action, using both a saved .wav file and microphone input. ChatterPi is controlling the jaw movement. The other skull movements are pre-programmed as a script running on a Pololu Maestro Servo Controller.

Using ChatterPi

This section describes how to set up and install both the hardware and the software for ChatterPi, and how to use it.

Hardware

ChatterPi was developed and tested on a Pi 3 A+ and a Pi Zero W. It should work on any Pi. [Originally it ran too slowly on a Pi Zero. A section of code for analyzing audio volume used list processing and a loop. That code was rewritten using NumPy, and it’s now fast enough to work on a Pi Zero.]
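To give a sense of that change, here’s an illustrative before-and-after sketch (not the actual ChatterPi code) of computing a chunk’s average volume with a Python loop versus a vectorized NumPy call:

import numpy as np

# Loop version: per-sample Python processing, too slow on a Pi Zero
def avg_volume_loop(samples):
    total = 0
    for s in samples:
        total += abs(s)
    return total / len(samples)

# NumPy version: vectorized, fast enough for real-time use
def avg_volume_numpy(chunk_bytes):
    # Assumes 16-bit audio; widen before abs() to avoid int16 overflow
    samples = np.frombuffer(chunk_bytes, dtype=np.int16).astype(np.int32)
    return np.abs(samples).mean()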

In addition to the Raspberry Pi, you’ll need a USB sound card. This is needed for several reasons. First, if you plan to use an external sound source, you need a way to get audio into your Pi. Second, besides not producing very good sound, the Pi’s audio-out connector may share timing with the Pulse Width Modulation (PWM) code that is needed to drive the servo, creating conflicts. Use of an inexpensive USB sound card solves both issues. I’ve used one from Adafruit that sells for less than $5 and works well (see https://www.adafruit.com/product/1475). You will need TRS (standard stereo) plugs or an adapter to go into the headphone and microphone jacks on the sound card. The card does not work with a TRRS (combination microphone / stereo headphone) plug. You only need a microphone or other external sound source if you wish to use one. Otherwise you can use audio .wav files that you save on the Raspberry Pi. You still need the USB sound card for audio output, however.

That’s all you need for the audio servo controller. Of course, you’ll need a power supply and a servo that you want to control, such as a servo-equipped talking skull, and a passive infrared sensor (PIR) if you want to trigger your prop using one. I used this one (https://www.parallax.com/product/555-28027) from Parallax for development, as I had a spare one already. ChatterPi can also be set to trigger off of a repeating timer or to just turn on and run if you don’t want to use an external sensor.

Figure 1 shows a test bench setup for testing operation. The red LED is attached to the “TRIGGER_OUT” pin for testing purposes. It can be moved, or another LED and resistor attached, to the “EYES_PIN” to test that feature. The TRIGGER_OUT pin goes high for 0.5 seconds when the controller is triggered. This can be used to trigger another prop or controller. The EYES_PIN stays high for as long as the audio plays.

Figure 1. A layout for fully testing the ChatterPi

The default pin selections (which can be changed in the config.ini file) are:

  • Jaw servo: 18
  • PIR input trigger: 23
  • Trigger out: 16
  • Eyes: 25

Figure 2 is a photograph of my test setup. The placement of the wiring on the breadboard is slightly different because I was using it to test a variety of items and also a 3-wire servo controller wire, but the schematic connections are identical.

Figure 2. A picture of the test setup used for development

Software Overview

Knowledge or understanding of the software code is not required to operate or use the ChatterPi. The ChatterPi package consists of eight Python 3 modules and one configuration file, as shown in Figure 3.

The configuration file, config.ini, holds all of the user selectable parameters, including which pins are used for which functions, whether the audio source is the microphone input or stored .wav files, which servo control mode should be used, and the servo threshold levels. The config.py program simply reads these values and makes them available in memory during runtime.
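Under the hood this is standard Python configparser territory. A minimal sketch of the pattern, with illustrative section and option names (not necessarily ChatterPi’s actual ones):

import configparser

parser = configparser.ConfigParser()
parser.read("config.ini")

# Pull settings into module-level names for the rest of the code to use
SOURCE = parser["AUDIO"].get("SOURCE", fallback="FILES")    # FILES or MICROPHONE
JAW_PIN = parser["PINS"].getint("JAW_SERVO", fallback=18)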

The main.py program essentially just loads the configuration parameters on start-up and calls control.py. The functions in control.py are not folded into main.py in order to avoid a submodule having to import the main program, which can be problematic.

Most of the processing occurs in the control.py and audio.py modules. The control.py program handles the triggering (either a timer, an external trigger such as a PIR, or immediately upon startup, with the method specified in the config.ini file). It uses the GPIO Zero and pigpio libraries to monitor the triggering sensor and to send output to the output trigger and LED pins. pigpio is used as the GPIO layer underneath GPIO Zero because it uses DMA for the Pulse Width Modulation (PWM) used to control the servo. Some other libraries, including the default one used by GPIO Zero, use software PWM, which is adequate for tasks such as controlling the brightness of LEDs, but not precise enough for servo control.
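As an illustration, here’s a minimal sketch of driving a servo through GPIO Zero with the pigpio pin factory (the pigpio daemon must be running, e.g., via sudo pigpiod; 18 is ChatterPi’s default jaw pin):

from gpiozero import Servo
from gpiozero.pins.pigpio import PiGPIOFactory

# pigpio provides DMA-timed PWM, giving jitter-free servo pulses
factory = PiGPIOFactory()
jaw = Servo(18, pin_factory=factory)

jaw.min()  # jaw closed
jaw.max()  # jaw fully open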

Unless the triggering mode is START, the program enters an infinite loop, waiting for either a timer to expire (TIMER mode) or the external trigger to fire (PIR mode). Simple wait functions meet the requirements; during development, interrupt-driven approaches interfered with the audio output, probably due to timing conflicts. In TIMER mode, the timer is restarted after either the audio file finishes playing (if the source is FILES) or after a configurable pre-set time (if the source is MICROPHONE).

When triggered, an event handler is called that, depending upon the settings, sets the TRIGGER_OUT pin high to trigger another prop or device and turns on the LED eyes or other low-power device. Then, if the audio source is FILES, it calls tracks.py, which selects the next .wav file to be played and calls audio.py, passing the name of the .wav file to be played. If the audio source is MICROPHONE, audio.py is called without a file name. When the call to audio.py returns, the event handler turns the LED eyes off and returns.

Audio playback, audio analysis, and servo control are all performed by the audio.py module. It defines one class, AUDIO. When the audio.play function is called, it checks on whether the audio source is MICROPHONE or FILES and opens a PyAudio stream appropriately. The stream call runs in a separate thread (this is automatically handled by PyAudio). For each chunk of the input stream, a callback function is called. This callback function is where the audio stream volume is analyzed. The average volume for each chunk is calculated, and the servo is commanded to the appropriate position based on that average volume and the threshold levels that the user has specified in the config file. The wave library is used to read the wave files from storage, and the struct library is used to help deconstruct the wave data to calculate volume and to help to separately analyze the left and right channels for stereo files. The number of levels, the specific thresholds, and whether a bandpass filter is applied before calculating the volume is based on the STYLE setting set by the user in the config file. In addition to the official documentation, I found a slide presentation, Introduction to PyAudio, by Jean Cruypenynck to be very helpful.
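To make the flow concrete, here’s a simplified sketch of such a callback; the threshold values and the set_jaw_position helper are illustrative placeholders, not ChatterPi’s actual names:

import numpy as np
import pyaudio

THRESHOLDS = (500, 2000, 8000)  # hypothetical volume thresholds

def callback(in_data, frame_count, time_info, status):
    # Average volume of this chunk, assuming 16-bit samples
    samples = np.frombuffer(in_data, dtype=np.int16).astype(np.int32)
    volume = np.abs(samples).mean()
    # 0 = closed ... 3 = fully open, based on thresholds crossed
    level = sum(volume > t for t in THRESHOLDS)
    # set_jaw_position(level)  # servo command would go here
    return (in_data, pyaudio.paContinue)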

If the STYLE is set to 2, then bandpassFilter.py is called to process the digital audio stream and return a modified stream with the bandpass filter applied. The program is very short and simple. It uses two functions from the scipy signal processing library to filter out audio input below 500 Hz and over 2500 Hz. No bandpass filter is applied for STYLE 0 or STYLE 1.
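Here’s a sketch of what such a filter can look like using scipy.signal, assuming a 44.1 kHz sample rate (the actual ChatterPi filter design may differ):

from scipy.signal import butter, lfilter

def bandpass(samples, rate=44100, low=500.0, high=2500.0, order=5):
    # Normalize the cutoff frequencies to the Nyquist frequency
    nyq = rate / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return lfilter(b, a, samples)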

When AMBIENT is set to ON, the ambient playback function in audio.py must also monitor for triggering events (either the timer or sensor), since it needs to interrupt itself and pass control back to control.py when such an event occurs.

The config.ini file can be edited either directly or through a GUI program named controlPanel.py. If parameters in the servo or controller subsets are changed during execution, the changes will be reflected the next time a vocal track is triggered. Other changes will not take effect until ChatterPi is stopped and then restarted.

maxVol.py is a utility program that can be launched from the control panel. It reads and analyzes each wave file in either the vocals or ambient subdirectories and writes them back with the volume levels increased to the maximum possible without clipping or distortion.
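The core idea is peak normalization. Here’s a minimal sketch, assuming 16-bit .wav data (the real maxVol.py may differ in its details):

import wave

import numpy as np

def maximize_volume(path):
    with wave.open(path, "rb") as w:
        params = w.getparams()
        data = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    # Widen before abs() so -32768 doesn't overflow int16
    peak = np.abs(data.astype(np.int32)).max()
    if peak > 0:
        # Scale so the loudest sample just reaches full scale
        data = (data.astype(np.float64) * (32767 / peak)).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setparams(params)
        w.writeframes(data.tobytes())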

Screenshot of the ChatterPi configuration control panel

Software Installation, Setup, and Operations

See the User’s Manual on GitHub for complete instructions. UPDATE: In addition to the source code, there’s now a Pi SD card image file available for download, so you won’t have to install any dependencies or the source code. Just install the image on a micro SD card, load it up in your Pi, and you’re ready to go! See the link in the README file at GitHub – ViennaMike/ChatterPi.

Project Roadmap

This version, 0.9, includes all the features currently planned for ChatterPi. That said, there are a few additional features that might be added at a later time (or if anyone cares to add them to this open source project):

  • The ability to use .mp3 files. Simply playing MP3 files on a Raspberry Pi is easy, but they must be processed in real-time as a stream to drive the servo controller.
  • Add drop down pick lists for many of the options in the control panel, and allow lower case values to be entered, auto-correcting to upper case.
  • Add the ability to start and stop the execution of ChatterPi from the control panel.

Wrap Up

The code is open source and published on GitHub (https://github.com/ViennaMike/ChatterPi), and I would welcome anyone who wanted to work on adding any of these advanced features.

To report a bug, make a suggestion, or ask a question, please go to the GitHub repository for the project (https://github.com/ViennaMike/ChatterPi) and open an Issue. To do so, first click on the issues tab, and then use the green “New Issue” button. It’s a good idea to first browse through or search other reported issues to see if someone has already reported the same issue or asked the same question. You can then add comments or suggestions to existing issues, rather than opening a new, duplicative issue.

A Sliding Puzzle for the PyBadge and PyBadge LC

A plastic sliding 15 puzzle using numbers

The sliding puzzle has a long history. The first one, a 4×4, 15-tile puzzle, dates back to the 1870s. A plastic version is shown to the right. The puzzles may have letters forming words or have pictures instead of, or in addition to, numbers. Basically, the tiles are scrambled, and then must be returned to the original arrangement by sliding adjacent tiles into the open space, one at a time.

Many such puzzles have been made, in various sizes. In addition to physical puzzles, video versions can be found on computers and the web.

This is a sliding puzzle game written in #CircuitPython for the @Adafruit #PyBadge and PyBadge LC. It uses pictures for the puzzle, with numbers superimposed to make the puzzles easier to solve. It is configurable so that different images can be used, and it supports both 3×3 (8 tile) and 4×4 (15 tile) puzzles. The tiled graphics approach used in the Adafruit displayio library is ideally suited for a sliding puzzle game.
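To give a flavor of why, here’s a minimal sketch of slicing a tile sheet into a 4×4 TileGrid (the file name follows this post’s conventions; the display call varies by CircuitPython version):

import board
import displayio

# Load the 160x128 tile sheet and carve it into a 4x4 grid of 40x32 tiles
bitmap = displayio.OnDiskBitmap("/santa/tiles4.bmp")
grid = displayio.TileGrid(
    bitmap,
    pixel_shader=bitmap.pixel_shader,
    width=4, height=4,
    tile_width=40, tile_height=32,
)
group = displayio.Group()
group.append(grid)
board.DISPLAY.root_group = group  # CircuitPython 8+; older versions use board.DISPLAY.show(group)

# Sliding a tile is just reassigning which sheet tile a grid cell shows
grid[0, 0] = 5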

How it Works

Complete image for the Santa puzzle

PyBadge showing the scrambled puzzle

PyBadge showing the solved puzzle

After initial setup and bookkeeping, the program runs an infinite while loop. It roughly follows a state machine pattern, with the states being “intro”, “setup”, “play”, and “solved”.

“intro” displays the puzzle image and then asks the player to select either a 3×3 or 4×4 puzzle. Once a selection is made, the state transitions to “setup.” In this state, the puzzle is scrambled and the scrambled puzzle is displayed. Then the state transitions to “play.” In “play”, the program monitors the up, down, left, and right buttons and moves the tiles accordingly. After each move, the puzzle is checked to see if it is in the solved position. If so, the state transitions to “solved”. Once in the “solved” state, the program displays a “You Won” message, then the full image. It then waits indefinitely until the player either presses START to go back to the “intro” state and play again or turns the badge off. The software is open source and published on GitHub.
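In outline, the loop looks something like this (the helper names are illustrative, not the exact ones in the repo):

state = "intro"
while True:
    if state == "intro":
        show_full_image()
        size = wait_for_size_choice()   # button A -> 3, button B -> 4
        state = "setup"
    elif state == "setup":
        tiles = scramble(size)
        show_board(tiles)
        state = "play"
    elif state == "play":
        handle_button_moves(tiles)      # up/down/left/right slide tiles
        if is_solved(tiles):
            state = "solved"
    elif state == "solved":
        show_you_won_and_full_image()
        wait_for_start_button()
        state = "intro"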

A Note on Solvability

If one scrambles the puzzle by starting with the solution and then making random allowed moves, the puzzle can always be solved. However, with this approach it takes many, many such moves to really randomize the puzzle. If, instead, one simply places each tile at random, it turns out that only half of the possible arrangements can be slid back to the solution. Borrowing from what others have done, my code chooses a totally random arrangement, then checks whether it is solvable (see the “solvable” function in the code, sketched after the rules below). If not, it randomizes the puzzle again, and repeats this process until a solvable arrangement is found and used. The rules for solvability are:

  1. If the grid width is odd (e.g., 3×3), then the number of inversions in a solvable situation is even.
  2. If the grid width is even (e.g., 4×4), and the blank is on an even row counting from the bottom (second-last, fourth-last, etc.), then the number of inversions in a solvable situation is odd.
  3. If the grid width is even, and the blank is on an odd row counting from the bottom (last, third-last, fifth-last etc) then the number of inversions in a solvable situation is even.

The plastic slider puzzle shown at the top of this post is not solvable. The blank is on an odd row from the bottom (the first), and there is just one inversion, an odd number.
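Here’s a sketch of those rules in code (the actual “solvable” function in the repo may differ in its details):

def count_inversions(tiles):
    # tiles: the arrangement flattened to a list, blank excluded
    inversions = 0
    for i in range(len(tiles)):
        for j in range(i + 1, len(tiles)):
            if tiles[i] > tiles[j]:
                inversions += 1
    return inversions

def solvable(tiles, width, blank_row_from_bottom):
    inversions = count_inversions(tiles)
    if width % 2 == 1:                  # rule 1: odd grid width
        return inversions % 2 == 0
    if blank_row_from_bottom % 2 == 0:  # rule 2: blank on even row from bottom
        return inversions % 2 == 1
    return inversions % 2 == 0          # rule 3: blank on odd row from bottom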

How to Play the Game

To play, simply load the software onto the PyBadge and turn it on. The display will first show the complete puzzle image, and then will ask you to press the “A” button for a 3×3 (8 tile) puzzle or the “B” button for a 4×4 (15 tile) puzzle. One slot is always empty so that tiles can be moved. Once you have made your selection, you’ll see the puzzle image with the pieces in the solved state, and then the puzzle will be scrambled for play. The 4×4 puzzle is significantly harder than the 3×3 puzzle and will take more moves to solve, but both are fairly easy with practice.

Use the 4 directional buttons to slide one tile at a time. The goal is to get the tiles arranged in numerical order, left to right and top to bottom, with the empty spot on the bottom right. Once you have done this, you win! After the complete image is displayed upon winning, you can press the START button to play again. Sometimes you need to press the button a couple of times.

Stuck? There are a number of heuristics for humans to use to solve these puzzles (as well as heuristic algorithms for computers). The approach that works for me is documented here.

Changing the Puzzle Images and Creating Your Own

The parameters.py file stores several parameters, including the name of the folder where the puzzle images are stored. To change, for example, from the Santa to the witch puzzle, simply edit the line puzzle_graphics_folder = "santa" to read puzzle_graphics_folder = "witch". I have provided three sets of images for the puzzle: Santa, a witch, and a Valentine’s Day floral image:

Flowers and "Happy Valentine's Day" text.

Santa, sleigh, and reindeer

Flying witch in front of moon, with bat.

To make your own puzzles, you need to create 3 bmp images:

  • The full image, to be saved as “full.bmp” in a new folder
  • A tile image for the 3×3 puzzle, to be saved as “tiles3.bmp” in the same folder
  • A tile image for the 4×4 puzzle, to be saved as “tiles4.bmp” in the same folder

These images must be exactly the right size in order for the program to work. The full image and the 4×4 tile image must be 160 pixels wide by 128 pixels high. The 3×3 tile image must be 159 pixels wide by 126 pixels high.

Start from the full image. To make the 4×4 tile image, black out the pixels on the lower right of the image (x coordinates 121 – 160, y coordinates 96 – 128). You can also put numbers on what will be each tile to make solving the puzzle easier. To do this, I use an image editing program to add a layer with a set of grid lines creating a 4×4 grid. Then I blacken out the lower-right tile and put numbers in the upper right of each tile. Then I delete the grid layer and save the image as a bmp file. Follow the same process for the 3×3 tile image, but first re-scale the total image to 159 x 126 and use a 3×3 rather than a 4×4 grid. Once you have saved the three files in a new folder, change the puzzle_graphics_folder line in parameters.py to point to the new folder name.