Embedded Synthesis for the T-Stick



John Sullivan


I am exploring ways of integrating digital sound synthesis within the hardware of the T-Stick DMI. To achieve this, I am developing embedded code for a digital synthesizer, which will run on the ESP32 microcontroller inside the T-Stick, as well as exploring modifications to the T-Stick hardware to allow for audio output.

The ESP32 controller was selected because one is already built into the T-Stick hardware, and because it should have enough computing power to run both the synthesizer and the instrument processes at the same time. Other boards that could potentially have been used include the Teensy, Raspberry Pi Pico and Raspberry Pi Zero. These, however, would have required either fitting another board into the already fairly confined T-Stick casing or adding an external expansion for audio output. Moreover, of these, only the Teensy and the Raspberry Pi Zero offer more computing power than the current ESP32 board.

I am currently working to implement a wavetable synthesizer using the Sygaldry and ESP-IDF software libraries. The synthesizer will feature an array of programmable wavetables, each with individually adjustable frequency and amplitude controlled by the inputs of the T-Stick.
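As a generic illustration of the wavetable technique (a minimal sketch under my own naming, not code from the T-Stick firmware), each voice can step a phase accumulator through a stored single-cycle table at a rate proportional to its target frequency:

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Hypothetical sketch of a single wavetable voice. A phase accumulator
// steps through one stored cycle of a waveform; the step size sets the
// output frequency, and a gain factor sets the amplitude.
struct WavetableVoice {
    std::vector<float> table;       // one cycle of the waveform
    float phase = 0.0f;             // current position in the table
    float frequency = 440.0f;       // Hz
    float amplitude = 1.0f;         // linear gain
    float sample_rate = 48000.0f;   // Hz

    float next_sample() {
        // Linear interpolation between adjacent table entries.
        std::size_t i0 = static_cast<std::size_t>(phase);
        std::size_t i1 = (i0 + 1) % table.size();
        float frac = phase - static_cast<float>(i0);
        float s = table[i0] + frac * (table[i1] - table[i0]);

        // Advance the phase by (table length * f / fs) and wrap.
        phase += table.size() * frequency / sample_rate;
        while (phase >= table.size()) phase -= table.size();
        return amplitude * s;
    }
};
```

Linear interpolation is the usual low-cost compromise here; a real implementation might trade it for higher-order interpolation or larger tables depending on the quality needed.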

The Sygaldry library allows the synthesizer implementation to interface with the other components of the T-Stick in a modular way: we implement the wavetable synthesizer as just another Sygaldry component, which is included in our modified version of the T-Stick firmware.

The synthesizer uses the I2S (Inter-IC Sound) protocol to send samples from the software on the ESP32 to an external DAC and amplifier. To prevent lag in the audio output, the I2S communication loop has to run in a separate thread from the main instrument, which we manage by having the main instrument thread control the parameters through a pair of state objects: in the main loop we modify the properties of the inactive copy, and only once this is complete do we swap it with the active one. Otherwise the synthesizer's parameters could change while it is in the middle of outputting samples.
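The two-copy parameter scheme can be sketched roughly as follows. This is an illustrative standalone version (the struct and class names are my own, and the real firmware is organized as Sygaldry components): the control thread edits the inactive copy and then atomically publishes it, so the audio thread never observes a half-updated parameter set.

```cpp
#include <atomic>
#include <array>

// Hypothetical parameter set for four wavetable voices.
struct SynthParams {
    float frequency[4];
    float amplitude[4];
};

// Two copies of the parameters plus an atomic index naming the active one.
class ParamDoubleBuffer {
    std::array<SynthParams, 2> states{};
    std::atomic<int> active{0};
public:
    // Called from the audio (I2S) thread: read the active copy only.
    const SynthParams& read() const {
        return states[active.load(std::memory_order_acquire)];
    }
    // Called from the main instrument thread: edit the inactive copy.
    SynthParams& edit() {
        return states[1 - active.load(std::memory_order_relaxed)];
    }
    // Swap the copies once editing is complete.
    void publish() {
        active.store(1 - active.load(std::memory_order_relaxed),
                     std::memory_order_release);
    }
};
```

Note that after a swap the newly inactive copy holds stale values, so in practice the control thread would first copy the active state into it before editing.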

The synthesizer is structured to have a variable number of wavetable sources; at the moment I am aiming for four, but this may change later. Each wavetable has independently controllable frequency and amplitude.

A signal flow diagram of the current simple wavetable synthesizer: four wavetable oscillators are driven by independent frequency inputs, scaled by their mix levels, and summed together for the final output.
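The mix-and-sum stage of that signal flow might look something like this minimal sketch (the names and the normalization choice are my own assumptions, not the firmware's):

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t kNumVoices = 4;

// Scale each oscillator sample by its mix level and sum into one output.
float mix(const std::array<float, kNumVoices>& samples,
          const std::array<float, kNumVoices>& levels) {
    float out = 0.0f;
    for (std::size_t i = 0; i < kNumVoices; ++i)
        out += levels[i] * samples[i];
    // Dividing by the voice count keeps the sum within [-1, 1]
    // whenever every sample and level is within [-1, 1].
    return out / static_cast<float>(kNumVoices);
}
```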

The T-Stick interface provides us with a variety of user input sources, including an IMU that provides us with the current rotation and acceleration of the instrument, a touch sensor array, and a force sensitive resistor. At the moment, I am still brainstorming different ways to map this information to the musical output of the instrument.
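As one hypothetical example of such a mapping (purely illustrative, since the mappings are still being decided), tilt from the IMU could select a pitch within an octave and FSR pressure could control amplitude:

```cpp
#include <algorithm>
#include <cmath>

// Map a tilt angle in radians onto one octave above a base frequency.
// The base frequency and the one-octave range are arbitrary choices.
float tilt_to_frequency(float tilt_radians, float base_hz = 220.0f) {
    // Clamp tilt to [0, pi/2], normalize to [0, 1], map exponentially.
    float t = std::clamp(tilt_radians, 0.0f, 1.5707963f) / 1.5707963f;
    return base_hz * std::pow(2.0f, t);
}

// Map normalized FSR pressure (0..1) onto amplitude.
float pressure_to_amplitude(float pressure) {
    // A squared curve gives finer control at light pressure.
    float p = std::clamp(pressure, 0.0f, 1.0f);
    return p * p;
}
```

The exponential pitch mapping and squared amplitude curve are common perceptual choices, but many other mappings would be equally valid starting points.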

The project is currently being tested on a TinyPico development board. Previously, I used an ESP32 development board called the TTGO T-Display. The next steps are to integrate the firmware with the rest of the T-Stick hardware, mainly the touch and pressure sensors.

The old testing setup: a TTGO T-Display ESP32 development board plugged into a breadboard, wired to a button and a combined I2S DAC and amplifier, which in turn drives a small speaker.

I have also set up another device for testing the synthesizer's output directly, without needing to listen through a speaker: a Raspberry Pi Pico, programmed to act as an I2S receiver. It is hooked up to the I2S output of the synthesizer and lets me capture the output samples on my computer for analysis.

I am currently debugging a major issue with how samples from the synthesizer are sent to the DAC for output. Although the synthesizer reports that it is sending the correct samples, testing shows that the output data is wrong a large portion of the time. It is occasionally correct, and we get the desired signal, but most of the time the output is pseudo-random samples only vaguely correlated with the desired output.

I have ruled out an issue with the endianness or bit-shifting of the samples, which can vary between different implementations of the I2S protocol. At the moment I am trying to determine whether a clocking issue is causing the sample stream to be split at the wrong boundaries.
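For reference while debugging, these generic routines (my own illustrations, not taken from the T-Stick firmware) show two of the packing differences that commonly vary between I2S implementations: MSB-justified placement of a 16-bit sample in a 32-bit slot, and what a sample looks like to a receiver with the wrong byte order.

```cpp
#include <cstdint>

// Place a 16-bit sample left-justified (MSB-aligned) in a 32-bit I2S slot.
uint32_t pack_left_justified(int16_t sample) {
    return static_cast<uint32_t>(static_cast<uint16_t>(sample)) << 16;
}

// The value a wrong-endianness receiver would see for a 16-bit sample.
uint16_t byte_swapped(uint16_t sample) {
    return static_cast<uint16_t>((sample << 8) | (sample >> 8));
}
```

Comparing captured words against references like these makes it quick to rule each failure mode in or out.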


Currently Working On:

  • Debugging I2S communication
  • Brainstorming input mappings
  • Proper sensor communication

Next Steps:

  • Finalizing synthesizer implementation
  • Implementing input mappings
  • Proper audio-out jack


