Electronic Drum Project
Finally, after banging on furniture for decades, I decided that I needed an electronic drum. It had to be electronic, otherwise the rest of the inhabitants would object. I was not completely happy with a commercial drum (insensitive - I am a hand drummer - and rather inflexible - OK, it was not expensive), and rolling your own gives you total freedom to modify the design.

Design criteria

  • Flexibility - as much processing in modifiable software as possible
  • Reasonably easy to implement
  • Cheap
  • Primarily designed for hand drumming in a studio environment
  • Fast - minimal delay between hit and sound
  • More than 4 drum pads
  • Polyphonic

The solution

The flexibility, easy implementation and cheapness criteria point to a Raspberry Pi based solution, with ADCs (to detect drum hits) and DACs (to generate sound) connected to the Raspberry by the SPI bus. The Raspberry contains the following data structures and algorithm:

    Signals: a table containing pointers to active sounds - each entry in the table is the result of one hit on one drum pad.

    Do forever:
        Check the ADCs: if a hit on a pad is detected:
            Store away the amplitude of the hit.
            Check what sound is associated with this pad, and set up a pointer to that sound in the Signals table.

        Check the Signals table: if there are pointers to active sounds:
            Mix (add) all the audio frames (multiplied by hit strengths) for all active sounds, for the current point in time.
            Write to the DACs.
            If a sound in the Signals table has ended, it is deactivated.

The algorithm above leaves out a lot of details; among other things, there needs to be synchronization to a clock, so that each audio frame is delivered to the DACs at the right time. In this project the clock is 88.2 kHz.

In practice, there is no need to check the ADCs at 88.2 kHz, so on each iteration of the loop above only one drum pad is tested, while the DACs are updated several times.
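
The loop structure can be sketched as follows - a minimal user-space Python model for illustration only, not the actual implementation (which is a kernel module written in C). The routines read_adc, write_dacs and wait_for_clock_edge are placeholders for the hardware access:

```python
# A minimal model of the drum loop: per iteration one ADC channel is
# polled for a hit and the DACs are updated several times (the source
# states 4 DAC updates per one-channel ADC read). Hardware access is
# stubbed out and passed in as functions.

NUM_PADS = 8               # the ADC has 8 channels
DAC_UPDATES_PER_LOOP = 4   # DACs run at 88.2 kHz; ADC polling is slower

class Sound:
    def __init__(self, frames):
        self.frames = frames   # list of (left, right) signed 16-bit samples
        self.index = 0         # current audio frame within the sample
        self.amplitude = 0     # 0 = inactive, 65535 = full scale

def mix_and_out(active_sounds, write_dacs):
    """Mix one frame from all active sounds and write it to the DACs."""
    left = right = 0
    for s in active_sounds:
        if s.amplitude == 0:
            continue
        l, r = s.frames[s.index]
        left += l * s.amplitude // 65536   # scale by hit amplitude
        right += r * s.amplitude // 65536
        s.index += 1
        if s.index >= len(s.frames):       # end of sample: deactivate
            s.amplitude = 0
            s.index = 0
    write_dacs(left, right)

def drum_loop(active_sounds, read_adc, write_dacs, wait_for_clock_edge,
              iterations):
    """Model of the main loop: check one pad, then emit several frames."""
    pad = 0
    for _ in range(iterations):
        read_adc(pad)                      # check one drum pad per iteration
        pad = (pad + 1) % NUM_PADS
        for _ in range(DAC_UPDATES_PER_LOOP):
            wait_for_clock_edge()          # sync to the 88.2 kHz clock
            mix_and_out(active_sounds, write_dacs)
```
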

The Need for Speed - handling real-time signals on a Raspberry Pi 3

The algorithm above requires undivided attention from the CPU for prolonged periods, as it has to react within a few milliseconds to a hit on a pad and generate polyphonic audio data on the fly at 88.2 kHz. Most modern desktop operating systems do well when data can be buffered, either input or output or both. But what do you do when you have a stream of real-time input data and need to calculate a stream of output data in real time?

 1. Use a real-time operating system.
 2. Dump the operating system and program the bare naked metal.
 3. Add more hardware in the form of a coprocessor.

All these have pros and cons. Number 1 is nice, but requires an operating system switch; also, the context switch for interrupt processing might take some time. Number 2 is very fast, but you lose all the goodies of a desktop operating system. Number 3 is also effective, but costly - and modern processors are often multicore and extremely fast, so it feels a little odd to add more hardware. Also, interprocessor communication software needs to be written, and coprocessor memory size can be somewhat limited.

Here is a fourth, rather unorthodox approach using a Raspberry Pi, but it should be applicable to other Linux based systems running on multicore processors:

 4. Write part of your application as a loadable kernel module (LKM), which monopolizes one processor. This has similar performance to 2, but you still have all the desktop OS goodies available. Also, lots of memory is available: almost one hour's worth of sound (at 88.2 kHz x 4 channels) can be stored and played back using a Raspberry Pi 3.

So, the hardware consists of the Raspberry Pi 3 card, with an 8 channel, 14 bit ADC and a 4 channel, 16 bit DAC connected to the Raspberry SPI 0 bus. The drum pad electret microphones are connected to the ADC, and the DAC is used to generate sounds. The LKM initializes the hardware (SPI, ADC and DAC) and creates a number of files in /proc for communication, but never returns from kernel space to user space. Instead, it starts executing the algorithm described above in a tight loop.

On a single processor system, this would cause all other processes in the system to freeze. But on a multicore system, like the Raspberry Pi 3, there are still three fast cores available for other tasks.

    This is what the hardware looks like.
    The Raspberry Pi can be seen at the right far end of the main board.

Detailed description: software

Get the sources here: the LKM source Drum1.c, communication program Synticka.py and startup script SetupEnviron.sh.

In the Linux kernel, there is a mechanism (RCU CPU stall detection) that detects if a CPU is unresponsive. In this case the kernel will notice that the processor running the drum application is no longer available, and sound the alarm. This must be disabled with the following shell command (run as root):

     # echo 1 > /sys/module/rcupdate/parameters/rcu_cpu_stall_suppress

Also, interrupts are disabled as much as possible: while the audio signal is being generated, interrupts are kept disabled in the LKM (however, ARM fast interrupts, FIQs, are always enabled). When no signal is generated, interrupts are enabled again.

The LKM is coded in C. It consists of the following main parts:
  • An array of structs, called Signals. Each struct holds a pointer to a sound sample along with other information, such as sound amplitude and an index pointing to the current audio frame in the sound sample. Each audio frame consists of two signed 16 bit integers (two's complement, one for left, one for right).
  • The first part of the table holds pointers to currently active sound samples (samples that are being used to generate sounds). At present, 16 simultaneous stereo sounds are supported. The rest of the table contains pointers to all sound samples, active and inactive (this latter part of the table was not mentioned in the simplified algorithm above).
  • Each drum pad is associated with a parameter block of the type ADCparamType. These parameters are mostly related to the drum pad signal processing, but there is also an integer with the name SoundIndex. This index, which is used to index the Signals table, associates a drum pad with a sound sample that is played when the pad is hit.
  • When a pad is hit, the SoundIndex variable is used to fetch the pointer to the sound sample to be played. This pointer is then inserted into the first part of the Signals table, where pointers to active sounds are stored.
  • The main loop that generates sound and reacts to drum pad hits is located in the DrumLoop routine. From this loop, the routine MixAndOut, which generates sound, is called several times. The loop also calls ReadADCchannels (once per loop iteration), which checks an ADC channel for a drum pad hit and, if a hit is detected, sets up a pointer to the activated sound sample in the first part of the Signals table.
  • MixAndOut looks for pointers to active sounds in the first part of Signals. It mixes (adds) the frames for the current point in time from all active sound samples, each multiplied by the corresponding hit amplitude. The result is then written to the DACs. The indices pointing at the current audio frame for each active sound sample are incremented. If the end of a sound sample is reached, the sound is deactivated (amplitude set to zero).
  • ReadADCchannels is the most complicated routine in this group. It reads the ADC channels (one channel per loop iteration) and stores the result in a ring buffer (one ring buffer per channel). It then performs signal processing on the data in the ring buffer: the derivative of the signal is calculated, then squared and digitally low-pass filtered. If this processed signal exceeds a set trig level, the sound sample associated with the drum pad is inserted into the first part of Signals (for active sounds), and a flag, Holdoff, is set. As long as Holdoff is set, subsequent reads from the same ADC channel will not activate the pad's sound sample again; instead, the maximum amplitude of the hit seen so far is stored away, to be used for amplitude modulation of the pad sound. Only when the hit amplitude has fallen sufficiently below the trig level is the Holdoff flag reset, making the drum pad ready for another hit. Note that this is independent of the sound generation, so the sound generated by a hit can still be playing long after the drum pad is ready for the next hit, making polyphony (from one pad) possible.
  • The 88.2 kHz clock is generated by the Raspberry Pi GPCLK0. To synchronize sound generation to the 88.2 kHz clock, the software waits (active polling) for a high-to-low clock transition before updating the set of four DAC channels.
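
As a rough model of the trigger chain in ReadADCchannels, consider the Python sketch below. It is a simplification: a plain list stands in for the LKM's per-channel ring buffer, the exact filter arithmetic is assumed (a simple first-order low-pass), and the level at which Holdoff is released (half the trig level here) is a guess - the real code releases it when the signal has fallen "sufficiently below" the trig level:

```python
# Sketch of the ReadADCchannels trigger chain: derivative, square,
# low-pass filter, trig level test with holdoff. Filter details and
# the holdoff release level are assumptions, not the LKM's exact code.

class PadState:
    def __init__(self, trig_level=10000, filter_fact=100, derivative_gap=9):
        self.trig_level = trig_level          # hit sensitivity
        self.filter_fact = filter_fact        # larger = heavier smoothing
        self.derivative_gap = derivative_gap  # delta for the derivative
        self.filtered = 0                     # low-pass filter state
        self.holdoff = False                  # set while a hit is in progress
        self.max_amplitude = 0                # peak of the current hit
        self.history = []                     # past raw ADC readings

    def process_sample(self, raw):
        """Feed one ADC reading; return a hit amplitude on a new hit, else None."""
        self.history.append(raw)
        if len(self.history) <= self.derivative_gap:
            return None
        derivative = raw - self.history[-1 - self.derivative_gap]
        power = derivative * derivative
        # first-order low-pass filter of the squared derivative
        self.filtered += (power - self.filtered) // self.filter_fact
        if not self.holdoff:
            if self.filtered > self.trig_level:
                self.holdoff = True                   # arm the holdoff
                self.max_amplitude = self.filtered
                return self.filtered                  # new hit detected
        else:
            self.max_amplitude = max(self.max_amplitude, self.filtered)
            if self.filtered < self.trig_level // 2:  # assumed release point
                self.holdoff = False                  # ready for the next hit
        return None
```
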

As mentioned above, the LKM is controlled from a Python module (Synticka.py), communicating through the /proc filesystem. The LKM creates a directory, drum, under /proc, and three files in the drum directory: Command, ADC and DAC. Command is used to interact with the LKM, while DAC and ADC are mostly used for debugging.

How to use Synticka.py to control the LKM, Drum1

To start the software, type

    # SetupEnviron.sh

in a shell, as root. The script will start the LKM, Drum1.ko, so it will not return. In another shell, as a normal user, type:

    $ python -i Synticka.py

This will give a python prompt with all the interface routines loaded. In the following, the usage of the most important routines will be described:

    MakeSig(SoundIndex, LeftChannel, RightChannel, Amplitude=65535)

Makes a sound sample (also allocating memory for it) from one-dimensional numeric arrays (lists) LeftChannel and RightChannel, and inserts a pointer to the sound sample into the Signals table at the position indicated by SoundIndex. LeftChannel and RightChannel should have the same length. The maximum value for Amplitude is 65535 (0xFFFF, default), the minimum is 0. If Amplitude is zero, the sound will be inactive (disabled).
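
As an example of building the input lists for MakeSig, the helper below (not part of Synticka.py; everything in it is plain Python, and the frequency, duration and sample rate are arbitrary choices) generates a short decaying sine burst in the expected format - two equal-length lists of signed 16 bit integers:

```python
# Builds a short decaying sine burst as two equal-length lists of signed
# 16 bit integers - the format MakeSig expects. This helper is not part
# of Synticka.py; 44.1 kHz matches the .aiff example further down.
import math

def make_test_tone(freq=220.0, duration=0.5, rate=44100):
    """Return (LeftChannel, RightChannel) for a decaying sine burst."""
    n = int(duration * rate)
    left = []
    for i in range(n):
        envelope = math.exp(-5.0 * i / n)   # exponential decay
        sample = int(32767 * envelope * math.sin(2 * math.pi * freq * i / rate))
        left.append(sample)
    return left, list(left)                 # same signal on both channels
```

At the python -i Synticka.py prompt, the burst could then be loaded and bound to a pad with something like MakeSig(3, *make_test_tone()) followed by SetADCpars(0, 3) (sound index 3 and pad 0 chosen arbitrarily here).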

    ReadSig(SoundIndex, SigDataIndex=0, SigDataIndexEnd=2000000000, ChIndex=-1)

Reads the sound sample pointed at by the Signals table element at SoundIndex. By default, the whole sample is read, unless
SigDataIndex is used to give a starting index and/or SigDataIndexEnd is used to give a stop index. The result is delivered as a list of tuples, where the first element of each tuple is the left channel and the second is the right channel. If ChIndex is given as 0, only the left channel will be delivered, as a list of integers; if ChIndex is given as 1, only the right channel.

    InitSig(SoundIndex, Size, Amplitude=65535)

This will make a sound sample consisting of zeros, and set a pointer to the sample in the Signals table at the position indicated by SoundIndex. The sample will have size Size and amplitude Amplitude. The maximum value for Amplitude is 65535 (0xFFFF, default), the minimum is 0. If Amplitude is zero, the sound will be inactive (disabled).

    SetSigPars(SoundIndex, Amplitude=65535)

Will set parameters associated with the sound pointed at by Signals table element at SoundIndex. Currently only amplitude can be set:  max. value for Amplitude is 65535 (0xFFFF, default), minimum is 0. If Amplitude is zero, the sound will be inactive (disabled).

    SetADCpars(Channel, SigIndex=-1, TrigLevel=10000, FilterFact=100, FilterDerivativeGap=9)

Sets parameters associated with a drum pad ADC channel. Channel should be between 0 and 7; SigIndex is a Signals table index indicating the sound sample that is played at drum pad hits. TrigLevel is the hit sensitivity. FilterFact and FilterDerivativeGap control the drum pad hit signal processing: a smaller FilterFact allows higher frequencies to pass through the low-pass filter, which makes the drum faster, but also more susceptible to false triggers. FilterDerivativeGap controls the delta used when calculating the hit signal derivative (derivative = HitSignal[i] - HitSignal[i - FilterDerivativeGap]).


This command will cause the process currently running in the LKM to exit to the shell. It will not unload the LKM from the kernel.


    ReadAudioFile(Name)

Reads an audio file in Audio Interchange File Format (.aiff). Name should be an ASCII string. It returns a pair of arrays, for the left and right channels. These arrays can then be used in a call to MakeSig, e.g.:

    >>> DataLeft, DataRight = ReadAudioFile("AudioFile.aiff")
    Audio file  AudioFile.aiff  has sample rate:  44100  Hz, and is  147402 *4 bytes long.
    >>> MakeSig(25, DataLeft, DataRight)
    >>> SetADCpars(0, 25)                              # Associate pad 0 with sound number 25.

Detailed description: hardware

As mentioned, the hardware consists of the Raspberry Pi card, with an 8 channel, 14 bit ADC and a 4 channel, 16 bit DAC connected to the Raspberry SPI 0 bus. The drum pad electret microphones are connected to the ADC, and the DAC is used to generate sounds. The prototype-like hardware is built using a combination of wire-wrap and soldering.


    DAC and Raspberry interface

    Audio out amplifiers

    ADC with oscillator and reference buffer

    Electret microphone amplifiers

    Digital level conversion for the ADC

  • The 74*125 buffers are used as level converters. Some of the 74HCT125 buffers are also disabled by a GPIO signal from the Raspberry to avoid unnecessary SPI bus noise entering the ADC circuits.
  • The 4.096 V internal reference in the AD7856 ADC is buffered by the AD820 OP amp and used for DAC8555.
  • ARTGND is an artificial ground; it should be adjusted to 2.048 V.

    Scope screen picture of SPI communication. The upper trace is SPI data, and the lower trace is the SPI clock.
    As can be seen, the DACs are updated 4 times for each one-channel ADC read.

    Latency is very critical in drumming - even small delays between the drum pad hit and the sound
    will confuse the drummer, especially when syncing to another drummer.
    The upper trace is the signal from a microphone, and the lower trace is the sound output.
    The difference between the mic signal and the beginning of the sound (latency) is about 3 ms.

    A cheap electret microphone

    Drumpads. They consist of blocks of wood with the microphones mounted
    in a hole inside the wood. Each block is covered with a few layers of cloth
    to make the surface softer.

Then, a little improvised joy ride to test the thing. My sincere apologies to the maker of this charming tune (Elektronomia - Sky High: https://www.youtube.com/watch?v=TW9d8vYrVFQ); the intention here is just to test the drum. The video is not sped up - it is real time - so it shows that the drum keeps up with the speed (in the video, up to 12 hits/second). The video might be a bit out of sync, as the video/sound sync was done by hand:

(Check it on Youtube: https://youtu.be/RSUQsVFTU0Y)

Finally - yes, the electrical schematics were drafted with Gimp, because setting up and importing all the necessary components into a schematic capture program simply takes too much time. Then again, as there are plans for a circuit board, it will have to be done properly at some point...

What to improve?

  • A printed circuit board
  • As such, the drum is usable in a studio environment, but on stage it might be susceptible to acoustic interference from powerful loudspeakers nearby. This could (and should) be prevented by HW or SW means.
  • Proper reconstruction filters on the outputs of the DACs - at present the sound is somewhat raw.
  • A more modern ADC could be used, powered by 3.3 V, which would eliminate the need for level conversion
  • MIDI output
  • There are four audio out channels, but at present channels 1 and 2 are simply copied to channels 3 and 4. The software should be modified to make channels 3 and 4 independent drum sound channels.
  • This thing is now an (experimental) drum module ("drum brain"). It should not be hard to modify it to simultaneously function as a general sound module.

At this point in time the software is not exactly easy to install, and assembling the hardware requires surface mount IC soldering skills. You have been warned! However, if this project attracts interest, we might do something about it.