Spectral features of sound carry distinctive signatures that can be leveraged in many applications, including keyword spotting (KWS), preventive maintenance for heavy machinery, and heart-disease diagnosis.
The Mel-frequency cepstral coefficient (MFCC) is one of the most popular methods for classifying sound signals. A typical digital implementation of a sound-classification pipeline is shown in the figure above, used in keyword spotting (KWS) to extract speech features such as a simple wake word like "Alexa". In a typical all-digital implementation, a digital microphone streams real-time audio data to the processor.
The first operation applied to the input data is a pre-emphasis filter (a high-pass filter, or HPF) that removes the DC content from the signal. The pre-emphasis filter is typically implemented as the following difference equation:

y[n] = x[n] - α·x[n-1]

where x[n] is the input sample, y[n] is the output sample, and α is the pre-emphasis coefficient, typically chosen between 0.95 and 0.97.
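As a rough sketch, assuming the common first-order pre-emphasis form y[n] = x[n] − α·x[n−1] with α = 0.97, the filter can be written in a few lines of NumPy (the function name and default coefficient here are illustrative, not from the original text):

```python
import numpy as np

def pre_emphasis(x, alpha=0.97):
    """First-order high-pass pre-emphasis: y[n] = x[n] - alpha * x[n-1]."""
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    y[0] = x[0]                      # first sample has no predecessor
    y[1:] = x[1:] - alpha * x[:-1]   # difference equation, vectorized
    return y

# A constant (pure DC) input is attenuated to a small residual,
# illustrating the high-pass behavior.
residual = pre_emphasis(np.ones(8))
```

Applied to a constant signal, every output sample after the first shrinks to 1 − α, showing how the DC content is suppressed.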
After the pre-emphasis filter, the data is multiplied by a window function (Hamming, Hanning, etc.) to avoid spectral leakage in the FFT operation. After windowing, a fast Fourier transform (FFT) is applied to the signal to find its frequency content. The linear frequency scale is then converted to the logarithmic Mel scale (m = 2595·log10(1 + f/700)).
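The windowing, FFT, and Mel-conversion steps can be sketched as follows; the function names are illustrative, and the Mel mapping shown is the widely used m = 2595·log10(1 + f/700) convention:

```python
import numpy as np

def hz_to_mel(f):
    # Common Mel-scale mapping: m = 2595 * log10(1 + f / 700)
    return 2595.0 * np.log10(1.0 + f / 700.0)

def frame_spectrum(frame):
    # Hamming window suppresses spectral leakage before the FFT
    windowed = frame * np.hamming(len(frame))
    # One-sided magnitude spectrum of the real-valued frame
    return np.abs(np.fft.rfft(windowed))

# Example: a 256-sample frame yields 256/2 + 1 = 129 spectral bins
frame = np.sin(2 * np.pi * 0.1 * np.arange(256))
spectrum = frame_spectrum(frame)
```

In a full MFCC front end, a bank of triangular filters spaced uniformly on the Mel scale would then pool these spectral bins before the log and DCT stages.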
Typically, the MFCC front end and the classifier are implemented on the edge node using a microcontroller. But with more and more computing moving to the edge, we are now at a point where some trivial, or not-so-trivial, computation can move to the sensor itself. In this work we propose moving some of the front-end signal processing (HPF, windowing, and FFT) to the microphone itself, which we may call Bleeding-Edge Computing.