
Deep Neuronal Filter (DNF) -- executorch version


The deep neuronal filter (DNF) is an extension of the classical LMS FIR adaptive noise canceller. Instead of an adaptive FIR filter, the DNF uses a realtime adaptive deep neural network. This requires high-performance sample-by-sample forward and backward processing. The new (still experimental) training feature of executorch is ideal for this purpose because it supports realtime backpropagation, performing gradient descent on a loss function.
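For reference, the classical LMS FIR adaptive noise canceller that the DNF generalises can be sketched as below. This is a minimal illustration, not code from this repository; all names are mine:

```cpp
#include <array>
#include <cmath>

// Minimal LMS FIR adaptive noise canceller: a delay line of the
// reference noise is filtered by adaptive weights; the filter output
// (the "remover") is subtracted from the noisy signal and the residual
// drives the weight update (gradient descent on the squared error).
template <int NTAPS>
class LmsCanceller {
public:
    explicit LmsCanceller(double mu) : mu_(mu) {}

    double filter(double noisy, double ref) {
        // shift the reference noise into the delay line
        for (int i = NTAPS - 1; i > 0; --i) buf_[i] = buf_[i - 1];
        buf_[0] = ref;
        // remover = FIR filter applied to the reference
        double remover = 0.0;
        for (int i = 0; i < NTAPS; ++i) remover += w_[i] * buf_[i];
        const double e = noisy - remover;  // cleaned output
        // LMS weight update: w += mu * e * x
        for (int i = 0; i < NTAPS; ++i) w_[i] += mu_ * e * buf_[i];
        return e;
    }

private:
    double mu_;
    std::array<double, NTAPS> buf_{};
    std::array<double, NTAPS> w_{};
};

// Demo: cancel a 50 Hz sinusoid (fs = 1 kHz) given a perfect reference;
// returns the residual after adaptation, which should approach zero.
double run_demo() {
    LmsCanceller<16> lms(0.01);
    const double pi = 3.14159265358979323846;
    double e = 0.0;
    for (int n = 0; n < 20000; ++n) {
        const double noise = std::sin(2.0 * pi * 50.0 * n / 1000.0);
        e = lms.filter(noise, noise);  // signal of interest omitted (zero)
    }
    return e;
}
```

The DNF replaces the single linear FIR stage with a deep network trained online in the same sample-by-sample fashion.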

Prerequisites

Install executorch as a library on your system:

git clone https://github.com/pytorch/executorch
cd executorch
source ~/venv/bin/activate
python install_executorch.py
cmake --preset linux -DEXECUTORCH_BUILD_EXTENSION_TRAINING=ON -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local -DEXECUTORCH_ENABLE_LOGGING=ON -DEXECUTORCH_LOG_LEVEL=ERROR .
cd cmake-out
make
sudo make install

Note: for now, logging should be enabled because the training extension is (still) experimental, and any errors should be reported back to the executorch project.

How to compile

Type:

cmake .
make

to compile the library and the demos.

Unit tests

make test

How to install

sudo make install

How to use it

Create the DNF model

Create the .pte file with the export2executorch Python script:

import torch
import export2executorch
nTaps = 50
nLayers = 5
gain = 0.1
export2executorch.dnf2executorch("dnf_executorch.pte",nTaps,nLayers,gain)

where nTaps is the number of taps of the delay line feeding into the deep network with nLayers layers. gain is the Xavier initialisation gain; 0.1 should be fine for most cases. The lower the gain, the longer the remover takes to build up an output, but the more precise the result. With a gain of one, the remover creates an output instantly but might not converge.
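To get a feel for what the gain does: Xavier (Glorot) uniform initialisation draws weights from U(-b, b) with b = gain * sqrt(6 / (fan_in + fan_out)), so the gain scales the initial weights linearly, and smaller initial weights mean a smaller initial remover output. This is an illustration of the standard formula only; the exact initialisation used here is defined in the export2executorch script:

```python
import math

def xavier_uniform_bound(fan_in: int, fan_out: int, gain: float) -> float:
    """Half-width b of the Xavier/Glorot uniform distribution U(-b, b)."""
    return gain * math.sqrt(6.0 / (fan_in + fan_out))

# e.g. a layer with 50 inputs and 50 outputs, as in the export example above
for gain in (0.1, 1.0):
    b = xavier_uniform_bound(50, 50, gain)
    print(f"gain={gain}: weights drawn from U({-b:.4f}, {b:.4f})")
```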

Include header

The library is header-only. Just include the header:

#include "dnf_executorch.h"

Init

DNF_executorch dnf("dnf_executorch.pte", mu);
dnf.setLearning(true);

where mu is the learning rate (typically around 0.01). The learning rate cannot be changed later, but learning can be switched on and off at any time with dnf.setLearning(bool);.

Realtime noise cancellation sample by sample

This code snippet should run, for example, in your sample-by-sample callback:

const double output_signal = dnf.filter(noisy_input_signal, ref_noise);

where ref_noise is the noise you'd like to be removed from noisy_input_signal.

Linking

It is important to use a flat linking approach: don't package your code into another library file, as executorch will most likely crash. Instead, link everything together in one single linking command in CMake:

find_package(executorch CONFIG REQUIRED)
target_link_libraries(my_cool_dnf_filter_app
  executorch
  
  # portable_ops_lib
  optimized_native_cpu_ops_lib
  
  # portable_kernels
  optimized_kernels
  
  extension_training
)

Note that executorch offers differently optimised kernel libraries, and you must link exactly one of them: portable, optimised, or optimised for the machine you compiled on. In the example above, the portable libraries are commented out and the libraries optimised for the specific processor are linked.

Example

A simple instructional example removes 50 Hz mains interference from an ECG with just one layer, which makes the DNF identical to an FIR LMS filter.

See also the tests which do learning with one layer and five layers.

Class documentation

The doxygen generated files are here: https://berndporr.github.io/dnf_executorch/

Credits