Radar human activity recognition with machine learning
Wed May 1, 2024 · 1 min · 206 words · Me
During my PhD years, I built a demo of indoor human activity recognition using FMCW radar and machine learning. The schematic below shows the end-to-end system, from the sensor node to the cloud. I was involved in all stages of this work, from the sensor configuration and the signal processing to building a convolutional neural network for Doppler-map classification.
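As a rough illustration of the signal-processing stage, a range-Doppler map can be computed from FMCW beat signals with two FFTs: a range FFT along fast time (within each chirp) and a Doppler FFT along slow time (across chirps). The sketch below uses a synthetic single-target signal; all parameters are illustrative and not the demo's actual radar configuration.

```python
import numpy as np

# Hypothetical grid: 64 chirps (slow time) x 128 samples per chirp (fast time)
n_chirps, n_samples = 64, 128
f_beat, f_doppler = 10.0, 5.0  # synthetic beat and Doppler frequencies (bins)

# Synthetic beat-signal cube for one target: a range tone within each chirp
# plus a Doppler phase progression across chirps.
fast = np.arange(n_samples) / n_samples
slow = np.arange(n_chirps) / n_chirps
cube = np.exp(2j * np.pi * (f_beat * fast[None, :] + f_doppler * slow[:, None]))

# Range FFT along fast time, then Doppler FFT along slow time
range_profiles = np.fft.fft(cube, axis=1)
doppler_map = np.fft.fftshift(np.fft.fft(range_profiles, axis=0), axes=0)
magnitude = np.abs(doppler_map)

# The target shows up as a single peak at its range/Doppler bin
peak = np.unravel_index(magnitude.argmax(), magnitude.shape)
```

A sequence of such maps over time forms the input the classifier sees.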
This schematic shows the workflow of the demo from the sensor edge to the cloud.
Technologies
The technologies used to build this demo:
OS: Ubuntu, Raspbian
C++: signal processing on the sensor's microcontroller
Python:
TensorFlow: training the model
MLflow: experiment tracking
PyQt/pyqtgraph: graphical user interface and visualization
Threading / Queue: managing thread execution
scikit-learn, NumPy, ...
Docker: used to develop the demo on a laptop and deploy it on a Raspberry Pi via a Python Docker image built for the Pi, which streamlined deployment to other devices.
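The threading/queue part of the stack can be sketched as a small producer/consumer pipeline: one thread stands in for the radar acquisition loop, another for inference and visualization, with a bounded queue between them. The names and frame contents below are illustrative, not the actual demo code.

```python
import threading
import queue

frames = queue.Queue(maxsize=8)  # bounded buffer between acquisition and inference
results = []

def producer(n_frames):
    # Stands in for the thread reading Doppler frames from the radar
    for i in range(n_frames):
        frames.put(i)
    frames.put(None)  # sentinel: no more frames

def consumer():
    # Stands in for the thread running inference / updating the GUI
    while True:
        frame = frames.get()
        if frame is None:
            break
        results.append(frame * 2)  # placeholder for model inference

t1 = threading.Thread(target=producer, args=(5,))
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```

A bounded queue keeps the acquisition thread from outrunning inference: if the consumer falls behind, `put` blocks instead of growing memory unboundedly.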
The demo runs a convolutional neural network trained on a dataset previously collected in the same environment with different people. The architecture used is shown below:
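For reference, a small Keras CNN of this kind might look like the following. The post does not detail the actual layers, so the input size, filter counts, and number of activity classes here are assumptions, not the trained model.

```python
import tensorflow as tf

def build_model(input_shape=(64, 64, 1), n_classes=5):
    # Illustrative Doppler-map classifier: two conv/pool stages, then a
    # small dense head with a softmax over the activity classes.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

At inference time, each incoming Doppler map would be normalized the same way as the training data before being fed to `model.predict`.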
Finally, these videos show a real-time test of the demo.
For more technical details, see the evaluation report of the NextPerception project.