ModusToolbox™ reference examples
To support IMAGIMOB Studio's standard ML flow, three code examples are included in ModusToolbox™. While IMAGIMOB Studio focuses on preprocessing, training, and validation, the ModusToolbox™ code examples focus on data collection and deployment. These code examples are used throughout the tutorial in support of the ML development flow.
IMAGIMOB Studio can produce either a C implementation of the preprocessing alone, or a combined C implementation of the preprocessing and model. The Deploy with MTB-ML code example uses only the preprocessor, together with the .h5 model file that Imagimob produces; the .h5 file is passed to the MTB-ML configurator to generate TensorFlow Lite for Microcontrollers (TFLM) source code. The Deploy with Imagimob code example uses the Imagimob-produced C implementation of both the preprocessor and the model, so the MTB-ML configurator is not used.
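In both cases, the generated C code is consumed through a small streaming interface. The declarations below sketch the general shape of that interface as Imagimob's code generator typically emits it; the IMAI_* names, return codes, and sample counts are assumptions here, and the generated header in your project defines the real interface.

```c
/* Shape of the streaming interface that Imagimob-generated C code
 * typically exposes. The names and counts below are assumptions;
 * check the generated header for the exact interface. */
#define IMAI_DATA_IN_COUNT   3      /* e.g., accelerometer x, y, z      */
#define IMAI_DATA_OUT_COUNT  5      /* e.g., one score per output class */
#define IMAI_RET_SUCCESS     0      /* a result is ready                */

int IMAI_init(void);                      /* reset internal state        */
int IMAI_enqueue(const float *data_in);   /* push one input sample       */
int IMAI_dequeue(float *data_out);        /* pop a result when available */
```

The preprocessor-only artifact and the combined preprocessor-plus-model artifact share this enqueue/dequeue pattern; they differ in whether dequeue returns feature frames or final class scores.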
Data collection
The mtb-example-ml-imagimob-data-collection code example demonstrates how to collect data using Imagimob's Capture Server to train a model within IMAGIMOB Studio. The code example supports two data sources: an IMU, or a digital microphone via the PDM-to-PCM converter. The data is transmitted over UART to the Capture Server, which stores it as a .data file for IMU data or a .wav file for PDM data. The data can then be used in the Human Activity Detection or Baby Crying Detection Infineon starter projects, or to generate a new model. For more information, view the mtb-example-ml-imagimob-data-collection readme on GitHub.
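The exact frame format the Capture Server expects is defined by the code example and the Capture Server itself; as a rough illustration of the transmit side, the sketch below streams one raw IMU sample over UART using the ModusToolbox™ HAL. The three-axis float payload is a simplified assumption, not the real framing.

```c
#include "cyhal.h"

/* Stream one IMU sample over UART to the host running the Capture
 * Server. The raw three-float payload here is a simplified assumption;
 * the actual framing the Capture Server expects is implemented in the
 * mtb-example-ml-imagimob-data-collection code example. */
static cy_rslt_t send_imu_sample(cyhal_uart_t *uart, float x, float y, float z)
{
    float  sample[3] = { x, y, z };
    size_t length    = sizeof(sample);

    return cyhal_uart_write(uart, sample, &length);
}
```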
For details on collecting data, see Data collection.
Deploy with MTB-ML
The mtb-example-ml-imagimob-MTB-ML-deploy code example demonstrates how to deploy an Imagimob-generated machine learning model with the MTB-ML flow. The code example includes two models generated from IMAGIMOB Studio's starter projects. The first model, Human Activity Detection, uses data from an IMU, which is sent to the model to detect specific motions (sitting, standing, walking, running, or jumping); it is set up to run out of the box. The second model, Baby Crying Detection, uses data from the PDM, which is sent to the model to detect whether a baby is crying. Both models require the preprocessing code produced by IMAGIMOB Studio: Human Activity Detection uses imu_model.c/.h, while Baby Crying Detection uses pdm_model.c/.h. New models based on IMU (BMX160 or BMI160) or PDM data can be dropped into this project as-is. For more information, view the mtb-example-ml-imagimob-MTB-ML-deploy readme on GitHub.
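To make the data flow concrete, the sketch below binds the TFLM sources generated by the MTB-ML configurator and runs one inference on a feature frame that the Imagimob preprocessor (imu_model.c/.h) has already produced, following the enqueue/dequeue pattern sketched earlier. The mtb_ml_model_* calls, the MTB_ML_MODEL_BIN_DATA macro, and MODEL_NAME are assumptions modeled on the ModusToolbox™ ML middleware API; verify the exact names and signatures against your middleware version.

```c
#include "mtb_ml.h"   /* ModusToolbox(TM) ML middleware (assumed header name) */

static mtb_ml_model_t *model_obj;

/* Bind the configurator-generated TFLM model. MODEL_NAME stands in for
 * the define naming your generated model; the API shape below is an
 * assumption based on the ML middleware. */
void model_setup(void)
{
    static mtb_ml_model_bin_t model_bin = { MTB_ML_MODEL_BIN_DATA(MODEL_NAME) };
    mtb_ml_model_init(&model_bin, NULL, &model_obj);
}

/* Run one inference on a preprocessed feature frame and return the
 * index of the highest-scoring class. */
int classify_features(MTB_ML_DATA_T *features)
{
    MTB_ML_DATA_T *scores;
    int num_classes;

    if (mtb_ml_model_run(model_obj, features) != CY_RSLT_SUCCESS)
    {
        return -1;
    }
    mtb_ml_model_get_output(model_obj, &scores, &num_classes);

    /* The index of the largest score maps to one of the detected
     * motions (sitting, standing, walking, running, or jumping). */
    int best = 0;
    for (int i = 1; i < num_classes; i++)
    {
        if (scores[i] > scores[best]) { best = i; }
    }
    return best;
}
```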
Deploy with Imagimob
The mtb-example-ml-imagimob-deploy code example demonstrates how to deploy an Imagimob-generated machine learning model. It comes preconfigured with a model generated from the Human Activity Recognition starter project in IMAGIMOB Studio. The code example collects accelerometer data from an IMU, which is sent to the machine learning model to detect specific motions (sitting, standing, walking, running, or jumping). It directly uses the model.c/.h files generated within IMAGIMOB Studio. New models based on the Human Activity Recognition project can be dropped into the project as-is. For more information, view the mtb-example-ml-imagimob-deploy readme on GitHub.
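As a sketch of the combined flow, the loop below pushes raw accelerometer samples into the generated model.c/.h code and reads out class scores when a prediction becomes available. The IMAI_* names follow the interface sketched earlier and should be confirmed against the generated model.h; read_accel() is a hypothetical stand-in for the IMU driver read.

```c
#include "model.h"   /* Imagimob-generated preprocessor + model */

/* Hypothetical helper standing in for the IMU driver read. */
extern void read_accel(float *sample);

/* Continuously feed accelerometer samples to the combined Imagimob
 * model and act on a prediction when one is ready. */
void har_task(void)
{
    float sample[IMAI_DATA_IN_COUNT];    /* accel x, y, z          */
    float scores[IMAI_DATA_OUT_COUNT];   /* one score per activity */

    IMAI_init();

    for (;;)
    {
        read_accel(sample);
        IMAI_enqueue(sample);

        if (IMAI_dequeue(scores) == IMAI_RET_SUCCESS)
        {
            /* scores[] holds one value per class; the largest entry
             * is the detected motion (sitting, standing, walking,
             * running, or jumping). */
        }
    }
}
```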
For more information about the deployment process, see Deployment.