Hello, I wonder how enriching a complex system with an AI model would be possible with optimization from a third-party tool such as eIQ + Glow. As far as I understand, these only optimize the model and CPU usage during *inference*. The rest of the modules needed to achieve accuracy, for example data preprocessing and feature engineering, have to be hand-written in C separately if we want the Glow build to contain those steps too. So those preprocessing steps wouldn't get any optimization, right?
Going even more into the bigger picture: a complex system that probably already has its own build process would not be able to leverage the hardware optimization from Glow etc., right? There should be only one build process for the whole "complex" system...
Hi,
Integrating an AI model with eIQ and Glow optimizes the inference step only; data preprocessing and feature-engineering code must be written and optimized separately. For a complex system you can still leverage Glow's hardware optimization: compile the model ahead of time into a bundle (an object file plus a header) and link that bundle into your system's existing build process like any other object file.
Hi @FodorImo,
Thank you for contacting NXP Support.
Please have a look at the following Application Note, where you can find an implementation of the eIQ Glow inference engine that illustrates this optimization process:
Inferencing Deep Learning on Cortex M0/M0+ with the eIQ™ Glow Inference Engine (nxp.com)
I hope this information will be helpful.
Have a great day!