This tutorial demonstrates how to convert a TensorFlow model to TensorFlow Lite using post-training quantization and run the inference on an i.MX8 board using the eIQ™ ML Software Development Environment. It uses a TensorFlow mobilenet_v1 model pre-trained on ImageNet.
NOTES:
1. Except for the section "Build Custom Application to Run the Inference", this tutorial can also be applied to any i.MX RT platform.
2. For more information on quantization, see the 'QuantizationOverview.pdf' attached to this post.
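As a preview of the conversion step, the sketch below shows how post-training quantization is typically done with the TFLite converter in TensorFlow 2.x. The small Keras model and the random representative dataset are placeholders for illustration only; in the tutorial the pre-trained mobilenet_v1 model and real calibration images would be used instead.

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; the tutorial uses a pre-trained mobilenet_v1 instead.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

def representative_dataset():
    # Yields sample inputs so the converter can calibrate activation
    # ranges. Random data here is purely illustrative; use images from
    # your own dataset for a real model.
    for _ in range(10):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Enable post-training quantization.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset

tflite_model = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```

With only `optimizations` set this performs dynamic-range quantization; supplying the representative dataset lets the converter quantize activations as well, which is what allows efficient integer inference on boards such as the i.MX8.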