NOTE: On the TensorFlow GitHub, multiple versions of MobileNet v1 are available, intended to offer options that fit various latency and size budgets. This tutorial uses mobilenet_v1_1.0_224. A brief explanation of the naming convention:
v1 - model release version
1.0 - width multiplier (the alpha parameter, which scales the number of channels and hence the model size)
224 - input image size (224 x 224 pixels)
Convert the MobileNet v1 TensorFlow Model to TensorFlow Lite
Create a ‘convert.py’ Python script in the folder ‘~/tf_convert’:
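A minimal sketch of such a script, written against the TensorFlow 1.x converter API. The input/output tensor names and shapes below are assumptions based on the stock mobilenet_v1_1.0_224 frozen graph; verify them using the methods described under "How to obtain input_arrays and output_arrays for conversion" below.

# convert.py - minimal sketch; the tensor names assume the stock
# mobilenet_v1_1.0_224 frozen graph and should be verified first
import tensorflow as tf  # TensorFlow 1.x API

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="mobilenet_v1_1.0_224_frozen.pb",
    input_arrays=["input"],
    output_arrays=["MobilenetV1/Predictions/Reshape_1"],
    input_shapes={"input": [1, 224, 224, 3]},
)
# Post-training quantization: stores the 32-bit float weights as 8-bit
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)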
Expected output: the TFLite model ‘converted_model.tflite’ is created.
NOTE: After quantization, the model size is reduced roughly 4x, since the weights move from 32-bit floating point to 8-bit.
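To sanity-check the reduction, compare the file sizes before and after conversion (a hypothetical quick check; the exact figures will vary):

# size_check.py - compares the frozen graph with the quantized model
import os
print(os.path.getsize("mobilenet_v1_1.0_224_frozen.pb"))  # roughly 17 MB (float)
print(os.path.getsize("converted_model.tflite"))          # roughly 4.3 MB (8-bit)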
How to obtain input_arrays and output_arrays for conversion
This tutorial describes two ways to obtain this data: using the TensorFlow API to inspect the model and dump the information, or using a GUI tool to visualize and inspect the model. Both are detailed in the following sections.
Method 1 - programmatically, using the TensorFlow API to dump the model graph operations into a file
Run a script along the following lines to dump the operations in the model:
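A minimal sketch, assuming the TensorFlow 1.x API and the frozen-graph file name used in the conversion step; the output file name is illustrative:

# dump_ops.py - writes every node name and op type in the graph to a file
import tensorflow as tf  # TensorFlow 1.x API

with tf.gfile.GFile("mobilenet_v1_1.0_224_frozen.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with open("model_ops.txt", "w") as out:  # illustrative output file name
    for node in graph_def.node:
        out.write("{} ({})\n".format(node.name, node.op))

The first and last entries in the dump typically correspond to the input_arrays and output_arrays values needed by the converter.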
For reference, running the converted model with the TensorFlow Lite label_image example produces output along these lines:

Loaded model converted_model.tflite
resolved reporter
invoked
average time: 571.325 ms
0.786024: 653 military uniform
0.0476249: 466 bulletproof vest
0.0457234: 907 Windsor tie
0.0245538: 458 bow tie
0.0194905: 514 cornet
Developing machine learning (ML) applications for embedded devices can be a daunting task. For the traditional embedded developer, the learning curve can be quite steep, as there are numerous decisions that must be made and new jargon that must be learned. Which framework should I choose? Which model best meets the requirements of my application and how do I know when it’s good enough? What “size” microcontroller or application processor do I need? The questions are many and figuring out how to get started can prove to be a challenge.
At NXP, we’re eager to be at the heart of your ML application development. We’ve just released our first machine learning software that integrates the industry-leading technologies required to deploy ML-based applications on embedded devices. Whether you prefer to start with TensorFlow, Keras, or Caffe, our new eIQ™ ML software development environment supports these popular frameworks and more, running on four inference engines: OpenCV, Arm® NN, Arm CMSIS-NN and TensorFlow Lite. Our goal with eIQ software is to provide broad enablement that helps inform your decision-making and lets you create the best possible solution for your application.
To help you get started with eIQ software for i.MX applications processors, we’ve created a series of step-by-step tutorials that take you from unboxing a board, to deploying a model, to inferencing at the edge using the i.MX 8M Mini EVK. We have examples of object detection, handwriting recognition, face detection and more, all implemented in a variety of frameworks and published in source form to help you get running as quickly as possible.
Be sure to check back as we continue to explore the questions raised above and dive deeper on the key challenges that embedded developers face in creating ML-based applications. If there’s anything specific you’d like us to address, please let us know in the comments.