Converting a Keras model for eIQ use

Document created by Vanessa Maegima on Mar 13, 2020

The NXP BSP currently does not support running a Keras application directly on i.MX. Customers that use this approach must convert their Keras model into a format supported by one of the inference engines in eIQ. In this post we will cover converting a Keras model (.h5) to a TfLite model (.tflite).

 

  • Install the TensorFlow version that matches the TfLite version supported by eIQ (this information is available in the i.MX Linux User's Guide). For L4.19.35_1.0.0 the supported TfLite version is v1.12.0.

$ pip3 install tensorflow==1.12.0
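
If you want to confirm that the expected version was installed, you can print it from Python (a quick sanity check, not part of the original steps); it should report 1.12.0 if the install above succeeded:

$ python3 -c "import tensorflow as tf; print(tf.__version__)"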

  • Run the following commands in a python3 environment to convert the .h5 model to a .tflite model:

>>> from tensorflow.contrib import lite
>>> converter = lite.TFLiteConverter.from_keras_model_file('model.h5')  # path to your Keras model
>>> tfmodel = converter.convert()  # returns the serialized TfLite model as bytes
>>> open("model.tflite", "wb").write(tfmodel)  # write the converted model to disk

The converted model can now be deployed and used by the TfLite inference engine in eIQ.
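
Before deploying to the board, you can optionally verify the converted model on the host with the TfLite Python interpreter. The sketch below is an extra check, not part of the original steps; it assumes model.tflite is in the current directory and that the model takes a single float32 input, which it exercises with random data:

>>> import numpy as np
>>> from tensorflow.contrib import lite
>>> interpreter = lite.Interpreter(model_path="model.tflite")  # load the converted model
>>> interpreter.allocate_tensors()
>>> input_details = interpreter.get_input_details()
>>> output_details = interpreter.get_output_details()
>>> dummy = np.random.random_sample(input_details[0]['shape']).astype(np.float32)  # random input just to exercise the model
>>> interpreter.set_tensor(input_details[0]['index'], dummy)
>>> interpreter.invoke()
>>> print(interpreter.get_tensor(output_details[0]['index']))  # model output for the random input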
