The NXP BSP currently does not support running a Keras application directly on i.MX. Customers using this approach must convert their Keras model into one of the inference engines supported by eIQ. In this post we will cover converting a Keras model (.h5) to a TfLite model (.tflite).
$ pip3 install tensorflow==1.12.0
$ python3
>>> from tensorflow.contrib import lite
>>> converter = lite.TFLiteConverter.from_keras_model_file('model.h5')  # path to your Keras model
>>> tfmodel = converter.convert()
>>> open("model.tflite", "wb").write(tfmodel)
The converted model can then be deployed and used by the TfLite inference engine in eIQ.
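As a quick sanity check before deployment, the .tflite file can be loaded with the TensorFlow Lite Python interpreter. The snippet below is a minimal sketch, assuming the same TensorFlow 1.12 install used above (on newer releases the interpreter lives under tensorflow.lite instead of tensorflow.contrib.lite) and an all-zeros dummy input just to exercise the model; the file name model.tflite matches the file written in the previous step.

import numpy as np
from tensorflow.contrib import lite  # tensorflow.lite on newer TensorFlow releases

# Load the converted model and allocate its tensors.
interpreter = lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy all-zeros input with the shape and dtype the model expects.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)

# Run one inference and read back the result.
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]['index'])
print(output.shape)

On the i.MX target the model is consumed through the TfLite inference engine shipped with eIQ; the interpreter run above is only a host-side check that the conversion produced a loadable model.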
Alternatively, the eiq-converter tool can perform the same conversion. Execute the following command to convert the Keras mobilenet.h5 model to the mobilenet.tflite model:
eiq-converter --plugin eiq-converter-tflite mobilenet.h5 mobilenet.tflite