Run a TensorFlow model on i.MX 8M Plus


pablomoreno555
Contributor I

Hi,

I want to test a custom TensorFlow model on the i.MX 8M Plus processor. I have generated the "saved_model.pb" file; how can I run it on the processor?

Best regards,

Pablo.

4 Replies

Bio_TICFSL
NXP TechSupport

Hello pablomoreno555,

You can start by reading the Machine Learning User's Guide in your Yocto BSP documentation; it has examples of how to run different models.

Regards

pablomoreno555
Contributor I

Hi,

Following the documentation, I have converted my TensorFlow model to ".tflite". Then I ran the benchmark with this command:

# ./benchmark_model --graph=<name_of_the_model.tflite>

and everything is OK. However, when I try to run the model using PyeIQ, with the following command:

# pyeiq --run object_detection_tflite --model <name_of_the_model.tflite> --labels <name_of_the_labelmap.txt>

I get this error:

INFO: Created TensorFlow Lite delegate for NNAPI.
Failed to apply NNAPI delegate.
Traceback (most recent call last):
  File "/usr/bin/pyeiq", line 144, in <module>
    PyeIQ().main()
  File "/usr/bin/pyeiq", line 138, in main
    self.run(self.args.run)
  File "/usr/bin/pyeiq", line 119, in run
    self.pyeiq_set[target]().run()
  File "/usr/lib/python3.7/site-packages/eiq/modules/detection/object_detection_ssd.py", line 133, in run
    self.start()
  File "/usr/lib/python3.7/site-packages/eiq/modules/detection/object_detection_ssd.py", line 129, in start
    self.labels = self.load_labels(self.labels)
  File "/usr/lib/python3.7/site-packages/eiq/modules/utils.py", line 133, in load_labels
    return {int(num): text.strip() for num, text in lines}
  File "/usr/lib/python3.7/site-packages/eiq/modules/utils.py", line 133, in <dictcomp>
    return {int(num): text.strip() for num, text in lines}
  File "/usr/lib/python3.7/site-packages/eiq/modules/utils.py", line 132, in <genexpr>
    lines = (p.match(line).groups() for line in f.readlines())
AttributeError: 'NoneType' object has no attribute 'groups'
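For context, this AttributeError means the regex in pyeiq's load_labels failed to match at least one line of the labels file (p.match(line) returned None), often because of a blank line or a line in an unexpected format. A tolerant parser along these lines would skip such lines instead of crashing; the "<index> <label>" line format and the regex below are assumptions for illustration, not pyeiq's actual code:

```python
import re

def load_labels(path):
    """Parse labels like '0 person' into {0: 'person'}, skipping
    blank or malformed lines instead of raising AttributeError."""
    pattern = re.compile(r'^\s*(\d+)\s+(.+)$')  # assumed format: index, then label text
    labels = {}
    with open(path) as f:
        for line in f:
            m = pattern.match(line)
            if m is None:
                continue  # blank or unexpected line: skip it rather than crash
            num, text = m.groups()
            labels[int(num)] = text.strip()
    return labels
```

Checking the labels file for blank lines or lines that don't follow the format the example expects is a good first step before changing any code.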

How could I solve this?

Thank you very much.

Best regards,


pablomoreno555
Contributor I

I have converted the TensorFlow model into *.tflite using the following script:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.experimental_new_converter = True
converter.target_spec.supported_types = [tf.int8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
quantized_tflite_model = converter.convert()

with open('quant_model.tflite', 'wb') as f:
    f.write(quantized_tflite_model)
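One thing worth noting about the script above: for full-integer quantization (TFLITE_BUILTINS_INT8 with int8 input/output types), the TFLite converter generally also needs a representative dataset to calibrate activation ranges. A minimal sketch, assuming a 1x300x300x3 input shape (replace with your model's real input shape, and use real sample images rather than random noise for meaningful calibration):

```python
import numpy as np

def representative_dataset():
    # Yield a small number of sample inputs shaped like the model's
    # input tensor; real calibration should use real images, not noise.
    for _ in range(100):
        data = np.random.rand(1, 300, 300, 3).astype(np.float32)
        yield [data]

# Set this on the converter before calling converter.convert():
# converter.representative_dataset = representative_dataset
```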
 
If I run this model with the image classification example:
 
# ./label_image -m <converted_model.tflite> -i grace_hopper.bmp -l <labelmap.TXT>
 
I get a proper output, but when I run it with pyeiq, using the command:
 
# pyeiq --run object_detection_tflite --model converted_model.tflite --labels labelmap.TXT
 
I get the following error:
 
ValueError: Cannot set tensor: Got value of type UINT8 but expected type FLOAT32 for input 0, name: serving_default_input:0
 
Could anyone help me, please?
 
Thank you very much.

Bio_TICFSL
NXP TechSupport

You need to make the input array's dtype match what the model expects. Your error says input 0 expects FLOAT32, so the input data should be created with dtype=np.float32:

input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)

You can always check which dtype is required with:

print(interpreter.get_input_details())

Regards
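To make the dtype check above concrete: get_input_details() returns, for each input tensor, a dict that includes its 'dtype' and its 'quantization' parameters (scale, zero_point). A minimal sketch (the helper name is mine, not a TFLite API) of preparing an input array to match whatever dtype the interpreter reports, using TFLite's affine quantization convention q = round(x / scale) + zero_point for integer inputs:

```python
import numpy as np

def prepare_input(image, input_detail):
    """Cast or quantize a float image so its dtype matches the one
    reported by interpreter.get_input_details()[0]."""
    dtype = input_detail['dtype']
    if dtype == np.float32:
        return np.asarray(image).astype(np.float32)
    # Integer input: apply TFLite's affine quantization.
    scale, zero_point = input_detail['quantization']
    q = np.round(np.asarray(image) / scale) + zero_point
    info = np.iinfo(dtype)
    return np.clip(q, info.min, info.max).astype(dtype)
```

With a real model, input_detail would come from tf.lite.Interpreter(model_path='quant_model.tflite') followed by allocate_tensors() and interpreter.get_input_details()[0].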
