Executing tflite model in iMX95 NPU


PoojaSk
Contributor II

Hi,

The i.MX95 board is flashed with the latest Linux BSP, L6.12.3-1.0.0_MX95. I tried executing a UNet model using benchmark_model after converting the int8 TFLite model to TFLite for Neutron with the model tool from the eIQ Toolkit. The conversion was successful, but I ran the model on the NPU with the following command:

/usr/bin/tensorflow-lite-2.18.0/examples/benchmark_model --graph=/opt/unet_model_quantized_b1_converted.tflite --external_delegate_path=/usr/lib/libneutron_delegate.so

and got a segmentation fault with the following logs:

[screenshot: PoojaSk_0-1746533496396.png]

However, the Neutron Graph layer that is causing the issue was created during the conversion to the NPU-supported TFLite format, so how can I proceed?

Thanks,
Pooja


6 Replies

Chavira
NXP TechSupport

Hi @PoojaSk!

I understand your procedure, but can you try converting the PyTorch model directly to TFLite, and then converting that TFLite model to TFLite for Neutron using the eIQ Toolkit?


Chavira
NXP TechSupport

Hi @PoojaSk!

Thank you for contacting NXP Support!

Are you using a custom model?

Can you describe the steps that you have applied to convert your model?

Best Regards!

Chavira


PoojaSk
Contributor II

Hi @Chavira 

The model used is the UNet PyTorch model for segmentation (link). The steps used for conversion are as follows:

  • The UNet PyTorch model is loaded and converted to ONNX with the required batch size.
  • The ONNX model is then converted to TensorFlow (saved model) using "onnx2tf", e.g.: onnx2tf -i unet_b16.onnx -ois data:16,3,256,256 -oiqt -b16
  • The TensorFlow model is then converted to TFLite using a script based on TFLiteConverter:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# converter.input_shapes = {'input_1': input_shape}
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model_quant = converter.convert()
  • The int8 TFLite model is then converted to TFLite for Neutron (eiq-converter-neutron) without specifying any custom parameters, only the Neutron target imx95.
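[Editor's note] The TFLiteConverter step above can be made self-contained for reference. This is a minimal sketch only: the tiny concrete-function network and the 8x8x3 shapes are placeholders standing in for the real UNet saved model produced by onnx2tf.

```python
import numpy as np
import tensorflow as tf

# Stand-in for the real network (which comes from the onnx2tf saved model);
# a concrete function is used here so the sketch needs no model on disk.
kernel = tf.constant(np.random.rand(3, 3, 3, 4).astype(np.float32))

@tf.function(input_signature=[tf.TensorSpec([1, 8, 8, 3], tf.float32)])
def stand_in_model(x):
    return tf.nn.relu(tf.nn.conv2d(x, kernel, strides=1, padding="SAME"))

def representative_data_gen():
    # Calibration samples; real ones should match the input distribution.
    for _ in range(8):
        yield [np.random.rand(1, 8, 8, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [stand_in_model.get_concrete_function()])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

# Sanity-check the I/O types before handing the model to eiq-converter-neutron:
# float I/O here would mean the int8 settings did not take effect.
interp = tf.lite.Interpreter(model_content=tflite_model)
print(interp.get_input_details()[0]["dtype"])
```

Checking the input/output dtypes this way is a quick filter: a model whose I/O silently stayed float32 is a common cause of trouble further down the NPU toolchain.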

Thanks,
Pooja


PoojaSk
Contributor II

Hi, 

Also, please disregard the previous log; I think it occurred because the model had been converted to the NPU-supported format but the delegate was not specified. Below is the exact log, with the command I used for execution. The model was converted using the steps in my previous comment, and I am still getting a segmentation fault. Running with verbose=1 did not reveal the exact issue.

[screenshot: PoojaSk_0-1746601707741.png]


Thanks,
Pooja


Chavira
NXP TechSupport

Hi @PoojaSk!

Try converting the PyTorch model to TFLite directly, following Google's documentation.

After that, try converting the TFLite file to TFLite for Neutron using the eIQ tools.

Best Regards!

Chavira


PoojaSk
Contributor II

Hi @Chavira 

I tried converting the PyTorch model of UNet using that documentation. However, after converting to TFLite for Neutron using the eIQ tools, there is no difference in the layers; the output model is identical to the input model. I was able to execute the converted TFLite model on the NPU, but per the documentation it is the Neutron Graph node that executes on the NPU, while the remaining operations fall back to the CPU. Since the model converted with these steps lacks a Neutron Graph node, how can I confirm that it actually uses the NPU? I have shared the benchmark and profiling logs for your reference.

[screenshot: PoojaSk_0-1747120831156.png] [screenshot: PoojaSk_1-1747120876389.png]
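[Editor's note] One way to check whether the Neutron node is present is to list the ops in the final .tflite file. A sketch using TensorFlow's model analyzer; the tiny stand-in model here is a placeholder, and the exact custom-op name emitted by the Neutron converter (e.g. "NeutronGraph") is an assumption to verify against your own model dump:

```python
import contextlib
import io
import numpy as np
import tensorflow as tf

def list_ops(model_content):
    """Capture the op dump that tf.lite.experimental.Analyzer prints."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        tf.lite.experimental.Analyzer.analyze(model_content=model_content)
    return buf.getvalue()

# Demo on a tiny stand-in model; for the real check, read the bytes of the
# eiq-converter-neutron output file instead.
kernel = tf.constant(np.random.rand(3, 3, 3, 2).astype(np.float32))

@tf.function(input_signature=[tf.TensorSpec([1, 8, 8, 3], tf.float32)])
def stand_in_model(x):
    return tf.nn.conv2d(x, kernel, strides=1, padding="SAME")

tflite_bytes = tf.lite.TFLiteConverter.from_concrete_functions(
    [stand_in_model.get_concrete_function()]).convert()

report = list_ops(tflite_bytes)
print("CONV_2D" in report)       # the stand-in op shows up in the dump
print("NeutronGraph" in report)  # would be True only after Neutron conversion
```

If no Neutron custom op appears in the dump of the converted model, everything is being scheduled on the CPU regardless of which delegate is loaded.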

Thanks,
Pooja
