Evaluation environment for TFLite C array on PC with TensorFlow Lite for Microcontrollers?


279 Views
nnxxpp
Contributor III

I have a .tflite model and I converted it to a TFLite C array to run it with TensorFlow Lite for Microcontrollers (TFLM cannot load a file from disk). I need to verify that the performance of the .tflite model and the TFLite C array is the same. How can I evaluate the TFLite C array on a PC with TFLM? I cannot evaluate on the board; it takes too much time. I looked at the TFLM GitHub repo https://github.com/tensorflow/tflite-micro/tree/main/tensorflow/lite/micro/examples/person_detection but there is no guide for this evaluation (only some test cases).
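For reference, the C array produced by the usual `xxd -i` step contains the exact bytes of the `.tflite` flatbuffer, so the model TFLM parses is bit-identical to the file evaluated on the PC. A minimal Python sketch of that conversion (the `g_model` variable name is just the convention used in the TFLM examples):

```python
# Minimal sketch of the xxd-style .tflite -> C array conversion.
# The generated array holds the raw bytes of the .tflite flatbuffer,
# so the model TFLM interprets is bit-identical to the original file.

def tflite_to_c_array(model_bytes: bytes, var_name: str = "g_model") -> str:
    """Render raw .tflite bytes as a C source snippet (like `xxd -i`)."""
    lines = []
    for i in range(0, len(model_bytes), 12):
        chunk = model_bytes[i:i + 12]
        lines.append("  " + ", ".join(f"0x{b:02x}" for b in chunk) + ",")
    body = "\n".join(lines)
    return (
        f"alignas(16) const unsigned char {var_name}[] = {{\n{body}\n}};\n"
        f"const unsigned int {var_name}_len = {len(model_bytes)};\n"
    )

# Example with a few dummy bytes (a real model would be the whole file):
snippet = tflite_to_c_array(b"TFL3", "g_model")
```

Since the bytes are identical, any accuracy difference between the PC run and the TFLM run would come from the kernels, not from the conversion itself.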

Could you please support me? Thanks a lot. 

0 Kudos
9 Replies

236 Views
ZhangJennie
NXP TechSupport
NXP TechSupport

What's your chip part number?

0 Kudos

233 Views
nnxxpp
Contributor III

@ZhangJennie 

I used the MIMXRT1060-EVKB, but I think a specific board is not needed. Here I am concerned about the performance of the .tflite C array model with TFLM.

I can evaluate the .tflite model by using a Python script. TFLM cannot load the .tflite model, so we need to convert it to a TFLite C array. But I am not sure whether the performance of the .tflite C array model with TFLM is the same as the performance of the .tflite model with Python.
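One way to check is to feed the same input through both runtimes and compare the raw output tensors; for a fully int8 model run through the same reference kernels the outputs should be bit-exact. A hedged numpy sketch of such a comparison (the `outputs_match` helper name is my own):

```python
import numpy as np

def outputs_match(ref, test, atol: int = 0) -> bool:
    """Compare reference outputs (.tflite via the Python interpreter)
    against outputs captured from a TFLM run.

    For a fully int8 model executed with the same kernels the results
    should be bit-exact (atol=0); allow a small tolerance only if the
    two runtimes use different optimized kernels.
    """
    ref = np.asarray(ref)
    test = np.asarray(test)
    if ref.shape != test.shape:
        return False
    diff = np.abs(ref.astype(np.int32) - test.astype(np.int32))
    return bool(np.max(diff) <= atol)

# e.g. int8 logits captured from both runtimes:
a = np.array([12, -3, 127], dtype=np.int8)
b = np.array([12, -3, 127], dtype=np.int8)
```

The int32 cast avoids int8 overflow when taking the difference of extreme values.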

Thank you.

0 Kudos

205 Views
Alex_Wang
NXP Employee
NXP Employee

Hi, @nnxxpp 

For the problem you mentioned: if the input and output type of the model is int8 when you deploy it on the Python platform, and the input and output type of the model converted to a C array is also int8 when you deploy it on the microcontroller through the TFLM framework, there will be no meaningful difference in results between them.

To sum up, there is no problem as long as you keep the same type throughout the model quantization process; at present, the NPU in our TFLM only supports int8.
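For context, TFLite's int8 tensors use affine quantization, real = scale * (q - zero_point), with the scale and zero point stored in the tensor's quantization parameters (visible via `interpreter.get_input_details()` on the Python side). A small numpy sketch (helper names are illustrative):

```python
import numpy as np

# TFLite affine int8 quantization: real = scale * (q - zero_point).

def quantize_int8(x, scale: float, zero_point: int) -> np.ndarray:
    """Map float values to int8 using the tensor's quantization params."""
    q = np.round(np.asarray(x) / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize_int8(q, scale: float, zero_point: int) -> np.ndarray:
    """Map int8 values back to approximate float values."""
    return scale * (np.asarray(q).astype(np.float32) - zero_point)

scale, zp = 1.0 / 128.0, 0
x = np.array([0.0, 0.5, -1.0], dtype=np.float32)
q = quantize_int8(x, scale, zp)
```

As long as both runtimes use the same scale and zero point (they do, since both read them from the same flatbuffer), the int8 values they exchange are directly comparable.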

Hope this helps you.

Best regards, Alex

0 Kudos

185 Views
nnxxpp
Contributor III

@Alex_Wang 

I read the README.md in the "doc" folder of tflm_cifar10 in the SDK for the MIMXRT1060-EVKB again. It says: "The configuration of the model was modified to match the neural network structure in the CMSIS-NN CIFAR-10 example." I attach the readme file here.

As I understand it, CMSIS-NN supports int8 only. Is that right? It means that if the input and output of the model are float32, it does not work. Please confirm.

Moreover, NXP only shared the TFLite C array of the model. Could you share with me the .tflite CIFAR-10 model (before converting to the TFLite C array) that was trained for the example in the SDK? I want to check it with Neutron to see more details about the data types of the input, output, weights, and biases. I am using TensorFlow 2; after full quantization the bias is still int32 (not int8) and the result seems incorrect.
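For reference on the int32 bias: that is expected under the TFLite full-integer quantization scheme, which keeps biases in int32 with bias_scale = input_scale * weight_scale and zero point 0, so an int32 bias by itself does not indicate a broken conversion. A small numpy sketch of that rule (helper name is illustrative):

```python
import numpy as np

# TFLite full-integer quantization keeps biases in int32:
# bias_scale = input_scale * weight_scale, zero_point = 0.
# The int32 range is needed because biases are added to int32 accumulators.

def quantize_bias(bias, input_scale: float, weight_scale: float) -> np.ndarray:
    bias_scale = input_scale * weight_scale
    return np.round(np.asarray(bias) / bias_scale).astype(np.int32)

b = quantize_bias(np.array([0.25, -0.5]),
                  input_scale=0.02, weight_scale=0.005)
```

So if results look wrong, the cause is more likely elsewhere (e.g. missing input quantization at the application boundary) than the int32 bias type.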

Thank you. 

0 Kudos

180 Views
Alex_Wang
NXP Employee
NXP Employee

Hi, @nnxxpp 

Yes, your understanding is correct. CMSIS-NN is a software library for running deep learning models on microcontrollers, which primarily supports INT8-type data. If your model inputs and outputs are of type float32, you may need to convert them to type int8 to run on CMSIS-NN.

The SDK does not include the model from before the conversion; currently only the converted data ships with the SDK.

Hope this helps you.

Best regards, Alex

0 Kudos

163 Views
nnxxpp
Contributor III

@Alex_Wang 

Sure. Thank you so much. 

I have one more question. I downloaded SDK 2.15 for the MIMXRT1060-EVKB, and there is an eIQ example, tflm_cifar10. This project includes TFLM, and I want to know the exact version of TFLM (TensorFlow Lite for Microcontrollers) that NXP uses. Here is the TFLM repo: https://github.com/tensorflow/tflite-micro.

Please share with me the version or commit SHA of TFLM used for the tflm_cifar10 project in MCUXpresso SDK 2.15. I want to use this version on a PC to evaluate the performance of the .tflite model with TFLM. Thank you.

Moreover, if I want to use a specific version of TFLM in the tflm_cifar10 project for the MIMXRT1060-EVKB, do you have any suggestions for me?

0 Kudos

134 Views
Alex_Wang
NXP Employee
NXP Employee

Hi, @nnxxpp 

Currently, SDK 2.15 has updated TensorFlow Lite for Microcontrollers to version 23-09-18. You can find it in the readme.txt file at C:\Users\nxgxxxxxx\mcuxpresso\02\SDKPackages\SDK_2_15_000_EVK-MIMXRT1060\middleware\eiq\tensorflow-lite.

(screenshot of the readme.txt attached)

Best regards, Alex

132 Views
nnxxpp
Contributor III

@Alex_Wang 

Sure, thanks a lot.

203 Views
nnxxpp
Contributor III

@Alex_Wang 

If the input and output are float32, what happens?

You said: "To sum up, it is no problem to maintain the same type in the process of model quantization, and at present, the NPU in our TFLM only supports int8." As I understand it, the MIMXRT1060-EVKB does not have an NPU (the NPU is on i.MX 8 and needs full int8 quantization); here is the reference: eIQ® ML Software Development Environment | NXP Semiconductors. It means that for the MIMXRT1060-EVKB, we can quantize the model to int8 but keep the input and output as float32. Is that right?
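For context, this distinction maps directly onto TF2 converter settings: with optimizations and a representative dataset the internals are quantized to int8, while the input and output stay float32 unless `inference_input_type` / `inference_output_type` are set explicitly. A hedged configuration sketch (`saved_model_dir` and `representative_data_gen` are placeholders for your own model path and calibration generator):

```python
import tensorflow as tf

# Full-integer quantization with float32 I/O (the converter's default):
# quantize/dequantize ops are inserted at the graph boundary, so the
# internals run in int8 while the application still feeds and reads
# float32 tensors.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen  # placeholder generator

# Leave the two lines below commented out to keep float32 I/O; enable
# them for pure int8 I/O (needed for targets that expect int8 end to
# end, such as CMSIS-NN-optimized paths or NPUs):
# converter.inference_input_type = tf.int8
# converter.inference_output_type = tf.int8

tflite_model = converter.convert()
```

With float32 I/O, the boundary quantize/dequantize ops run as ordinary CPU kernels, so this variant works on a plain Cortex-M part without an NPU.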

0 Kudos