How to use the GPU to accelerate TensorFlow Lite model computing on i.MX 8M Nano



刘国华
Contributor III

Dear All,

Right now we are trying to enable GPU acceleration of TFLite computing on the i.MX 8M Nano board, but the performance is not as expected. I have summarized the TFLite performance on the i.MX 8M Nano board below.

Test results are based on running the label_image sample code.

 

For the FW built by Arrow (L5.4.3-1.0.0, tflite ver = 1.13.2), the sample program gave the same result whether or not GPU acceleration was enabled (~80 ms). As you mentioned, that tflite build was not compiled with GPU acceleration, so this result is expected.

 

For the latest stock FW from NXP (L5.4.47-2.2.0, tflite ver = 2.2.0), performance worsened when NNAPI (GPU acceleration) was enabled (48 ms vs. 400 ms).

 

In summary, in CPU mode tflite ran faster on ver 2.2 than on ver 1.13, while the GPU gave a negative performance gain.

 

The attached text file is the detailed log. Could you help to give some comments? Thanks.

 

I just found some information about accelerating TensorFlow Lite models; maybe you can test it a second time:

The first iteration of model inference using the NN API always takes many times longer, because of the model graph initialization needed by the GPU module. The iterations following graph initialization are performed many times faster.


But I have already excluded this factor. In fact, this initialization takes around 4 seconds; please refer to case 3 in my log. That test case uses Python sample code which reports the warm-up time and the inference time separately.
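As a rough illustration of the measurement pattern described above (timing the first warm-up call separately from the steady-state average), here is a minimal sketch. A stub stands in for interpreter.invoke(), since the exact Python sample differs per BSP; in a real run you would pass the invoke method of a tflite interpreter instead.

```python
import time

def time_inference(invoke, iterations=10):
    """Time the first (warm-up) call separately from the steady-state average."""
    start = time.monotonic()
    invoke()                      # first call includes graph initialization
    warmup_ms = (time.monotonic() - start) * 1000.0

    start = time.monotonic()
    for _ in range(iterations):   # subsequent calls reflect real inference cost
        invoke()
    avg_ms = (time.monotonic() - start) * 1000.0 / iterations

    return warmup_ms, avg_ms

# Stub standing in for interpreter.invoke() so the sketch runs anywhere.
def fake_invoke():
    time.sleep(0.001)

warmup, avg = time_inference(fake_invoke)
print(f"warm-up: {warmup:.1f} ms, average: {avg:.1f} ms")
```

If the NNAPI average stays high even after excluding the warm-up call, as in case 3 of the log, the slowdown is not explained by graph initialization.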

 

2 Replies

刘国华
Contributor III

Dear Zhiming,

Thanks for your reply. We followed the user guide you supplied, but the result is still not good.

We use the new BSP 5.4.47 with the i.MX 8M Nano DDR4 EVK.

root@imx8mnevk:/usr/bin/tensorflow-lite-2.2.0/examples# ./label_image -m mobilenet_v1_1.0_224_quant.tflite -i grace_hopper.bmp -l labels.txt -a 0
Loaded model mobilenet_v1_1.0_224_quant.tflite
resolved reporter
invoked
average time: 48.01 ms
0.780392: 653 military uniform
0.105882: 907 Windsor tie
0.0156863: 458 bow tie
0.0117647: 466 bulletproof vest
0.00784314: 835 suit
 

root@imx8mnevk:/usr/bin/tensorflow-lite-2.2.0/examples# ./label_image -m mobilenet_v1_1.0_224_quant.tflite -i grace_hopper.bmp -l labels.txt -a 1
Loaded model mobilenet_v1_1.0_224_quant.tflite
resolved reporter
INFO: Created TensorFlow Lite delegate for NNAPI.
Applied NNAPI delegate.
W [query_hardware_caps:66]Unsupported evis version
invoked
average time: 441.539 ms
0.784314: 653 military uniform
0.105882: 907 Windsor tie
0.0156863: 458 bow tie
0.00784314: 466 bulletproof vest
0.00392157: 835 suit

When I enable GPU acceleration, we get the error message: W [query_hardware_caps:66]Unsupported evis version.
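For anyone comparing several of these runs, a small helper (purely illustrative, not part of the BSP or of label_image) can pull the average times out of the logs for a side-by-side comparison:

```python
import re

def parse_average_ms(log_text):
    """Extract the 'average time: X ms' value printed by label_image."""
    match = re.search(r"average time:\s*([\d.]+)\s*ms", log_text)
    return float(match.group(1)) if match else None

# Figures taken from the two runs above.
cpu_log = "invoked\naverage time: 48.01 ms"
nnapi_log = "invoked\naverage time: 441.539 ms"

cpu_ms = parse_average_ms(cpu_log)
nnapi_ms = parse_average_ms(nnapi_log)
print(f"NNAPI run is {nnapi_ms / cpu_ms:.1f}x slower than the CPU run here")
```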


 


Zhiming_Liu
NXP TechSupport

Hi,

 

The attached file explains how it works.

 

 

BR

Zhiming
