How to use an object detection model trained by eIQ on an i.MX 8M Plus board or a PC?


1,068 views
taotaowang
Contributor I

Hello,

I trained a model according to "eIQ_Toolkit_UG.pdf", following the routine below.

import my dataset -> select model -> detection -> balanced -> npu -> trainer -> validate -> deploy -> export model -> tensorflow lite model saved.

Below is the validation result.

[Image: validation result (taotaowang_0-1696902574578.png)]

Below is the export model summary.

[Image: export model summary (taotaowang_1-1696902783100.png)]

The model properties are shown below.

[Image: model properties (taotaowang_2-1696902857616.png)]

As the properties above show, the model's output name is StatefulPartitionedCall and its type is float32[1,2034,7]. But where can I find the definition of this output data? How can I use this model in my inference program?
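
As an aside, the output name, shape, and dtype can also be read programmatically from the exported file; here is a minimal sketch, where "model.tflite" is a placeholder for the actual exported filename:

import tflite_runtime.interpreter as tflite  # on a PC, tf.lite.Interpreter works as well

interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_output_details():
    # Prints e.g. StatefulPartitionedCall [1 2034 7] <class 'numpy.float32'>
    print(detail['name'], detail['shape'], detail['dtype'])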

In my inference program, I used the code below to get the inference result, but it does not seem correct.

 

output_details = interpreter.get_output_details()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
# Assumed per-detection layout: [score, class_id, x_min, y_min, x_max, y_max, other]
for i in range(output_data.shape[1]):
    score = output_data[0][i][0]
    class_id = output_data[0][i][1]
    x_min = output_data[0][i][2]
    y_min = output_data[0][i][3]
    x_max = output_data[0][i][4]
    y_max = output_data[0][i][5]
    other = output_data[0][i][6]
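
For completeness, the steps I run before reading the output are roughly as follows; the preprocessing (float32 input, no normalization) is only a sketch and depends on the training settings:

import numpy as np

# Assumed preprocessing: a float32 image resized to the model's input shape.
input_details = interpreter.get_input_details()
_, height, width, _ = input_details[0]['shape']
frame = np.zeros((height, width, 3), dtype=np.float32)  # placeholder for a real image

interpreter.set_tensor(input_details[0]['index'], frame[np.newaxis, ...])
interpreter.invoke()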
 
Please give me some advice on how to implement inference for the eIQ detection model.
 
Thanks !
Taotao Wang

 

 

2 Replies

115 views
jake4
Contributor I

Hi @Zhiming_Liu 

I couldn't find this information in the eIQ Documentation.

We use the eIQ tool to train an object detection model with MobileNet SSD v3 for the i.MX 8M Plus NPU.

In the example given in eIQ_Toolkit_v1.16.0\workspace\models\mobilenet_ssd_v3\mobilenet_ssd_v3.ipynb, inference is done using RTView and TensorFlow, which we do not want to include on our i.MX 8M Plus. We would like to use TensorFlow Lite for inference.

We get an output tensor (since we have 1 class) of:

name: StatefulPartitionedCall:0
tensor: float32[1,2034,6]
location: 392
 

We split this into scores and bounding boxes with shapes (1, 2034, 2) and (1, 2034, 4), and then follow the output treatment as in mobilenet_ssd_v3.ipynb, but we do not get the bounding boxes and scores we expect.

So we are wondering: what is the output format signature?

Our guess (from https://community.nxp.com/t5/eIQ-Machine-Learning-Software/How-to-interpret-the-output-from-a-mobile...) is the following:

The model predicts 2034 detections per class. The [1,2034,4] tensor corresponds to the box locations, in pixels, as [top, left, bottom, right] for each detected object, and the [1,2034,2] tensor corresponds to the scores for our class and the background.
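
If that guess is right, the post-processing would look roughly like this numpy-only sketch; both the ordering of the two blocks (scores before boxes) and the box format are assumptions on our side:

import numpy as np

output = interpreter.get_tensor(output_details[0]['index'])  # shape (1, 2034, 6)
scores = output[..., :2]  # assumed: [background, class] score per anchor
boxes = output[..., 2:]   # assumed: [top, left, bottom, right] in pixels per anchor

class_scores = scores[0, :, 1]   # score of our single class
keep = class_scores > 0.5       # naive confidence threshold
print(boxes[0][keep], class_scores[keep])

If the boxes turn out to be raw SSD offsets rather than pixel coordinates, they would additionally need to be decoded against the model's anchor boxes before thresholding.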

 

We could not get correct, meaningful output from the model trained by the eIQ tool, which seems to follow its own way of combining the bounding-box outputs, one that only the RTView engine can interpret. Could we have a detailed explanation of the output format, and an example that works without the RTView or TensorFlow libraries?
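
In the meantime, the non-maximum suppression step can at least be done in plain numpy; here is a minimal sketch of the standard greedy algorithm (a generic implementation, not taken from eIQ):

import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    # boxes: (N, 4) as [top, left, bottom, right]; scores: (N,)
    order = np.argsort(scores)[::-1]  # indices by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Overlap of the current best box with the remaining candidates
        top = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        left = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        bottom = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        right = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(bottom - top, 0, None) * np.clip(right - left, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_rest = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_rest - inter + 1e-9)
        order = order[1:][iou < iou_thresh]  # drop boxes that overlap too much
    return keep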

Thanks.


1,011 views
Zhiming_Liu
NXP TechSupport

Please refer to the eIQ documentation via Help --> eIQ Documentation.
