How to use an object detection model trained by eIQ on an i.MX8MP board or PC?

taotaowang
Contributor I

Hello,

I trained a model according to "eIQ_Toolkit_UG.pdf", following the routine below.

import my dataset -> select model -> detection -> balanced -> NPU -> trainer -> validate -> deploy -> export model -> TensorFlow Lite model saved.

Below is the validation result.

[Image: taotaowang_0-1696902574578.png]

Below is the exported model summary.

[Image: taotaowang_1-1696902783100.png]

The model properties are shown below.

[Image: taotaowang_2-1696902857616.png]

As the properties above show, the model's output name is StatefulPartitionedCall and its type is float32[1,2034,7]. But where can I find the definition of the output data? How can I use this model in my inference program?

In my inference program I used the code below to get the inference result, but it does not seem correct.

 

# Setup paraphrased from my program: the standard TensorFlow Lite Python API.
import tflite_runtime.interpreter as tflite  # or tf.lite.Interpreter on a PC

interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
output_details = interpreter.get_output_details()
# ... set the input tensor from a preprocessed frame and call interpreter.invoke() ...

output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
# My guess at the layout of the 7 values per detection; this is what I need confirmed:
for i in range(output_data.shape[1]):
    score    = output_data[0][i][0]
    class_id = output_data[0][i][1]
    x_min    = output_data[0][i][2]
    y_min    = output_data[0][i][3]
    x_max    = output_data[0][i][4]
    y_max    = output_data[0][i][5]
    other    = output_data[0][i][6]
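
Once the meaning of the seven values is confirmed, my plan is to filter detections like this (the 0.5 threshold is an arbitrary value I picked, and the column order is still my guess above):

detections = output_data[0]             # shape (2034, 7)
keep = detections[:, 0] > 0.5           # column 0 = score, per my guess
for score, class_id, x_min, y_min, x_max, y_max, other in detections[keep]:
    print(f"class {int(class_id)}  score {score:.2f}  box ({x_min}, {y_min}, {x_max}, {y_max})")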
 
Please give me some advice on implementing inference for the eIQ detection model.
 
Thanks!
Taotao Wang

 

 

2 Replies

jake4
Contributor I

Hi @Zhiming_Liu 

I couldn't find this information in the eIQ Documentation.

We use the eIQ tool to train an object detection model with MobileNet SSD v3 for the i.MX8MP NPU.

In the example given in eIQ_Toolkit_v1.16.0\workspace\models\mobilenet_ssd_v3\mobilenet_ssd_v3.ipynb, inference is done using RTview and TensorFlow, which we don't want to include on our i.MX8MP. We would like to use TensorFlow Lite inference instead.
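
For reference, here is the TensorFlow-Lite-only skeleton we run instead (a sketch: the model filename is ours, the input is a random placeholder, and the real preprocessing is elided):

# TensorFlow Lite inference only; no RTview and no full TensorFlow.
import numpy as np
import tflite_runtime.interpreter as tflite  # tf.lite.Interpreter works the same on a PC

interpreter = tflite.Interpreter(model_path="mobilenet_ssd_v3.tflite")  # our exported model
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Placeholder input: a real program would resize a camera frame to the
# model's input size and convert it to float32.
h, w = input_details[0]['shape'][1:3]
frame = np.random.rand(1, h, w, 3).astype(np.float32)

interpreter.set_tensor(input_details[0]['index'], frame)
interpreter.invoke()
raw = interpreter.get_tensor(output_details[0]['index'])
print(raw.shape)  # float32[1, 2034, 6] in our case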

We get one output tensor (since I have 1 class):

name: StatefulPartitionedCall:0
tensor: float32[1,2034,6]
location: 392
 

We split it into scores and bounding boxes, of shapes (1, 2034, 2) and (1, 2034, 4) respectively.

After following the output treatment from mobilenet_ssd_v3.ipynb, we don't get the bounding boxes and scores we expect.

So we are wondering: what is the output format signature?

We could guess (from https://community.nxp.com/t5/eIQ-Machine-Learning-Software/How-to-interpret-the-output-from-a-mobile...):

The model predicts 2034 detections per class. The [1,2034,4] tensor corresponds to the box locations in terms of pixels [top, left, bottom, right] of the objects detected.

And the [1,2034,2] tensor corresponds to the scores of our class and the background.
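
Under that guess, the decoding we attempt looks roughly like this (NumPy only, no RTview or TensorFlow; the split order, the softmax over the two score columns, and both thresholds are our assumptions, which is exactly the part we need confirmed):

import numpy as np

def decode(raw, score_thresh=0.5, iou_thresh=0.5):
    # raw: float32[1, 2034, 6]; guessed layout: 2 class scores, then 4 box values.
    logits = raw[0, :, :2]   # [background, our class] (guessed order)
    boxes = raw[0, :, 2:]    # [top, left, bottom, right] in pixels (guessed)
    # Softmax over the two score columns, keep the probability of our class.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    scores = (e / e.sum(axis=1, keepdims=True))[:, 1]
    keep = scores > score_thresh
    boxes, scores = boxes[keep], scores[keep]
    # Plain NumPy non-maximum suppression.
    order = scores.argsort()[::-1]
    picked = []
    while order.size:
        i = order[0]
        picked.append(i)
        rest = order[1:]
        t = np.maximum(boxes[i, 0], boxes[rest, 0])
        l = np.maximum(boxes[i, 1], boxes[rest, 1])
        b = np.minimum(boxes[i, 2], boxes[rest, 2])
        r = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(b - t, 0, None) * np.clip(r - l, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[iou < iou_thresh]
    return boxes[picked], scores[picked]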

 

We could not get correct, meaningful output from the model trained by the eIQ tool. It seems to combine the output bounding boxes in a unique way that only the RTview engine can interpret. Can we have a detailed explanation of the output format, and an example that works without the RTview or TensorFlow libraries?

Thanks.


Zhiming_Liu
NXP TechSupport

Please refer to the eIQ documentation via HELP --> eIQ Documentation.
