about custom model inference using nndetection in imx8


sunghyun
Contributor I

I'm testing ssd_mobilenet_v2 detection on an i.MX8 EVM board.

First, I ran 'nndetection.py' with the TFLite file (mobilenet_ssd_v2_coco_quant_postprocess.tflite) that ships in the firmware image and confirmed that inference works correctly.
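For reference, this is roughly how I verify a model outside of nndetection.py. It is only a sanity-check sketch, not the nndetection.py source; the delegate path /usr/lib/libvx_delegate.so is an assumption that may differ between BSP releases (drop experimental_delegates to run on the CPU).

```python
# Sanity-check sketch: load the bundled model with tflite_runtime and run one inference.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="mobilenet_ssd_v2_coco_quant_postprocess.tflite",
    experimental_delegates=[load_delegate("/usr/lib/libvx_delegate.so")],  # assumed NPU delegate path
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
dummy = np.random.randint(0, 255, size=inp["shape"], dtype=np.uint8)  # quantized [1,300,300,3] input
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

# The COCO postprocess model exposes four outputs: boxes, classes, scores, count.
for out in interpreter.get_output_details():
    print(out["name"], out["shape"], out["dtype"])
```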

To train a custom model I used the eIQ software, but the models produced by eIQ did not run inference properly (IPS and FPS are displayed, but no detections occur).

20230725_111214.jpg
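This is the kind of check I would do to compare the two models' input/output signatures: if the eIQ export is missing the four TFLite_Detection_PostProcess outputs (boxes/classes/scores/count), or uses float tensors where the working model uses uint8, nndetection.py could still report FPS while never drawing a detection. The eIQ output filename below is just a placeholder.

```python
# Dump input/output tensor details of both models for comparison.
from tflite_runtime.interpreter import Interpreter

def dump(model_path):
    it = Interpreter(model_path=model_path)
    it.allocate_tensors()
    print("==", model_path)
    for d in it.get_input_details():
        print("  in :", d["name"], d["shape"], d["dtype"], d["quantization"])
    for d in it.get_output_details():
        print("  out:", d["name"], d["shape"], d["dtype"], d["quantization"])

dump("mobilenet_ssd_v2_coco_quant_postprocess.tflite")  # known-good model from the firmware
dump("my_eiq_export.tflite")                            # placeholder name for the eIQ result
```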

While looking for a model equivalent to the existing one, I found that, among the models published for Google Coral, only the TensorFlow 1 SSD model worked with the nndetection.py code.

So I downloaded the frozen graph (ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03) from the TensorFlow 1 detection model zoo on GitHub and converted it to TFLite myself (without using eIQ).
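For the conversion I followed roughly the TF1 Object Detection API route. The sketch below shows what I mean, assuming the checkpoint is first exported with object_detection/export_tflite_ssd_graph.py (with the post-processing op added) and then converted with the TF 1.x converter; values such as the (128, 128) input stats follow the public Coral/TF1 examples and may need adjusting for a custom pipeline.

```python
# Sketch: convert the exported tflite_graph.pb with the TF 1.x (TOCO) converter,
# assuming export_tflite_ssd_graph.py was run with --add_postprocessing_op=true.
import tensorflow as tf  # TF 1.x

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="tflite_graph.pb",
    input_arrays=["normalized_input_image_tensor"],
    output_arrays=[
        "TFLite_Detection_PostProcess",
        "TFLite_Detection_PostProcess:1",
        "TFLite_Detection_PostProcess:2",
        "TFLite_Detection_PostProcess:3",
    ],
    input_shapes={"normalized_input_image_tensor": [1, 300, 300, 3]},
)
converter.allow_custom_ops = True                     # the postprocess op is a custom op
converter.inference_type = tf.uint8                   # fully quantized model
converter.quantized_input_stats = {
    "normalized_input_image_tensor": (128.0, 128.0)   # (mean, std) used in the public examples
}
converter.change_concat_input_ranges = False

with open("detect.tflite", "wb") as f:
    f.write(converter.convert())
```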

The structure and weight values of the custom model are the same as those of the existing model, but inference still did not work.

ssd_mobilenet_v2_coco_quant_postprocess.png

 <existing model - inference working>

ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.png

 

ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03_2.png

<custom model - inference not working>

The difference I found between the model graphs is the converter that produced them: TOCO vs. MLIR.
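If that really is the cause, I believe the relevant switch (on TF releases that still ship both converters, roughly TF 1.15 through the 2.x line) is experimental_new_converter; setting it to False falls back to the legacy TOCO path, which appears to be what produced the bundled model. A reduced sketch, assuming the same frozen-graph setup as above (the quantization settings from the earlier sketch would still apply):

```python
import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="tflite_graph.pb",
    input_arrays=["normalized_input_image_tensor"],
    output_arrays=["TFLite_Detection_PostProcess",
                   "TFLite_Detection_PostProcess:1",
                   "TFLite_Detection_PostProcess:2",
                   "TFLite_Detection_PostProcess:3"],
    input_shapes={"normalized_input_image_tensor": [1, 300, 300, 3]},
)
converter.allow_custom_ops = True
converter.experimental_new_converter = False  # False = legacy TOCO, True = MLIR (default since TF 2.2)

with open("detect_toco.tflite", "wb") as f:
    f.write(converter.convert())
```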

I would like to know what procedure should be followed to train a model that runs properly on the i.MX8.

 
