I'm testing ssd_mobilenet_v2 object detection on an i.MX8 EVM board.
First, I ran 'nndetection.py' with the TFLite file (mobilenet_ssd_v2_coco_quant_postprocess.tflite) that ships inside the firmware and confirmed that it runs inference correctly.
To train a custom model, I used the eIQ software, but the models produced by eIQ did not run inference (IPS and FPS are displayed, but no detections occur).
While looking for a model equivalent to the working one, I found that, among the models published for Google Coral, only the TensorFlow 1 SSD model worked with the nndetection.py code.
So I downloaded the frozen graph (ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03) from the TF1 model zoo on GitHub and converted it to TFLite myself (without using eIQ).
The structure and weight values of the custom model are the same as those of the existing model, but inference still does not work.
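For context, the conversion I attempted follows the standard TF1 Object Detection API recipe, roughly as below (paths and file names here are illustrative placeholders, not my exact ones):

```shell
# Export a TFLite-compatible frozen graph with the TF1 Object Detection API,
# then convert it with tflite_convert (which uses the legacy TOCO converter in TF 1.x).
python object_detection/export_tflite_ssd_graph.py \
  --pipeline_config_path=pipeline.config \
  --trained_checkpoint_prefix=model.ckpt \
  --output_directory=tflite_export \
  --add_postprocessing_op=true

tflite_convert \
  --graph_def_file=tflite_export/tflite_graph.pb \
  --output_file=detect.tflite \
  --input_shapes=1,300,300,3 \
  --input_arrays=normalized_input_image_tensor \
  --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
  --inference_type=QUANTIZED_UINT8 \
  --mean_values=128 \
  --std_dev_values=128 \
  --allow_custom_ops
```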
<existing model - inference working>
<custom model - inference not working>
The only difference I found between the two model graphs is the converter used: TOCO vs. MLIR.
I would like to know what procedure should be followed to train a custom model that runs inference correctly on the i.MX8.