My platform: EVK RT1062.
Sample code: tensorflow_lite_micro_cifar10
I tried to replace the model file (model_data.h) with my own model:
(.h5 file ==> eIQ tool ==> .tflite ==> xxd ==> model_data.h)
Then I get the error messages below (tried both int8 and int32 quantization formats).
Would you have any idea about this bug?
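For reference, the xxd step in that pipeline just dumps the .tflite flatbuffer into a C array. A minimal sketch of what the generated header typically looks like (the symbol names here are placeholders; they must match whatever names the example's model code expects):

// model_data.h -- sketch of "xxd -i model.tflite" output;
// symbol names are placeholders, adjust to what model.cpp includes.
#ifndef MODEL_DATA_H
#define MODEL_DATA_H

const unsigned char model_data[] = {
    /* raw bytes of the .tflite flatbuffer, e.g. 0x1c, 0x00, ... */
};
const unsigned int model_data_len = sizeof(model_data);

#endif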
===================================================
// int8
CIFAR-10 example using a TensorFlow Lite Micro model.
Detection threshold: 60%
Model: cifarnet_quant_int8
Didn't find op for builtin opcode 'QUANTIZE' version '1'
Failed to get registration from op code ADD
Failed starting model allocation.
AllocateTensors() failed
Failed initializing model
===================================================
// int32
Detection threshold: 60%
Model: cifarnet_quant_int8
Didn't find op for builtin opcode 'RESHAPE' version '1'
Failed to get registration from op code ADD
Failed starting model allocation.
AllocateTensors() failed
Failed initializing model
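From the two logs, it looks like the op resolver compiled into the example does not register the QUANTIZE, RESHAPE, and ADD kernels my model needs. A sketch of what I believe the explicit registration would look like, assuming TFLM's MicroMutableOpResolver; the real op list depends on the model (inspect the .tflite, e.g. with Netron):

#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

// Inside the model init function; the template argument is the
// number of ops registered below.
static tflite::MicroMutableOpResolver<5> s_microOpResolver;
s_microOpResolver.AddQuantize();        // "Didn't find op ... 'QUANTIZE' version '1'"
s_microOpResolver.AddReshape();         // "Didn't find op ... 'RESHAPE' version '1'"
s_microOpResolver.AddAdd();             // "Failed to get registration from op code ADD"
s_microOpResolver.AddConv2D();          // typical CNN ops -- check your own model
s_microOpResolver.AddFullyConnected();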
Hello @crist_xu,
I have a very similar challenge (with the example from SDK 2.10.1 named evkmimxrt1170_tensorflow_lite_micro_label_image_cm7), and I made all the changes you suggested, such as:
tflite::AllOpsResolver micro_op_resolver;
// tflite::MicroOpResolver &micro_op_resolver = MODEL_GetOpsResolver(s_errorReporter);
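For context, this is roughly how the swap plugs into the interpreter on my side (a sketch only, assuming the SDK-2.10-era MicroInterpreter constructor that still takes an error reporter; the model and arena names are placeholders):

tflite::AllOpsResolver micro_op_resolver;  // registers every built-in op
static tflite::MicroInterpreter s_interpreter(
    s_model, micro_op_resolver, s_tensorArena, kTensorArenaSize, s_errorReporter);
if (s_interpreter.AllocateTensors() != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(s_errorReporter, "AllocateTensors() failed");
}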
However, there is one issue left, which @marcowang also mentioned in his first post:
Failed to get registration from op code ADD
Any thoughts? Thanks in advance!
Hi @MarcinChelminsk, I couldn't send you a private message; it says I have reached the maximum number of private messages.
Try placing
s_microOpResolver.AddAdd();
first; that's the only difference I can see compared to my code. Also try cleaning the project and building it again.
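In case it helps, a sketch of that ordering, assuming a resolver sized for your own op list (the other two ops here are just examples):

static tflite::MicroMutableOpResolver<3> s_microOpResolver;
s_microOpResolver.AddAdd();        // placed first, per the suggestion above
s_microOpResolver.AddConv2D();     // ...then the rest of your model's ops
s_microOpResolver.AddSoftmax();

(Registration order should not normally affect op lookup in TFLM, so the clean rebuild may be the part that actually matters.)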
@Ramson, thank you very much for your comment; unfortunately it does not work on my end, and I am still looking for a solution. I have also tried changing the resolver to AllOpsResolver (as mentioned in the troubleshooting section here: https://community.nxp.com/t5/eIQ-Machine-Learning-Software/eIQ-FAQ/ta-p/1099741), but no success so far.
Hi,
In order to optimize the code size, the newest SDK (SDK 2.10.0) usually registers only the ops the bundled model needs, so if you change the model, you need to re-register the ops used by your model.
So please check whether your code uses line 71 of model.cpp, which includes all the op codes in the project, rather than line 73:
And please also check whether there is a function named AddQuantize() inside all_ops_resolver.cpp:
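For reference, the AllOpsResolver constructor in TFLM is just a long list of Add*() calls, so what you are checking for in all_ops_resolver.cpp looks roughly like this (abbreviated sketch, not the exact file contents):

AllOpsResolver::AllOpsResolver() {
  // Register all built-in ops; a missing line here means the
  // corresponding opcode cannot be resolved at AllocateTensors() time.
  AddAdd();
  AddConv2D();
  AddQuantize();   // <-- the one to check for
  AddReshape();
  // ... many more ...
}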
Regards,
Crist
Hi @marcowang ,
Thanks for your interest in the NXP MIMXRT product; I would like to provide service for you.
You are using this SDK code:
SDK_2_10_0_EVK-MIMXRT1060\boards\evkmimxrt1060\eiq_examples\tensorflow_lite_micro_cifar10
And you replaced the model with your own.
Could you please give me more details about your model, so that I can try to reproduce your issue first?
You mentioned: (.h5 file ==> eIQ tool ==> .tflite ==> xxd ==> model_data.h)
Please provide the related files so I can reproduce the issue.
Did you follow any document to generate your model? I would like to follow your steps and check whether I can reproduce your issue on my side.
Best Regards,
kerry