My platform: EVK RT1062.
Sample code: tensorflow_lite_micro_cifar10
To reproduce: I tried to replace the model data file (model_data.h) with my own model:
(.h5 file ==> eIQ tool ==> .tflite ==> xxd ==> model_data.h)
Then I get the error messages below (once quantized with int8, once with int32).
Do you have any idea what causes this?
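For reference, the xxd step of the pipeline above looks like this (file names are placeholders; the dummy file is only there so the snippet runs end-to-end). One thing worth checking: `xxd -i` derives the C array name from the input file name, so the generated symbol may not match the array name the SDK example's model_data.h declares, and you would need to rename it.

```shell
# Dummy stand-in for the real converted model (use your actual .tflite file).
printf 'TFL3' > cifarnet_quant_int8.tflite

# Convert the flatbuffer into a C array; the symbol will be named
# cifarnet_quant_int8_tflite (dots become underscores).
xxd -i cifarnet_quant_int8.tflite > model_data.h

head -n 1 model_data.h
```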
===================================================
// int8
CIFAR-10 example using a TensorFlow Lite Micro model.
Detection threshold: 60%
Model: cifarnet_quant_int8
Didn't find op for builtin opcode 'QUANTIZE' version '1'
Failed to get registration from op code ADD
Failed starting model allocation.
AllocateTensors() failed
Failed initializing model
===================================================
// int32
Detection threshold: 60%
Model: cifarnet_quant_int8
Didn't find op for builtin opcode 'RESHAPE' version '1'
Failed to get registration from op code ADD
Failed starting model allocation.
AllocateTensors() failed
Failed initializing model
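Both logs ("Didn't find op for builtin opcode 'QUANTIZE'", "... 'RESHAPE'") usually mean the op resolver compiled into the example does not register every operator your converted model contains, so `AllocateTensors()` fails. A minimal sketch of the usual fix, assuming the TFLM `MicroMutableOpResolver` API is available in your SDK version; the op list below is illustrative only, so inspect your .tflite (e.g. in Netron) and register exactly the ops it uses:

```cpp
// Sketch only: needs the TensorFlow Lite Micro headers shipped with the SDK.
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

// Template argument = maximum number of registered ops; size it to your model.
tflite::MicroMutableOpResolver<8> resolver;

void RegisterOps() {
  // Ops a small quantized CNN typically needs (adjust to your model):
  resolver.AddConv2D();
  resolver.AddDepthwiseConv2D();
  resolver.AddFullyConnected();
  resolver.AddMaxPool2D();
  resolver.AddSoftmax();
  resolver.AddReshape();    // addresses the "opcode 'RESHAPE'" error
  resolver.AddQuantize();   // addresses the "opcode 'QUANTIZE'" error
  resolver.AddDequantize();
}
```

As a quick check, you can also temporarily swap in `tflite::AllOpsResolver` (from `all_ops_resolver.h`), which registers every built-in op at the cost of a much larger binary; if the model then loads, a missing op registration was indeed the cause.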