@Alex_Wang
I reread README.md in the "doc" folder of the tflm_cifar10 example in the SDK for the MIMXRT1060-EVKB. It says that "The configuration of the model was modified to match the neural network structure in the CMSIS-NN CIFAR-10 example." I have attached the readme file here.
As I understand it, CMSIS-NN supports only int8. Is that right? That would mean the example does not work if the model's input and output are float32. Please confirm.
Moreover, NXP only shared the model as a .tflite C array. Could you share the .tflite CIFAR-10 model (before conversion to a C array) that was trained for the SDK example? I want to inspect it with Neutron to see the data types of the input, output, weights, and biases. I am using TensorFlow 2, and after full integer quantization the biases are still int32 (not int8), and the results do not seem correct.
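For context on the int32 biases: per the TFLite 8-bit quantization spec, a "fully quantized" model keeps biases in int32 with bias_scale = input_scale * weight_scale and zero_point = 0, so int32 biases alone do not indicate a conversion problem. A minimal sketch of the affine quantization math (the scale values below are made-up illustrations, not taken from the SDK model):

```python
def quantize(real, scale, zero_point, qmin, qmax):
    """Affine quantization: q = round(real / scale) + zero_point, clamped to [qmin, qmax]."""
    q = int(round(real / scale)) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Inverse mapping back to a real value."""
    return scale * (q - zero_point)

# Assumed example scales (not from the actual model):
input_scale, weight_scale = 0.05, 0.02
# Per the TFLite spec, the bias scale is derived, and its zero point is 0:
bias_scale = input_scale * weight_scale

w_q = quantize(0.24, weight_scale, 0, -128, 127)          # int8 range -> 12
b_q = quantize(0.5, bias_scale, 0, -2**31, 2**31 - 1)     # int32 range -> 500

print(w_q, b_q)
print(dequantize(b_q, bias_scale, 0))                     # ~0.5 recovered
```

The wider int32 range for biases exists because the bias is added to the int32 accumulator of the int8 multiply-accumulate, so quantizing it to int8 would lose precision.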
Thank you.