Hello,
I am trying to run TensorFlow Lite image recognition on the i.MX RT1060 EVK board (with the i.MX RT1062).
My client provided me with a trained TensorFlow model that is 12 MB.
When I try to initialize inference with this model, the program just hangs after entering the AllocateTensors() function. Here is a snippet (taken from the tensorflow_lite_label_image example in the SDK):
void InferenceInit(std::unique_ptr<tflite::FlatBufferModel> &model,
                   std::unique_ptr<tflite::Interpreter> &interpreter,
                   TfLiteTensor** input_tensor, bool isVerbose)
{
    model = tflite::FlatBufferModel::BuildFromBuffer(my_model, my_model_len);
    if (!model)
    {
        LOG(FATAL) << "Failed to load model.\r\n";
        return;
    }

    tflite::ops::builtin::BuiltinOpResolver resolver;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    if (!interpreter)
    {
        LOG(FATAL) << "Failed to construct interpreter.\r\n";
        return;
    }

    int input = interpreter->inputs()[0];
    const std::vector<int> inputs = interpreter->inputs();
    const std::vector<int> outputs = interpreter->outputs();

    if (interpreter->AllocateTensors() != kTfLiteOk)
    {
        LOG(FATAL) << "Failed to allocate tensors!\r\n";
        return;
    }
    (...)
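
In case it helps with diagnosing this, below is a minimal sketch of how I could pass an explicit error reporter into the model so that whatever fails inside InterpreterBuilder or AllocateTensors() gets printed instead of hanging silently. I am assuming the SDK's TensorFlow Lite port ships stderr_reporter.h and that stderr output reaches the debug console; those are my assumptions, not something from the example.

#include "tensorflow/lite/stderr_reporter.h"

// Assumption: stderr is retargeted to the debug UART, so reporter output is visible.
static tflite::StderrReporter s_error_reporter;

// Same call as in the example, but with an explicit reporter as the third argument.
model = tflite::FlatBufferModel::BuildFromBuffer(my_model, my_model_len,
                                                 &s_error_reporter);

// The interpreter built from this model reuses the model's error reporter,
// so errors raised during AllocateTensors() should also be printed through it.
tflite::ops::builtin::BuiltinOpResolver resolver;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);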
If I pause the debugger after that, I see:

I am able to run the same project with a 0.5 MB MobileNet model with no problems.
Does anybody have any clues about what is going on?
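
In case it turns out to be the interpreter running out of heap (a 12 MB model is a lot bigger than the 0.5 MB one that works), here is a rough probe I could call right before AllocateTensors() to see how large a single block malloc() can still hand out at that point. The 16 MB starting size is just a guess of mine, not a value from the SDK:

#include <cstdio>
#include <cstdlib>

// Diagnostic only: find the largest single block malloc() will still provide,
// by starting high and halving the request on every failed allocation.
static size_t LargestAllocatableBlock(void)
{
    size_t size = 16u * 1024u * 1024u; // assumed upper bound, adjust as needed
    while (size >= 1024u)
    {
        void *p = std::malloc(size);
        if (p != nullptr)
        {
            std::free(p);
            return size;
        }
        size /= 2u;
    }
    return 0u;
}

// Usage, just before interpreter->AllocateTensors():
//   printf("Largest allocatable block: %u bytes\r\n",
//          (unsigned)LargestAllocatableBlock());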