AllocateTensors() fails for bigger models (never returns)

Zu
Contributor II

Hello,

I am trying to run TensorFlow Lite image recognition on the i.MX RT1060 EVK board (with the i.MX RT1062).
My client provided me with a trained TensorFlow model that is 12 MB.
When I try to initialize inference with this model, the program just hangs after entering the AllocateTensors() function. Here is a snippet (taken from the tensorflow_lite_label_image example in the SDK):

void InferenceInit(std::unique_ptr<tflite::FlatBufferModel> &model,
                   std::unique_ptr<tflite::Interpreter> &interpreter,
                   TfLiteTensor** input_tensor, bool isVerbose)
{
  // Map the model that is compiled into the image as a C array.
  model = tflite::FlatBufferModel::BuildFromBuffer(my_model, my_model_len);
  if (!model)
  {
    LOG(FATAL) << "Failed to load model.\r\n";
    return;
  }

  tflite::ops::builtin::BuiltinOpResolver resolver;

  // Build the interpreter for the model with the built-in op resolver.
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter)
  {
    LOG(FATAL) << "Failed to construct interpreter.\r\n";
    return;
  }

  int input = interpreter->inputs()[0];

  const std::vector<int> inputs = interpreter->inputs();
  const std::vector<int> outputs = interpreter->outputs();

  // Allocate memory for all tensors; this is where the program hangs.
  if (interpreter->AllocateTensors() != kTfLiteOk)
  {
    LOG(FATAL) << "Failed to allocate tensors!\r\n";
    return;
  }
(...)

If I pause the debugger after that, I see:

[screenshot attachment: Zu_0-1614186157202.png]

I am able to run the same project with a 0.5 MB MobileNet model with no problems.

Does anybody have any clues about what is going on?

4 Replies
david_piskula
NXP Employee

Hello,

What are your memory settings? Can you share your code? Or did you simply take the label_image example and replace the model with the 12 MB one, without any other changes?

Best Regards,

David

Zu
Contributor II

Hi David,

thank you for your answer!
I was eventually able to overcome this issue by increasing the heap size from 8 MB to 13 MB.
(Yes, it was just the example project with the model swapped in.)

This is quite a large amount of memory though, and in the next stages of the project we would like to move to a custom PCB that will likely not have additional RAM on board. Could you point me to some materials about better memory management when using TensorFlow Lite?
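
For reference, here is a minimal sketch of a heap-headroom check that could be run before building the interpreter, so an undersized heap fails with a clear message instead of hanging inside AllocateTensors(). HeapHasRoomFor() is a hypothetical helper, not part of the SDK example, and the probe size is only a rough lower bound on what the interpreter will actually request:

#include <cstddef>
#include <cstdlib>

// Hypothetical helper: try to grab (and immediately release) a block of
// roughly the size the interpreter will need, so an undersized heap is
// detected up front instead of failing later during tensor allocation.
static bool HeapHasRoomFor(size_t bytes)
{
  void* probe = std::malloc(bytes);
  if (probe == nullptr)
  {
    return false;
  }
  std::free(probe);
  return true;
}

// Example use before InferenceInit(); my_model_len is only a lower bound
// on the memory AllocateTensors() will request for tensors and scratch buffers.
// if (!HeapHasRoomFor(my_model_len))
// {
//   LOG(FATAL) << "Heap too small for this model.\r\n";
// }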

david_piskula
NXP Employee

Hello Zu,

If you want to optimize memory consumption, I suggest you look into model quantization. TensorFlow supports various quantization methods.
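
As a rough illustration of what quantization means on the device side, here is a small sketch (assuming the interpreter built in the InferenceInit() snippet above) that checks whether the input tensor is 8-bit or float; an 8-bit quantized model needs roughly a quarter of the tensor memory of a float32 one. Interpreter::input_tensor() and the quantization params fields are standard TensorFlow Lite C++ API, though exact usage may vary slightly between releases:

// Sketch: inspecting the input tensor of a (possibly quantized) model.
// Assumes 'interpreter' was built as in the InferenceInit() snippet above.
TfLiteTensor* input = interpreter->input_tensor(0);

if (input->type == kTfLiteUInt8 || input->type == kTfLiteInt8)
{
  // Quantized input: raw pixel values must be mapped with these parameters.
  const float scale = input->params.scale;
  const int zero_point = input->params.zero_point;
  (void)scale;
  (void)zero_point;
}
else if (input->type == kTfLiteFloat32)
{
  // Float model: roughly 4x the tensor memory of an 8-bit quantized model.
}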

Furthermore, NXP supports Glow and TensorFlow Lite Micro, both of which allow for memory and performance optimizations.

Keep in mind that memory consumption depends heavily on the model you use in your application. The model has to be stored in flash, and all intermediate results and weights must be stored in RAM during inference.
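
With TensorFlow Lite for Microcontrollers, the working memory comes from a fixed, statically sized tensor arena instead of the system heap, which makes the RAM budget explicit. Below is a minimal sketch; model_data is a placeholder for a model converted to a C array, kTensorArenaSize must be tuned per model, and the exact headers and MicroInterpreter constructor arguments differ between TFLM releases:

#include <cstddef>
#include <cstdint>

#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Placeholder model array (e.g. produced by xxd); not part of the SDK example.
extern const unsigned char model_data[];

// All tensor memory comes from this statically allocated arena;
// its size has to be tuned to the model.
constexpr size_t kTensorArenaSize = 200 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];

void MicroInferenceInit()
{
  static tflite::MicroErrorReporter error_reporter;
  const tflite::Model* model = tflite::GetModel(model_data);

  static tflite::AllOpsResolver resolver;
  static tflite::MicroInterpreter interpreter(
      model, resolver, tensor_arena, kTensorArenaSize, &error_reporter);

  if (interpreter.AllocateTensors() != kTfLiteOk)
  {
    // Arena too small or an op is unsupported; fails here instead of hanging.
    return;
  }

  TfLiteTensor* input = interpreter.input(0);
  (void)input;
}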

If your original issue was resolved, please mark this thread as resolved. If you run into new problems or have new questions, feel free to create a new thread.

Good luck with your development,

David

Zu
Contributor II

Hi David,

thank you for your response and for your suggestion about memory optimization.

Have a great day!
Zuza
