<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: AllocateTensors() fails for bigger models (never returns) in eIQ Machine Learning Software</title>
    <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/AllocateTensors-fails-for-bigger-models-never-returns/m-p/1236500#M347</link>
    <description>&lt;P&gt;Hi David,&lt;BR /&gt;&lt;BR /&gt;Thank you for your answer!&lt;BR /&gt;I was eventually able to overcome this issue by increasing the heap size from 8MB to 13MB.&lt;BR /&gt;(Yes, it was just the example project with the swapped model.)&lt;BR /&gt;&lt;BR /&gt;This is quite a large amount of memory, though, and in the next stages of the project we would like to move to a custom PCB that will likely not have additional RAM on board. Could you point me to some materials about better memory management when using TensorFlow Lite?&lt;/P&gt;</description>
    <pubDate>Thu, 25 Feb 2021 09:18:45 GMT</pubDate>
    <dc:creator>Zu</dc:creator>
    <dc:date>2021-02-25T09:18:45Z</dc:date>
    <item>
      <title>AllocateTensors() fails for bigger models (never returns)</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/AllocateTensors-fails-for-bigger-models-never-returns/m-p/1235915#M343</link>
      <description>&lt;P&gt;Hello,&lt;BR /&gt;&lt;BR /&gt;I am trying to run TensorFlow Lite image recognition on the iMX RT1060 EVK board (with iMX RT1062).&lt;BR /&gt;My client provided me with a trained TensorFlow model that is 12MB.&lt;BR /&gt;When I try to initialize inference with this model, the program just hangs after entering the AllocateTensors() function. Here is a snippet (taken from the tensorflow_lite_label_image example from the SDK):&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;void InferenceInit(std::unique_ptr&amp;lt;tflite::FlatBufferModel&amp;gt; &amp;amp;model,
                   std::unique_ptr&amp;lt;tflite::Interpreter&amp;gt; &amp;amp;interpreter,
                   TfLiteTensor** input_tensor, bool isVerbose)
{
  model = tflite::FlatBufferModel::BuildFromBuffer(my_model, my_model_len);
  if (!model)
  {
    LOG(FATAL) &amp;lt;&amp;lt; "Failed to load model.\r\n";
    return;
  }

  tflite::ops::builtin::BuiltinOpResolver resolver;

  tflite::InterpreterBuilder(*model, resolver)(&amp;amp;interpreter);
  if (!interpreter)
  {
    LOG(FATAL) &amp;lt;&amp;lt; "Failed to construct interpreter.\r\n";
    return;
  }

  int input = interpreter-&amp;gt;inputs()[0];

  const std::vector&amp;lt;int&amp;gt; inputs = interpreter-&amp;gt;inputs();
  const std::vector&amp;lt;int&amp;gt; outputs = interpreter-&amp;gt;outputs();

  if (interpreter-&amp;gt;AllocateTensors() != kTfLiteOk)
  {
    LOG(FATAL) &amp;lt;&amp;lt; "Failed to allocate tensors!\r\n";
    return;
  }
(...)&lt;/LI-CODE&gt;&lt;P&gt;If I pause the debugger after that, I see:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Zu_0-1614186157202.png" style="width: 400px;"&gt;&lt;img src="https://community.nxp.com/t5/image/serverpage/image-id/137997i207D7ADE3269803F/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Zu_0-1614186157202.png" alt="Zu_0-1614186157202.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;I am able to run the same project with a 0.5MB MobileNet model with no problems.&lt;BR /&gt;&lt;BR /&gt;Does anybody have any clues about what is going on?&lt;/P&gt;</description>
      <pubDate>Wed, 24 Feb 2021 17:06:45 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/AllocateTensors-fails-for-bigger-models-never-returns/m-p/1235915#M343</guid>
      <dc:creator>Zu</dc:creator>
      <dc:date>2021-02-24T17:06:45Z</dc:date>
    </item>
    <item>
      <title>Re: AllocateTensors() fails for bigger models (never returns)</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/AllocateTensors-fails-for-bigger-models-never-returns/m-p/1236451#M346</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;
&lt;P&gt;What are your memory settings? Can you share your code? Or did you simply take the label_image example and replace the model with the 12MB one, without any other changes?&lt;/P&gt;
&lt;P&gt;Best Regards,&lt;/P&gt;
&lt;P&gt;David&lt;/P&gt;</description>
      <pubDate>Thu, 25 Feb 2021 08:18:32 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/AllocateTensors-fails-for-bigger-models-never-returns/m-p/1236451#M346</guid>
      <dc:creator>david_piskula</dc:creator>
      <dc:date>2021-02-25T08:18:32Z</dc:date>
    </item>
    <item>
      <title>Re: AllocateTensors() fails for bigger models (never returns)</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/AllocateTensors-fails-for-bigger-models-never-returns/m-p/1236500#M347</link>
      <description>&lt;P&gt;Hi David,&lt;BR /&gt;&lt;BR /&gt;Thank you for your answer!&lt;BR /&gt;I was eventually able to overcome this issue by increasing the heap size from 8MB to 13MB.&lt;BR /&gt;(Yes, it was just the example project with the swapped model.)&lt;BR /&gt;&lt;BR /&gt;This is quite a large amount of memory, though, and in the next stages of the project we would like to move to a custom PCB that will likely not have additional RAM on board. Could you point me to some materials about better memory management when using TensorFlow Lite?&lt;/P&gt;</description>
      <pubDate>Thu, 25 Feb 2021 09:18:45 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/AllocateTensors-fails-for-bigger-models-never-returns/m-p/1236500#M347</guid>
      <dc:creator>Zu</dc:creator>
      <dc:date>2021-02-25T09:18:45Z</dc:date>
    </item>
    <item>
      <title>Re: AllocateTensors() fails for bigger models (never returns)</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/AllocateTensors-fails-for-bigger-models-never-returns/m-p/1236518#M348</link>
      <description>&lt;P&gt;Hello Zu,&lt;/P&gt;
&lt;P&gt;If you want to optimize memory consumption, I suggest you look into model quantization. TensorFlow supports various quantization methods.&lt;/P&gt;
&lt;P&gt;Furthermore, NXP supports&amp;nbsp;&lt;A href="https://www.nxp.com/design/software/development-software/eiq-ml-development-environment/eiq-for-glow-neural-network-compiler:eIQ-Glow" target="_self"&gt;Glow&lt;/A&gt;&amp;nbsp;and&amp;nbsp;&lt;A href="https://www.nxp.com/design/software/development-software/eiq-ml-development-environment/eiq-inference-with-tensorflow-lite-micro:EIQ-TFLITE-MICRO" target="_self"&gt;TF Lite Micro&lt;/A&gt;, which both allow for memory consumption and performance optimizations.&lt;/P&gt;
&lt;P&gt;Keep in mind that memory consumption depends heavily on the model you use in your application. The model has to be stored in flash, and all intermediate results and weights must be stored in RAM during inference.&lt;/P&gt;
&lt;P&gt;If your original issue was resolved, please mark this thread as resolved. If you run into new problems or have new questions, feel free to create a new thread.&lt;/P&gt;
&lt;P&gt;Good luck with your development,&lt;/P&gt;
&lt;P&gt;David&lt;/P&gt;</description>
      <pubDate>Thu, 25 Feb 2021 09:45:08 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/AllocateTensors-fails-for-bigger-models-never-returns/m-p/1236518#M348</guid>
      <dc:creator>david_piskula</dc:creator>
      <dc:date>2021-02-25T09:45:08Z</dc:date>
    </item>
    <item>
      <title>Re: AllocateTensors() fails for bigger models (never returns)</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/AllocateTensors-fails-for-bigger-models-never-returns/m-p/1236569#M349</link>
      <description>&lt;P&gt;Hi David,&lt;BR /&gt;&lt;BR /&gt;Thank you for your response and for your suggestion about memory optimization.&lt;BR /&gt;&lt;BR /&gt;Have a great day!&lt;BR /&gt;Zuza&lt;/P&gt;</description>
      <pubDate>Thu, 25 Feb 2021 11:06:28 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/AllocateTensors-fails-for-bigger-models-never-returns/m-p/1236569#M349</guid>
      <dc:creator>Zu</dc:creator>
      <dc:date>2021-02-25T11:06:28Z</dc:date>
    </item>
  </channel>
</rss>

