<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic in eIQ Machine Learning Software: Re: Error when creating NN API delegate on i.MX8qmmek. "NNAPI acceleration is unsupported on this platform"</title>
    <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Error-when-creating-NN-API-delegate-on-i-MX8qmmek-quot-NNAPI/m-p/1097761#M259</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello Marco,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks for the reply.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Yes, I was doing as you suggested on BSP 5.4.3_2.0.0 and it worked fine. Only on 5.4.24_2.1.0 do I get the error "UseNNAPI is not supported. Use ModifyGraphWithDelegate instead.".&lt;/P&gt;&lt;P&gt;(I am currently not at the target, so the exact wording of the message may be slightly off.)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Best Regards&lt;/P&gt;&lt;P&gt;Ullas Bharadwaj&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Thu, 23 Jul 2020 15:16:47 GMT</pubDate>
    <dc:creator>ullasbharadwaj</dc:creator>
    <dc:date>2020-07-23T15:16:47Z</dc:date>
    <item>
      <title>Error when creating NN API delegate on i.MX8qmmek. "NNAPI acceleration is unsupported on this platform"</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Error-when-creating-NN-API-delegate-on-i-MX8qmmek-quot-NNAPI/m-p/1097755#M253</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello Community,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I am running AI inference on i.MX8qmmek with BSP 5.4.3_2.0.0. I have a custom TfLite application written in C++ to run inference on mobilenet / mobilenet+SSD models.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The application seems to use GPU/CPU NEON acceleration, as inference is almost 4x faster using the GPU compared to CPU-only computation.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;However, the problem comes when I compare the "label_image" application with my custom application.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Model used: mobilenet 0.25 (128x128) quantized&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The GPU accelerated inference times are as follows:&lt;BR /&gt;1. "label_image" sample application - 1.6 ms&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;2. custom application - 11 ms&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The CPU NEON accelerated inference times are as follows:&lt;BR /&gt;1. "label_image" sample application - 2.7 ms&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;2. custom application - 56 ms&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I cannot understand where this difference is coming from. One observation when using GPU acceleration:&lt;BR /&gt;with the "label_image" sample application, the console shows &lt;STRONG&gt;INFO: Created TensorFlow Lite delegate for NNAPI.&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;Applied NNAPI delegate. invoked.&amp;nbsp;&lt;/STRONG&gt;However, with my custom application, it shows&amp;nbsp;&lt;STRONG&gt;INFO: Created TensorFlow Lite delegate for NNAPI. 
NNAPI acceleration is unsupported on this platform.&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;The code snippet I am using for this is as below:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;unique_ptr&amp;lt;tflite::FlatBufferModel&amp;gt; model = tflite::FlatBufferModel::BuildFromFile(get_modelPath().c_str());&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;tflite::ops::builtin::BuiltinOpResolver resolver;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;unique_ptr&amp;lt;tflite::Interpreter&amp;gt; interpreter;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;tflite::InterpreterBuilder(*model.get(), resolver)(&amp;amp;interpreter);&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;interpreter-&amp;gt;UseNNAPI(true);&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;interpreter-&amp;gt;SetNumThreads(2);&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;TfLiteDelegatePtrMap delegates_;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;auto delegate = TfLiteDelegatePtr(nullptr, [](TfLiteDelegate*) {});&lt;BR /&gt; if (!delegate) {&lt;BR /&gt; &amp;nbsp;&amp;nbsp;&amp;nbsp;cout &amp;lt;&amp;lt; "NNAPI acceleration is unsupported on this platform.";&lt;BR /&gt; } else {&lt;BR /&gt; &amp;nbsp;&amp;nbsp;&amp;nbsp;delegates_.emplace("NNAPI", std::move(delegate));&lt;BR /&gt; }&lt;BR /&gt; for (const auto&amp;amp; delegate : delegates_) {&lt;BR /&gt; &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if (interpreter-&amp;gt;ModifyGraphWithDelegate(delegate.second.get()) !=&lt;BR /&gt; &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;kTfLiteOk) {&lt;BR /&gt; &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;cout &amp;lt;&amp;lt; "Failed to apply " &amp;lt;&amp;lt; delegate.first &amp;lt;&amp;lt; " delegate.";&lt;BR /&gt; &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;} else {&lt;BR /&gt; 
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;cout &amp;lt;&amp;lt; "Applied " &amp;lt;&amp;lt; delegate.first &amp;lt;&amp;lt; " delegate.";&lt;BR /&gt; &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;BR /&gt; }&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;interpreter-&amp;gt;AllocateTensors();&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;memcpy(interpreter-&amp;gt;typed_input_tensor&amp;lt;uchar&amp;gt;(0), resized_image.data, resized_image.total() * resized_image.elemSize());&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;interpreter-&amp;gt;Invoke(); // time is measured for this function call&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I have tried to understand the cause, but no luck so far. Any help is much appreciated. :-)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Best Regards&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 29 Jun 2020 09:03:39 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Error-when-creating-NN-API-delegate-on-i-MX8qmmek-quot-NNAPI/m-p/1097755#M253</guid>
      <dc:creator>ullasbharadwaj</dc:creator>
      <dc:date>2020-06-29T09:03:39Z</dc:date>
    </item>
    <item>
      <title>Re: Error when creating NN API delegate on i.MX8qmmek. "NNAPI acceleration is unsupported on this platform"</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Error-when-creating-NN-API-delegate-on-i-MX8qmmek-quot-NNAPI/m-p/1097756#M254</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Ullas,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I see you are using the following code:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;EM style="color: #51626f; background-color: #ffffff; border: 0px;"&gt;auto delegate = TfLiteDelegatePtr(nullptr, [](TfLiteDelegate*) {});&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;That is the code that fails (you actually get a NULL pointer).&lt;/P&gt;&lt;P&gt;Have you tried allocating a real TfLiteDelegate instead?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp;Marco&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 15 Jul 2020 13:23:35 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Error-when-creating-NN-API-delegate-on-i-MX8qmmek-quot-NNAPI/m-p/1097756#M254</guid>
      <dc:creator>Marco_Zaccheria</dc:creator>
      <dc:date>2020-07-15T13:23:35Z</dc:date>
    </item>
    <item>
      <title>Re: Error when creating NN API delegate on i.MX8qmmek. "NNAPI acceleration is unsupported on this platform"</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Error-when-creating-NN-API-delegate-on-i-MX8qmmek-quot-NNAPI/m-p/1097757#M255</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello Marco,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks for your reply.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;You are right, I used&amp;nbsp;&lt;STRONG&gt;auto delegate = TfLiteDelegatePtr(tflite::NnApiDelegate(), [](TfLiteDelegate*) {});&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;However, the performance in terms of inference time (for interpreter-&amp;gt;Invoke()) is drastically worse for non-quantized SSD models compared to the previous BSP version (5.4.3_2.0.0), where I just used UseNNAPI(true). So is there something wrong in the code below for creating a delegate? Can you please share how you are enabling it?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Here is the summary:&lt;/P&gt;&lt;P&gt;1. BSP Version: 5.4.3_2.0.0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Enabling Acceleration: interpreter-&amp;gt;UseNNAPI(true)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Result: All models run perfectly fine&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;2. 
BSP Version: 5.4.24_2.1.0&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Enabling Acceleration: using the code below&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;using TfLiteDelegatePtr = tflite::Interpreter::TfLiteDelegatePtr;&lt;BR /&gt;using TfLiteDelegatePtrMap = std::map&amp;lt;std::string, TfLiteDelegatePtr&amp;gt;;&lt;/P&gt;&lt;P&gt;TfLiteDelegatePtrMap delegates_;&lt;/P&gt;&lt;P&gt;auto delegate = TfLiteDelegatePtr(tflite::NnApiDelegate(), [](TfLiteDelegate*) {});&lt;/P&gt;&lt;P&gt;if (!delegate) {&lt;BR /&gt; cout &amp;lt;&amp;lt; "NNAPI acceleration is unsupported on this platform.";&lt;BR /&gt; } else {&lt;BR /&gt; delegates_.emplace("NNAPI", std::move(delegate));&lt;BR /&gt; }&lt;BR /&gt; for (const auto&amp;amp; delegate : delegates_) {&lt;BR /&gt; if (interpreter-&amp;gt;ModifyGraphWithDelegate(delegate.second.get()) !=&lt;BR /&gt; kTfLiteOk) {&lt;BR /&gt; cout &amp;lt;&amp;lt; "Failed to apply " &amp;lt;&amp;lt; delegate.first &amp;lt;&amp;lt; " delegate.";&lt;BR /&gt; } else {&lt;BR /&gt; cout &amp;lt;&amp;lt; "Applied " &amp;lt;&amp;lt; delegate.first &amp;lt;&amp;lt; " delegate.";&lt;BR /&gt; }&lt;BR /&gt; }&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Result: SSD models, especially non-quantized ones, take drastically longer inference times. Ex: SSD Mobilenet v2 Coco takes &amp;gt; 500 ms&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Best Regards&lt;/P&gt;&lt;P&gt;Ullas Bharadwaj&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 15 Jul 2020 13:39:29 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Error-when-creating-NN-API-delegate-on-i-MX8qmmek-quot-NNAPI/m-p/1097757#M255</guid>
      <dc:creator>ullasbharadwaj</dc:creator>
      <dc:date>2020-07-15T13:39:29Z</dc:date>
    </item>
    <item>
      <title>Re: Error when creating NN API delegate on i.MX8qmmek. "NNAPI acceleration is unsupported on this platform"</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Error-when-creating-NN-API-delegate-on-i-MX8qmmek-quot-NNAPI/m-p/1097758#M256</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;&lt;A class="jx-jive-macro-user" href="https://community.nxp.com/people/nxf60449"&gt;nxf60449&lt;/A&gt;‌,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Can you check this and update the ticket?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;-Manish&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sat, 18 Jul 2020 02:20:21 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Error-when-creating-NN-API-delegate-on-i-MX8qmmek-quot-NNAPI/m-p/1097758#M256</guid>
      <dc:creator>manish_bajaj</dc:creator>
      <dc:date>2020-07-18T02:20:21Z</dc:date>
    </item>
    <item>
      <title>Re: Error when creating NN API delegate on i.MX8qmmek. "NNAPI acceleration is unsupported on this platform"</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Error-when-creating-NN-API-delegate-on-i-MX8qmmek-quot-NNAPI/m-p/1097759#M257</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello &lt;A class="jx-jive-macro-user" href="https://community.nxp.com/people/manishbajaj"&gt;manishbajaj&lt;/A&gt;‌,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Sure, I'll take a look and let you know.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Alifer&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 20 Jul 2020 14:28:07 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Error-when-creating-NN-API-delegate-on-i-MX8qmmek-quot-NNAPI/m-p/1097759#M257</guid>
      <dc:creator>Alifer_Moraes</dc:creator>
      <dc:date>2020-07-20T14:28:07Z</dc:date>
    </item>
    <item>
      <title>Re: Error when creating NN API delegate on i.MX8qmmek. "NNAPI acceleration is unsupported on this platform"</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Error-when-creating-NN-API-delegate-on-i-MX8qmmek-quot-NNAPI/m-p/1097760#M258</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Ullas,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;In the example mentioned, the delegate is actually used only to figure out what acceleration is available on the platform the example is running on.&lt;/P&gt;&lt;P&gt;We will figure out why the code is not running, but for the sake of your test you can avoid using the delegate and just do something like:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;PRE class="language-cpp"&gt;&lt;CODE&gt;interpreter-&amp;gt;UseNNAPI(get_useGPU());&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Please let me know whether it resolves your issue.&lt;/P&gt;&lt;P&gt;We are also trying to figure out&amp;nbsp;whether we need to modify the original code.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you&lt;/P&gt;&lt;P&gt;Best Regards&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp;Marco&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 23 Jul 2020 15:00:39 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Error-when-creating-NN-API-delegate-on-i-MX8qmmek-quot-NNAPI/m-p/1097760#M258</guid>
      <dc:creator>Marco_Zaccheria</dc:creator>
      <dc:date>2020-07-23T15:00:39Z</dc:date>
    </item>
    <item>
      <title>Re: Error when creating NN API delegate on i.MX8qmmek. "NNAPI acceleration is unsupported on this platform"</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Error-when-creating-NN-API-delegate-on-i-MX8qmmek-quot-NNAPI/m-p/1097761#M259</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello Marco,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks for the reply.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Yes, I was doing as you suggested on BSP 5.4.3_2.0.0 and it worked fine. Only on 5.4.24_2.1.0 do I get the error "UseNNAPI is not supported. Use ModifyGraphWithDelegate instead.".&lt;/P&gt;&lt;P&gt;(I am currently not at the target, so the exact wording of the message may be slightly off.)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Best Regards&lt;/P&gt;&lt;P&gt;Ullas Bharadwaj&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 23 Jul 2020 15:16:47 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Error-when-creating-NN-API-delegate-on-i-MX8qmmek-quot-NNAPI/m-p/1097761#M259</guid>
      <dc:creator>ullasbharadwaj</dc:creator>
      <dc:date>2020-07-23T15:16:47Z</dc:date>
    </item>
    <item>
      <title>Re: Error when creating NN API delegate on i.MX8qmmek. "NNAPI acceleration is unsupported on this platform"</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Error-when-creating-NN-API-delegate-on-i-MX8qmmek-quot-NNAPI/m-p/1097762#M260</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Ullas,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Could you please send me the full log showing the issue with 5.4.24?&lt;/P&gt;&lt;P&gt;Furthermore, have you already checked the up-to-date version of the label_image example?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp;Marco&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 29 Jul 2020 16:00:49 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Error-when-creating-NN-API-delegate-on-i-MX8qmmek-quot-NNAPI/m-p/1097762#M260</guid>
      <dc:creator>Marco_Zaccheria</dc:creator>
      <dc:date>2020-07-29T16:00:49Z</dc:date>
    </item>
  </channel>
</rss>

