<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Fake info in PyeIQ article? - eIQ Machine Learning Software</title>
    <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1045715#M142</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Aleksandr,&lt;/P&gt;&lt;P&gt;TensorFlow Lite does not provide Python bindings for selecting delegates, as the C++ API does (CPU, GPU/NPU). By default, it uses the NNAPI delegate (when you run a demo, you can see this in the log message: INFO: Created TensorFlow Lite delegate for NNAPI). The NNAPI delegate automatically delegates inference to the GPU/NPU.&lt;/P&gt;&lt;P&gt;As for Arm NN, it does work with the GPU as described in the table (the table indicates whether a backend is supported, not necessarily which one is the default - we will try to make this clearer in the next version), but you do need to change the backend in the code from Cpu to VsiNpu in order to run inference on the GPU/NPU.&lt;/P&gt;&lt;P&gt;PyeIQ focuses on MPlus, so we decided that our default in this case would be CPU, ONLY because this particular model (fire detection: float32) is not quantized (uint8), and in that case the CPU performs better. If you have a quantized model, please change to VsiNpu, which will run much faster :smileyhappy:&lt;/P&gt;&lt;P&gt;Thanks,&lt;BR /&gt;Diego&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Thu, 30 Jul 2020 18:50:50 GMT</pubDate>
    <dc:creator>diego_dorta</dc:creator>
    <dc:date>2020-07-30T18:50:50Z</dc:date>
    <item>
      <title>Fake info in PyeIQ article?</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1045714#M141</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;The article&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.nxp.com/migration-blogpost/11269"&gt;PyeIQ - A Python Framework for eIQ on i.MX Processors&lt;/A&gt;&lt;/P&gt;&lt;P&gt;shows that both armnn and tflite successfully work with the GPU. When I followed the links, I noticed that PyeIQ's armnn/inference.py intentionally sets the backends to CpuRef and CpuAcc, but NOT GpuAcc.&lt;/P&gt;&lt;P&gt;Furthermore, tflite/inference.py does not use gpu_delegates, and the default backend is cpu.&lt;/P&gt;&lt;P&gt;Is the article fake, or am I misunderstanding something?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 30 Jul 2020 16:29:50 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1045714#M141</guid>
      <dc:creator>korabelnikov</dc:creator>
      <dc:date>2020-07-30T16:29:50Z</dc:date>
    </item>
    <item>
      <title>Re: Fake info in PyeIQ article?</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1045715#M142</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Aleksandr,&lt;/P&gt;&lt;P&gt;TensorFlow Lite does not provide Python bindings for selecting delegates, as the C++ API does (CPU, GPU/NPU). By default, it uses the NNAPI delegate (when you run a demo, you can see this in the log message: INFO: Created TensorFlow Lite delegate for NNAPI). The NNAPI delegate automatically delegates inference to the GPU/NPU.&lt;/P&gt;&lt;P&gt;As for Arm NN, it does work with the GPU as described in the table (the table indicates whether a backend is supported, not necessarily which one is the default - we will try to make this clearer in the next version), but you do need to change the backend in the code from Cpu to VsiNpu in order to run inference on the GPU/NPU.&lt;/P&gt;&lt;P&gt;PyeIQ focuses on MPlus, so we decided that our default in this case would be CPU, ONLY because this particular model (fire detection: float32) is not quantized (uint8), and in that case the CPU performs better. If you have a quantized model, please change to VsiNpu, which will run much faster :smileyhappy:&lt;/P&gt;&lt;P&gt;Thanks,&lt;BR /&gt;Diego&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 30 Jul 2020 18:50:50 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1045715#M142</guid>
      <dc:creator>diego_dorta</dc:creator>
      <dc:date>2020-07-30T18:50:50Z</dc:date>
    </item>
    <item>
      <title>Re: Fake info in PyeIQ article?</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1045716#M143</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;&lt;A class="jx-jive-macro-user" href="https://community.nxp.com/people/korabelnikov@arrival.com"&gt;korabelnikov@arrival.com&lt;/A&gt;,&lt;/P&gt;&lt;P&gt;As Diego mentioned, this article and sample example are not fake; they are part of a demo package implemented by NXP.&lt;/P&gt;&lt;P&gt;Did you try the sample example yourself, or can you help me understand the reason for the misunderstanding?&lt;/P&gt;&lt;P&gt;-Manish&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 30 Jul 2020 18:59:21 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1045716#M143</guid>
      <dc:creator>manish_bajaj</dc:creator>
      <dc:date>2020-07-30T18:59:21Z</dc:date>
    </item>
    <item>
      <title>Re: Fake info in PyeIQ article?</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1045717#M144</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thank you for the fast response!&lt;/P&gt;&lt;P&gt;I'm digging into both armnn and tflite in relation to the GPU, so could you please clarify a few details:&lt;/P&gt;&lt;P&gt;1. Does the BSP contain the default tflite branch without gpu_delegates?&lt;/P&gt;&lt;P&gt;2. Am I correct that armnn doesn't support the GpuAcc backend? What is VsiNpu?&lt;/P&gt;&lt;P&gt;I think the demos should have a simple cpu/gpu/npu flag to change the backend where applicable.&lt;/P&gt;&lt;P&gt;The table with CPU/GPU support is confusing when the code actually has no such GPU option (armnn/inference.py).&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 31 Jul 2020 11:25:51 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1045717#M144</guid>
      <dc:creator>korabelnikov</dc:creator>
      <dc:date>2020-07-31T11:25:51Z</dc:date>
    </item>
    <item>
      <title>Re: Fake info in PyeIQ article?</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1045718#M145</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thank you for the response,&lt;/P&gt;&lt;P&gt;I've found a demo that I missed, &lt;A class="link-titled" href="https://pyeiq.dev/2_applications_demos/2_7_switch_detection.html#inference-engine-and-algorithm" title="https://pyeiq.dev/2_applications_demos/2_7_switch_detection.html#inference-engine-and-algorithm"&gt;Switch Detection Video - pyeiq&lt;/A&gt;, which can change the backend. I will check it out.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 31 Jul 2020 11:27:54 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1045718#M145</guid>
      <dc:creator>korabelnikov</dc:creator>
      <dc:date>2020-07-31T11:27:54Z</dc:date>
    </item>
    <item>
      <title>Re: Fake info in PyeIQ article?</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1045719#M146</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;DIV&gt;Thanks for your feedback. We will certainly take your comments into consideration as we look to continuously improve our collateral.&lt;P&gt;&lt;/P&gt;I would also suggest that in the future you simply ask questions and seek clarity on areas where you are struggling or do not understand or perhaps find an error on our part, rather than immediately assuming NXP intentionally published a fake article. This is much more conducive to a healthy and vibrant community.&lt;P&gt;&lt;/P&gt;Thanks again and glad you were able to make some progress here quickly. Let us know if there is anything else we can do to be of assistance.&lt;P&gt;&lt;/P&gt;Ragan&lt;/DIV&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 31 Jul 2020 13:23:32 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1045719#M146</guid>
      <dc:creator>Ragan_Dunham</dc:creator>
      <dc:date>2020-07-31T13:23:32Z</dc:date>
    </item>
    <item>
      <title>Re: Fake info in PyeIQ article?</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1045720#M147</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Aleksandr,&lt;/P&gt;&lt;P&gt;The switch_video is a special case, as it is classified as an application and not a demo. Part of it is written in C++, which is why you can choose between CPU and GPU/NPU. This application was developed for real-time comparison of performance when you run inference on the CPU versus using hardware acceleration on the GPU/NPU.&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Diego&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 31 Jul 2020 14:26:37 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1045720#M147</guid>
      <dc:creator>diego_dorta</dc:creator>
      <dc:date>2020-07-31T14:26:37Z</dc:date>
    </item>
    <item>
      <title>Re: Fake info in PyeIQ article?</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1045721#M148</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Aleksandr,&lt;/P&gt;&lt;P&gt;Answering your questions:&lt;/P&gt;&lt;P&gt;1. Does the BSP contain the default tflite branch without gpu_delegates?&lt;BR /&gt;No, it is not BSP related. The TensorFlow Lite bindings for Python do not offer a way to choose a delegate; they use the NNAPI delegate (which delegates to the GPU/NPU if available, otherwise it falls back to the CPU). Actually, the only delegate you can choose is the TPU one, but that is not a hardware solution provided and supported by NXP.&lt;/P&gt;&lt;P&gt;2. Am I correct that armnn doesn't support the GpuAcc backend? What is VsiNpu?&lt;BR /&gt;The Arm NN in our BSP does not support GpuAcc; it supports VsiNpu instead. VsiNpu is the backend provided for hardware acceleration. It is similar to the TensorFlow Lite NNAPI delegate, which means it delegates inference to the GPU/NPU.&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Alifer&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 03 Aug 2020 15:02:13 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1045721#M148</guid>
      <dc:creator>Alifer_Moraes</dc:creator>
      <dc:date>2020-08-03T15:02:13Z</dc:date>
    </item>
    <item>
      <title>Re: Fake info in PyeIQ article?</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1190431#M327</link>
      <description>&lt;P&gt;Sorry for the angry heading; I was very frustrated by inconsistent information and many failed builds while trying to use the GPU. (I did it the wrong way, my bad.)&lt;/P&gt;&lt;P&gt;Thanks again!&lt;/P&gt;</description>
      <pubDate>Fri, 27 Nov 2020 22:02:22 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/Fake-info-in-PyeIQ-article/m-p/1190431#M327</guid>
      <dc:creator>korabelnikov</dc:creator>
      <dc:date>2020-11-27T22:02:22Z</dc:date>
    </item>
  </channel>
</rss>

