<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>eIQ Machine Learning Software topic: SSD MobileNet Inference using ArmNN on i.MX8qmmek</title>
    <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087947#M236</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello Community,&lt;/P&gt;&lt;P&gt;I am using the i.MX8qmmek with BSP 5.4.3_2.0.0. I have a custom C++ application for running inference using TfLite and OpenCV. The application with TfLite was able to use GPU acceleration. Now I would like to use ArmNN as my inference engine.&lt;/P&gt;&lt;P&gt;However, the Linux user guide does not provide an example of SSD MobileNet inference. When I tried the&amp;nbsp;&lt;STRONG&gt;TfLiteMobileNetSsd-Armnn&lt;/STRONG&gt; demo application, I get the following error:&lt;/P&gt;&lt;P&gt;ArmNN v20190800&lt;BR /&gt;Failed to parse operator #0 within subgraph #0 error: Operator not supported. subgraph:0 operator:0 opcode_index:3 opcode:6 / DEQUANTIZE at function ParseUnsupportedOperator [/usr/src/debug/armnn/19.08-]&lt;BR /&gt;Armnn Error: Buffer #176 has 0 bytes. For tensor: [1,300,300,3] expecting: 1080000 bytes and 270000 elements. at function CreateConstTensor [/usr/src/debug/armnn/19.08-r1/git/src/armnnTfLiteParser/TfLit]&lt;/P&gt;&lt;P&gt;So, is it possible to run any of the SSD MobileNet models using Arm NN on the GPU? Is there sample code to do that?&lt;/P&gt;&lt;P&gt;Best Regards&lt;/P&gt;&lt;P&gt;Ullas Bharadwaj&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Wed, 24 Jun 2020 15:36:16 GMT</pubDate>
    <dc:creator>ullasbharadwaj</dc:creator>
    <dc:date>2020-06-24T15:36:16Z</dc:date>
    <item>
      <title>SSD MobileNet Inference using ArmNN on i.MX8qmmek</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087947#M236</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello Community,&lt;/P&gt;&lt;P&gt;I am using the i.MX8qmmek with BSP 5.4.3_2.0.0. I have a custom C++ application for running inference using TfLite and OpenCV. The application with TfLite was able to use GPU acceleration. Now I would like to use ArmNN as my inference engine.&lt;/P&gt;&lt;P&gt;However, the Linux user guide does not provide an example of SSD MobileNet inference. When I tried the&amp;nbsp;&lt;STRONG&gt;TfLiteMobileNetSsd-Armnn&lt;/STRONG&gt; demo application, I get the following error:&lt;/P&gt;&lt;P&gt;ArmNN v20190800&lt;BR /&gt;Failed to parse operator #0 within subgraph #0 error: Operator not supported. subgraph:0 operator:0 opcode_index:3 opcode:6 / DEQUANTIZE at function ParseUnsupportedOperator [/usr/src/debug/armnn/19.08-]&lt;BR /&gt;Armnn Error: Buffer #176 has 0 bytes. For tensor: [1,300,300,3] expecting: 1080000 bytes and 270000 elements. at function CreateConstTensor [/usr/src/debug/armnn/19.08-r1/git/src/armnnTfLiteParser/TfLit]&lt;/P&gt;&lt;P&gt;So, is it possible to run any of the SSD MobileNet models using Arm NN on the GPU? Is there sample code to do that?&lt;/P&gt;&lt;P&gt;Best Regards&lt;/P&gt;&lt;P&gt;Ullas Bharadwaj&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 24 Jun 2020 15:36:16 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087947#M236</guid>
      <dc:creator>ullasbharadwaj</dc:creator>
      <dc:date>2020-06-24T15:36:16Z</dc:date>
    </item>
    <item>
      <title>Re: SSD MobileNet Inference using ArmNN on i.MX8qmmek</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087948#M237</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;&lt;A class="jx-jive-macro-user" href="https://community.nxp.com/people/ullasbharadwaj"&gt;ullasbharadwaj&lt;/A&gt;‌,&lt;/P&gt;&lt;P&gt;Can you please share information on your product use case? Please share the model and the exact command you used.&lt;/P&gt;&lt;P&gt;As indicated, we are adding sample examples using PyeIQ; the new release should have an example for Arm NN too.&lt;/P&gt;&lt;P&gt;-Manish&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 30 Jun 2020 17:07:58 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087948#M237</guid>
      <dc:creator>manish_bajaj</dc:creator>
      <dc:date>2020-06-30T17:07:58Z</dc:date>
    </item>
    <item>
      <title>Re: SSD MobileNet Inference using ArmNN on i.MX8qmmek</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087949#M238</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi &lt;A class="jx-jive-macro-user" href="https://community.nxp.com/people/manishbajaj"&gt;manishbajaj&lt;/A&gt;,&lt;/P&gt;&lt;P&gt;I am evaluating multiple object detection models using TfLite, Arm NN, and OpenCV. Hence I am trying to use the ArmNN sample C++ application.&lt;/P&gt;&lt;P&gt;I am using the TfLiteMobileNetSsd-Armnn (maybe I am wrong about the exact spelling) sample application:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;$: TfLiteMobileNetSsd-Armnn -m /path_to_any_ssd_mobilenet_model/ -d /path_to_test_images/ -c VsiNpu -l labels.txt&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Can you please confirm whether I can run the sample application on the GPU?&lt;/P&gt;&lt;P&gt;Best Regards&lt;/P&gt;&lt;P&gt;Ullas Bharadwaj&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 30 Jun 2020 20:54:34 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087949#M238</guid>
      <dc:creator>ullasbharadwaj</dc:creator>
      <dc:date>2020-06-30T20:54:34Z</dc:date>
    </item>
    <item>
      <title>Re: SSD MobileNet Inference using ArmNN on i.MX8qmmek</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087950#M239</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;&lt;A class="jx-jive-macro-user" href="https://community.nxp.com/people/ullasbharadwaj"&gt;ullasbharadwaj&lt;/A&gt;‌&lt;/P&gt;&lt;P&gt;Are you able to run the above example on the CPU cores?&lt;/P&gt;&lt;P&gt;-Manish&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 02 Jul 2020 18:16:11 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087950#M239</guid>
      <dc:creator>manish_bajaj</dc:creator>
      <dc:date>2020-07-02T18:16:11Z</dc:date>
    </item>
    <item>
      <title>Re: SSD MobileNet Inference using ArmNN on i.MX8qmmek</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087951#M240</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;No, I was not able to run it even on the CPU.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 06 Jul 2020 08:46:36 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087951#M240</guid>
      <dc:creator>ullasbharadwaj</dc:creator>
      <dc:date>2020-07-06T08:46:36Z</dc:date>
    </item>
    <item>
      <title>Re: SSD MobileNet Inference using ArmNN on i.MX8qmmek</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087952#M241</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;&lt;A class="jx-jive-macro-user" href="https://community.nxp.com/people/ullasbharadwaj"&gt;ullasbharadwaj&lt;/A&gt;‌,&lt;/P&gt;&lt;P&gt;The current BSP version supports Arm NN 19.08.&lt;/P&gt;&lt;P&gt;Arm NN 20.02 &lt;STRONG&gt;TfLite Parser&lt;/STRONG&gt;: added support for DEQUANTIZE.&lt;/P&gt;&lt;P&gt;I will update once our BSP moves to 20.02.&lt;/P&gt;&lt;P&gt;-Manish&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 06 Jul 2020 13:55:07 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087952#M241</guid>
      <dc:creator>manish_bajaj</dc:creator>
      <dc:date>2020-07-06T13:55:07Z</dc:date>
    </item>
    <item>
      <title>Re: SSD MobileNet Inference using ArmNN on i.MX8qmmek</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087953#M242</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Yes, I got it. Please update here. Thank you :-)&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 06 Jul 2020 14:06:29 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087953#M242</guid>
      <dc:creator>ullasbharadwaj</dc:creator>
      <dc:date>2020-07-06T14:06:29Z</dc:date>
    </item>
    <item>
      <title>Re: SSD MobileNet Inference using ArmNN on i.MX8qmmek</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087954#M243</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi&amp;nbsp;&lt;A class="jx-jive-macro-user" href="https://community.nxp.com/people/manishbajaj"&gt;manishbajaj&lt;/A&gt;,&lt;/P&gt;&lt;P&gt;As per your suggestion, I have dropped SSD MobileNet and am now trying just the MobileNet variant.&lt;/P&gt;&lt;P&gt;When I compare TfLite and Arm NN with CPU acceleration, ArmNN performs better than TfLite. On the GPU, TfLite performs better than ArmNN.&lt;/P&gt;&lt;P&gt;Do you know if this is the expected behavior? Since GPU acceleration for both TfLite and ArmNN was added in recent releases, are there some optimizations pending from NXP?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 07 Jul 2020 09:48:07 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087954#M243</guid>
      <dc:creator>ullasbharadwaj</dc:creator>
      <dc:date>2020-07-07T09:48:07Z</dc:date>
    </item>
    <item>
      <title>Re: SSD MobileNet Inference using ArmNN on i.MX8qmmek</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087955#M244</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;&lt;A class="jx-jive-macro-user" href="https://community.nxp.com/people/ullasbharadwaj"&gt;ullasbharadwaj&lt;/A&gt;‌,&lt;/P&gt;&lt;P&gt;Please share your model and the numbers you are seeing. Performance need not be the same across different inference engines; there are various factors that can cause the difference.&lt;/P&gt;&lt;P&gt;-Manish&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 07 Jul 2020 15:47:05 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087955#M244</guid>
      <dc:creator>manish_bajaj</dc:creator>
      <dc:date>2020-07-07T15:47:05Z</dc:date>
    </item>
    <item>
      <title>Re: SSD MobileNet Inference using ArmNN on i.MX8qmmek</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087956#M245</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;&lt;A class="jx-jive-macro-user" href="https://community.nxp.com/people/manishbajaj"&gt;manishbajaj&lt;/A&gt;‌&lt;/P&gt;&lt;P&gt;Model: mobilenet_v1_0.25_128_quant (attached)&lt;/P&gt;&lt;P&gt;Results:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;ArmNN&lt;/STRONG&gt;: &lt;EM&gt;&lt;STRONG&gt;CpuAcc&lt;/STRONG&gt;&lt;/EM&gt; -&amp;gt; &lt;STRONG&gt;3.105 ms&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;ArmNN: &lt;EM&gt;VsiNpu&lt;/EM&gt; -&amp;gt; 3.203 ms&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;This indicates that &lt;STRONG&gt;CpuAcc is better than VsiNpu&lt;/STRONG&gt; for ArmNN; there is no improvement with VsiNpu for this model. But for mobilenet_v1_1.0_224_quant, the results seem to improve with VsiNpu (13 ms) compared to CpuAcc (52 ms).&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;TfLite: &lt;EM&gt;CpuAcc&lt;/EM&gt; -&amp;gt; 3.6 ms&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;TfLite: VsiNpu -&amp;gt; 1.8 ms&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;This indicates that &lt;STRONG&gt;VsiNpu is better than CpuAcc&lt;/STRONG&gt; for TfLite.&lt;/P&gt;&lt;P&gt;Question: Why does TfLite perform better than ArmNN on the GPU but not on the CPU?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 08 Jul 2020 10:16:34 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087956#M245</guid>
      <dc:creator>ullasbharadwaj</dc:creator>
      <dc:date>2020-07-08T10:16:34Z</dc:date>
    </item>
    <item>
      <title>Re: SSD MobileNet Inference using ArmNN on i.MX8qmmek</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087957#M246</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;&lt;A class="jx-jive-macro-user" href="https://community.nxp.com/people/nxf60449"&gt;nxf60449&lt;/A&gt;‌,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Can you look into it and update the ticket?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;-Manish&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 08 Jul 2020 19:19:50 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087957#M246</guid>
      <dc:creator>manish_bajaj</dc:creator>
      <dc:date>2020-07-08T19:19:50Z</dc:date>
    </item>
    <item>
      <title>Re: SSD MobileNet Inference using ArmNN on i.MX8qmmek</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087958#M247</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello &lt;A class="jx-jive-macro-user" href="https://community.nxp.com/people/ullasbharadwaj"&gt;ullasbharadwaj&lt;/A&gt;‌,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;I've run some tests using the same models you attached on the i.MX8QM MEK using BSP 5.4.24-2.1.0 (newest).&lt;BR /&gt;It seems that each TfLite-Armnn example uses a specific model, so the examples that use the models you shared are:&lt;/P&gt;&lt;P&gt;TfLiteMobileNetQuantizedSoftmax-Armnn -&amp;gt; model: mobilenet_v1_0.25_128_quant.tflite&lt;BR /&gt;TfLiteMobilenetQuantized-Armnn -&amp;gt; model: mobilenet_v1_1.0_224_quant.tflite&lt;/P&gt;&lt;P&gt;Are they the same ones you tried?&lt;/P&gt;&lt;P&gt;In the tests I ran, the inference time using VsiNpu was always faster than CpuAcc. I got the following logs (the complete log is attached):&lt;/P&gt;&lt;P&gt;root@imx8qmmek:~# TfLiteMobilenetQuantized-Armnn -m model/ -d data/ -c CpuAcc&lt;BR /&gt;ArmNN v20190801&lt;BR /&gt;&lt;STRONG&gt;Average time per test case: 139.598 ms&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;root@imx8qmmek:~# TfLiteMobilenetQuantized-Armnn -m model/ -d data/ -c VsiNpu&lt;BR /&gt;ArmNN v20190801&lt;BR /&gt;&lt;STRONG&gt;Average time per test case: 12.214 ms&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;root@imx8qmmek:~# TfLiteMobileNetQuantizedSoftmax-Armnn -m model/ -d data/ -c CpuAcc&lt;BR /&gt;ArmNN v20190801&lt;BR /&gt;&lt;STRONG&gt;Average time per test case: 11.714 ms&lt;/STRONG&gt;&lt;BR /&gt;root@imx8qmmek:~/armnntest# TfLiteMobileNetQuantizedSoftmax-Armnn -m model/ -d data/ -c VsiNpu&lt;BR /&gt;ArmNN v20190801&lt;BR /&gt;&lt;STRONG&gt;Average time per test case: 2.464 ms&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;-Alifer&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 08 Jul 2020 21:20:34 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087958#M247</guid>
      <dc:creator>Alifer_Moraes</dc:creator>
      <dc:date>2020-07-08T21:20:34Z</dc:date>
    </item>
    <item>
      <title>Re: SSD MobileNet Inference using ArmNN on i.MX8qmmek</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087959#M248</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi&amp;nbsp;&lt;A class="jx-jive-macro-user" href="https://community.nxp.com/people/nxf60449"&gt;nxf60449&lt;/A&gt;,&lt;/P&gt;&lt;P&gt;Thanks for taking the time to run these tests.&lt;/P&gt;&lt;P&gt;Yes, the models I used are the same ones you mentioned.&lt;/P&gt;&lt;P&gt;The results I mentioned were obtained by running them on the dual A72 cores using "taskset -c 4-5". I am sorry I missed mentioning that detail.&lt;/P&gt;&lt;P&gt;So I ran tests similar to yours, without using taskset. The results I got are as follows:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;*************************************************&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Acceleration: CpuAcc&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;*************************************************&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Model&lt;/STRONG&gt;: mobilenet_v1_1.0_224_quant.tflite&lt;BR /&gt;&lt;STRONG&gt;ArmNN ---&amp;gt; 138.2 ms&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;TfLite ---&amp;gt; 116.6 ms&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Model&lt;/STRONG&gt;: mobilenet_v1_0.25_128_quant.tflite&lt;BR /&gt;&lt;STRONG&gt;ArmNN ---&amp;gt; 9.338 ms&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;TfLite ---&amp;gt; 6.44 ms&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;*************************************************&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Acceleration: VsiNpu&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;*************************************************&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Model&lt;/STRONG&gt;: mobilenet_v1_1.0_224_quant.tflite&lt;BR /&gt;&lt;STRONG&gt;ArmNN ---&amp;gt; 14.23 ms&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;TfLite ---&amp;gt; 12.28 ms&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Model&lt;/STRONG&gt;: mobilenet_v1_0.25_128_quant.tflite&lt;BR /&gt;&lt;STRONG&gt;ArmNN ---&amp;gt; 5.20 ms&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;TfLite ---&amp;gt; 2.25 ms&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;**************************************************&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;1.&lt;/STRONG&gt; When the tests are not run exclusively on the A72 cores, you can see a performance improvement with VsiNpu.&lt;/P&gt;&lt;P&gt;But on the &lt;STRONG&gt;A72&lt;/STRONG&gt; cores with &lt;STRONG&gt;mobilenet_v1_0.25_128_quant.tflite, I do not see an improvement with VsiNpu using ArmNN. May I know if you also see this behavior and perhaps know the cause?&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;2.&lt;/STRONG&gt; However, &lt;STRONG&gt;the TfLite interpreter always performs better than ArmNN. Should ArmNN not be more optimized compared to TfLite?&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Maybe I am wrong; I believed ArmNN would perform better than TfLite.&lt;/P&gt;&lt;P&gt;Best Regards&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 09 Jul 2020 09:13:34 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087959#M248</guid>
      <dc:creator>ullasbharadwaj</dc:creator>
      <dc:date>2020-07-09T09:13:34Z</dc:date>
    </item>
    <item>
      <title>Re: SSD MobileNet Inference using ArmNN on i.MX8qmmek</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087960#M249</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;&lt;A class="jx-jive-macro-user" href="https://community.nxp.com/people/ullasbharadwaj"&gt;ullasbharadwaj&lt;/A&gt;‌,&lt;/P&gt;&lt;P&gt;As per Alifer's test, we don't see the same numbers you are seeing. Can you confirm that you are using the latest released BSP? Are you running the same tests as we did?&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Your data&lt;/STRONG&gt;: Model: mobilenet_v1_0.25_128_quant.tflite&lt;BR /&gt;&lt;STRONG&gt;ArmNN ---&amp;gt; 5.20 ms&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;TfLite ---&amp;gt; 2.25 ms&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;root@imx8qmmek:~/armnntest# TfLiteMobileNetQuantizedSoftmax-Armnn -m model/ -d data/ -c VsiNpu&lt;BR /&gt;ArmNN v20190801&lt;BR /&gt;&lt;STRONG&gt;Average time per test case: 2.464 ms&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Model&lt;/STRONG&gt;: mobilenet_v1_1.0_224_quant.tflite&lt;BR /&gt;&lt;STRONG&gt;ArmNN ---&amp;gt; 14.23 ms&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;TfLite ---&amp;gt; 12.28 ms&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;root@imx8qmmek:~# TfLiteMobilenetQuantized-Armnn -m model/ -d data/ -c VsiNpu&lt;BR /&gt;ArmNN v20190801&lt;BR /&gt;&lt;STRONG&gt;Average time per test case: 12.214 ms&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;The performance difference between the Arm NN and TFLite runtimes can be attributed to various parameters: the list of operators supported by the runtime environment, the currently supported versions, etc.&lt;/P&gt;&lt;P&gt;-Manish&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 10 Jul 2020 19:40:48 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087960#M249</guid>
      <dc:creator>manish_bajaj</dc:creator>
      <dc:date>2020-07-10T19:40:48Z</dc:date>
    </item>
    <item>
      <title>Re: SSD MobileNet Inference using ArmNN on i.MX8qmmek</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087961#M250</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi&amp;nbsp;&lt;A class="jx-jive-macro-user" href="https://community.nxp.com/people/manishbajaj"&gt;manishbajaj&lt;/A&gt;,&lt;/P&gt;&lt;P&gt;I am using BSP version 5.4.3_2.0.0. Please find attached screenshots of the inference times on the target (imx8qmmek). I am not getting your numbers with ArmNN.&lt;/P&gt;&lt;P&gt;So can we say TfLite is better optimized than Arm NN? Or does it always depend on the model?&lt;/P&gt;&lt;P&gt;Best Regards&lt;/P&gt;&lt;P&gt;Ullas Bharadwaj&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 13 Jul 2020 10:35:44 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087961#M250</guid>
      <dc:creator>ullasbharadwaj</dc:creator>
      <dc:date>2020-07-13T10:35:44Z</dc:date>
    </item>
    <item>
      <title>Re: SSD MobileNet Inference using ArmNN on i.MX8qmmek</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087962#M251</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;&lt;A class="jx-jive-macro-user" href="https://community.nxp.com/people/ullasbharadwaj"&gt;ullasbharadwaj&lt;/A&gt;‌,&lt;/P&gt;&lt;P&gt;We don't see the same numbers you are seeing. TFLite performance might be a bit better than Arm NN, and it might depend on the model and on the supported versions of TFLite and Arm NN too.&lt;/P&gt;&lt;P&gt;I suggest trying the new BSP version, 5.4.24, too.&lt;/P&gt;&lt;P&gt;-Manish&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 13 Jul 2020 15:23:56 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087962#M251</guid>
      <dc:creator>manish_bajaj</dc:creator>
      <dc:date>2020-07-13T15:23:56Z</dc:date>
    </item>
    <item>
      <title>Re: SSD MobileNet Inference using ArmNN on i.MX8qmmek</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087963#M252</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thank you. I will give it a try too.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 13 Jul 2020 15:27:11 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/SSD-MobileNet-Inference-using-ArmNN-on-i-MX8qmmek/m-p/1087963#M252</guid>
      <dc:creator>ullasbharadwaj</dc:creator>
      <dc:date>2020-07-13T15:27:11Z</dc:date>
    </item>
  </channel>
</rss>

