<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>i.MX Processors topic: Re: How to use the NPU delegate for running inference from Python</title>
    <link>https://community.nxp.com/t5/i-MX-Processors/How-to-use-the-NPU-delegate-for-running-inference-from-Python/m-p/1763726#M216384</link>
    <description>&lt;P&gt;Hi Brian,&lt;/P&gt;&lt;P&gt;Thank you for your support.&lt;/P&gt;&lt;P&gt;I printed the delegate to check whether it shows as External (as expected when using the NPU).&lt;/P&gt;&lt;P&gt;I will look into the code you shared.&lt;/P&gt;&lt;P&gt;Thanks once again!&lt;/P&gt;</description>
    <pubDate>Mon, 27 Nov 2023 05:13:34 GMT</pubDate>
    <dc:creator>amrithkrish</dc:creator>
    <dc:date>2023-11-27T05:13:34Z</dc:date>
    <item>
      <title>How to use the NPU delegate for running inference from Python</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/How-to-use-the-NPU-delegate-for-running-inference-from-Python/m-p/1758902#M215871</link>
      <description>&lt;P&gt;Hello team,&lt;/P&gt;&lt;P&gt;I have a custom TFLite model for object detection, and I want to run inference with it on an i.MX 8M Plus board.&lt;/P&gt;&lt;P&gt;The Python script I have written performs inference with the default delegate, the XNNPACK delegate for CPU.&lt;/P&gt;&lt;P&gt;I want to use the NPU for inference on the board, so I tried changing the delegate to libvx_delegate in the tflite.Interpreter in my script, as shown:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;delegate = tflite.load_delegate('/usr/lib/libvx_delegate.so')
ModelInterpreter = tflite.Interpreter(model_path=ModelPath, experimental_delegates=[delegate])&lt;/LI-CODE&gt;&lt;P&gt;However, when I printed the delegate I used, it shows as:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;Delegate used :  &amp;lt;tflite_runtime.interpreter.Delegate object at 0xffff70472390&amp;gt;&lt;/LI-CODE&gt;&lt;P&gt;&lt;STRONG&gt;What should I change or add in my Python script so that it uses the NPU for inference?&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Thanks in advance!&lt;/P&gt;</description>
      <pubDate>Fri, 17 Nov 2023 08:23:21 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/How-to-use-the-NPU-delegate-for-running-inference-from-Python/m-p/1758902#M215871</guid>
      <dc:creator>amrithkrish</dc:creator>
      <dc:date>2023-11-17T08:23:21Z</dc:date>
    </item>
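The question above boils down to loading the VX external delegate before constructing the interpreter. A minimal sketch follows; the path /usr/lib/libvx_delegate.so is the usual location on NXP BSP images, but the helper names (delegate_paths, make_interpreter) and the CPU-fallback behaviour are illustrative assumptions, not an NXP-provided API.

```python
# Sketch: run TFLite inference with the VX (NPU) delegate on i.MX 8M Plus.
# Helper names and fallback logic are assumptions for illustration.
import os

VX_DELEGATE = "/usr/lib/libvx_delegate.so"  # typical location on NXP BSP images

def delegate_paths(candidate=VX_DELEGATE):
    """Return the delegate .so path(s) to load, or an empty list off-board."""
    return [candidate] if os.path.exists(candidate) else []

def make_interpreter(model_path, paths=None):
    """Build a tflite Interpreter, attaching the NPU delegate when present."""
    from tflite_runtime import interpreter as tflite  # deferred: board-only
    delegates = [tflite.load_delegate(p) for p in (paths or delegate_paths())]
    interp = tflite.Interpreter(model_path=model_path,
                                experimental_delegates=delegates)
    interp.allocate_tensors()
    return interp
```

Note that printing the Delegate object only confirms the .so was loaded, not that ops actually run on the NPU; on NXP images the VX delegate typically logs "Vx delegate: ..." lines on the console at load time, which is a better indicator.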
    <item>
      <title>Re: How to use the NPU delegate for running inference from Python</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/How-to-use-the-NPU-delegate-for-running-inference-from-Python/m-p/1762095#M216206</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/222715"&gt;@amrithkrish&lt;/a&gt;,&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you for contacting NXP Support!&lt;/P&gt;&lt;P&gt;I have reviewed your Python code, and it seems that you are loading the external delegate correctly.&lt;BR /&gt;However, I'm not sure why you are trying to print the delegate.&lt;/P&gt;&lt;P&gt;If you want to obtain the output from your model using the NPU, you will need to implement code based on the following example:&lt;/P&gt;&lt;P&gt;&lt;A href="https://github.com/nxp-imx/tflite-vx-delegate-imx/blob/lf-6.1.36_2.1.0/examples/python/label_image.py" target="_blank"&gt;tflite-vx-delegate-imx/examples/python/label_image.py at lf-6.1.36_2.1.0 · nxp-imx/tflite-vx-delegate-imx · GitHub&lt;/A&gt;&lt;/P&gt;&lt;P&gt;I hope this information is helpful.&lt;/P&gt;&lt;P&gt;Have a great day!&lt;/P&gt;</description>
      <pubDate>Wed, 22 Nov 2023 16:55:30 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/How-to-use-the-NPU-delegate-for-running-inference-from-Python/m-p/1762095#M216206</guid>
      <dc:creator>brian14</dc:creator>
      <dc:date>2023-11-22T16:55:30Z</dc:date>
    </item>
    <item>
      <title>Re: How to use the NPU delegate for running inference from Python</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/How-to-use-the-NPU-delegate-for-running-inference-from-Python/m-p/1763726#M216384</link>
      <description>&lt;P&gt;Hi Brian,&lt;/P&gt;&lt;P&gt;Thank you for your support.&lt;/P&gt;&lt;P&gt;I printed the delegate to check whether it shows as External (as expected when using the NPU).&lt;/P&gt;&lt;P&gt;I will look into the code you shared.&lt;/P&gt;&lt;P&gt;Thanks once again!&lt;/P&gt;</description>
      <pubDate>Mon, 27 Nov 2023 05:13:34 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/How-to-use-the-NPU-delegate-for-running-inference-from-Python/m-p/1763726#M216384</guid>
      <dc:creator>amrithkrish</dc:creator>
      <dc:date>2023-11-27T05:13:34Z</dc:date>
    </item>
    <item>
      <title>Re: How to use the NPU delegate for running inference from Python</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/How-to-use-the-NPU-delegate-for-running-inference-from-Python/m-p/1766873#M216666</link>
      <description>&lt;P&gt;Hi Brian,&lt;/P&gt;&lt;P&gt;I checked the code that was shared and implemented a similar Python script. However, my custom model takes around 35 seconds per inference when libvx_delegate.so is used (excluding warm-up time).&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;So my questions are:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;1. Is this expected?&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;2. Does the model influence the inference time? (For example, the model size?)&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;3. Can this time be reduced by using NNStreamer instead of running it from Python?&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Thanks in advance!&lt;/P&gt;</description>
      <pubDate>Fri, 01 Dec 2023 03:03:07 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/How-to-use-the-NPU-delegate-for-running-inference-from-Python/m-p/1766873#M216666</guid>
      <dc:creator>amrithkrish</dc:creator>
      <dc:date>2023-12-01T03:03:07Z</dc:date>
    </item>
    <item>
      <title>Re: How to use the NPU delegate for running inference from Python</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/How-to-use-the-NPU-delegate-for-running-inference-from-Python/m-p/1767350#M216710</link>
      <description>&lt;P&gt;Thank you for your reply.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;1. Is this expected?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;It depends on your model, but 35 seconds per inference is very low performance.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;2. Does the model influence the inference time? (For example, the model size?)&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Yes. In your implementation, the model accounts for most of the time.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;3. Can this time be reduced by using NNStreamer instead of running it from Python?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;It seems the problem lies in your model's optimization. I suggest reviewing your model in detail by profiling it and running benchmarks.&lt;BR /&gt;Please have a look at the i.MX Machine Learning User's Guide, specifically Section 9, "NN Execution on Hardware Accelerators":&lt;BR /&gt;&lt;A href="https://www.nxp.com/docs/en/user-guide/IMX-MACHINE-LEARNING-UG.pdf" target="_blank"&gt;i.MX Machine Learning User's Guide (nxp.com)&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Have a great day!&lt;/P&gt;</description>
      <pubDate>Fri, 01 Dec 2023 14:47:10 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/How-to-use-the-NPU-delegate-for-running-inference-from-Python/m-p/1767350#M216710</guid>
      <dc:creator>brian14</dc:creator>
      <dc:date>2023-12-01T14:47:10Z</dc:date>
    </item>
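The profiling Brian suggests can be started with the TFLite benchmark_model tool that NXP ships in its Yocto images. A sketch follows; the install path varies by BSP release (the glob below is an assumption), and MODEL is a placeholder for your own .tflite file.

```shell
# Sketch: compare CPU vs. NPU inference time (paths are assumptions; check your BSP).
MODEL=/path/to/model.tflite                            # placeholder: your custom model
BENCH=/usr/bin/tensorflow-lite-*/examples/benchmark_model  # glob resolves per BSP release

# CPU (XNNPACK) baseline
$BENCH --graph="$MODEL" --num_runs=50

# NPU via the VX external delegate
$BENCH --graph="$MODEL" --num_runs=50 \
       --external_delegate_path=/usr/lib/libvx_delegate.so
```

A large gap between the two runs with most ops still on CPU usually points at unsupported or non-quantized ops forcing fallback, which the User's Guide's accelerator section covers.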
    <item>
      <title>Re: How to use the NPU delegate for running inference from Python</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/How-to-use-the-NPU-delegate-for-running-inference-from-Python/m-p/1769584#M216928</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/207096"&gt;@brian14&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have one more question. When I try running my &lt;STRONG&gt;custom YOLOv3 model&lt;/STRONG&gt; using the &lt;STRONG&gt;inference script&lt;/STRONG&gt; you mentioned before, I get a &lt;STRONG&gt;segmentation fault&lt;/STRONG&gt;, but the script runs fine with other models.&lt;/P&gt;&lt;P&gt;Why is this error produced for this particular model? Can the size of the model be a factor?&lt;/P&gt;&lt;P&gt;Thanks in advance!&lt;/P&gt;</description>
      <pubDate>Wed, 06 Dec 2023 07:59:40 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/How-to-use-the-NPU-delegate-for-running-inference-from-Python/m-p/1769584#M216928</guid>
      <dc:creator>amrithkrish</dc:creator>
      <dc:date>2023-12-06T07:59:40Z</dc:date>
    </item>
    <item>
      <title>Re: How to use the NPU delegate for running inference from Python</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/How-to-use-the-NPU-delegate-for-running-inference-from-Python/m-p/1772356#M217161</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/222715"&gt;@amrithkrish&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;Based on the error you describe, it is possible that it is related to the model size.&lt;/P&gt;</description>
      <pubDate>Mon, 11 Dec 2023 14:21:39 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/How-to-use-the-NPU-delegate-for-running-inference-from-Python/m-p/1772356#M217161</guid>
      <dc:creator>brian14</dc:creator>
      <dc:date>2023-12-11T14:21:39Z</dc:date>
    </item>
    <item>
      <title>Re: How to use the NPU delegate for running inference from Python</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/How-to-use-the-NPU-delegate-for-running-inference-from-Python/m-p/1772823#M217198</link>
      <description>&lt;P&gt;Thank you for the information.&lt;/P&gt;</description>
      <pubDate>Tue, 12 Dec 2023 06:30:45 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/How-to-use-the-NPU-delegate-for-running-inference-from-Python/m-p/1772823#M217198</guid>
      <dc:creator>amrithkrish</dc:creator>
      <dc:date>2023-12-12T06:30:45Z</dc:date>
    </item>
  </channel>
</rss>

