<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Enable VX Delegate on Linux 6.1.1_1.0.0​ SDK in i.MX Processors</title>
    <link>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1671264#M207736</link>
    <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/205171"&gt;@Amal_Antony3331&lt;/a&gt;,&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thank you for your clarification.&lt;/P&gt;
&lt;P&gt;The reason could be that you are not specifying the external delegate; you can do this through a command-line argument. &lt;BR /&gt;In our BSP examples for TensorFlow Lite you will find the example label_image.py. You can base your external delegate implementation on that code, or use the example directly with the arguments &lt;STRONG&gt;--image&lt;/STRONG&gt; to set an image path, &lt;STRONG&gt;--model_file&lt;/STRONG&gt; to set the model, and &lt;STRONG&gt;--ext_delegate&lt;/STRONG&gt; to set the external delegate.&lt;/P&gt;
&lt;P&gt;Example of the argument parser used in label_image.py for the external delegate:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Brian_Ibarra_1-1686934564356.png" style="width: 400px;"&gt;&lt;img src="https://community.nxp.com/t5/image/serverpage/image-id/228160iA7F073142E424A5F/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Brian_Ibarra_1-1686934564356.png" alt="Brian_Ibarra_1-1686934564356.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;I ran a test and these are the results:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;$ python3 label_image.py&lt;/LI-CODE&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Brian_Ibarra_2-1686934578816.png" style="width: 473px;"&gt;&lt;img src="https://community.nxp.com/t5/image/serverpage/image-id/228161i00D4838FACEE57ED/image-dimensions/473x129?v=v2" width="473" height="129" role="button" title="Brian_Ibarra_2-1686934578816.png" alt="Brian_Ibarra_2-1686934578816.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;$ python3 label_image.py --ext_delegate=/usr/lib/libvx_delegate.so&lt;/LI-CODE&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Brian_Ibarra_3-1686934592811.png" style="width: 679px;"&gt;&lt;img src="https://community.nxp.com/t5/image/serverpage/image-id/228162iFFE3990E4EC6F887/image-dimensions/679x219?v=v2" width="679" height="219" role="button" title="Brian_Ibarra_3-1686934592811.png" alt="Brian_Ibarra_3-1686934592811.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;You can see the difference between running without the external delegate and running with the NPU enabled on the i.MX8M Plus: inference time drops from 157.3 ms to 4.0 ms.&lt;/P&gt;
&lt;P&gt;In conclusion, you will need to review &lt;STRONG&gt;label_image.py&lt;/STRONG&gt; and implement the argument parser and the code section that loads the external delegate in your &lt;STRONG&gt;val.py&lt;/STRONG&gt; file.&lt;/P&gt;
&lt;P&gt;Link to the label_image.py code:&amp;nbsp;&lt;A href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/examples/python/label_image.py" target="_blank"&gt;tensorflow/tensorflow/lite/examples/python/label_image.py at master · tensorflow/tensorflow · GitHub&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;I hope this answer will be helpful.&lt;/P&gt;
&lt;P&gt;Best regards, Brian.&lt;/P&gt;</description>
    <pubDate>Fri, 16 Jun 2023 16:59:06 GMT</pubDate>
    <dc:creator>brian14</dc:creator>
    <dc:date>2023-06-16T16:59:06Z</dc:date>
    <item>
      <title>Enable VX Delegate on Linux 6.1.1_1.0.0​ SDK</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1669429#M207611</link>
      <description>&lt;P&gt;Dear team,&lt;/P&gt;&lt;P&gt;I'm using the Linux 6.1.1_1.0.0​ SDK on an i.MX8M+ custom board.&lt;/P&gt;&lt;P&gt;I tried to run the validation script from the Ultralytics yolov5 repository with the open-source yolov5s model on a detection dataset.&lt;/P&gt;&lt;P&gt;Class&amp;nbsp;&amp;nbsp; Images&amp;nbsp;&amp;nbsp; Instances&amp;nbsp;&amp;nbsp; P&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; R&amp;nbsp;&amp;nbsp;&amp;nbsp; mAP50&amp;nbsp;&amp;nbsp; mAP50-95: 100%| 128/128 [04:46&amp;lt;00:00, 2.24s/it]&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; all&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 128&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 929&amp;nbsp;&amp;nbsp; 0.726 0.581&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0.679&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0.427&lt;BR /&gt;&lt;BR /&gt;Speed: 3.9ms pre-process, 2181.9ms inference, 27.2ms NMS per image at shape (1, 3, 640, 640)&lt;/P&gt;&lt;P&gt;The inference time observed is about 2 seconds. Is this because the default inference backend of the TFLite implementation is the CPU?&lt;/P&gt;&lt;P&gt;How can we enable/use the GPU/NPU hardware accelerator with the VX Delegate on i.MX8M+?&lt;/P&gt;&lt;P&gt;Any help is appreciated.&lt;/P&gt;&lt;P&gt;Regards&lt;BR /&gt;Amal&lt;/P&gt;</description>
      <pubDate>Wed, 14 Jun 2023 15:31:49 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1669429#M207611</guid>
      <dc:creator>Amal_Antony3331</dc:creator>
      <dc:date>2023-06-14T15:31:49Z</dc:date>
    </item>
    <item>
      <title>Re: Enable VX Delegate on Linux 6.1.1_1.0.0​ SDK</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1669639#M207621</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/205171"&gt;@Amal_Antony3331&lt;/a&gt;,&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;For this situation, I have two suggestions:&lt;BR /&gt;&lt;BR /&gt;The first one is to use the benchmark_model tool included in the current BSP.&lt;BR /&gt;This tool will give you the average inference time for your model.&lt;BR /&gt;You can use it as follows:&lt;BR /&gt;&lt;BR /&gt;&amp;nbsp;1. Go to the TensorFlow Lite examples folder:&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;$ cd /usr/bin/tensorflow-lite-2.x.x/examples&lt;/LI-CODE&gt;
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&amp;nbsp;2. Perform benchmark using CPU with 4 cores running.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;$ ./benchmark_model --graph=yolov5s-32fp-256.tflite --num_runs=50 --num_threads=4&lt;/LI-CODE&gt;
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&amp;nbsp;3. Perform benchmark using NPU with VX Delegate.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;$ ./benchmark_model --graph=yolov5s-32fp-256.tflite --num_runs=50 --external_delegate_path=/usr/lib/libvx_delegate.so&lt;/LI-CODE&gt;
&lt;P&gt;&lt;BR /&gt;With steps 2 and 3 you will see a difference in inference times of roughly 100 ms (around 125 ms on the CPU versus around 30 ms on the NPU).&lt;BR /&gt;&lt;BR /&gt;The second suggestion is to use GStreamer + NNStreamer. With these tools you can run the model on live video streaming, and you can also modify the pipeline to run on a static image.&lt;BR /&gt;The part of this pipeline that makes it run on the NPU is "custom=Delegate:External,ExtDelegateLib:libvx_delegate.so".&lt;BR /&gt;&lt;BR /&gt;Pipeline example:&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;$ gst-launch-1.0 --no-position v4l2src device=/dev/video3 ! \ video/x-raw,width=640,height=480,framerate=30/1! \ tee name=t t. ! queue max-size-buffers=2 leaky=2 ! \ imxvideoconvert_g2d ! video/x-raw,width=256,height=256,format=RGBA ! \ videoconvert ! video/x-raw,format=RGB ! \ tensor_converter ! \ tensor_filter framework=tensorflow-lite model=yolov5s_quant_256.tflite \ custom=Delegate:External,ExtDelegateLib:libvx_delegate.so ! \ tensor_decoder mode=bounding_boxes option1=yolov5 option2=coco_label.txt \ option4=640:480 option5=256:256 ! \ mix. t. ! queue max-size-buffers=2 ! \ imxcompositor_g2d name=mix sink_0::zorder=2 sink_1::zorder=1 ! waylandsink&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I hope this information will be helpful.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Have a great day!&lt;/P&gt;</description>
      <pubDate>Wed, 14 Jun 2023 22:23:16 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1669639#M207621</guid>
      <dc:creator>brian14</dc:creator>
      <dc:date>2023-06-14T22:23:16Z</dc:date>
    </item>
    <item>
      <title>Re: Enable VX Delegate on Linux 6.1.1_1.0.0​ SDK</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1669743#M207624</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/207096"&gt;@brian14&lt;/a&gt;&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;Thank you so much for the response.&lt;/P&gt;&lt;P&gt;One more query for clarification: I'm using a custom-trained yolov5 model. So if I run benchmark_model to get the average inference time, on which dataset/labels is this operation performed?&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Also, since benchmark_model can use the VX delegate, val.py (from the yolov5 repository) should also be able to use it, right?&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 15 Jun 2023 04:15:03 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1669743#M207624</guid>
      <dc:creator>Amal_Antony3331</dc:creator>
      <dc:date>2023-06-15T04:15:03Z</dc:date>
    </item>
    <item>
      <title>Re: Enable VX Delegate on Linux 6.1.1_1.0.0​ SDK</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1670534#M207679</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/205171"&gt;@Amal_Antony3331&lt;/a&gt;,&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The benchmark_model is part of the benchmarking tools provided by the TensorFlow framework. For benchmark_model you don't need to specify input data or a labels file; it measures inference time only, not accuracy.&lt;/P&gt;
&lt;P&gt;You will find more information about benchmark tools for TensorFlow Lite at the following link:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.tensorflow.org/lite/performance/measurement#benchmark_tools" target="_blank"&gt;Performance measurement &amp;nbsp;|&amp;nbsp; TensorFlow Lite&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Also, you will find information about benchmark_model command line and how to use it at this link:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tools/benchmark/README.md" target="_blank"&gt;tensorflow/tensorflow/lite/tools/benchmark/README.md at master · tensorflow/tensorflow · GitHub&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Finally, I'm not sure about val.py; I'm assuming that you are exporting the yolov5 model to .tflite format and running it with GStreamer and NNStreamer on videos or photos.&lt;/P&gt;
&lt;P&gt;I hope this information will be helpful.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Have a great day!&lt;/P&gt;</description>
      <pubDate>Thu, 15 Jun 2023 19:00:17 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1670534#M207679</guid>
      <dc:creator>brian14</dc:creator>
      <dc:date>2023-06-15T19:00:17Z</dc:date>
    </item>
    <item>
      <title>Re: Enable VX Delegate on Linux 6.1.1_1.0.0​ SDK</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1670776#M207699</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/207096"&gt;@brian14&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for the response.&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;SPAN class=""&gt;Could you please guide me on how to enable vx_delegate for a custom python code written for inferencing and benchmarking a .tflite model.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 16 Jun 2023 05:23:03 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1670776#M207699</guid>
      <dc:creator>Amal_Antony3331</dc:creator>
      <dc:date>2023-06-16T05:23:03Z</dc:date>
    </item>
    <item>
      <title>Re: Enable VX Delegate on Linux 6.1.1_1.0.0​ SDK</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1670895#M207712</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/205171"&gt;@Amal_Antony3331&lt;/a&gt;,&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I'm not sure exactly what you need, but I can think of two approaches:&lt;/P&gt;
&lt;P&gt;For the first one, I'm assuming that you are trying to automate inferencing and benchmarking for the same model. In that case, you could write a Python or bash script that applies a GStreamer pipeline or invokes benchmark_model as in the first example.&lt;BR /&gt;You can follow this guide:&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://brettviren.github.io/pygst-tutorial-org/pygst-tutorial.html" target="_blank" rel="noopener"&gt;Python GStreamer Tutorial (brettviren.github.io)&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;On the other hand, I'm assuming that you need to develop a custom Python code based on a model such as yolo, mobilenet, resnet, etc., and then convert it to tflite. For this case, you can follow these guides:&lt;BR /&gt;&lt;A href="https://www.tensorflow.org/lite/models/convert" target="_blank" rel="noopener"&gt;Model conversion overview &amp;nbsp;|&amp;nbsp; TensorFlow Lite&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://medium.com/techwasti/tensorflow-lite-converter-dl-example-febe804b8673" target="_blank" rel="noopener"&gt;Tensorflow Lite Converter Example!! | by Maheshwar Ligade | techwasti | Medium&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Finally, you can look at our eIQ software; with this tool you can convert, quantize, or train your models.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.nxp.com/design/software/development-software/eiq-ml-development-environment/eiq-toolkit-for-end-to-end-model-development-and-deployment:EIQ-TOOLKIT" target="_blank" rel="noopener"&gt;eIQ® Toolkit | NXP Semiconductors&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Note: Keep in mind that you always need to export to a .tflite model to properly apply the VX Delegate.&lt;/P&gt;
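One common pattern for pointing application code at the delegate library is a small per-OS map; this is a hypothetical sketch (only the Linux VX delegate path comes from this thread, the function name is illustrative):

```python
import platform

# Hypothetical per-OS map of external delegate libraries. Only the Linux
# entry (the NXP VX delegate on the i.MX8M Plus BSP) comes from this thread.
DELEGATE_LIBS = {
    "Linux": "/usr/lib/libvx_delegate.so",
}

def delegate_for(system=None):
    """Return the delegate library path for the given (or current) OS."""
    return DELEGATE_LIBS.get(system or platform.system())

print(delegate_for("Linux"))
```

The selected path would then be passed to the TFLite interpreter as the external delegate when running the exported .tflite model.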
&lt;P&gt;Best regards, Brian.&lt;/P&gt;</description>
      <pubDate>Fri, 16 Jun 2023 07:30:52 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1670895#M207712</guid>
      <dc:creator>brian14</dc:creator>
      <dc:date>2023-06-16T07:30:52Z</dc:date>
    </item>
    <item>
      <title>Re: Enable VX Delegate on Linux 6.1.1_1.0.0​ SDK</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1671039#M207721</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/207096"&gt;@brian14&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I can provide you some more points to clarify my concern.&lt;/P&gt;&lt;P&gt;I have a custom .tflite model that we have trained on specific classes according to our requirement.&lt;/P&gt;&lt;P&gt;I tried to run the same .tflite model on two BSP versions:&lt;/P&gt;&lt;P&gt;Case 1:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;root@imx8mpevk:/usr/lib/python3.9/yolov5# uname -a
Linux imx8mpevk 5.10.35-lts-5.10.y+gdd2583ce6e52 #1 SMP PREEMPT Tue Jun 8 14:42:10 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'm able to run val.py with the VX delegate:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;root@imx8mpevk:/usr/lib/python3.9/yolov5# python3 val.py --weights custom_model-int8.tflite --data data/coco128_custom.yaml --img 64

As expected getting some delegate logs also. See below

YOLOv5 🚀 v7.0-23-g5dc1ce4 Python-3.9.4 torch-1.7.1 CPU

Loading custom_model-int8.tflite for TensorFlow Lite inference...
Initialize supported_builtins
Check Resize(0)
Check Resize(0)
Check  StridedSlice
Check  StridedSlice
Check  StridedSlice
Check  StridedSlice
Check  StridedSlice
Check  StridedSlice
Check  StridedSlice
Check  StridedSlice
Check  StridedSlice
vx_delegate Delegate::Init
Initialize supported_builtins
Delegate::Prepare node:0xaaaae35aae40
Applied VX delegate.

Speed: 3.7ms pre-process, 223.5ms inference, 4.3ms NMS per image at shape (1, 3, 640, 640)&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Inference time observed is 223.5 ms&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Case 2:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;root@imx8mpevk:~/benchmark/yolov5# uname -a
Linux imx8mpevk 6.1.1+g29549c7073bf #1 SMP PREEMPT Thu Mar  2 14:54:17 UTC 2023 aarch64 GNU/Linux&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I tried running val.py with all conditions the same as above.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;root@imx8mpevk:~/benchmark/yolov5# python3 val.py --weights custom_model-int8.tflite --data data/custom_coco128.yaml --img 640

YOLOv5 🚀 v7.0-23-g5dc1ce4 Python-3.10.6 torch-1.11.0 CPU

Loading custom_model-int8.tflite for TensorFlow Lite inference...
Forcing --batch-size 1 square inference (1,3,640,640) for non-PyTorch models

Speed: 4.1ms pre-process, 2070.1ms inference, 2.8ms NMS per image at shape (1, 3, 640, 640)&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Here the inference time observed is 2070 ms, and there are no logs related to vx_delegate.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If the exported .tflite model works on one BSP version, it should also work on the latest BSP version, right?&lt;/P&gt;&lt;P&gt;What could be the possible reason for this behavior?&lt;/P&gt;&lt;P&gt;Thanks in advance&lt;/P&gt;</description>
      <pubDate>Fri, 16 Jun 2023 10:05:06 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1671039#M207721</guid>
      <dc:creator>Amal_Antony3331</dc:creator>
      <dc:date>2023-06-16T10:05:06Z</dc:date>
    </item>
    <item>
      <title>Re: Enable VX Delegate on Linux 6.1.1_1.0.0​ SDK</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1671264#M207736</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/205171"&gt;@Amal_Antony3331&lt;/a&gt;,&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thank you for your clarification.&lt;/P&gt;
&lt;P&gt;The reason could be that you are not specifying the external delegate; you can do this through a command-line argument. &lt;BR /&gt;In our BSP examples for TensorFlow Lite you will find the example label_image.py. You can base your external delegate implementation on that code, or use the example directly with the arguments &lt;STRONG&gt;--image&lt;/STRONG&gt; to set an image path, &lt;STRONG&gt;--model_file&lt;/STRONG&gt; to set the model, and &lt;STRONG&gt;--ext_delegate&lt;/STRONG&gt; to set the external delegate.&lt;/P&gt;
&lt;P&gt;Example of the argument parser used in label_image.py for the external delegate:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Brian_Ibarra_1-1686934564356.png" style="width: 400px;"&gt;&lt;img src="https://community.nxp.com/t5/image/serverpage/image-id/228160iA7F073142E424A5F/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Brian_Ibarra_1-1686934564356.png" alt="Brian_Ibarra_1-1686934564356.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;I ran a test and these are the results:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;$ python3 label_image.py&lt;/LI-CODE&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Brian_Ibarra_2-1686934578816.png" style="width: 473px;"&gt;&lt;img src="https://community.nxp.com/t5/image/serverpage/image-id/228161i00D4838FACEE57ED/image-dimensions/473x129?v=v2" width="473" height="129" role="button" title="Brian_Ibarra_2-1686934578816.png" alt="Brian_Ibarra_2-1686934578816.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;$ python3 label_image.py --ext_delegate=/usr/lib/libvx_delegate.so&lt;/LI-CODE&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Brian_Ibarra_3-1686934592811.png" style="width: 679px;"&gt;&lt;img src="https://community.nxp.com/t5/image/serverpage/image-id/228162iFFE3990E4EC6F887/image-dimensions/679x219?v=v2" width="679" height="219" role="button" title="Brian_Ibarra_3-1686934592811.png" alt="Brian_Ibarra_3-1686934592811.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;You can see the difference between running without the external delegate and running with the NPU enabled on the i.MX8M Plus: inference time drops from 157.3 ms to 4.0 ms.&lt;/P&gt;
&lt;P&gt;In conclusion, you will need to review &lt;STRONG&gt;label_image.py&lt;/STRONG&gt; and implement the argument parser and the code section that loads the external delegate in your &lt;STRONG&gt;val.py&lt;/STRONG&gt; file.&lt;/P&gt;
&lt;P&gt;Link to the label_image.py code:&amp;nbsp;&lt;A href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/examples/python/label_image.py" target="_blank"&gt;tensorflow/tensorflow/lite/examples/python/label_image.py at master · tensorflow/tensorflow · GitHub&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;I hope this answer will be helpful.&lt;/P&gt;
&lt;P&gt;Best regards, Brian.&lt;/P&gt;</description>
      <pubDate>Fri, 16 Jun 2023 16:59:06 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1671264#M207736</guid>
      <dc:creator>brian14</dc:creator>
      <dc:date>2023-06-16T16:59:06Z</dc:date>
    </item>
    <item>
      <title>Re: Enable VX Delegate on Linux 6.1.1_1.0.0​ SDK</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1671761#M207787</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/207096"&gt;@brian14&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have modified &lt;SPAN&gt;common.py&lt;/SPAN&gt; (&lt;A href="https://github.com/ultralytics/yolov5/blob/master/models/common.py#L457" target="_blank"&gt;https://github.com/ultralytics/yolov5/blob/master/models/common.py#L457&lt;/A&gt;) as follows:&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;'Linux': '/usr/lib/libvx_delegate.so' instead of 'Linux': 'libedgetpu.so.1',&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I ran the val.py script with the open-source yolov5s.tflite model (renamed to yolov5s-int8_edgetpu.tflite).&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt; $ python3 val.py --weights yolov5s-int8_edgetpu.tflite --data data/coco128.yaml --img 640&lt;/LI-CODE&gt;&lt;P&gt;&lt;SPAN&gt;Some logs related to the VX delegate are printed, but the mAP value is zero.&lt;/SPAN&gt;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;root@imx8mpevk:~/benchmark/yolov5# python3 val.py --weights yolov5s-int8_edgetpu.tflite --data data/coco128.yaml --img 640
/usr/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension:
  warn(f"Failed to load image Python extension: {e}")
val: data=data/coco128.yaml, weights=['yolov5s-int8_edgetpu.tflite'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.6, max_det=300, task=val, device=, workers=8, single_cls=
False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=False, project=runs/val, name=exp, exist_ok=False, half=False, dnn=False
YOLOv5 🚀 v7.0-23-g5dc1ce4 Python-3.10.6 torch-1.11.0 CPU

Loading yolov5s-int8_edgetpu.tflite for TensorFlow Lite Edge TPU inference...
Vx delegate: allowed_cache_mode set to 0.
Vx delegate: device num set to 0.
Vx delegate: allowed_builtin_code set to 0.
Vx delegate: error_during_init set to 0.
Vx delegate: error_during_prepare set to 0.
Vx delegate: error_during_invoke set to 0.
Forcing --batch-size 1 square inference (1,3,640,640) for non-PyTorch models
val: Scanning /home/root/benchmark/datasets/coco128/labels/train2017.cache... 126 images, 2 backgrounds, 0 corrupt: 100%|██████████| 128/128 [00:00&amp;lt;?, ?it/s]
                 Class     Images  Instances          P          R      mAP50   mAP50-95:   0%|          | 0/128 [00:00&amp;lt;?, ?it/s]W [HandleLayoutInfer:278]Op 162: default layout inference pass.
W [HandleLayoutInfer:278]Op 162: default layout inference pass.
W [HandleLayoutInfer:278]Op 162: default layout inference pass.
W [HandleLayoutInfer:278]Op 162: default layout inference pass.
W [HandleLayoutInfer:278]Op 162: default layout inference pass.
W [HandleLayoutInfer:278]Op 162: default layout inference pass.
                 Class     Images  Instances          P          R      mAP50   mAP50-95:  16%|█▋        | 21/128 [00:27&amp;lt;00:28,  3.71it/s]WARNING ⚠️ NMS time limit 0.550s exceeded
                 Class     Images  Instances          P          R      mAP50   mAP50-95:  18%|█▊        | 23/128 [00:28&amp;lt;00:57,  1.84it/s]WARNING ⚠️ NMS time limit 0.550s exceeded
                 Class     Images  Instances          P          R      mAP50   mAP50-95:  26%|██▌       | 33/128 [00:32&amp;lt;00:24,  3.93it/s]WARNING ⚠️ NMS time limit 0.550s exceeded
                 Class     Images  Instances          P          R      mAP50   mAP50-95:  29%|██▉       | 37/128 [00:34&amp;lt;00:32,  2.79it/s]WARNING ⚠️ NMS time limit 0.550s exceeded
                 Class     Images  Instances          P          R      mAP50   mAP50-95:  34%|███▎      | 43/128 [00:38&amp;lt;00:33,  2.53it/s]WARNING ⚠️ NMS time limit 0.550s exceeded
                 Class     Images  Instances          P          R      mAP50   mAP50-95:  37%|███▋      | 47/128 [00:39&amp;lt;00:27,  2.96it/s]WARNING ⚠️ NMS time limit 0.550s exceeded
                 Class     Images  Instances          P          R      mAP50   mAP50-95:  41%|████      | 52/128 [00:42&amp;lt;00:28,  2.64it/s]WARNING ⚠️ NMS time limit 0.550s exceeded
                 Class     Images  Instances          P          R      mAP50   mAP50-95:  80%|███████▉  | 102/128 [00:57&amp;lt;00:09,  2.74it/s]WARNING ⚠️ NMS time limit 0.550s exceeded
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 128/128 [01:04&amp;lt;00:00,  1.98it/s]
                   all        128        929          0          0          0          0
Speed: 4.2ms pre-process, 344.9ms inference, 135.9ms NMS per image at shape (1, 3, 640, 640)&lt;/LI-CODE&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;Here the inference time is 344 ms and the mAP value is zero for all classes.&lt;BR /&gt;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;What could be the possible reason for this?&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;</description>
      <pubDate>Mon, 19 Jun 2023 08:49:31 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1671761#M207787</guid>
      <dc:creator>Amal_Antony3331</dc:creator>
      <dc:date>2023-06-19T08:49:31Z</dc:date>
    </item>
    <item>
      <title>Re: Enable VX Delegate on Linux 6.1.1_1.0.0​ SDK</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1671776#M207790</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/207096"&gt;@brian14&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please find the attached&amp;nbsp; model file that I tried out.&lt;/P&gt;</description>
      <pubDate>Mon, 19 Jun 2023 08:58:07 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1671776#M207790</guid>
      <dc:creator>Amal_Antony3331</dc:creator>
      <dc:date>2023-06-19T08:58:07Z</dc:date>
    </item>
    <item>
      <title>Re: Enable VX Delegate on Linux 6.1.1_1.0.0​ SDK</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1672934#M207883</link>
      <description>&lt;P&gt;Thank you for your reply.&lt;/P&gt;&lt;P&gt;From your answer I can see that you are using a model compiled for the Edge TPU. This is the Tensor Processing Unit for Google devices, and it is not compatible with the i.MX8M Plus NPU.&lt;BR /&gt;I think you will need to work more on common.py to effectively use our NPU. I can see that the common.py you are using is prepared to work with other embedded systems, especially the TPU in Google's Coral devices.&lt;/P&gt;&lt;P&gt;My suggestion is still to use the label_image.py script described in my last reply and to implement the YOLOv5 model in TensorFlow Lite model format.&lt;/P&gt;&lt;P&gt;Have a great day!&lt;/P&gt;</description>
      <pubDate>Tue, 20 Jun 2023 17:51:27 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Enable-VX-Delegate-on-Linux-6-1-1-1-0-0-SDK/m-p/1672934#M207883</guid>
      <dc:creator>brian14</dc:creator>
      <dc:date>2023-06-20T17:51:27Z</dc:date>
    </item>
  </channel>
</rss>

