<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Apply Deeplab+yolo examples on NPU - VERY SLOW in i.MX Processors</title>
    <link>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1364453#M182406</link>
    <description>&lt;P&gt;Thank you very much&lt;/P&gt;</description>
    <pubDate>Mon, 01 Nov 2021 09:05:25 GMT</pubDate>
    <dc:creator>horst127</dc:creator>
    <dc:date>2021-11-01T09:05:25Z</dc:date>
    <item>
      <title>Apply Deeplab+yolo examples on NPU - VERY SLOW</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1363455#M182275</link>
      <description>&lt;P&gt;Hey,&lt;/P&gt;&lt;P&gt;For a few days now I have been dealing with applying a custom model to the i.MX8 Plus NPU. I am struggling with a custom object detection model that takes about 400 ms on the NPU and 800 ms on the CPU, where 3 resize layers fall back to the CPU (which only takes about 20 ms in total) and the REST of the time is spent on the NPU (the first sequence of operations alone takes about 200! ms).&lt;/P&gt;&lt;P&gt;However, since this is not reproducible in a public forum, I ran the included deeplab_v3 and yolo_v4 examples from the eIQ Toolkit. As the system image on the i.MX8 I used the newest release from your website. I quantized all your models to int8 using your eIQ GUI and ran the command&lt;/P&gt;&lt;P&gt;$ /usr/bin/tensorflow-lite-2.4.1/examples# ./benchmark_model --graph=/home/user/deeplab_bilinear_best_int.tflite --use_nnapi=true&lt;/P&gt;&lt;P&gt;STARTING!&lt;BR /&gt;Log parameter values verbosely: [0]&lt;BR /&gt;Graph: [/home/bryan/deeplab_bilinear_best_int.tflite]&lt;BR /&gt;Use NNAPI: [1]&lt;BR /&gt;NNAPI accelerators available: [vsi-npu]&lt;BR /&gt;Loaded model /home/user/deeplab_bilinear_best_int.tflite&lt;BR /&gt;INFO: Created TensorFlow Lite delegate for NNAPI.&lt;BR /&gt;WARNING: Operator RESIZE_BILINEAR (v3) refused by NNAPI delegate: Operator refused due performance reasons.&lt;BR /&gt;WARNING: Operator RESIZE_BILINEAR (v3) refused by NNAPI delegate: Operator refused due performance reasons.&lt;BR /&gt;Explicitly applied NNAPI delegate, and the model graph will be partially executed by the delegate w/ 2 delegate kernels.&lt;BR /&gt;The input model file size (MB): 2.72458&lt;BR /&gt;Initialized session in 11.44ms.&lt;BR /&gt;Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.&lt;BR /&gt;count=1 curr=12650423&lt;/P&gt;&lt;P&gt;Running benchmark for at least 50 iterations and at least 1 seconds but terminate if exceeding 150 seconds.&lt;BR /&gt;count=50 first=247367 
curr=247257 min=245131 max=248995 avg=247147 std=637&lt;/P&gt;&lt;P&gt;Inference timings in us: Init: 11440, First inference: 12650423, Warmup (avg): 1.26504e+07, Inference (avg): 247147&lt;BR /&gt;Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the actual memory footprint of the model at runtime. Take the information at your discretion.&lt;BR /&gt;Peak memory footprint (MB): init=3.95312 overall=50.3867&lt;/P&gt;&lt;P&gt;Summarized Profiler:&lt;/P&gt;&lt;P&gt;Operator-wise Profiling Info for Regular Benchmark Runs:&lt;BR /&gt;============================== Run Order ==============================&lt;BR /&gt;[node type] [start] [first] [avg ms] [%] [cdf%] [mem KB] [times called] [Name]&lt;BR /&gt;TfLiteNnapiDelegate 0.000 125.411 125.517 50.774% 50.774% 0.000 1 [XXXX]:71&lt;/P&gt;&lt;P&gt;RESIZE_BILINEAR 125.518 15.918 16.052 6.494% 57.268% 0.000 1 [XXXX]:64&lt;BR /&gt;TfLiteNnapiDelegate 141.572 18.538 18.528 7.495% 64.763% 0.000 1 [XXX1]:72&lt;BR /&gt;RESIZE_BILINEAR 160.102 86.804 87.107 35.237% 100.000% 0.000 1 [Identity]:70&lt;/P&gt;&lt;P&gt;Deeplab took around 250 ms, while yolov4 took about 190 ms. That sounds very slow to me for hardware that is supposed to run neural networks. Is that normal behaviour? If not, what is wrong? PS: The &lt;SPAN&gt;mobilenet_v1_1.0_224_quant.tflite&lt;/SPAN&gt; runs at the documented ~2 ms.&lt;/P&gt;&lt;P&gt;I am happy about any hints.&lt;/P&gt;</description>
      <pubDate>Thu, 28 Oct 2021 17:53:11 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1363455#M182275</guid>
      <dc:creator>horst127</dc:creator>
      <dc:date>2021-10-28T17:53:11Z</dc:date>
    </item>
    <item>
      <title>Re: Apply Deeplab+yolo examples on NPU - VERY SLOW</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1364264#M182381</link>
      <description>&lt;P&gt;Can you share&amp;nbsp;&lt;SPAN&gt;deeplab_bilinear_best_int.tflite? I think this is still a model issue.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 01 Nov 2021 03:18:14 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1364264#M182381</guid>
      <dc:creator>Zhiming_Liu</dc:creator>
      <dc:date>2021-11-01T03:18:14Z</dc:date>
    </item>
    <item>
      <title>Re: Apply Deeplab+yolo examples on NPU - VERY SLOW</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1364399#M182397</link>
      <description>&lt;P&gt;Sure, the file is attached. It was created with the eIQ Toolkit. A TFLite version created with TensorFlow Python code is given in the other answer.&lt;/P&gt;</description>
      <pubDate>Mon, 01 Nov 2021 08:27:49 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1364399#M182397</guid>
      <dc:creator>horst127</dc:creator>
      <dc:date>2021-11-01T08:27:49Z</dc:date>
    </item>
    <item>
      <title>Re: Apply Deeplab+yolo examples on NPU - VERY SLOW</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1364421#M182398</link>
      <description>&lt;P&gt;And here is the model created with TensorFlow code (instead of using the NXP eIQ GUI):&lt;/P&gt;&lt;PRE&gt;from os import listdir&lt;BR /&gt;import pathlib&lt;BR /&gt;&lt;BR /&gt;import numpy as np&lt;BR /&gt;import tensorflow as tf&lt;BR /&gt;from PIL import Image&lt;BR /&gt;&lt;BR /&gt;def load_images_float32(img_path):&lt;BR /&gt;    files = sorted(listdir(img_path))&lt;BR /&gt;    image_list = [np.asarray(Image.open(img_path + file_path).resize(size=(512, 512)), dtype=np.float32)&lt;BR /&gt;                  for file_path in files]&lt;BR /&gt;    return tf.stack(image_list)&lt;BR /&gt;&lt;BR /&gt;def representative_data_gen():&lt;BR /&gt;    images = load_images_float32(image_path)&lt;BR /&gt;    for input_value in tf.data.Dataset.from_tensor_slices(images).batch(1).take(100):&lt;BR /&gt;        yield [input_value]&lt;BR /&gt;&lt;BR /&gt;# image_path, save_path and load_path are defined elsewhere&lt;BR /&gt;tflite_filepath = pathlib.Path(save_path)&lt;BR /&gt;model = tf.keras.models.load_model(load_path)&lt;BR /&gt;converter = tf.lite.TFLiteConverter.from_keras_model(model)&lt;BR /&gt;converter.optimizations = [tf.lite.Optimize.DEFAULT]&lt;BR /&gt;converter.representative_dataset = representative_data_gen&lt;BR /&gt;converter.inference_input_type = tf.uint8&lt;BR /&gt;converter.inference_output_type = tf.uint8&lt;BR /&gt;converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,&lt;BR /&gt;                                       tf.lite.OpsSet.SELECT_TF_OPS]&lt;BR /&gt;tflite_model = converter.convert()&lt;BR /&gt;tflite_filepath.write_bytes(tflite_model)&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 01 Nov 2021 08:26:52 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1364421#M182398</guid>
      <dc:creator>horst127</dc:creator>
      <dc:date>2021-11-01T08:26:52Z</dc:date>
    </item>
    <item>
      <title>Re: Apply Deeplab+yolo examples on NPU - VERY SLOW</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1364450#M182405</link>
      <description>&lt;P&gt;&lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/191876"&gt;@horst127&lt;/a&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;OK, I will do some tests and give you feedback.&lt;/P&gt;</description>
      <pubDate>Mon, 01 Nov 2021 09:01:33 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1364450#M182405</guid>
      <dc:creator>Zhiming_Liu</dc:creator>
      <dc:date>2021-11-01T09:01:33Z</dc:date>
    </item>
    <item>
      <title>Re: Apply Deeplab+yolo examples on NPU - VERY SLOW</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1364453#M182406</link>
      <description>&lt;P&gt;Thank you very much&lt;/P&gt;</description>
      <pubDate>Mon, 01 Nov 2021 09:05:25 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1364453#M182406</guid>
      <dc:creator>horst127</dc:creator>
      <dc:date>2021-11-01T09:05:25Z</dc:date>
    </item>
    <item>
      <title>Re: Apply Deeplab+yolo examples on NPU - VERY SLOW</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1364775#M182443</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/191876"&gt;@horst127&lt;/a&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I have looked into your model structure and the mobilenet structure; your network structure is complex compared with mobilenet.&lt;/P&gt;
&lt;P&gt;You should consider designing a lightweight network structure and performing network tailoring and knowledge distillation to achieve ideal performance on embedded devices.&lt;/P&gt;</description>
      <pubDate>Tue, 02 Nov 2021 02:30:42 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1364775#M182443</guid>
      <dc:creator>Zhiming_Liu</dc:creator>
      <dc:date>2021-11-02T02:30:42Z</dc:date>
    </item>
    <item>
      <title>Re: Apply Deeplab+yolo examples on NPU - VERY SLOW</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1364939#M182471</link>
      <description>&lt;P&gt;Thanks for your reply. But this is not my custom model; it is an NXP example model provided with the eIQ Toolkit. I assumed that the example models are suited for the corresponding HW? Even the tiny yolo v4 (also provided with the eIQ Toolkit) runs at about 300 ms (I think because a bunch of operations are not supported), which is about 10 times slower than the numbers reported for a GPU. Is that normal?&lt;/P&gt;</description>
      <pubDate>Tue, 02 Nov 2021 08:11:13 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1364939#M182471</guid>
      <dc:creator>horst127</dc:creator>
      <dc:date>2021-11-02T08:11:13Z</dc:date>
    </item>
    <item>
      <title>Re: Apply Deeplab+yolo examples on NPU - VERY SLOW</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1365415#M182516</link>
      <description>&lt;P&gt;&lt;SPAN&gt;I assumed that the example models are suited for the corresponding HW?&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;--&amp;gt; The newest eIQ tools contain&amp;nbsp;&lt;STRONG&gt;deeplab_bilinear_float.tflite&lt;/STRONG&gt; and&amp;nbsp;&lt;STRONG&gt;deeplab_nearest_float.tflite&lt;/STRONG&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;As chapter 6.1 (Image segmentation) in the eIQ user guide says, you still need to quantize the model to leverage its performance:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;1. Navigate to the workspace\models\deeplab_v3 folder.
2. Convert the "deeplab" model to RTM as follows:
deepview-converter deeplab_nearest_best.h5 deeplab_nearest_best.rtm
3. Quantize the model to leverage its performance benefits as follows:
deepview-converter --default_shape 1,512,512,3 --quantize ^
--quantize_format uint8 --quant_normalization signed --samples imgs ^
deeplab_nearest_best.h5 deeplab_nearest_best_uint8.rtm
4. Run the Python script to see the result of the image segmentation as follows:
python runner_demo.py -m deeplab_nearest_best.rtm -i imgs\image1.jpg ^
-o image1_out_nearest_best_rtm.jpg http://127.0.0.1:10818/v1
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 03 Nov 2021 02:40:54 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1365415#M182516</guid>
      <dc:creator>Zhiming_Liu</dc:creator>
      <dc:date>2021-11-03T02:40:54Z</dc:date>
    </item>
    <item>
      <title>Re: Apply Deeplab+yolo examples on NPU - VERY SLOW</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1365546#M182536</link>
      <description>&lt;P&gt;I did the quantization; that's why I also sent you the TensorFlow code. Anyway, I copied your command (step 3) exactly, but replaced .rtm with .tflite, and it takes 200 ms, as I mentioned. To profile on the target, I ran $ modelrunner -H 10819 -e tflite -c 1. Do you get other runtimes?&lt;/P&gt;</description>
      <pubDate>Wed, 03 Nov 2021 08:46:11 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1365546#M182536</guid>
      <dc:creator>horst127</dc:creator>
      <dc:date>2021-11-03T08:46:11Z</dc:date>
    </item>
    <item>
      <title>Re: Apply Deeplab+yolo examples on NPU - VERY SLOW</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1367295#M182686</link>
      <description>&lt;P&gt;Do you get any results that differ from mine?&lt;/P&gt;</description>
      <pubDate>Fri, 05 Nov 2021 10:15:49 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1367295#M182686</guid>
      <dc:creator>horst127</dc:creator>
      <dc:date>2021-11-05T10:15:49Z</dc:date>
    </item>
    <item>
      <title>Re: Apply Deeplab+yolo examples on NPU - VERY SLOW</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1367332#M182703</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/191876"&gt;@horst127&lt;/a&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Can you share how to get these benchmark results?&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;============================== Run Order ==============================
[node type] [start] [first] [avg ms] [%] [cdf%] [mem KB] [times called] [Name]
TfLiteNnapiDelegate 0.000 125.411 125.517 50.774% 50.774% 0.000 1 [XXXX]:71

RESIZE_BILINEAR 125.518 15.918 16.052 6.494% 57.268% 0.000 1 [XXXX]:64
TfLiteNnapiDelegate 141.572 18.538 18.528 7.495% 64.763% 0.000 1 [XXX1]:72
RESIZE_BILINEAR 160.102 86.804 87.107 35.237% 100.000% 0.000 1 [Identity]:70&lt;/LI-CODE&gt;
&lt;P&gt;I am running&amp;nbsp;./benchmark_model but I can't see the detailed results.&lt;/P&gt;</description>
      <pubDate>Fri, 05 Nov 2021 12:27:19 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1367332#M182703</guid>
      <dc:creator>Zhiming_Liu</dc:creator>
      <dc:date>2021-11-05T12:27:19Z</dc:date>
    </item>
    <item>
      <title>Re: Apply Deeplab+yolo examples on NPU - VERY SLOW</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1367333#M182704</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;you can run&lt;/P&gt;&lt;P&gt;$ ./benchmark_model --graph=&amp;lt;path_to_tflite&amp;gt; --use_nnapi=true --enable_op_profiling=true&lt;/P&gt;</description>
      <pubDate>Fri, 05 Nov 2021 12:29:30 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1367333#M182704</guid>
      <dc:creator>horst127</dc:creator>
      <dc:date>2021-11-05T12:29:30Z</dc:date>
    </item>
    <item>
      <title>Re: Apply Deeplab+yolo examples on NPU - VERY SLOW</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1367362#M182706</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Deeplab and mobilenet have different purposes, so the structure of the models is different: deeplab targets semantic segmentation while mobilenet targets classification. Another difference is input size: mobilenet uses a 224x224 input while the deeplab example uses a 512x512 pixel input. This has a pretty big impact on inference time.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Since this is provided as an example by Au-Zone, can you share your expectations for this TfLiteNnapiDelegate time? I will contact the R&amp;amp;D team to check if this is possible.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 05 Nov 2021 13:29:44 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1367362#M182706</guid>
      <dc:creator>Zhiming_Liu</dc:creator>
      <dc:date>2021-11-05T13:29:44Z</dc:date>
    </item>
    <item>
      <title>Re: Apply Deeplab+yolo examples on NPU - VERY SLOW</title>
      <link>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1367392#M182708</link>
      <description>&lt;P&gt;I do not have any expected times for deeplab. But if we consider the popular yolov4 (tiny) model, you can find a lot of reported GPU and CPU runtimes on the internet. As mentioned in my first post, running the provided example (320x320 input) on the i.MX 8M Plus NPU is about 10 times slower than those (about 200 ms per frame), or even more. That sounds very odd to me.&lt;/P&gt;</description>
      <pubDate>Fri, 05 Nov 2021 14:57:29 GMT</pubDate>
      <guid>https://community.nxp.com/t5/i-MX-Processors/Apply-Deeplab-yolo-examples-on-NPU-VERY-SLOW/m-p/1367392#M182708</guid>
      <dc:creator>horst127</dc:creator>
      <dc:date>2021-11-05T14:57:29Z</dc:date>
    </item>
  </channel>
</rss>

