<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: i.MX8M Plus: DeepViewRT Installation, ONNX Conversion, and CPU vs NPU Inference in Processor Expert Software</title>
    <link>https://community.nxp.com/t5/Processor-Expert-Software/i-MX8M-Plus-DeepViewRT-Installation-ONNX-Conversion-and-CPU-vs/m-p/2186169#M5992</link>
    <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/255327"&gt;@sunghyun96&lt;/a&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The recommended replacement for DeepViewRT on the i.MX 8M Plus is TensorFlow Lite (TFLite) + the VX Delegate.&lt;/P&gt;
&lt;P&gt;The VX Delegate is the official replacement for DeepViewRT; it uses OpenVX under the hood to offload supported operations to the NPU.&lt;/P&gt;
&lt;P&gt;TFLite models must be quantized (INT8) and converted using the eIQ Toolkit to be compatible with the VX Delegate.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Since you are using ONNX Runtime, I would suggest converting your ONNX model to TFLite (INT8) using the eIQ Toolkit or TensorFlow tools, then running it with TFLite + the VX Delegate.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards&lt;/P&gt;
&lt;P&gt;Daniel&lt;/P&gt;</description>
    <pubDate>Wed, 15 Oct 2025 02:24:40 GMT</pubDate>
    <dc:creator>danielchen</dc:creator>
    <dc:date>2025-10-15T02:24:40Z</dc:date>
    <item>
      <title>i.MX8M Plus: DeepViewRT Installation, ONNX Conversion, and CPU vs NPU Inference</title>
      <link>https://community.nxp.com/t5/Processor-Expert-Software/i-MX8M-Plus-DeepViewRT-Installation-ONNX-Conversion-and-CPU-vs/m-p/2177153#M5985</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I would like to perform inference using DeepViewRT on the i.MX8M Plus board.&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp;1. How can I install DeepViewRT on the i.MX8M Plus and run inference on it?&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp;2. I have an ONNX model — how can I convert it into an RTM model for DeepViewRT?&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp;3. If I run inference directly with an ONNX model on the i.MX8M Plus, is it limited to CPU execution only?&lt;/P&gt;&lt;P&gt;Thank you in advance for your support.&lt;/P&gt;</description>
      <pubDate>Mon, 29 Sep 2025 04:24:12 GMT</pubDate>
      <guid>https://community.nxp.com/t5/Processor-Expert-Software/i-MX8M-Plus-DeepViewRT-Installation-ONNX-Conversion-and-CPU-vs/m-p/2177153#M5985</guid>
      <dc:creator>sunghyun96</dc:creator>
      <dc:date>2025-09-29T04:24:12Z</dc:date>
    </item>
    <item>
      <title>Re: i.MX8M Plus: DeepViewRT Installation, ONNX Conversion, and CPU vs NPU Inference</title>
      <link>https://community.nxp.com/t5/Processor-Expert-Software/i-MX8M-Plus-DeepViewRT-Installation-ONNX-Conversion-and-CPU-vs/m-p/2184351#M5986</link>
      <description>&lt;P&gt;HI&amp;nbsp;&amp;nbsp;Sunghyun96：&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Please check UG10166: i.MX Machine Learning User's Guide.&lt;/P&gt;
&lt;P&gt;The DeepViewRT inference engine was removed.&lt;/P&gt;
&lt;P&gt;&lt;img src="https://community.nxp.com/t5/image/serverpage/image-id/360375i7DA7CF35A4E8E7D6/image-size/medium?v=v2&amp;amp;px=400" alt="danielchen_0-1760239877158.png" width="400" /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards&lt;/P&gt;
&lt;P&gt;Daniel&lt;/P&gt;</description>
      <pubDate>Sun, 12 Oct 2025 03:31:27 GMT</pubDate>
      <guid>https://community.nxp.com/t5/Processor-Expert-Software/i-MX8M-Plus-DeepViewRT-Installation-ONNX-Conversion-and-CPU-vs/m-p/2184351#M5986</guid>
      <dc:creator>danielchen</dc:creator>
      <dc:date>2025-10-12T03:31:27Z</dc:date>
    </item>
    <item>
      <title>Re: i.MX8M Plus: DeepViewRT Installation, ONNX Conversion, and CPU vs NPU Inference</title>
      <link>https://community.nxp.com/t5/Processor-Expert-Software/i-MX8M-Plus-DeepViewRT-Installation-ONNX-Conversion-and-CPU-vs/m-p/2186147#M5991</link>
      <description>&lt;P&gt;Thank you for your reply.&lt;BR /&gt;On Linux 6.12.20-lts with the i.MX 8M Plus (EVK), I’m planning to run NPU inference using ONNX Runtime.&lt;BR /&gt;If there’s a recommended runtime to replace DeepViewRT (e.g., ORT vs. TFLite + VX), please let me know as well.&lt;/P&gt;</description>
      <pubDate>Wed, 15 Oct 2025 01:58:49 GMT</pubDate>
      <guid>https://community.nxp.com/t5/Processor-Expert-Software/i-MX8M-Plus-DeepViewRT-Installation-ONNX-Conversion-and-CPU-vs/m-p/2186147#M5991</guid>
      <dc:creator>sunghyun96</dc:creator>
      <dc:date>2025-10-15T01:58:49Z</dc:date>
    </item>
    <item>
      <title>Re: i.MX8M Plus: DeepViewRT Installation, ONNX Conversion, and CPU vs NPU Inference</title>
      <link>https://community.nxp.com/t5/Processor-Expert-Software/i-MX8M-Plus-DeepViewRT-Installation-ONNX-Conversion-and-CPU-vs/m-p/2186169#M5992</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/255327"&gt;@sunghyun96&lt;/a&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The recommended replacement for DeepViewRT on the i.MX 8M Plus is TensorFlow Lite (TFLite) + the VX Delegate.&lt;/P&gt;
&lt;P&gt;The VX Delegate is the official replacement for DeepViewRT; it uses OpenVX under the hood to offload supported operations to the NPU.&lt;/P&gt;
&lt;P&gt;TFLite models must be quantized (INT8) and converted using the eIQ Toolkit to be compatible with the VX Delegate.&lt;/P&gt;
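&lt;P&gt;As a rough sketch, post-training INT8 quantization with the stock TensorFlow converter could look like this (assuming you already have a TensorFlow SavedModel exported from your ONNX model, e.g. with a tool such as onnx2tf; the paths, input shape, and calibration data below are placeholders):&lt;/P&gt;
&lt;PRE&gt;
import numpy as np
import tensorflow as tf

# Calibration data for full-integer quantization; replace the random
# samples with a few hundred real, preprocessed inputs.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force a fully INT8 graph so the VX Delegate can accept as much of it as possible.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
&lt;/PRE&gt;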
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Since you are using ONNX Runtime, I would suggest converting your ONNX model to TFLite (INT8) using the eIQ Toolkit or TensorFlow tools, then running it with TFLite + the VX Delegate.&lt;/P&gt;
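&lt;P&gt;On the board, a minimal sketch of running the quantized model through the VX Delegate with tflite_runtime looks like the following (the delegate path is the usual location on i.MX Yocto images, but verify it against your BSP and the i.MX Machine Learning User's Guide):&lt;/P&gt;
&lt;PRE&gt;
import numpy as np
import tflite_runtime.interpreter as tflite

# Assumed delegate location on a typical i.MX Yocto image; adjust if needed.
delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")

interpreter = tflite.Interpreter(
    model_path="model_int8.tflite",
    experimental_delegates=[delegate],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Placeholder input; feed real preprocessed INT8 data in practice.
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
print(result.shape)
&lt;/PRE&gt;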
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards&lt;/P&gt;
&lt;P&gt;Daniel&lt;/P&gt;</description>
      <pubDate>Wed, 15 Oct 2025 02:24:40 GMT</pubDate>
      <guid>https://community.nxp.com/t5/Processor-Expert-Software/i-MX8M-Plus-DeepViewRT-Installation-ONNX-Conversion-and-CPU-vs/m-p/2186169#M5992</guid>
      <dc:creator>danielchen</dc:creator>
      <dc:date>2025-10-15T02:24:40Z</dc:date>
    </item>
  </channel>
</rss>

