<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Choosing the Right i.MX Processor for a Small Edge AI Prototype in Other NXP Products</title>
    <link>https://community.nxp.com/t5/Other-NXP-Products/Choosing-the-Right-i-MX-Processor-for-a-Small-Edge-AI-Prototype/m-p/2217331#M30627</link>
    <description>&lt;P&gt;Hi everyone,&lt;/P&gt;&lt;P&gt;I’m working on a compact edge device and need help selecting the right &lt;A href="https://community.nxp.com/" target="_self"&gt;i.MX processor&lt;/A&gt; for basic on-device AI tasks. The goal is to run small inference models (sensor-based classification, simple vision, etc.) without high power consumption.&lt;/P&gt;&lt;P&gt;Right now, I’m comparing the i.MX 8M Mini, i.MX RT series, and the i.MX 8M Plus. Midway through project planning, someone suggested testing an &lt;A href="https://www.lenovo.com/in/en/lenovoauraedition/" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;AI PC&lt;/STRONG&gt;&lt;/A&gt; setup first, but I prefer staying fully embedded unless there’s a strong reason not to.&lt;/P&gt;&lt;P&gt;My main questions:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;Is the NPU on the 8M Plus noticeably better for small TFLite/ONNX models?&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;How is the power draw when running inference continuously?&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;Any recommended dev kits or sample projects to speed up the testing phase?&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Looking for guidance from anyone who has built similar lightweight edge AI systems using NXP hardware. Thanks!&lt;/P&gt;</description>
    <pubDate>Thu, 20 Nov 2025 07:02:47 GMT</pubDate>
    <dc:creator>Alicecarry</dc:creator>
    <dc:date>2025-11-20T07:02:47Z</dc:date>
    <item>
      <title>Choosing the Right i.MX Processor for a Small Edge AI Prototype</title>
      <link>https://community.nxp.com/t5/Other-NXP-Products/Choosing-the-Right-i-MX-Processor-for-a-Small-Edge-AI-Prototype/m-p/2217331#M30627</link>
      <description>&lt;P&gt;Hi everyone,&lt;/P&gt;&lt;P&gt;I’m working on a compact edge device and need help selecting the right &lt;A href="https://community.nxp.com/" target="_self"&gt;i.MX processor&lt;/A&gt; for basic on-device AI tasks. The goal is to run small inference models (sensor-based classification, simple vision, etc.) without high power consumption.&lt;/P&gt;&lt;P&gt;Right now, I’m comparing the i.MX 8M Mini, i.MX RT series, and the i.MX 8M Plus. Midway through project planning, someone suggested testing an &lt;A href="https://www.lenovo.com/in/en/lenovoauraedition/" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;AI PC&lt;/STRONG&gt;&lt;/A&gt; setup first, but I prefer staying fully embedded unless there’s a strong reason not to.&lt;/P&gt;&lt;P&gt;My main questions:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;Is the NPU on the 8M Plus noticeably better for small TFLite/ONNX models?&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;How is the power draw when running inference continuously?&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;Any recommended dev kits or sample projects to speed up the testing phase?&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Looking for guidance from anyone who has built similar lightweight edge AI systems using NXP hardware. Thanks!&lt;/P&gt;</description>
      <pubDate>Thu, 20 Nov 2025 07:02:47 GMT</pubDate>
      <guid>https://community.nxp.com/t5/Other-NXP-Products/Choosing-the-Right-i-MX-Processor-for-a-Small-Edge-AI-Prototype/m-p/2217331#M30627</guid>
      <dc:creator>Alicecarry</dc:creator>
      <dc:date>2025-11-20T07:02:47Z</dc:date>
    </item>
    <item>
      <title>Re: Choosing the Right i.MX Processor for a Small Edge AI Prototype</title>
      <link>https://community.nxp.com/t5/Other-NXP-Products/Choosing-the-Right-i-MX-Processor-for-a-Small-Edge-AI-Prototype/m-p/2229689#M30637</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/253510"&gt;@Alicecarry&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Is the NPU on the 8M Plus noticeably better for small TFLite/ONNX models?&lt;/P&gt;
&lt;P&gt;Yes, but &lt;EM&gt;how much&lt;/EM&gt; depends on your model size and complexity.&lt;/P&gt;
&lt;P&gt;i.MX 8M Plus: It integrates an NPU (2.3 TOPS) optimized for common vision and classification tasks, and it accelerates quantized (INT8) models via NXP's eIQ toolkit. For small models such as sensor classification or simple CNNs, the NPU can give a 10x–20x speedup over the CPU. Typical active power for inference on the NPU is ~1–1.5 W (whole SoC), depending on DDR and peripherals; idle can drop below 0.5 W.&lt;/P&gt;
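As a rough illustration of the eIQ flow above, here is a minimal Python sketch of running a quantized (INT8) TFLite model through the VX delegate that NXP's eIQ-enabled Yocto images provide for the 8M Plus NPU. The delegate path is the usual location in those images, and the model filename is a hypothetical placeholder; both may differ on your board.

```python
# Sketch: run a quantized (INT8) TFLite model on the i.MX 8M Plus NPU via the
# VX delegate from an eIQ-enabled NXP Yocto image; falls back to the CPU.
# DELEGATE_PATH is the usual location in NXP BSPs; MODEL_PATH is hypothetical.
import numpy as np

try:
    import tflite_runtime.interpreter as tflite  # shipped in eIQ images
except ImportError:
    tflite = None  # e.g. when sketch-reading this off-target

DELEGATE_PATH = "/usr/lib/libvx_delegate.so"
MODEL_PATH = "model_int8.tflite"

def make_interpreter():
    """Build an interpreter, preferring the NPU delegate, else the CPU."""
    try:
        delegates = [tflite.load_delegate(DELEGATE_PATH)]  # offload to NPU
    except (OSError, ValueError):
        delegates = []  # falls back to the Cortex-A53 cores
    return tflite.Interpreter(model_path=MODEL_PATH,
                              experimental_delegates=delegates)

if tflite is not None:
    interpreter = make_interpreter()
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    # INT8 models usually take uint8/int8 input tensors
    dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
    out = interpreter.get_output_details()[0]
    print(interpreter.get_tensor(out["index"]))
```

Note that the first invocation includes delegate warm-up (graph compilation), so benchmark steady-state latency on later runs.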
&lt;P&gt;i.MX 8M Mini: No NPU, so inference runs on Cortex-A53 or GPU (OpenCL). Power is similar or slightly lower than 8M Plus, but performance is much slower.&lt;/P&gt;
&lt;P&gt;i.MX RT series: Ultra-low power (hundreds of mW), but no Linux and no NPU. Great for tiny ML models (TensorFlow Lite Micro) on bare metal or FreeRTOS.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
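To compare the options for a battery-powered prototype, the power figures above can be plugged into a simple duty-cycle estimate. The 1.5 W active / 0.5 W idle values are the rough SoC-level numbers mentioned above; the 20% duty cycle and 10 Wh pack are illustrative assumptions.

```python
def average_power_w(p_active_w, p_idle_w, duty_cycle):
    """Time-weighted average power for periodic inference."""
    return duty_cycle * p_active_w + (1.0 - duty_cycle) * p_idle_w

def battery_life_h(capacity_wh, p_avg_w):
    """Runtime in hours from a battery of the given capacity."""
    return capacity_wh / p_avg_w

# Example: 8M Plus running inference 20% of the time on a 10 Wh pack
p = average_power_w(1.5, 0.5, 0.2)        # 0.7 W average
print(round(battery_life_h(10.0, p), 1))  # roughly 14.3 h
```

This makes the trade-off concrete: lowering idle power (or duty cycle) often matters more for battery life than the active-inference figure, which is where the RT series' hundreds-of-milliwatts range shines.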
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;i.MX Machine Learning User's Guide:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.nxp.com/docs/en/user-guide/UG10166.pdf" target="_blank"&gt;https://www.nxp.com/docs/en/user-guide/UG10166.pdf&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;i.MX 8MP EVK board:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.nxp.com/design/design-center/development-boards-and-designs/8MPLUSLPD4-PEVK" target="_blank"&gt;https://www.nxp.com/design/design-center/development-boards-and-designs/8MPLUSLPD4-PEVK&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;i.MX 8MP FRDM board:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.nxp.com/design/design-center/development-boards-and-designs/FRDM-IMX8MPLUS" target="_blank"&gt;https://www.nxp.com/design/design-center/development-boards-and-designs/FRDM-IMX8MPLUS&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For more detailed information, please refer to the product pages.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.nxp.com/products/i.MX8M" target="_blank"&gt;https://www.nxp.com/products/i.MX8MPLUShttps://www.nxp.com/products/i.MX8M&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;You can find sample demos in the i.MX 8MP BSP.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards&lt;/P&gt;
&lt;P&gt;Daniel&lt;/P&gt;</description>
      <pubDate>Fri, 21 Nov 2025 05:52:59 GMT</pubDate>
      <guid>https://community.nxp.com/t5/Other-NXP-Products/Choosing-the-Right-i-MX-Processor-for-a-Small-Edge-AI-Prototype/m-p/2229689#M30637</guid>
      <dc:creator>danielchen</dc:creator>
      <dc:date>2025-11-21T05:52:59Z</dc:date>
    </item>
  </channel>
</rss>

