<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Deployment of an AI Model in MCX Microcontrollers</title>
    <link>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2118775#M3206</link>
    <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/248257"&gt;@Abdu_&lt;/a&gt;,&lt;BR /&gt;I successfully exported a TensorFlow INT8 model as a header file. This model was trained outside the eIQ environment and does not include quantization or dequantization nodes.&lt;BR /&gt;Please use these parameters in the custom options:&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;dump-header-file-output; dump-header-file-input&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Habib_MS_0-1750270236021.png" style="width: 400px;"&gt;&lt;img src="https://community.nxp.com/t5/image/serverpage/image-id/343610i5C2E714276307C00/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Habib_MS_0-1750270236021.png" alt="Habib_MS_0-1750270236021.png" /&gt;&lt;/span&gt;&lt;BR /&gt;On the other hand, this &lt;A href="https://mcuxpresso.nxp.com/mcuxsdk/25.03.00/html/middleware/eiq/docs/index.html" target="_self"&gt;page&lt;/A&gt; has more information about eIQ that could be helpful.&lt;/P&gt;
&lt;P&gt;Best regards,&lt;BR /&gt;Habib&lt;/P&gt;</description>
    <pubDate>Wed, 18 Jun 2025 18:38:10 GMT</pubDate>
    <dc:creator>Habib_MS</dc:creator>
    <dc:date>2025-06-18T18:38:10Z</dc:date>
    <item>
      <title>Deployment of an AI Model</title>
      <link>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2109677#M3105</link>
      <description>&lt;P&gt;Hello everyone,&lt;/P&gt;&lt;P&gt;I’m looking for answers on deploying a model to the FRDM-MCXN947 board. I have a model already trained in TensorFlow Lite (float32). I used the eIQ environment to convert it to C source, and that worked well. However, I understand that the NPU only accepts int8 models—is that correct?&lt;/P&gt;&lt;P&gt;When I converted my model to TensorFlow Lite int8, I encountered an error during conversion, so it didn’t work. I tried manually adding quantization and dequantization nodes, but that also failed.&lt;/P&gt;&lt;P&gt;From the examples I’ve seen, when you use your dataset in eIQ to generate the model, it automatically adds quantization and dequantization nodes.&lt;/P&gt;&lt;P&gt;Finally, I used a Python script to convert my TensorFlow Lite model to int8 C source, and that worked, but the model’s output differs from my TensorFlow Lite int8 tests in Python.&lt;/P&gt;&lt;P&gt;I’d like to know: is there a way to convert a model to int8 using eIQ, or to add quantization/dequantization nodes?&lt;/P&gt;&lt;P&gt;Thank you very much.&lt;/P&gt;</description>
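The INT8 conversion the poster attempted can be sketched with the standard TensorFlow Lite post-training full-integer quantization flow, which is what inserts the quantize/dequantize nodes discussed later in this thread. This is a minimal sketch, not the poster's actual script: the tiny Keras model and the random calibration data are placeholders for the real trained model and dataset.

```python
import numpy as np
import tensorflow as tf

# Placeholder model standing in for the poster's trained network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(2),
])

# Calibration samples; in practice, yield a few hundred real inputs
# so the converter can estimate activation ranges.
def representative_dataset():
    for _ in range(16):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-INT8 kernels and INT8 model I/O; this is what makes the
# converter insert the quantize/dequantize nodes at the boundaries.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

If a layer in the model has no INT8 kernel, `convert()` raises an error at this step, which matches the failure mode described in the thread.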
      <pubDate>Tue, 03 Jun 2025 16:20:18 GMT</pubDate>
      <guid>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2109677#M3105</guid>
      <dc:creator>Abdu_</dc:creator>
      <dc:date>2025-06-03T16:20:18Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment of an AI Model</title>
      <link>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2112302#M3124</link>
      <description>&lt;P&gt;Hello &lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/248257"&gt;@Abdu_&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;To answer your first question: the NPU currently supports only INT8 models. For more details, I recommend checking out this &lt;A href="https://community.nxp.com/t5/MCX-Microcontrollers-Knowledge/MCXN947-How-to-Train-and-Deploy-Customer-ML-model-to-NPU/ta-p/1899497" target="_self"&gt;community post&lt;/A&gt;, which provides further clarification.&lt;BR /&gt;Regarding your second question: the eIQ Toolkit includes a model conversion feature, as illustrated in the image below. This allows you to convert models into formats compatible with the supported hardware:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Habib_MS_0-1749242366368.png" style="width: 400px;"&gt;&lt;img src="https://community.nxp.com/t5/image/serverpage/image-id/341833iBB44CFD7F7580A01/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Habib_MS_0-1749242366368.png" alt="Habib_MS_0-1749242366368.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;For a deeper understanding, please refer to Chapter 4.2: "Model Conversion" in the eIQ Toolkit User Guide, available on this &lt;A href="https://www.nxp.com/design/design-center/software/eiq-ai-development-environment/eiq-toolkit-for-end-to-end-model-development-and-deployment:EIQ-TOOLKIT#documentation" target="_self"&gt;page&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;Additionally, you may find helpful resources in the official &lt;A href="https://ai.google.dev/edge/litert/models/convert_tf" target="_self"&gt;Google&lt;/A&gt; documentation, especially if you're working with TensorFlow Lite or other Google-supported frameworks.&lt;BR /&gt;Best regards,&lt;BR /&gt;Habib&lt;/P&gt;</description>
      <pubDate>Fri, 06 Jun 2025 20:40:12 GMT</pubDate>
      <guid>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2112302#M3124</guid>
      <dc:creator>Habib_MS</dc:creator>
      <dc:date>2025-06-06T20:40:12Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment of an AI Model</title>
      <link>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2112385#M3128</link>
      <description>&lt;P&gt;Thank you, Habib, for your answer.&lt;/P&gt;&lt;P&gt;The problem is that eIQ is unable to convert a TensorFlow Lite (float32) model into an INT8 C file. I therefore provided an INT8 TensorFlow Lite model and expected the corresponding INT8 C file, but I encountered the error shown in the screenshot.&lt;/P&gt;&lt;P&gt;I would also like to know how to insert quantize and dequantize nodes into a float32 model. I’ve reviewed all the documentation but couldn’t find any guidance on this.&lt;/P&gt;&lt;P&gt;Thank you very much.&lt;/P&gt;</description>
      <pubDate>Sat, 07 Jun 2025 14:43:52 GMT</pubDate>
      <guid>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2112385#M3128</guid>
      <dc:creator>Abdu_</dc:creator>
      <dc:date>2025-06-07T14:43:52Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment of an AI Model</title>
      <link>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2113057#M3138</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/248257"&gt;@Abdu_&lt;/a&gt;,&lt;BR /&gt;a)&amp;nbsp;I am able to convert a Float32 model to an INT8 .tflite model in &lt;A href="https://www.nxp.com/design/design-center/software/eiq-ai-development-environment/eiq-toolkit-for-end-to-end-model-development-and-deployment:EIQ-TOOLKIT" target="_self"&gt;eIQ Toolkit version 1.15.1.104&lt;/A&gt;. To assist you more effectively, could you please share the specific steps you followed that led to the error you encountered?&lt;/P&gt;
&lt;P&gt;b) The most relevant documentation I found regarding Quantization is in Chapter 3.10 of the eIQ Toolkit User Guide:&lt;BR /&gt;&lt;EM&gt;"You can quantize a trained model to reduce its size and speed up the inference time on&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;different hardware accelerators (for example, GPU and NPU) with a minimal accuracy&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;loss. You can choose between the per channel and per tensor quantizations. The per&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;tensor quantization means that all the values within a tensor are scaled in the same way.&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;The per channel quantization means that tensor values are separately scaled for each&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;channel (for example, the convolution filter is scaled separately for each filter)."&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;I highly recommend reviewing this chapter to better understand how to implement quantization effectively.&lt;BR /&gt;Best regards,&lt;BR /&gt;Habib&lt;/P&gt;</description>
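The per-tensor vs. per-channel distinction quoted above can be illustrated numerically. A minimal NumPy sketch (not from the eIQ Toolkit, and using made-up weight data): when filters in a weight tensor differ widely in magnitude, a single shared scale crushes the small-valued filter, while per-channel scales preserve it.

```python
import numpy as np

def fake_quantize(w, scale):
    """Symmetric INT8 quantize/dequantize round trip."""
    q = np.clip(np.round(w / scale), -127, 127)
    return q * scale

# Two "convolution filters" (output channels) with very different ranges.
rng = np.random.default_rng(0)
w = np.stack([rng.uniform(-0.01, 0.01, 64),   # small-valued filter
              rng.uniform(-10.0, 10.0, 64)])  # large-valued filter

# Per-tensor: one scale shared by all values in the tensor.
per_tensor = fake_quantize(w, np.abs(w).max() / 127.0)
# Per-channel: one scale per filter (scaled separately for each filter).
scales = np.abs(w).max(axis=1, keepdims=True) / 127.0
per_channel = fake_quantize(w, scales)

err_t = np.abs(w - per_tensor).mean()
err_c = np.abs(w - per_channel).mean()
print(f"per-tensor mean error:  {err_t:.6f}")
print(f"per-channel mean error: {err_c:.6f}")
```

With the shared scale, every value in the small filter rounds to zero; with per-channel scales, both filters keep their resolution, which is why per-channel is usually the better default for convolution weights.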
      <pubDate>Mon, 09 Jun 2025 22:25:04 GMT</pubDate>
      <guid>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2113057#M3138</guid>
      <dc:creator>Habib_MS</dc:creator>
      <dc:date>2025-06-09T22:25:04Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment of an AI Model</title>
      <link>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2113629#M3143</link>
      <description>&lt;P&gt;Hello Habib,&lt;BR /&gt;Thank you for your response.&lt;/P&gt;&lt;P&gt;a) Here are the steps I followed:&lt;BR /&gt;I developed my model in Google Colab, then converted it to TensorFlow Lite. I imported the .tflite file into eIQ and selected the Model Tool. I opened the model there, then clicked on Convert: TensorFlow Lite for Neutron. I selected my board (MCXN947) and enabled &lt;STRONG&gt;Dump header file&lt;/STRONG&gt; to generate a header file for use with the NPU.&lt;BR /&gt;However, when I clicked Convert, I encountered the error I showed you earlier. Attached is a screenshot of my model.&lt;/P&gt;&lt;P&gt;b) I also read Chapter 3.10 of the &lt;EM&gt;eIQ Toolkit User Guide&lt;/EM&gt;. From what I understand, it mainly discusses models developed directly on the platform using imported datasets. It seems that when you bring a pre-trained TensorFlow Lite model from outside, the only available option is to convert it — you can't customize it like you can with models trained from raw data on the platform. That's the limitation I noticed.&lt;/P&gt;&lt;P&gt;Thank you so much.&lt;/P&gt;</description>
      <pubDate>Tue, 10 Jun 2025 12:57:02 GMT</pubDate>
      <guid>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2113629#M3143</guid>
      <dc:creator>Abdu_</dc:creator>
      <dc:date>2025-06-10T12:57:02Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment of an AI Model</title>
      <link>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2113884#M3145</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/248257"&gt;@Abdu_&lt;/a&gt;,&lt;BR /&gt;I followed the steps outlined in Chapter 3, titled "Label Image Example", from the guide &lt;A href="https://community.nxp.com/t5/MCX-Microcontrollers-Knowledge/eIQ-Neutron-NPU-Lab-Guides/ta-p/1799233#:~:text=These%20lab%20guides%20provide%20step-by-step%20instructions%20on%20how,eIQ%20Neutron%20NPU%20found%20on%20MCX%20N%20devices." target="_self"&gt;Lab eIQ Neutron NPU for MCX N Lab Guide - Part 1 - Mobilenet - MCUXpresso SDK Builder&lt;/A&gt;. As a result, I was able to successfully export a dump header model using eIQ, even with a model that was trained outside of the eIQ environment.&lt;/P&gt;
&lt;P&gt;Please review the steps and let me know if they were helpful. I also strongly recommend downloading the latest version of eIQ to ensure compatibility and avoid potential issues.&lt;/P&gt;
&lt;P&gt;Best regards,&lt;BR /&gt;Habib&lt;/P&gt;</description>
      <pubDate>Tue, 10 Jun 2025 21:23:53 GMT</pubDate>
      <guid>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2113884#M3145</guid>
      <dc:creator>Habib_MS</dc:creator>
      <dc:date>2025-06-10T21:23:53Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment of an AI Model</title>
      <link>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2115208#M3159</link>
      <description>&lt;P&gt;Hello Habib,&lt;/P&gt;&lt;P&gt;I followed the example and successfully converted the TensorFlow Lite model (float32) to a header file (float32). However, since the NPU only supports INT8 models, I converted the TensorFlow Lite model to INT8 and encountered an error.&lt;/P&gt;&lt;P&gt;I noticed that when you load the trained TensorFlow Lite (float32) model into the environment, it doesn’t automatically insert the quantization and dequantization nodes. In contrast, when you train the model within the environment, those nodes are added, as shown in the example.&lt;/P&gt;&lt;P&gt;Thank you so much.&lt;/P&gt;</description>
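For reference, the quantize and dequantize nodes discussed here each apply a simple affine transform. A minimal NumPy sketch of that arithmetic, using made-up scale and zero-point values (real values come from calibration during conversion):

```python
import numpy as np

def quantize(x, scale, zero_point):
    # Quantize node: float32 -> int8 via an affine mapping.
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    # Dequantize node: int8 -> float32, inverting the mapping.
    return scale * (q.astype(np.float32) - zero_point)

# Illustrative parameters only; not from any real model.
scale, zero_point = 0.05, -3
x = np.array([-1.0, 0.0, 0.5, 1.2], dtype=np.float32)

q = quantize(x, scale, zero_point)
x_hat = dequantize(q, scale, zero_point)
print(q)      # INT8 codes fed to the NPU
print(x_hat)  # reconstruction, within half a quantization step of x
```

A model "with quantization and dequantization nodes" simply has this pair wrapped around an INT8 core, so the application can still pass float32 tensors in and out.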
      <pubDate>Thu, 12 Jun 2025 08:32:31 GMT</pubDate>
      <guid>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2115208#M3159</guid>
      <dc:creator>Abdu_</dc:creator>
      <dc:date>2025-06-12T08:32:31Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment of an AI Model</title>
      <link>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2118775#M3206</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/248257"&gt;@Abdu_&lt;/a&gt;,&lt;BR /&gt;I successfully exported a TensorFlow INT8 model as a header file. This model was trained outside the eIQ environment and does not include quantization or dequantization nodes.&lt;BR /&gt;Please use these parameters in the custom options:&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;dump-header-file-output; dump-header-file-input&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Habib_MS_0-1750270236021.png" style="width: 400px;"&gt;&lt;img src="https://community.nxp.com/t5/image/serverpage/image-id/343610i5C2E714276307C00/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Habib_MS_0-1750270236021.png" alt="Habib_MS_0-1750270236021.png" /&gt;&lt;/span&gt;&lt;BR /&gt;On the other hand, this &lt;A href="https://mcuxpresso.nxp.com/mcuxsdk/25.03.00/html/middleware/eiq/docs/index.html" target="_self"&gt;page&lt;/A&gt; has more information about eIQ that could be helpful.&lt;/P&gt;
&lt;P&gt;Best regards,&lt;BR /&gt;Habib&lt;/P&gt;</description>
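The dump-header options above make the eIQ converter emit the model as a C byte array. For readers without the toolkit, the same idea (essentially what `xxd -i` produces) can be sketched as a standalone Python helper. The symbol name and the placeholder blob are made up for the example; a real call would read the `.tflite` file bytes.

```python
def to_c_header(data: bytes, symbol: str = "model_data") -> str:
    """Render a binary blob (e.g. a .tflite flatbuffer) as a C header."""
    lines = [
        f"/* Generated from a .tflite flatbuffer; {len(data)} bytes. */",
        f"const unsigned char {symbol}[] = {{",
    ]
    # 12 bytes per line keeps rows comfortably under 80 columns.
    for i in range(0, len(data), 12):
        chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        lines.append(f"    {chunk},")
    lines.append("};")
    lines.append(f"const unsigned int {symbol}_len = {len(data)};")
    return "\n".join(lines)

# Usage with a placeholder blob instead of a real model file:
header = to_c_header(b"TFL3\x00\x01", symbol="model_int8")
print(header)
```

The resulting header can be compiled into the firmware and passed to the TFLite Micro interpreter by address and length, which is how the exported models in this thread are consumed on the MCXN947.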
      <pubDate>Wed, 18 Jun 2025 18:38:10 GMT</pubDate>
      <guid>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2118775#M3206</guid>
      <dc:creator>Habib_MS</dc:creator>
      <dc:date>2025-06-18T18:38:10Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment of an AI Model</title>
      <link>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2135100#M3562</link>
      <description>&lt;P&gt;Thank you so much, &lt;a href="https://community.nxp.com/t5/user/viewprofilepage/user-id/231807"&gt;@Habib_MS&lt;/a&gt;. That worked well.&lt;/P&gt;</description>
      <pubDate>Wed, 16 Jul 2025 15:11:46 GMT</pubDate>
      <guid>https://community.nxp.com/t5/MCX-Microcontrollers/Deployment-of-an-AI-Model/m-p/2135100#M3562</guid>
      <dc:creator>Abdu_</dc:creator>
      <dc:date>2025-07-16T15:11:46Z</dc:date>
    </item>
  </channel>
</rss>

