<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: What about resizing the images during retraining? in eIQ Machine Learning Software</title>
    <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/What-about-resizing-the-images-during-retraining/m-p/939405#M8</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello Marcin.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The sentence &lt;EM&gt;"We will retrain this model for 128x128 pixel images using a python script found inside the tutorial folder."&lt;/EM&gt; does not actually mean that we will retrain some model to work on 128x128 images. In fact, it means that the model for 128x128 images will be retrained to recognize a different set of categories. The original model was trained to classify a much larger set of categories and we retrain it to classify only&amp;nbsp;daisies, dandelions, roses, sunflowers and tulips.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;As you can see in&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;--architecture=mobilenet_0.25_128&lt;/EM&gt; The particular type of Mobilenet model to use as a starting point&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;the model architecture chosen for this demo is the&amp;nbsp;&lt;STRONG&gt;&lt;EM&gt;mobilenet_0.25_128&lt;/EM&gt;&lt;/STRONG&gt;. The 128 at the end of the name means that the&amp;nbsp;model is already pretrained for 128x128 images.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;When you look in the source code of the example, you can see that before inference is run, the input images are first resized to the 128x128 format because the model cannot correctly classify images in any other format.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Hopefully this clears it up for you.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Wed, 18 Sep 2019 09:41:35 GMT</pubDate>
    <dc:creator>david_piskula</dc:creator>
    <dc:date>2019-09-18T09:41:35Z</dc:date>
    <item>
      <title>What about resizing the images during retraining?</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/What-about-resizing-the-images-during-retraining/m-p/939404#M7</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi NXP Team!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I followed the lab called &lt;STRONG&gt;"eIQ Transfer Learning Lab - Without Camera.pdf"&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;Section &lt;STRONG&gt;"3. Retrain Existing Model"&lt;/STRONG&gt; says that we will retrain the model for 128x128 pixel images.&lt;/P&gt;&lt;P&gt;However, the images in the example images folder have different dimensions (such as 320x232, 320x212, 500x332 pixels, and so on).&lt;/P&gt;&lt;P&gt;So what is the reason for resizing the images?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;And later on, section &lt;STRONG&gt;"5. Run Demo (point 21.)"&lt;/STRONG&gt; says to change the image height and width in the code to 128; however, the C array representing the image (in the case of this lab, &lt;STRONG&gt;21652746_cc379e0eea_m.bmp&lt;/STRONG&gt;) contains much more data, because 21652746_cc379e0eea_m.bmp is 231x240.&lt;/P&gt;&lt;P&gt;So why is this step also important?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any hints are more than welcome! Thanks in advance!&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 18 Sep 2019 09:11:42 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/What-about-resizing-the-images-during-retraining/m-p/939404#M7</guid>
      <dc:creator>MarcinChelminsk</dc:creator>
      <dc:date>2019-09-18T09:11:42Z</dc:date>
    </item>
    <item>
      <title>Re: What about resizing the images during retraining?</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/What-about-resizing-the-images-during-retraining/m-p/939405#M8</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello Marcin.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The sentence &lt;EM&gt;"We will retrain this model for 128x128 pixel images using a python script found inside the tutorial folder."&lt;/EM&gt; does not actually mean that we will retrain some model to work on 128x128 images. In fact, it means that the model for 128x128 images will be retrained to recognize a different set of categories. The original model was trained to classify a much larger set of categories and we retrain it to classify only&amp;nbsp;daisies, dandelions, roses, sunflowers and tulips.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;As you can see in&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;--architecture=mobilenet_0.25_128&lt;/EM&gt; The particular type of Mobilenet model to use as a starting point&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;the model architecture chosen for this demo is the&amp;nbsp;&lt;STRONG&gt;&lt;EM&gt;mobilenet_0.25_128&lt;/EM&gt;&lt;/STRONG&gt;. The 128 at the end of the name means that the&amp;nbsp;model is already pretrained for 128x128 images.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;When you look in the source code of the example, you can see that before inference is run, the input images are first resized to the 128x128 format because the model cannot correctly classify images in any other format.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Hopefully this clears it up for you.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 18 Sep 2019 09:41:35 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/What-about-resizing-the-images-during-retraining/m-p/939405#M8</guid>
      <dc:creator>david_piskula</dc:creator>
      <dc:date>2019-09-18T09:41:35Z</dc:date>
    </item>
    <item>
      <title>Re: What about resizing the images during retraining?</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/What-about-resizing-the-images-during-retraining/m-p/939406#M9</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi David,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;That sheds some light on this challenge &lt;IMG alt="Smiley Happy" class="emoticon emoticon-smileyhappy" id="smileyhappy" src="https://community.nxp.com/i/smilies/16x16_smiley-happy.png" title="Smiley Happy" /&gt; however...&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The retraining process also requires 128x128 images as input, am I right?&lt;/P&gt;&lt;P&gt;So the script called &lt;STRONG&gt;retrain.py&lt;/STRONG&gt; resizes the images to the proper 128x128 size, am I right?&lt;/P&gt;&lt;P&gt;Is the function below responsible for that?&lt;/P&gt;&lt;PRE class="language-python"&gt;&lt;CODE&gt;def add_jpeg_decoding(input_width, input_height, input_depth, input_mean,
                      input_std):&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;When you said this:&lt;/P&gt;&lt;BLOCKQUOTE class="jive_macro_quote jive-quote jive_text_macro"&gt;&lt;P&gt;When you look in the source code of the example, you can see that before inference is run, the input images are first resized to the 128x128 format because the model cannot correctly classify images in any other format.&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;did you mean this part of the code in the &lt;STRONG&gt;label_image.cpp&lt;/STRONG&gt; file?&lt;/P&gt;&lt;PRE class="language-cpp"&gt;&lt;CODE&gt;int image_width = 128;
int image_height = 128;
int image_channels = 3;
uint8_t* in = read_bmp(daisy_bmp, daisy_bmp_len, &amp;amp;image_width, &amp;amp;image_height,
                       &amp;amp;image_channels, s);&lt;/CODE&gt;&lt;/PRE&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 02 Nov 2020 14:25:38 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/What-about-resizing-the-images-during-retraining/m-p/939406#M9</guid>
      <dc:creator>MarcinChelminsk</dc:creator>
      <dc:date>2020-11-02T14:25:38Z</dc:date>
    </item>
    <item>
      <title>Re: What about resizing the images during retraining?</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/What-about-resizing-the-images-during-retraining/m-p/939407#M10</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello Marcin,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE class="jive_macro_quote jive-quote jive_text_macro"&gt;&lt;P&gt;The retraining process also requires 128x128 images as input, am I right?&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;Yes.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE class="jive_macro_quote jive-quote jive_text_macro"&gt;&lt;P&gt;So the script called &lt;STRONG&gt;retrain.py&lt;/STRONG&gt; resizes the images to the proper 128x128 size, am I right?&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;Yes.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE class="jive_macro_quote jive-quote jive_text_macro"&gt;&lt;P&gt;Is the function below responsible for that?&lt;/P&gt;&lt;PRE class="language-python"&gt;&lt;CODE&gt;def add_jpeg_decoding(input_width, input_height, input_depth, input_mean,
                      input_std):&lt;/CODE&gt;&lt;/PRE&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;I believe so. The script actually comes from Google as part of the &lt;A href="https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/index.html#0" rel="nofollow noopener noreferrer" target="_blank"&gt;TensorFlow for Poets tutorial&lt;/A&gt;, and I recommend going through some of the guides there as well if you wish to learn more about TensorFlow and machine learning in general.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE class="jive_macro_quote jive-quote jive_text_macro"&gt;&lt;P&gt;did you mean this part of the code in the &lt;STRONG&gt;label_image.cpp&lt;/STRONG&gt; file?&lt;/P&gt;&lt;PRE class="language-cpp"&gt;&lt;CODE&gt;int image_width = 128;
int image_height = 128;
int image_channels = 3;
uint8_t* in = read_bmp(daisy_bmp, daisy_bmp_len, &amp;amp;image_width, &amp;amp;image_height,
                       &amp;amp;image_channels, s);&lt;/CODE&gt;&lt;/PRE&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;Actually, these values are just initialization values; they are overwritten with the actual width, height, and number of channels of the supplied bmp inside the read_bmp() function.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The code that takes care of the resizing is here:&lt;/P&gt;&lt;PRE class="language-c"&gt;&lt;CODE&gt;  /* Get input dimension from the input tensor metadata
     assuming one input only */
  TfLiteIntArray* dims = interpreter-&amp;gt;tensor(input)-&amp;gt;dims;
  int wanted_height = dims-&amp;gt;data[1];
  int wanted_width = dims-&amp;gt;data[2];
  int wanted_channels = dims-&amp;gt;data[3];

  switch (interpreter-&amp;gt;tensor(input)-&amp;gt;type) {
    case kTfLiteFloat32:
      s-&amp;gt;input_floating = true;
      resize&amp;lt;float&amp;gt;(interpreter-&amp;gt;typed_tensor&amp;lt;float&amp;gt;(input), in, image_height,
                    image_width, image_channels, wanted_height, wanted_width,
                    wanted_channels, s);
      break;
    case kTfLiteUInt8:
      resize&amp;lt;uint8_t&amp;gt;(interpreter-&amp;gt;typed_tensor&amp;lt;uint8_t&amp;gt;(input), in,
                      image_height, image_width, image_channels, wanted_height,
                      wanted_width, wanted_channels, s);
      break;
    default:
      LOG(FATAL) &amp;lt;&amp;lt; "cannot handle input type "
                 &amp;lt;&amp;lt; interpreter-&amp;gt;tensor(input)-&amp;gt;type &amp;lt;&amp;lt; " yet";
      exit(-1);
  }&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;The wanted_height, wanted_width, and wanted_channels values are loaded from the model's metadata (this is generalized and automated here, so that if you decide to use a different model that requires a different input format, you do not have to hard-code the values manually). Afterwards, the image inside &lt;EM&gt;in&lt;/EM&gt; and its actual and required dimensions get passed to the resize function, which takes care of preparing the input and storing it inside &lt;EM&gt;interpreter-&amp;gt;typed_tensor&amp;lt;float&amp;gt;(input)&lt;/EM&gt; or &lt;EM&gt;interpreter-&amp;gt;typed_tensor&amp;lt;uint8_t&amp;gt;(input)&lt;/EM&gt;.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 18 Sep 2019 12:33:13 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/What-about-resizing-the-images-during-retraining/m-p/939407#M10</guid>
      <dc:creator>david_piskula</dc:creator>
      <dc:date>2019-09-18T12:33:13Z</dc:date>
    </item>
    <item>
      <title>Re: What about resizing the images during retraining?</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/What-about-resizing-the-images-during-retraining/m-p/939408#M11</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;David, thank you very much for your support! It clarifies all my current doubts :smileyhappy: Thanks again!&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 20 Sep 2019 08:54:29 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/What-about-resizing-the-images-during-retraining/m-p/939408#M11</guid>
      <dc:creator>MarcinChelminsk</dc:creator>
      <dc:date>2019-09-20T08:54:29Z</dc:date>
    </item>
    <item>
      <title>Re: What about resizing the images during retraining?</title>
      <link>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/What-about-resizing-the-images-during-retraining/m-p/939409#M12</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;You're welcome Marcin, I'm glad I could be of help. Don't hesitate to create new questions if you need anything else clarified in the future. Have fun with eIQ!&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 20 Sep 2019 10:50:31 GMT</pubDate>
      <guid>https://community.nxp.com/t5/eIQ-Machine-Learning-Software/What-about-resizing-the-images-during-retraining/m-p/939409#M12</guid>
      <dc:creator>david_piskula</dc:creator>
      <dc:date>2019-09-20T10:50:31Z</dc:date>
    </item>
  </channel>
</rss>