Hi NXP Team!
I followed the lab called "eIQ Transfer Learning Lab - Without Camera.pdf".
Section "3. Retrain Existing Model" says that we will retrain the model for 128x128 pixel images.
However, the images in the example-image folder have different dimensions (e.g. 320x232, 320x212, 500x332 pixels, and so on).
So what is the reason for resizing the images?
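My current understanding is that the network's input layer has a fixed shape, so every training image must first be mapped to 128x128 regardless of its original size. To check that I understand the mapping, here is a toy nearest-neighbor resize in plain Python (my own sketch — not the lab's actual preprocessing code, which presumably uses a proper image library):

```python
def resize_nearest(pixels, src_w, src_h, dst_w=128, dst_h=128):
    """Map a row-major list of (r, g, b) pixels of size src_w x src_h
    onto a dst_w x dst_h grid by nearest-neighbor sampling."""
    out = []
    for y in range(dst_h):
        sy = y * src_h // dst_h          # nearest source row
        for x in range(dst_w):
            sx = x * src_w // dst_w      # nearest source column
            out.append(pixels[sy * src_w + sx])
    return out

# Example: a 320x232 image (one of the sizes from the example folder)
src = [(x % 256, y % 256, 0) for y in range(232) for x in range(320)]
resized = resize_nearest(src, 320, 232)
print(len(resized))  # 16384 pixels, i.e. 128 * 128
```

If that is right, the resize is just there to force every image into the one shape the model's input tensor can accept.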
Later on, section "5. Run Demo" (point 21) says to change the image height and width in the code to 128. However, the C array representing the image (in this lab, 21652746_cc379e0eea_m.bmp) contains much more data than that, because 21652746_cc379e0eea_m.bmp is 231x240 pixels.
So why is this step important?
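To make the mismatch I mean concrete, here is the raw-data arithmetic (my assumption: 3 bytes per pixel RGB, ignoring any BMP header and row padding):

```python
# Assumption: 3 bytes per pixel (RGB), no BMP header/padding counted.
channels = 3

# What the retrained model expects per the lab: a 128x128 input.
model_bytes = 128 * 128 * channels   # 49152 bytes

# What the 231x240 image 21652746_cc379e0eea_m.bmp holds as raw pixels.
image_bytes = 231 * 240 * channels   # 166320 bytes

print(model_bytes, image_bytes)
```

So the C array holds roughly 3.4x the data a 128x128 input needs, and I don't see where the demo shrinks it.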
Any hints more than welcome! Thanks in advance!