eIQ Machine Learning Software Knowledge Base

This demo shows a low-power smart door running eIQ heterogeneously on the i.MX 8M Mini:
- The Cortex-A53 performs face recognition using eIQ OpenCV.
- The Cortex-M4 performs keyword spotting using eIQ CMSIS-NN.

The demo application is built around the Django framework running on the board. It has two main usage scenarios. The first is to manage users and inspect the access logs through a dashboard, which is accessed from a web browser on the host PC. The second is the smart door application itself running on the board. The scenario is the following: the Cortex-A cores and connected peripherals stay in low-power mode while the Cortex-M stays active, waiting for the keyword 'GO'. When the word is detected, the Cortex-M sends an MU interrupt to the Cortex-A and the system wakes up. The Cortex-A then performs face recognition and allows access for registered users.

In addition to face recognition, the MPU cores run a Django server to manage the user database and a Qt5 application for the graphical interface, and perform training on the edge. The face recognition algorithm running on the Cortex-A and the keyword spotting algorithm running on the Cortex-M are both implemented using eIQ. For the MPU, eIQ support is integrated in Yocto. For the MCU, the support was ported to i.MX 8 from the MCUXpresso SDK for RT for the purpose of this demo.

Software Environment
- Ubuntu 16 host PC
- SD card image with Yocto BSP 4.14.98/sumo 2.0.0 GA for the i.MX 8M Mini platform with eIQ OpenCV and the eIQ heterogeneous demo. See detailed steps in the Build Yocto Image section.
- CMSIS-NN MCUXpresso SDK version 2.6.0 for i.MX 8M Mini (SDK_2.6.0_EVK-MIMX8MM-ARMGCC). See detailed build steps in the Build Cortex M4 executable section.

HW Environment
- i.MX 8M Mini kit
- Touch screen display (preferred resolution 1920x1080), tested with an HDMI connection to the board. NOTE: if the display does not support touch, a mouse can be connected to the board and used instead.
- MIPI-CSI camera module
- Microphone: Synaptics CONEXANT AudioSmart® DS20921
- Ribbon, 4 female-female wires and a 60-pin connector to connect the mic to the board
- Optional: headphones (used to test recording on the M4; everything recorded by the mic is played back to the headphones)
- Host PC for remote access to the demo application (tested with the Chrome browser). NOTE: the board and the host PC must be on the same network to communicate.

Build Yocto image:

Step 1 – Project initialization:
$: mkdir imx-linux-bsp
$: cd imx-linux-bsp
$: repo init -u https://source.codeaurora.org/external/imx/imx-manifest -b imx-linux-sumo -m imx-4.14.98-2.0.0_machinelearning.xml
$: repo sync

Step 2 – Set up the project build:
$: MACHINE=imx8mmevk DISTRO=fsl-imx-xwayland source ./fsl-setup-release.sh -b bld-xwayland

Step 3 – Download the project layer into ${BSPDIR}/sources/:
$: git clone https://source.codeaurora.org/external/imxsupport/meta-eiq-heterogenous

Step 4 – Add the project layer into bblayers. Add the following line to ${BSPDIR}/sources/base/conf/bblayers.conf:
BBLAYERS += " ${BSPDIR}/sources/meta-eiq-heterogenous "

Step 5 – Enable eIQ and other dependencies.
Add the following lines into conf/local.conf:
EXTRA_IMAGE_FEATURES = " dev-pkgs debug-tweaks tools-debug \
 tools-sdk ssh-server-openssh"
IMAGE_INSTALL_append = " net-tools iputils dhcpcd which gzip \
 python3 python3-pip wget cmake gtest \
 git zlib patchelf nano grep vim tmux \
 swig tar unzip parted \
 e2fsprogs e2fsprogs-resize2fs"
IMAGE_INSTALL_append = " python3-pytz python3-django-cors-headers"
IMAGE_INSTALL_append = " opencv python3-opencv"
PACKAGECONFIG_append_pn-opencv_mx8 = " dnn python3 qt5 jasper \
 openmp test neon"
PACKAGECONFIG_remove_pn-opencv_mx8 = "opencl"
TOOLCHAIN_HOST_TASK_append = " nativesdk-cmake nativesdk-make"
PREFERRED_VERSION_opencv = "4.0.1%"
PREFERRED_VERSION_python3-django = "2.1%"
IMAGE_ROOTFS_EXTRA_SPACE = "20971520"

Step 6 – Bake the image:
$: bitbake image-eiq-hetero

Build Cortex M4 executable

Download MCUXpresso SDK version 2.6.0 for i.MX 8M Mini (SDK_2.6.0_EVK-MIMX8MM-ARMGCC):
- OS: Linux, Toolchain: GCC ARM Embedded
- Components: Amazon-FreeRTOS, CMSIS DSP Library, multicore
- SDK Version: 2.6.0 (2019-06-14), SDK Tag: REL_2.6.0_REL10_RFP_RC3_4

Download CMSIS-NN and copy the "CMSIS\NN" folder to "$MCUXpressoSDK_ROOT\CMSIS". Go to "$MCUXpressoSDK_ROOT\boards\evkmimx8mm\demo_apps\" and get the M4 app from CAF:
git clone https://source.codeaurora.org/external/imxsupport/eiq-heterogenous-cortexm4

[Win]: Open an ARM GCC console and go to "$MCUXpressoSDK_ROOT\boards\evkmimx8mm\demo_apps\eiq-heterogenous-cortexm4\armgcc\"
[Win]: Call "build_ddr_release.bat" to obtain "eiq-kws.bin". Deploy "eiq-kws.bin" to the boot partition of the Yocto image.

Prepare the Demo
1. Connect the 12V power supply to the board and switch SW101 to power it on.
2. Connect a USB cable between the host PC and the J901 USB port on the target board.
3. Open two serial terminals, for the A53 core and the M4 core, with the following settings:
    - 115200 baud rate
    - 8 data bits
    - No parity
    - One stop bit
    - No flow control
4. Connect a display to the board (tested with a 1920x1080 HDMI display connected through an IMX-MIPI-HDMI adapter). NOTE: depending on the display, you might need to change the config in "/etc/xdg/weston/weston.ini". The demo was tested by uncommenting the following section in this file:
[output]
name=HDMI-A-1
mode=1920x1080@60
transform=90
5. Connect the MIPI-CSI camera to the board.
6. Connect the Synaptics microphone to the board using the 60-pin connector with a ribbon. SAI3 is used for record and playback on the Cortex-M4. The following pins are used:
Pin 44 (connector) <-> I2S_TX_Data1 (mic board)
Pin 43 (connector) <-> I2S_TX_LRCLK (mic board)
Pin 41 (connector) <-> I2S_TX_CLK (mic board)
Pin 60 (connector) <-> GND (mic board)
7. Use U-Boot commands to run the demo binary. For details, please refer to "Getting Started with MCUXpresso SDK for i.MX 8M Mini.pdf".
8. After running the binary, use the "boot" command on the A core terminal to boot the kernel.
9. After the kernel boots, log in as "root".
10. After login, make sure the imx_rpmsg_pingpong kernel module is inserted (lsmod) or insert it (modprobe imx_rpmsg_pingpong).

Run the Demo

Start keyword spotting on the Cortex-M4. Stop in U-Boot and run the eiq-kws.bin executable in DDR:
u-boot=> fatload mmc 0 0x80000000 eiq-kws.bin
u-boot=> dcache flush
u-boot=> bootaux 0x80000000
u-boot=> boot

After the boot process succeeds, the ARM Cortex-M4 terminal displays the following information:
RPMSG Ping-Pong FreeRTOS RTOS API Demo...
RPMSG Share Base Addr is 0xb8000000

While the kernel boots, the ARM Cortex-M4 terminal displays the following information:
Link is up!
Nameservice announce sent.

Start face recognition on the Cortex-A. Insert the updated RPMsg driver:
$: modprobe imx_rpmsg_pingpong

After the Linux RPMsg pingpong module is installed, the ARM Cortex-M4 terminal displays the following information:
Looping forever...
Waiting for ping...
Sending pong...
96% go

First time only:
$: cd ~/eiq-heterogenous-cortexa
$: python3 wrap_migrate.py
$: python3 wrap_createsuperuser.py

Start:
$: cd ~/eiq-heterogenous-cortexa
$: python3 manage.py runserver 0.0.0.0:8000 --noreload &
$: /opt/src/bin/src

NOTE: the first instruction starts the Django server; the second shows the pin-pad on the display.

Browser access from the host PC:
- http://$BOARD_IP:8000/dashboard/: dashboard for managing users and viewing access logs
- http://$BOARD_IP:8000/admin/: manage the user database
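The wrap_* helper scripts themselves are not listed in this article. For orientation only, a minimal Django management wrapper along those lines might look like the sketch below; the settings module name is a placeholder, not taken from the demo sources:

import os
import django
from django.core.management import call_command

# Point Django at the project's settings module (name is hypothetical here)
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "smartdoor.settings")
django.setup()
call_command("migrate")  # same effect as: python3 manage.py migrate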
View full article
When comparing NPU with CPU performance on the i.MX 8M Plus, the first impression can be that inference time is much longer on the NPU. This is because the ML accelerator spends extra time performing overall initialization steps. This initialization phase, known as warmup, is necessary only once at the beginning of the application. After this step, inference executes in a truly accelerated manner, as expected from a dedicated NPU. The purpose of this document is to clarify the impact of the warmup time on overall performance.
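One simple way to see the effect is to time the first inference separately from subsequent ones. The sketch below uses the TensorFlow Lite Python API; the delegate library path and model file name are assumptions that vary per BSP:

import time
import tflite_runtime.interpreter as tflite

# Delegate path is an assumption; check your BSP for the correct library
delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")
interpreter = tflite.Interpreter(model_path="model.tflite",
                                 experimental_delegates=[delegate])
interpreter.allocate_tensors()

start = time.monotonic()
interpreter.invoke()   # first inference: includes the one-time NPU warmup
print("warmup + inference: %.3f s" % (time.monotonic() - start))

start = time.monotonic()
interpreter.invoke()   # steady state: the truly accelerated inference time
print("inference: %.3f s" % (time.monotonic() - start))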
View full article
The eIQ CMSIS-NN software for i.MX RT devices found in the MCUXpresso SDK package can be ported to other microcontroller devices in the RT family, as well as to some LPC and Kinetis devices. A very common question is which processors support inferencing of models. The answer is that inferencing simply means doing millions of multiply-and-accumulate math calculations – the dominant operation when processing any neural network – which almost any MCU or MPU is capable of. There is no special hardware or module required to do inferencing, but high core clock speeds and fast memory can drastically reduce inference time. Determining if a particular model can run on a specific device comes down to:
- How long the inference will take to run. The same model will take much longer to run on less powerful devices, and the maximum acceptable inference time depends on your particular application and your particular model.
- Whether there is enough non-volatile memory to store the weights, the model itself, and the inference engine. As a rough example, a model with one million weights quantized to 8 bits needs about 1 MB of flash for the weights alone.
- Whether there is enough RAM to keep track of the intermediate calculations and output.
The attached guide walks through how to port the CMSIS-NN inference engine to the LPC55S69 family. Similar steps can be done to port eIQ to other microcontroller devices. This guide is made available as a reference for users interested in exploring eIQ on other devices; however, only the RT1050 and RT1060 are officially supported at this time for CMSIS-NN for MCUs as part of eIQ. These other eIQ porting guides might also be of interest:
Glow Porting Guide for MCUs
TensorFlow Lite Porting Guide for RT685
View full article
NXP BSP currently does not support running a Keras application directly on i.MX. Customers who use this approach must convert their Keras model into one of the formats supported by the eIQ inference engines. This post covers converting a Keras model (.h5) to a TFLite model (.tflite).

Install TensorFlow with the same version as the eIQ TFLite support (you can find this information in the Linux User's Guide). For L4.19.35_1.0.0 the TFLite version is v1.12.0:
$ pip3 install tensorflow==1.12.0

Run the following commands in a python3 environment to convert the .h5 model to a .tflite model:
>>> from tensorflow.contrib import lite
>>> converter = lite.TFLiteConverter.from_keras_model_file('model.h5') # path to your model
>>> tfmodel = converter.convert()
>>> open("model.tflite", "wb").write(tfmodel)

The model can then be deployed and used by the TFLite inference engine in eIQ.
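As an optional sanity check (not part of the original steps), the converted model can be loaded back with the interpreter bundled in the same TensorFlow 1.12 install, to verify the file and inspect its input/output tensors:
>>> interpreter = lite.Interpreter(model_path="model.tflite")
>>> interpreter.allocate_tensors()
>>> print(interpreter.get_input_details())
>>> print(interpreter.get_output_details())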
View full article
This Docker image is designed to enable portability and leverage the i.MX 8 series GPUs. It adds flexibility on top of the Yocto image and allows the use of machine learning libraries available in Ubuntu, which is otherwise difficult. Using Docker, a user can develop and prototype GPU applications and then ship and run them anywhere using the container. This app note describes how to enable this Docker image. The Docker image is a wrapper that provides an application with the necessary dependencies to execute code on the GPU. This has the potential to greatly simplify many customer developments for Linux: the Yocto BSP stays intact, while customers can develop applications using the widely available neural network frameworks and libraries, leveraging the GPU at the same time without compromising on performance. It is not quite as straightforward as a full Debian distribution, but still easy to adopt.
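As an illustrative usage sketch (not taken from the app note; the image name is a placeholder), a container can be started with the i.MX GPU device node passed through so applications inside it can reach the Vivante GPU driver:

$: docker run -it --device /dev/galcore ubuntu:18.04 /bin/bash

Inside the container, Ubuntu's package tooling can then install ML frameworks; note that the GPU user-space libraries from the BSP must also be made available to the container for acceleration to work.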
View full article
After setting up a Yocto build environment as described in the L4.19.35_1.0.0 BSP Yocto Project User's Guide, apply the attached patch to the meta-fsl-bsp-release layer:
<yocto_dir>/sources/meta-fsl-bsp-release$ git am eiq-sample-apps-Add-recipe.patch
To include the applications in the image, add the following line to local.conf:
IMAGE_INSTALL_append += "eiq-sample-apps"
This includes all applications from the eIQ Sample Apps repository in the built image.
View full article
TensorFlow provides a set of pretrained models ready for inference; please check Caffe and TensorFlow Pretrained Models. You can use a model as-is, or you can retrain it with your own data to detect specific objects for your custom application. This post shows some useful links that can help you with this task.

Inception models: Training Custom Object Detector — TensorFlow Object Detection API tutorial documentation. The link above also shows the steps needed to prepare your data for the retraining process (image labeling).

MobileNet models: Train your own model with SSD MobileNet · ichbinblau/tfrecord_generator Wiki · GitHub. For the link above, you also need to follow the steps to prepare your data for the retraining process, as in the Inception retraining tutorial. Make sure you exported the needed PYTHONPATH variable:
export PYTHONPATH=$PYTHONPATH:/path/to/tf_models/models/research:/path/to/tf_models/models/research/slim

A few tips:
- Retraining a model is faster than training a model from scratch, but it can still take a long time to complete. The duration depends on many factors, such as the number of steps defined in the model's *.config file. Be aware of overfitting your model if your dataset is too small and the number of steps is too large. Also, TensorFlow saves checkpoints during the retraining process, which you can prepare for inference and test before retraining is over, to check whether the model is already good enough for your application. Please check "Exporting a Trained Inference Graph" in the Inception retraining tutorial and keep in mind that you can follow these steps before the training process is complete. Of course, early checkpoints may not be well trained.
- If you are running OpenCV DNN inference, you may need to run the following command to get the *.pbtxt file, where X corresponds to the number of classes trained in your model and tf_text_graph_ssd.py is an OpenCV DNN script:
python tf_text_graph_ssd.py --input frozen_inference_graph.pb --output frozen_inference_graph.pbtxt --num_classes X
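For reference, exporting a checkpoint with the TensorFlow Object Detection API typically looks like the command below; the paths and the checkpoint number are placeholders for your own training run:

python export_inference_graph.py --input_type image_tensor \
    --pipeline_config_path path/to/model.config \
    --trained_checkpoint_prefix path/to/model.ckpt-10000 \
    --output_directory exported_graph/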
View full article
Getting Started with eIQ Software for i.MX Applications Processors Getting Started with eIQ for i.MX RT
View full article
eIQ Software for i.MX application processors

eIQ Machine Learning Software for i.MX Linux - 5.4.3_1.0.0 GA for i.MX6/7 and i.MX8MQ/8MM/8MN/8QM/8QXP has been released. eIQ Machine Learning Software for i.MX Linux - 5.4.24_2.1.0 BETA for i.MX8QXPlus, BETA for i.MX8MP, and ALPHA 2 for i.MX8DXL has been released. It contains machine learning support for Arm NN, TensorFlow and TensorFlow Lite, ONNX, and OpenCV. For running on Arm Cortex-A cores, these inference engines are accelerated with Arm NEON instructions. For running on the NPU (of the i.MX 8M Plus) and i.MX 8 GPUs, NXP has included optimizations in the Arm NN and TensorFlow Lite inference engines. For more information and complete details, please be sure to check out the "NXP eIQ Machine Learning" chapter in the Linux User Guide (starting with the L4.19 releases; users of L4.14 releases should refer to NXP eIQ™ Machine Learning Software Development Environment for i.MX Applications Processors). You can access the corresponding sample applications at https://source.codeaurora.org/external/imxsupport/eiq_sample_apps/. For more information on artificial intelligence, machine learning and eIQ Software please visit AI & Machine Learning | NXP.

eIQ Software for i.MX RT crossover processors

eIQ is now included in the MCUXpresso SDK package for i.MX RT1050 and i.MX RT1060:
1. Go to https://mcuxpresso.nxp.com and search for the SDK for your board.
2. On the SDK builder page, click on "Add software component".
3. Click on "Select All" and verify the eIQ software option is now checked. Then click on "Save Changes".
4. Download the SDK. It will be saved as a .zip file.
eIQ projects can be found in the \boards\<board_name>\eiq_examples folder, and eIQ source code in the \middleware\eiq folder. More details can be found in this Community post on how to get started with eIQ on i.MX RT devices.
View full article
The eIQ demos for i.MX RT use arrays for input data for inferencing. The attached guide and scripts describe how to create custom input data (both images and audio) for use with the eIQ examples.
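The attached guide and scripts are the reference; as a rough sketch of the idea for image data (assuming an eIQ example that expects raw RGB bytes, with file names and the input size chosen purely for illustration):

import numpy as np
from PIL import Image

# Resize to the model's expected input and flatten to raw RGB bytes
img = Image.open("sample.jpg").convert("RGB").resize((128, 128))
pixels = np.asarray(img, dtype=np.uint8).flatten()

# Emit a C array that can replace an example's static input data
with open("input_data.h", "w") as f:
    f.write("static const uint8_t input_data[] = {\n")
    f.write(",".join(str(b) for b in pixels))
    f.write("\n};\n")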
View full article
The OpenCV Deep Neural Network (DNN) module performs inference in deep networks. It is easy to use and a great way to get started with computer vision and inferencing. OpenCV DNN supports many frameworks, such as: Caffe, TensorFlow, Torch, Darknet, and models in ONNX format. For a simple object detection code with OpenCV DNN, please check Object Detection with OpenCV. For running OpenCV DNN inference with camera input, please refer to eIQ Sample Apps - OpenCV Lab 3. For exploring different models ready for OpenCV DNN inference, please refer to Caffe and TensorFlow Pretrained Models. For more information on the OpenCV DNN module, please check Deep Learning in OpenCV.
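To give a flavor of the API (a minimal sketch, not one of the linked labs; the model file names are placeholders), loading a TensorFlow SSD-style model and running one forward pass looks like this:

import cv2

# Load a frozen TensorFlow graph plus its .pbtxt description (placeholder names)
net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb",
                                    "frozen_inference_graph.pbtxt")
img = cv2.imread("input.jpg")
blob = cv2.dnn.blobFromImage(img, size=(300, 300), swapRB=True)
net.setInput(blob)
detections = net.forward()

# Each detection row holds [batch_id, class_id, score, x1, y1, x2, y2]
for det in detections[0, 0]:
    if float(det[2]) > 0.5:   # keep confident detections only
        print("class", int(det[1]), "score", float(det[2]))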
View full article
Caffe and TensorFlow provide a set of pretrained models ready for inference. They can be used as is or for retraining the models with your own dataset. Please check: TensorFlow Hosted Models TensorFlow Model Zoo Caffe Model Zoo
View full article
All the required steps for getting a full eIQ image can be found in the following documentation: NXP eIQ(TM) Machine Learning Enablement. These labs can be applied to all i.MX 8 boards; this particular tutorial describes the i.MX 8M Mini EVK board.

Hardware Requirements
- i.MX 8MM EVK board
- USB cable (micro-B to standard-A)
- USB Type-C to A adapter
- USB Type-C 45W power delivery supply
- IMX-MIPI-HDMI daughter card
- MINISASTOCSI camera daughter card
- 2x Mini-SAS cables
- AI/ML BSP flashed onto the SD card
- Ethernet cable
- USB mouse
- HDMI cable and monitor

Software Requirements
- For GNU/Linux: minicom or screen.
- For Windows: PuTTY.

Preparing the Board
1. Connect the IMX-MIPI-HDMI daughter card to a Mini-SAS cable, plug it into the connector labeled DSI MIPI (J801), and then connect the HDMI monitor cable to it. Warning: do not hot-plug the Mini-SAS cables and cards or the boards will be damaged! Remove power completely before connecting or disconnecting the Mini-SAS ends.
2. Connect the MINISASTOCSI camera daughter card to a Mini-SAS cable and plug it into the connector labeled CSI MIPI (J802).
3. Connect the micro-B end of the supplied USB cable into the Debug UART port J901 and the other end to the host computer:
- For GNU/Linux: configure minicom or screen with the /dev/ttyUSB port number and set the baud rate to 115200. The port number can be found by checking the /dev directory.
- For Windows: configure PuTTY with the board COM port number and set the baud rate to 115200. The port number can be found in the Windows Device Manager.
NOTE: This board enumerates two device port numbers; use the higher one, which communicates with the Cortex-A.
4. Connect the MicroSD card to the MicroSD card connector J701 on the back side of the board. In order to boot the board from the MicroSD card, set the boot switches SW1101 and SW1102 according to the table below:

BOOT Device        SW1101       SW1102
MicroSD / uSDHC2   0110110010   0001101000

NOTE: The boot device settings above apply to the revision C i.MX 8MM EVK board. Other revisions of the board may have a different number of boot mode switches and slightly different settings. Follow the SW1101 and SW1102 values printed on your specific board for booting from the MicroSD card.
5. Connect the power supply cable to the power connector J302 and power on the board by flipping the switch SW101.
NOTE: For more details on the board peripherals, please consult the i.MX 8MM EVK Getting Started guide.

After these steps, everything is set to start with the eIQ Sample Apps. Go to eIQ Sample Apps - Object Recognition using Arm NN.
View full article
This Lab 4 explains how to get started with the TensorFlow Lite application demo on an i.MX 8 board using inference engines for eIQ Software. See also: eIQ Sample Apps - Overview and eIQ Sample Apps - Introduction. Get the source code available on Code Aurora: TensorFlow Lite MobileFaceNets MIPI/USB Camera.

Face Detection Using OpenCV
This application demo uses Haar feature-based cascade classifiers for real-time face detection. The pre-trained Haar feature-based cascade classifier for faces, stored as an XML file, is already contained in OpenCV. The XML file for faces is stored in the opencv/data/haarcascades/ folder, as well as on Code Aurora. Read Face Detection using Haar Cascades for more details.

TensorFlow Lite implementation for MobileFaceNets
The MobileFaceNets model is re-trained on a host PC with a smaller batch size and input size to get higher performance. The trained model is loaded as a source file in this demo.

Setting Up the Board
Step 1 - Download the demo from eIQ Sample Apps and put it in the /opt/tflite folder. Then enter the src folder:
root@imx8mmevk:~# cd /opt/tflite/examples-tflite/face_recognition/src/
root@imx8mmevk:/opt/tflite/examples-tflite/face_recognition/src#
This folder should include these files:
.
├── face_detect_helpers.cpp
├── face_detect_helpers.h
├── face_detect_helpers_impl.h
├── face_recognition.cpp
├── face_recognition.h
├── haarcascade_frontalface_alt.xml
├── Makefile
├── mfn.h
├── profiling.h
└── ThreadPool.h
Step 2 - Compile the source code on the board:
root@imx8mmevk:/opt/tflite/examples-tflite/face_recognition/src# make
Step 3 - Run the demo:
root@imx8mmevk:/opt/tflite/examples-tflite/face_recognition/src# ./FaceRecognition -c 0 -h 0.85
NOTE: -c is used to specify the camera index; '0' means the MIPI/USB camera is mounted on /dev/video0. -h is a threshold for the prediction score.
Step 4 - Add a new person to the face data set. While the demo is running, it detects the biggest face in real time. Once a face is detected, you can use the keyboard on the right of the GUI to input the new person's name, then click 'Add new person' to add the face to the data set. In brief:
1. Detect the face.
2. Input the new person's name.
3. Click 'Add new person'.
NOTE: Once new faces are added, a folder named 'data' is created in the current directory. If you want to remove a face from the data set, just delete it in 'data'.
View full article
The eIQ Sample Apps repository hosts machine learning application demos based on the eIQ™ ML Software Development Environment. The following examples were tested and used for training purposes. To make them easy to follow, each application contains a read-me file that lets the user get started with the eIQ demos. The eIQ sample apps target the latest eIQ release and are split into lab sections. Before starting with the examples, read the introduction part: eIQ Sample Apps - Introduction.

Object Recognition using Arm NN: this section contains samples for running inference and predicting different objects. It also includes an extension that can recognize any given camera input/object. eIQ Sample Apps - Object Recognition using Arm NN

Handwritten Digit Recognition: this section focuses on a comparison of inference time between different models for handwritten digit recognition. eIQ Sample Apps - Handwritten Digit Recognition

Object Recognition using OpenCV DNN: this section uses the OpenCV DNN module for running inference and detecting objects in an image. It also includes an extension that can detect any given camera input/object. eIQ Sample Apps - Object Recognition using OpenCV DNN

Face Recognition using TensorFlow Lite: this section uses a model for running inference and recognizing faces. eIQ Sample Apps - Face Recognition using TF Lite

TensorFlow Lite Quantization: this tutorial demonstrates how to convert a TensorFlow model to TensorFlow Lite and then apply quantization. eIQ Sample Apps - TFLite Quantization

TensorFlow Transfer Learning: this lab takes a TensorFlow image classification model and re-trains it to categorize images of flowers. eIQ Transfer Learning Lab with i.MX 8

To deploy the demos from the eIQ Sample Apps repository to an i.MX 8 board, please check: Deploying the eIQ Sample Apps to an i.MX8 board. These lab sections will be updated frequently in order to keep all code and tutorials up to date. Check also: https://community.nxp.com/community/eiq/blog/2020/06/30/pyeiq-a-python-framework-for-eiq-on-imx-processors
View full article
UPDATE: Note that this document describes eIQ Machine Learning Software for the NXP L4.14 BSP release. Beginning with the L4.19 BSP, eIQ Software is pre-integrated in the BSP release and this document is no longer necessary or being maintained. For more information on eIQ Software in these releases (L4.19, L5.4, etc.), please refer to the "NXP eIQ Machine Learning" chapter in the Linux User Guide for that specific release.

Original post: eIQ Machine Learning Software for the i.MX Linux 4.14.y kernel series is available now. The NXP eIQ™ Machine Learning Software Development Environment enables the use of ML algorithms on NXP MCUs, i.MX RT crossover processors, and i.MX family SoCs. eIQ software includes inference engines, neural network compilers, and optimized libraries, and leverages open source technologies. eIQ is fully integrated into our MCUXpresso SDK and Yocto development environments, allowing you to develop complete system-level applications with ease.

Source download, build and installation: please refer to the document NXP eIQ(TM) Machine Learning Enablement (UM11226.pdf) for detailed instructions on how to download, build, and install eIQ software on your platform.

Sample applications: to help get you started right away, we've posted numerous how-tos and sample applications right here in the community. Please refer to eIQ Sample Apps - Overview.

Supported platforms: eIQ Machine Learning Software for i.MX Linux 4.14.y supports the L4.14.78-1.0.0 and L4.14.98-2.0.0 GA releases running on i.MX 8 Series applications processors.

For more information on artificial intelligence, machine learning and eIQ Software please visit AI & Machine Learning | NXP.
View full article
The attached project enables users to capture and save camera data captured by an i.MXRT1060-EVK board onto a microSD card. This project does not do inferencing of a model; instead, it is meant to generate images that can then be used on a PC for training a model. The images are saved in RGB NHWC format as binary files on the microSD card, and a Python script running on the PC can then convert those binary files into the PNG image format.

Software requirements
====================
- MCUXpresso IDE 11.2.x
- MCUXpresso SDK for RT1060

Hardware requirements
=====================
- Micro USB cable
- IMXRT1060-EVK board with included camera + RK043FN02H-CT LCD
- MicroSD card
- Personal computer (Windows)
- Micro SD card reader
- Python 3.x installed

Board settings
==============
Camera and LCD connected to the i.MXRT1060-EVK

Prepare the demo
================
1. Insert a micro SD card into the micro SD card slot on the i.MXRT1060-EVK (J39)
2. Open MCUXpresso IDE 11.2 and import the project using "Import project(s) from file system.." from the Quickstart panel. Use the "Archive" option to select the zip file that contains this project.
3. If desired, two #define values can be modified in camera_capture.c to adjust the captured image size:
#define EXTRACT_HEIGHT 256 //Max EXTRACT_HEIGHT value possible is 271 (due to border drawing)
#define EXTRACT_WIDTH 256 //Max EXTRACT_WIDTH value possible is 480
4. Build the project by clicking on "Build" in the Quickstart panel
5. Connect a USB cable between the host PC and the OpenSDA port (J41) on the target board.
6. Open a serial terminal with the following settings:
   - 115200 baud rate
   - 8 data bits
   - No parity
   - One stop bit
   - No flow control
7. Download the program to the target board by clicking on "Debug" in the Quickstart panel
8. Click on the "Resume" icon to begin running the demo.

Running the demo
================
The terminal will ask for a classification name. This will create a new directory on the SD card with that name. The name is limited to 5 characters because the FATFS file system supports only 8 characters in a file name, and three of those characters are used to number the images. For best results, the object should be centered in the selection rectangle and nearly (but not completely) fill it. The camera should be stabilized with your finger or by some other means to prevent shaking. Also ensure the camera lens has been focused as described in the instructions for connecting the camera and LCD. While running the demo, press the 'c' key to enter a new classification name, or press 'q' to quit the program and remove the micro SD card.

Transfer the .bin files created on the SD card to your PC, into the same directory as the Python script found in the "scripts" directory, to convert the images to PNG format. If the captured image is a square (width == height), the script can be called with:
python convert_image.py directory_name
which converts all the .BIN files in the specified directory to PNG files. If the captured image is not a square, the width and height can be specified on the command line:
python convert_image.py directory_name width height

Terminal Output
==============
Camera SD Card Capture
Extracted Image: Height x Width: 256x256
Please insert a card into board.
Card inserted.
Mounting SD Card
Enter name of new class (must be less than 5 characters): test
Creating directory test......
Press any key to capture image.
Press 'c' to change class or 'q' to quit
Writing file /test/test001.bin......
Write Complete
Press any key to capture image.
Press 'c' to change class or 'q' to quit
Remove SD Card
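For reference, a minimal sketch of the binary-to-PNG conversion that a script like convert_image.py performs (the actual script ships in the project's "scripts" directory; this sketch assumes the raw RGB byte layout described above and a hypothetical script name):

import sys
import numpy as np
from PIL import Image

# Usage: python bin_to_png.py file.bin width height
path, width, height = sys.argv[1], int(sys.argv[2]), int(sys.argv[3])
raw = np.fromfile(path, dtype=np.uint8)        # raw RGB bytes from the board
img = raw.reshape((height, width, 3))          # one HxWx3 (NHWC) frame
Image.fromarray(img, "RGB").save(path.replace(".bin", ".png"))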
View full article