eIQ Machine Learning Software Knowledge Base


When comparing NPU and CPU performance on the i.MX 8M Plus, the common perception is that inference time is much longer on the NPU. This is because the ML accelerator spends more time performing overall initialization steps. This initialization phase is known as warmup and is necessary only once, at the beginning of the application. After this step, inference is executed in a truly accelerated manner, as expected for a dedicated NPU. The purpose of this document is to clarify the impact of the warmup time on overall performance.
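A minimal sketch of how warmup can be separated from steady-state inference time when benchmarking (assuming a Python environment with tflite_runtime and a hypothetical model.tflite on the board; the delegate setup that actually dispatches operators to the NPU is release-dependent and not shown here):

import time
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))

# The first invocation includes the one-time warmup cost (graph preparation,
# weight transfer to the accelerator, etc.).
start = time.monotonic()
interpreter.invoke()
print("warmup + first inference: %.1f ms" % ((time.monotonic() - start) * 1000))

# Subsequent invocations reflect the true accelerated inference time.
times = []
for _ in range(10):
    start = time.monotonic()
    interpreter.invoke()
    times.append(time.monotonic() - start)
print("average steady-state inference: %.1f ms" % (1000 * sum(times) / len(times)))

Reporting these two numbers separately avoids the impression that the NPU is slower than the CPU.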
The eIQ CMSIS-NN software for i.MX RT devices found in the MCUXpresso SDK package can be ported to other microcontrollers in the RT family, as well as to some LPC and Kinetis devices. A very common question is which processors support inferencing of models. The answer is that inferencing simply means doing millions of multiply-and-accumulate (MAC) operations – the dominant computation when processing any neural network – which almost any MCU or MPU is capable of. No special hardware or module is required to do inferencing; however, high core clock speeds and fast memory can drastically reduce inference time. Determining whether a particular model can run on a specific device comes down to:
- How long the inference will take. The same model takes much longer to run on less powerful devices, and the maximum acceptable inference time depends on your particular application and model.
- Whether there is enough non-volatile memory to store the weights, the model itself, and the inference engine.
- Whether there is enough RAM to hold the intermediate calculations and the output.
A rough feasibility check along these lines is sketched below. The attached guide walks through how to port the CMSIS-NN inference engine to the LPC55S69 family, and similar steps can be used to port eIQ to other microcontroller devices. This guide is provided as a reference for users interested in exploring eIQ on other devices; however, only the RT1050 and RT1060 are officially supported at this time for CMSIS-NN for MCUs as part of eIQ. These other eIQ porting guides might also be of interest: Glow Porting Guide for MCUs, TensorFlow Lite Porting Guide for RT685.
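As a rough illustration of these criteria, a back-of-the-envelope feasibility check might look like the sketch below. All numbers are hypothetical placeholders rather than measured values for any specific part, and real inference time depends heavily on the kernels and memory system:

# Hypothetical feasibility check for running a model on a given MCU.
model_flash_kb = 250        # weights + model structure (e.g. a quantized model)
engine_flash_kb = 100       # inference engine code footprint
activation_ram_kb = 120     # largest set of intermediate buffers alive at once
macs_per_inference = 12e6   # multiply-accumulate operations per inference

device_flash_kb = 640
device_ram_kb = 320
core_mhz = 150
macs_per_cycle = 0.5        # assumed effective MAC throughput of the core

flash_ok = model_flash_kb + engine_flash_kb <= device_flash_kb
ram_ok = activation_ram_kb <= device_ram_kb
est_inference_ms = macs_per_inference / (macs_per_cycle * core_mhz * 1e6) * 1000

print("fits in flash:", flash_ok, "fits in RAM:", ram_ok)
print("rough inference time estimate: %.0f ms" % est_inference_ms)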
NXP BSP currently does not support running a Keras application directly on i.MX. Customers using this approach must convert their Keras model to one of the inference engines supported in eIQ. This post covers converting a Keras model (.h5) to a TFLite model (.tflite). Install TensorFlow with the same version as the eIQ TFLite release (this information can be found in the Linux User's Guide); for L4.19.35_1.0.0 the TFLite version is v1.12.0:

$ pip3 install tensorflow==1.12.0

Run the following commands in a python3 environment to convert the .h5 model to a .tflite model:

>>> from tensorflow.contrib import lite
>>> converter = lite.TFLiteConverter.from_keras_model_file('model.h5')  # path to your model
>>> tfmodel = converter.convert()
>>> open("model.tflite", "wb").write(tfmodel)

The resulting model can then be deployed and used with the TFLite inference engine in eIQ.
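Before deploying to the board, the converted model can be sanity-checked on the host with a quick sketch like the one below (assuming the same TensorFlow 1.x environment; the random dummy input is only there to verify that the model loads and runs):

import numpy as np
import tensorflow as tf

# Load the converted model and run one dummy inference to verify it.
interpreter = tf.contrib.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

dummy = np.random.random_sample(inp["shape"]).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

print("output shape:", interpreter.get_tensor(out["index"]).shape)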
This Docker setup is designed to enable portability and to leverage the i.MX 8 series GPUs. It adds flexibility on top of the Yocto image and allows the use of machine learning libraries available in Ubuntu, which is otherwise difficult. Using Docker, a user can develop and prototype GPU applications and then ship and run them anywhere using the container. This app note describes how to enable this Docker. The Docker image is a wrapper that provides an application with the necessary dependencies to execute code on the GPU. This is a significant achievement and has the potential to greatly simplify many customer developments for Linux: the Yocto BSP is kept intact, while customers can develop applications using widely available neural network frameworks and libraries and, at the same time, leverage the GPU without compromising on performance. Not quite as straightforward as a full Debian environment, but it should still be an easy sell. marcofranchi diegodorta
After setting up a Yocto build environment as described in the L4.19.35_1.0.0 BSP Yocto Project User's Guide, apply the attached patch to the meta-fsl-bsp-release layer:

<yocto_dir>/sources/meta-fsl-bsp-release$ git am eiq-sample-apps-Add-recipe.patch

To include the applications in the image, add the following line to local.conf:

IMAGE_INSTALL_append += "eiq-sample-apps"

This will include all applications from the eIQ Sample Apps repository in the built image.
This Lab 4 explains how to get started with a TensorFlow Lite application demo on an i.MX 8 board using the inference engines in eIQ Software.

eIQ Sample Apps - Overview
eIQ Sample Apps - Introduction

Get the source code available on Code Aurora: TensorFlow Lite MobileFaceNets MIPI/USB Camera

Face Detection Using OpenCV
This application demo uses Haar feature-based cascade classifiers for real-time face detection. The pre-trained Haar cascade classifier for faces, stored as an XML file, is already included with OpenCV in the opencv/data/haarcascades/ folder; it is also available on Code Aurora. Read Face Detection using Haar Cascades for more details.

TensorFlow Lite implementation for MobileFaceNets
The MobileFaceNets model is re-trained on a host PC with a smaller batch size and input size to achieve higher performance. The trained model is loaded as a source file in this demo.

Setting Up the Board

Step 1 - Download the demo from eIQ Sample Apps and put it in the /opt/tflite folder, then enter the src folder:

root@imx8mmevk:~# cd /opt/tflite/examples-tflite/1-example/src/
root@imx8mmevk:/opt/tflite/examples-tflite/1-example/src#

This folder should include these files:

.
├── face_detect_helpers.cpp
├── face_detect_helpers.h
├── face_detect_helpers_impl.h
├── face_recognition.cpp
├── face_recognition.h
├── haarcascade_frontalface_alt.xml
├── Makefile
├── mfn.h
├── profiling.h
└── ThreadPool.h

Step 2 - Compile the source code on the board:

root@imx8mmevk:/opt/tflite/examples-tflite/1-example/src# make

Step 3 - Run the demo:

root@imx8mmevk:/opt/tflite/examples-tflite/1-example/src# ./FaceRecognition -c 0 -h 0.85

NOTE: -c specifies the camera index; '0' means the MIPI/USB camera is mounted on /dev/video0. -h is a threshold for the prediction score.

Step 4 - Add a new person to the face data set.

While the demo is running, it detects the single largest face in real time. Once a face is detected, use the on-screen keyboard on the right of the GUI to enter the new person's name, then click 'Add new person' to add the face to the data set. In brief: 1. Detect the face. 2. Enter the new person's name. 3. Click 'Add new person'.

NOTE: Once new faces are added, a folder named 'data' is created in the current directory. To remove a face from the data set, simply delete it from 'data'.
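For reference, the Haar cascade face-detection stage used by this demo can be reproduced on its own in a few lines of Python with OpenCV (a minimal sketch, assuming OpenCV with Python bindings and a camera on /dev/video0; the actual demo implements this in C++):

import cv2

# Load the same pre-trained Haar cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")
cap = cv2.VideoCapture(0)  # /dev/video0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()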
TensorFlow provides a set of pretrained models ready for inference; please check Caffe and TensorFlow Pretrained Models. You can use a model as-is, or retrain it with your own data to detect specific objects for your custom application. This post lists some useful links that can help with this task.

Inception models: Training Custom Object Detector — TensorFlow Object Detection API tutorial documentation. The link above also shows the steps needed to prepare your data for the retraining process (image labeling).

MobileNet models: Train your own model with SSD MobileNet · ichbinblau/tfrecord_generator Wiki · GitHub. For this link, you also need to follow the data preparation steps from the Inception retraining tutorial. Make sure you exported the needed PYTHONPATH variable:

export PYTHONPATH=$PYTHONPATH:/path/to/tf_models/models/research:/path/to/tf_models/models/research/slim

A few tips:

- Retraining a model is faster than training one from scratch, but it can still take a long time to complete. It depends on many factors, such as the number of steps defined in the model's *.config file. Be aware of overfitting if your dataset is too small and the number of steps is too large. Also, TensorFlow saves checkpoints during retraining, which you can prepare for inference and test before the retraining process is over, to check whether the model is already good enough for your application. Please check "Exporting a Trained Inference Graph" in the Inception retraining tutorial and keep in mind that you can follow these steps before the training process is complete. Of course, early checkpoints may not be well trained.

- If you are running OpenCV DNN inference, you may need to run the following command to generate the *.pbtxt file, where X is the number of classes trained in your model and tf_text_graph_ssd.py is an OpenCV DNN script:

python tf_text_graph_ssd.py --input frozen_inference_graph.pb --output frozen_inference_graph.pbtxt --num_classes X
Getting Started with eIQ Software for i.MX Applications Processors
Getting Started with eIQ for i.MX RT
eIQ Software for i.MX application processors

eIQ Machine Learning Software for i.MX Linux - 5.4.3_1.0.0 GA has been released for i.MX6/7 and i.MX8MQ/8MM/8MN/8QM/8QXP. eIQ Machine Learning Software for i.MX Linux - 5.4.24_2.1.0 has been released as BETA for i.MX8QXPlus, BETA for i.MX8MP, and ALPHA 2 for i.MX8DXL. It contains machine learning support for Arm NN, TensorFlow and TensorFlow Lite, ONNX, and OpenCV. For running on Arm Cortex-A cores, these inference engines are accelerated with Arm NEON instructions. For running on the NPU (of the i.MX 8M Plus) and i.MX 8 GPUs, NXP has included optimizations in the Arm NN and TensorFlow Lite inference engines. For more information and complete details, please be sure to check out the "NXP eIQ Machine Learning" chapter in the Linux User Guide (starting with the L4.19 releases; users of L4.14 releases should refer to NXP eIQ™ Machine Learning Software Development Environment for i.MX Applications Processors). You can access the corresponding sample applications at https://source.codeaurora.org/external/imxsupport/eiq_sample_apps/. For more information on artificial intelligence, machine learning and eIQ Software please visit AI & Machine Learning | NXP.

eIQ Software for i.MX RT crossover processors

eIQ is now included in the MCUXpresso SDK package for i.MX RT1050 and i.MX RT1060:
1. Go to https://mcuxpresso.nxp.com and search for the SDK for your board.
2. On the SDK builder page, click on "Add software component".
3. Click on "Select All" and verify the eIQ software option is now checked. Then click on "Save Changes".
4. Download the SDK. It will be saved as a .zip file.
5. eIQ projects can be found in the \boards\<board_name>\eiq_examples folder.
6. eIQ source code can be found in the \middleware\eiq folder.

More details on how to get started with eIQ on i.MX RT devices can be found in this Community post.
The eIQ demos for i.MX RT use arrays as input data for inferencing. The attached guide and scripts describe how to create custom input data (both images and audio) for use with the eIQ examples.
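As an illustration of the general idea behind the image scripts (a minimal sketch only; the attached guide and scripts remain the reference, and the file name, input size, and array name below are hypothetical), an image can be turned into a C array roughly like this:

from PIL import Image
import numpy as np

# Load an image, resize it to the model's expected input size, and flatten
# it to interleaved RGB (NHWC) byte order.
img = Image.open("sample.png").convert("RGB").resize((128, 128))
data = np.asarray(img, dtype=np.uint8).flatten()

# Emit a C array that can be compiled into an eIQ example project.
with open("sample_image.h", "w") as f:
    f.write("static const unsigned char sample_image[] = {\n")
    for i in range(0, len(data), 12):
        f.write("    " + ", ".join(str(v) for v in data[i:i + 12]) + ",\n")
    f.write("};\n")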
The OpenCV Deep Neural Network (DNN) module performs inference on deep networks. It is easy to use and is a great way to get started with computer vision and inferencing. OpenCV DNN supports many frameworks and formats, such as:
- Caffe
- TensorFlow
- Torch
- Darknet
- Models in ONNX format
For a simple object detection example with OpenCV DNN, please check Object Detection with OpenCV. For running OpenCV DNN inference with camera input, please refer to eIQ Sample Apps - OpenCV Lab 3. For exploring different models ready for OpenCV DNN inference, please refer to Caffe and TensorFlow Pretrained Models. For more information on the OpenCV DNN module, please check Deep Learning in OpenCV.
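As a small illustration of how the module is used (a minimal sketch, assuming a TensorFlow SSD detection model exported as frozen_inference_graph.pb with a matching frozen_inference_graph.pbtxt as produced in the retraining post above; the file names and the 0.5 score threshold are only examples):

import cv2
import numpy as np

# Load a TensorFlow model with the OpenCV DNN module.
net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb",
                                    "frozen_inference_graph.pbtxt")

img = cv2.imread("input.jpg")
h, w = img.shape[:2]

# SSD-style preprocessing: 300x300 input blob, BGR -> RGB.
blob = cv2.dnn.blobFromImage(img, size=(300, 300), swapRB=True, crop=False)
net.setInput(blob)
detections = net.forward()

# Each detection row is (image_id, class_id, score, x1, y1, x2, y2),
# with box coordinates normalized to [0, 1].
for det in detections[0, 0]:
    score = float(det[2])
    if score > 0.5:
        x1, y1, x2, y2 = (det[3:7] * np.array([w, h, w, h])).astype(int)
        cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite("output.jpg", img)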
Caffe and TensorFlow provide a set of pretrained models ready for inference. They can be used as-is or retrained with your own dataset. Please check:
- TensorFlow Hosted Models
- TensorFlow Model Zoo
- Caffe Model Zoo
All the required steps for getting a full eIQ image can be found in the following documentation: NXP eIQ(TM) Machine Learning Enablement. These labs can be applied to all i.MX 8 boards; this particular tutorial describes the i.MX 8M Mini EVK board.

Hardware Requirements
- i.MX 8MM EVK board
- USB cable (micro-B to standard-A)
- USB Type-C to A adapter
- USB Type-C 45W power delivery supply
- IMX-MIPI-HDMI daughter card
- MINISASTOCSI camera daughter card
- 2x Mini-SAS cables
- AI/ML BSP flashed onto the SD card
- Ethernet cable
- USB mouse
- HDMI cable
- Monitor

Software Requirements
- For GNU/Linux: minicom or screen.
- For Windows: PuTTY.

Preparing the Board

Connect the IMX-MIPI-HDMI daughter card to a Mini-SAS cable, plug it into the connector labeled DSI MIPI (J801), and then connect the HDMI monitor cable to it.

Warning: Do not hot-plug the Mini-SAS cables and cards or the boards will be damaged! Remove power completely before connecting or disconnecting the Mini-SAS ends.

Connect the MINISASTOCSI camera daughter card to a Mini-SAS cable and plug it into the connector labeled CSI MIPI (J802).

Connect the micro-B end of the supplied USB cable into Debug UART port J901 and the other end to the host computer:
- For GNU/Linux: Configure minicom or screen with the /dev/ttyUSB port number and set the baud rate to 115200. The port number can be found by checking the /dev directory.
- For Windows: Configure PuTTY with the board COM port number and set the baud rate to 115200. The port number can be found in the Windows Device Manager.

NOTE: This board enumerates two device port numbers; use the higher number, which is the one used to communicate with the Cortex-A cores.

Connect the MicroSD card to the MicroSD card connector J701 on the back side of the board. In order to boot the board from the MicroSD card, set the boot switches SW1101 and SW1102 according to the table below:

BOOT Device        SW1101        SW1102
MicroSD / uSDHC2   0110110010    0001101000

NOTE: The boot device settings above apply to the revision C i.MX 8MM EVK board. Other board revisions may have a different number of boot mode switches and slightly different settings. Please follow the SW1101 and SW1102 values printed on your specific board for booting from the MicroSD card.

Connect the power supply cable to the power connector J302 and power on the board by flipping the switch SW101.

NOTE: For more details on the board peripherals, please consult the i.MX 8MM EVK Getting Started guide.

After these steps, everything is set to start with the eIQ Sample Apps. Go to eIQ Sample Apps - Object Recognition using Arm NN.
This Lab 4 explains how to get started with a TensorFlow Lite application demo on an i.MX 8 board using the inference engines in eIQ Software.

eIQ Sample Apps - Overview
eIQ Sample Apps - Introduction

Get the source code available on Code Aurora: TensorFlow Lite MobileFaceNets MIPI/USB Camera

Face Detection Using OpenCV
This application demo uses Haar feature-based cascade classifiers for real-time face detection. The pre-trained Haar cascade classifier for faces, stored as an XML file, is already included with OpenCV in the opencv/data/haarcascades/ folder; it is also available on Code Aurora. Read Face Detection using Haar Cascades for more details.

TensorFlow Lite implementation for MobileFaceNets
The MobileFaceNets model is re-trained on a host PC with a smaller batch size and input size to achieve higher performance. The trained model is loaded as a source file in this demo.

Setting Up the Board

Step 1 - Download the demo from eIQ Sample Apps and put it in the /opt/tflite folder, then enter the src folder:

root@imx8mmevk:~# cd /opt/tflite/examples-tflite/face_recognition/src/
root@imx8mmevk:/opt/tflite/examples-tflite/face_recognition/src#

This folder should include these files:

.
├── face_detect_helpers.cpp
├── face_detect_helpers.h
├── face_detect_helpers_impl.h
├── face_recognition.cpp
├── face_recognition.h
├── haarcascade_frontalface_alt.xml
├── Makefile
├── mfn.h
├── profiling.h
└── ThreadPool.h

Step 2 - Compile the source code on the board:

root@imx8mmevk:/opt/tflite/examples-tflite/face_recognition/src# make

Step 3 - Run the demo:

root@imx8mmevk:/opt/tflite/examples-tflite/face_recognition/src# ./FaceRecognition -c 0 -h 0.85

NOTE: -c specifies the camera index; '0' means the MIPI/USB camera is mounted on /dev/video0. -h is a threshold for the prediction score.

Step 4 - Add a new person to the face data set.

While the demo is running, it detects the single largest face in real time. Once a face is detected, use the on-screen keyboard on the right of the GUI to enter the new person's name, then click 'Add new person' to add the face to the data set. In brief: 1. Detect the face. 2. Enter the new person's name. 3. Click 'Add new person'.

NOTE: Once new faces are added, a folder named 'data' is created in the current directory. To remove a face from the data set, simply delete it from 'data'.
The eIQ Sample Apps repository hosts machine learning application demos based on the eIQ™ ML Software Development Environment. The following examples were tested and used for training purposes. Each application contains a readme file so the user can easily get started with the eIQ demos.

The eIQ sample apps target the latest eIQ release and are split into lab sections. Before starting with the examples, read the introduction: eIQ Sample Apps - Introduction

Object Recognition using Arm NN
This section contains samples for running inference and predicting different objects. It also includes an extension that can recognize any given camera input/object. eIQ Sample Apps - Object Recognition using Arm NN

Handwritten Digit Recognition
This section focuses on a comparison of inference time between different models for handwritten digit recognition. eIQ Sample Apps - Handwritten Digit Recognition

Object Recognition using OpenCV DNN
This section uses the OpenCV DNN module for running inference and detecting objects in an image. It also includes an extension that can detect any given camera input/object. eIQ Sample Apps - Object Recognition using OpenCV DNN

Face Recognition using TensorFlow Lite
This section uses a model for running inference and recognizing faces. eIQ Sample Apps - Face Recognition using TF Lite

TensorFlow Lite Quantization
This tutorial demonstrates how to convert a TensorFlow model to TensorFlow Lite and then apply quantization. eIQ Sample Apps - TFLite Quantization

TensorFlow Transfer Learning
This lab takes a TensorFlow image classification model and re-trains it to categorize images of flowers. eIQ Transfer Learning Lab with i.MX 8

To deploy the demos from the eIQ Sample Apps repository to an i.MX8 board, please check: Deploying the eIQ Sample Apps to an i.MX8 board

These lab sections will be updated frequently in order to keep all code and tutorials up-to-date. Check also: https://community.nxp.com/community/eiq/blog/2020/06/30/pyeiq-a-python-framework-for-eiq-on-imx-processors
UPDATE: Note that this document describes eIQ Machine Learning Software for the NXP L4.14 BSP release. Beginning with the L4.19 BSP, eIQ Software is pre-integrated in the BSP release and this document is no longer necessary or maintained. For more information on eIQ Software in these releases (L4.19, L5.4, etc.), please refer to the "NXP eIQ Machine Learning" chapter in the Linux User Guide for that specific release.

Original post:

eIQ Machine Learning Software for the i.MX Linux 4.14.y kernel series is available now. The NXP eIQ™ Machine Learning Software Development Environment enables the use of ML algorithms on NXP MCUs, i.MX RT crossover processors, and i.MX family SoCs. eIQ software includes inference engines, neural network compilers, and optimized libraries, and leverages open source technologies. eIQ is fully integrated into our MCUXpresso SDK and Yocto development environments, allowing you to develop complete system-level applications with ease.

Source download, build and installation
Please refer to the document NXP eIQ(TM) Machine Learning Enablement (UM11226.pdf) for detailed instructions on how to download, build and install eIQ software on your platform.

Sample applications
To help you get started right away, we've posted numerous how-tos and sample applications right here in the community. Please refer to eIQ Sample Apps - Overview.

Supported platforms
eIQ Machine Learning Software for i.MX Linux 4.14.y supports the L4.14.78-1.0.0 and L4.14.98-2.0.0 GA releases running on i.MX 8 Series applications processors.

For more information on artificial intelligence, machine learning and eIQ Software please visit AI & Machine Learning | NXP.
The attached project enables users to capture and save camera data from an i.MXRT1060-EVK board onto a microSD card. This project does not do inferencing of a model; instead, it is meant to generate images that can then be used on a PC for training a model. The images are saved in RGB NHWC format as binary files on the microSD card, and a Python script running on the PC can then convert those binary files into the PNG image format.

Software requirements
====================
- MCUXpresso IDE 11.2.x
- MCUXpresso SDK for RT1060

Hardware requirements
=====================
- Micro USB cable
- IMXRT1060-EVK board with included camera + RK043FN02H-CT LCD
- MicroSD card
- Personal computer (Windows)
- MicroSD card reader
- Python 3.x installed

Board settings
==============
Camera and LCD connected to the i.MXRT1060-EVK

Prepare the demo
================
1. Insert a microSD card into the microSD card slot on the i.MXRT1060-EVK (J39).
2. Open MCUXpresso IDE 11.2 and import the project using "Import project(s) from file system.." from the Quickstart panel. Use the "Archive" option to select the zip file that contains this project.
3. If desired, two #define values can be modified in camera_capture.c to adjust the captured image size:
   #define EXTRACT_HEIGHT 256 //Max EXTRACT_HEIGHT value possible is 271 (due to border drawing)
   #define EXTRACT_WIDTH 256 //Max EXTRACT_WIDTH value possible is 480
4. Build the project by clicking on "Build" in the Quickstart panel.
5. Connect a USB cable between the host PC and the OpenSDA port (J41) on the target board.
6. Open a serial terminal with the following settings:
   - 115200 baud rate
   - 8 data bits
   - No parity
   - One stop bit
   - No flow control
7. Download the program to the target board by clicking on "Debug" in the Quickstart panel.
8. Click on the "Resume" icon to begin running the demo.

Running the demo
================
The terminal will ask for a classification name, which creates a new directory on the SD card with that name. The name is limited to 5 characters because the FATFS file system supports only 8 characters in a file name, and three of those characters are used to number the images.

For best results, the image should be centered in the selection rectangle and nearly (but not completely) fill it. The camera should be stabilized with your finger or by some other means to prevent shaking. Also ensure the camera lens has been focused as described in the instructions for connecting the camera and LCD.

While running the demo, press the 'c' key to enter a new classification name, or press 'q' to quit the program and remove the microSD card.

Transfer the .bin files created on the SD card to your PC, into the same directory as the Python script found in the "scripts" directory, to convert the images to PNG format. If the captured image is square (width == height), the script can be called with:

python convert_image.py directory_name

which converts all the .BIN files in the specified directory to PNG files. If the captured image is not square, the width and height can be specified on the command line:

python convert_image.py directory_name width height

Terminal Output
==============
Camera SD Card Capture
Extracted Image: Height x Width: 256x256
Please insert a card into board.
Card inserted.
Mounting SD Card
Enter name of new class (must be less than 5 characters): test
Creating directory test......
Press any key to capture image.
Press 'c' to change class or 'q' to quit
Writing file /test/test001.bin......
Write Complete
Press any key to capture image.
Press 'c' to change class or 'q' to quit
Remove SD Card
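The actual conversion script ships in the project's "scripts" directory; as a rough sketch of what such a conversion involves (assuming the .bin files contain raw interleaved 8-bit RGB data in NHWC order, as described above, and using Pillow and NumPy on the PC):

import sys
from pathlib import Path

import numpy as np
from PIL import Image

# Convert raw RGB (8-bit, interleaved, NHWC order) .bin captures to PNG.
# Width and height default to 256x256 but can be given on the command line,
# mirroring the usage described above.
directory = Path(sys.argv[1])
width = int(sys.argv[2]) if len(sys.argv) > 2 else 256
height = int(sys.argv[3]) if len(sys.argv) > 3 else 256

for bin_file in sorted(directory.glob("*.bin")):
    raw = np.fromfile(bin_file, dtype=np.uint8)
    img = raw.reshape((height, width, 3))  # rows, columns, RGB channels
    Image.fromarray(img, mode="RGB").save(bin_file.with_suffix(".png"))
    print("converted", bin_file.name)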