eIQ Machine Learning Software Knowledge Base

This document covers some of the most commonly asked questions we've gotten about eIQ and embedded machine learning. Anything requiring more in-depth discussion or explanation will be put in a separate thread, and all new questions should go into their own thread as well.

What is eIQ?
The NXP® eIQ™ machine learning (ML) software development environment enables the use of ML algorithms on NXP EdgeVerse™ microcontrollers and microprocessors, including MCX N microcontrollers, i.MX RT crossover MCUs, and i.MX family application processors. eIQ ML software includes an ML workflow tool called eIQ Toolkit, along with inference engines, neural network compilers and optimized libraries. This software leverages open-source and proprietary technologies and is fully integrated into our MCUXpresso SDK and Yocto development environments, allowing you to develop complete system-level applications with ease. eIQ also enables models to use the eIQ Neutron NPU found on the MCX N microcontroller devices and upcoming NPU-enabled embedded devices.

How much does eIQ cost?
Free! NXP is making eIQ freely available as a basic enablement to jumpstart ML application development. It is also royalty free.

What devices are supported by eIQ?
eIQ is available for the following i.MX application processors:
- i.MX 8M Plus
- i.MX 8M
- i.MX 8M Nano
- i.MX 8M Mini
- i.MX 8ULP
- i.MX 8X
- i.MX 93
- i.MX 95
eIQ is available for the following MCX MCUs:
- MCX N
eIQ is available for the following i.MX RT crossover MCUs:
- i.MX RT1170
- i.MX RT1160
- i.MX RT1064
- i.MX RT1060
- i.MX RT1050
- i.MX RT700
- i.MX RT685
- i.MX RT595

What inference engines are available in eIQ?
i.MX application processors and i.MX RT MCUs support different inference engines. The best inference engine can depend on the particular model being used, so eIQ offers several inference engine options to find the best fit for your particular application.
Inference engines for i.MX:
- TensorFlow Lite (supported on both CPU and GPU/NPU)
- Arm NN (supported on both CPU and GPU/NPU)
- OpenCV (supported on CPU only)
- ONNX Runtime (currently supported on CPU only)
Inference engines for MCX and i.MX RT:
- TensorFlow Lite for Microcontrollers

Can eIQ run on other MCU devices?
There's no special hardware module required to run eIQ inference engines, and it is possible to port the inference engines to other NXP devices.

What is eIQ Toolkit?
eIQ Toolkit enables machine learning development with an intuitive GUI (named eIQ Portal) and development workflow tools, along with command-line host tool options, as part of the eIQ ML software development environment. The eIQ Portal is an intuitive graphical user interface (GUI) that simplifies ML development. Developers can create, optimize, debug and export vision-based ML models, as well as import datasets and models, and rapidly train and deploy neural network models and ML workloads. eIQ Toolkit also includes the Neutron Converter Tool, which is used to convert quantized TensorFlow Lite models so they can make use of the eIQ Neutron NPU found on newer NXP devices like the MCX N family. eIQ Toolkit also now includes eIQ Time Series Studio, a new tool that features an automated machine-learning workflow that streamlines the development and deployment of time series-based machine learning models across microcontroller (MCU) class devices.

Is eIQ Toolkit required to use eIQ inference engines?
No, eIQ Toolkit is optional enablement from NXP to make it easier to generate models that can then be used with the eIQ inference engines. However, if you already have your model development flow in place, or want to use pre-created models from a model zoo, you can use those models with eIQ inference engines as well.
What is the eIQ Neutron NPU?
The eIQ Neutron NPU is a neural processing unit developed by NXP that has been integrated into the MCX N and i.MX 95 devices, with many more to come. It was designed to accelerate neural network computations and significantly reduce model inference time. The scalability of this module allows NXP to integrate this NPU into a wide range of devices, all while using the same eIQ software enablement. For more details on the NPU for MCX N, see this Community post.

How can I start using the eIQ Neutron NPU?
There are hands-on NPU lab guides available that walk through the steps for converting and running a model with the eIQ Neutron NPU.

What TFLite operators are supported by the N1-16 eIQ Neutron NPU on MCX N?
The details and constraints for these operators can be found in the eIQ Toolkit User Guide. The table there includes all possible operators for RT700 and MCX N; the MCX N supports the following subset:
- ADD
- AVERAGE_POOL_2D
- CONV_2D
- DEPTHWISE_CONV_2D
- FULLY_CONNECTED
- MAX_POOL_2D
- PAD
- RESHAPE
- SLICE

What is eIQ Time Series Studio (TSS)?
It's a new free tool in our eIQ AI and machine learning development software family. eIQ TSS features an automated machine-learning workflow that streamlines the development and deployment of time series-based machine learning models across microcontroller (MCU) class devices, such as the MCX portfolio of MCUs and the i.MX RT portfolio of crossover MCUs. Time Series Studio supports a wide range of sensor input signals, including voltage, current, temperature, vibration, pressure, sound and time of flight, among others, as well as combinations of these for multimodal sensor fusion. The automatic machine learning capability enables developers to extract meaningful insights from raw time-sequential data and quickly build AI models tailored to meet accuracy, RAM and storage criteria for microcontrollers. The tool offers a comprehensive development environment, including data curation, visualization and analysis, as well as model autogeneration, optimization, emulation and deployment.

What devices are supported by the eIQ Time Series Studio?
TSS generates a library that can be included in your application and does not use a deep learning inference engine, so it can be deployed to a much wider range of NXP devices because it has very minimal flash and RAM requirements.
MCX:
- FRDM-MCXA153
- FRDM-MCXN947
- MCX-N9XX-EVK
i.MX RT:
- MIMXRT1060-EVK
- MIMXRT1170-EVK
- MIMXRT1180-EVK
LPC:
- LPC55S69-EVK
Kinetis:
- FRDM-K66F
- FRDM-KV31F
- FRDM-K32L3A6
DSC:
- MC56F83000-EVK
- MC56F80000-EVK

How can I start using the eIQ Time Series Studio tool?
It is included in eIQ Toolkit v1.13.1 or later and can be accessed from the home screen. There is a hands-on lab guide available that walks through how to use the tool, as well as documentation and guides in the tool itself.

How can I get eIQ?
For MCU devices: eIQ inference engines are included as part of the MCUXpresso SDK for supported devices. Make sure to select the "eIQ" middleware option. There is also eIQ Toolkit for model creation and conversion, which includes the GUI model creation tool eIQ Portal, the eIQ Time Series Studio, and the eIQ Neutron Converter Tool for eIQ Neutron NPU enabled devices like MCX N.
For i.MX devices: eIQ is distributed as part of the Yocto Linux BSP.
Starting with the 4.19 release line there is a dedicated Yocto image that includes all the machine learning features: 'imx-image-full'. For pre-built binaries, refer to the i.MX Linux Releases and Pre-releases pages. There is also eIQ Toolkit for model creation and conversion, which includes the GUI model creation tool eIQ Portal.

What documentation is available for eIQ?
For i.MX RT and MCX N devices:
- There are user guides inside the \middleware\eiq\doc folder after downloading the MCUXpresso SDK from the MCUXpresso SDK builder.
- Documentation for eIQ Toolkit can be found inside the eIQ Toolkit documentation folder after installation, typically at C:\NXP\eIQ_Toolkit_<version>\docs
For i.MX devices:
The eIQ documentation for i.MX is integrated in the Yocto BSP documentation. Refer to the i.MX Linux Releases and Pre-releases pages.
- i.MX Reference Manual: presents an overview of the NXP eIQ Machine Learning technology.
- i.MX Linux User's Guide: presents detailed instructions on how to run and develop applications using the ML frameworks available in eIQ (currently Arm NN, TFLite, OpenCV and ONNX).
- i.MX Yocto Project User's Guide: presents build instructions to include eIQ ML support (check the sections referring to 'imx-image-full', which includes all eIQ features).
It is recommended to also check the i.MX Linux Release Notes, which include eIQ details.

For i.MX devices, what type of machine learning applications can I create?
Following the BYOM principle described below, you can create a wide variety of applications for running on i.MX. To help kickstart your efforts, refer to PyeIQ, a collection of demos and applications that demonstrate the machine learning capabilities available on i.MX:
- They are very easy to use (install with a single command, retrieve input data automatically).
- The implementation is very easy to understand (using the Python API for TFLite, Arm NN and OpenCV).
- They demonstrate several types of ML applications (e.g., object detection, classification, facial expression detection) running on the different compute units available on i.MX to execute the inference (Cortex-A, GPU, NPU).

Can I use the Python API provided by PyeIQ to develop my own application on i.MX devices?
For developing a custom application in Python, it is recommended to directly use the Python API for Arm NN, TFLite, and OpenCV. Refer to the i.MX Linux User's Guide for more details. You can use the PyeIQ scripts as a starting point and include code snippets in a custom application (please make sure to add the right copyright terms), but you shouldn't rely on PyeIQ to entirely develop a product. The PyeIQ Python API is meant to help demo developers with the creation of new examples.

What eIQ example applications are available for i.MX RT?
eIQ example applications can be found in the <SDK DIR>\boards\<board_name>\eiq_examples directory.

What are the Glow and DeepViewRT inference engines in the MCUXpresso SDK?
These are inference engines that were supported in previous versions of eIQ but are now deprecated, as new development has focused on TensorFlow Lite for Microcontrollers. These projects are still available in MCUXpresso SDK 2.15 for legacy users, but it is highly recommended that any new projects use TensorFlow Lite for Microcontrollers.

How can I learn more about using TensorFlow Lite with eIQ?
There is a hands-on TensorFlow Lite for Microcontrollers lab available. There is also an i.MX TensorFlow Lite lab that provides a step-by-step guide on how to get started with eIQ for TensorFlow Lite on i.MX devices.
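For readers who want a feel for what the inference engine side looks like in code, here is a minimal sketch of a TensorFlow Lite for Microcontrollers setup. It is not taken from a specific eIQ example: model_data, the arena size and the registered operators are placeholders, and the exact MicroInterpreter constructor arguments vary between the TFLM versions shipped in different SDK releases.

// Hypothetical model_data[] exported from a .tflite file with a C-array
// conversion tool; names and arena size are placeholders.
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char model_data[];   // flatbuffer produced offline

constexpr int kArenaSize = 100 * 1024;     // tune to the model's needs
static uint8_t tensor_arena[kArenaSize];

int RunInference(const int8_t* input, int input_len) {
  const tflite::Model* model = tflite::GetModel(model_data);

  // Register only the operators the model actually uses.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddDepthwiseConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, resolver,
                                              tensor_arena, kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  // Copy the (already quantized) input data into the input tensor.
  for (int i = 0; i < input_len; ++i) {
    interpreter.input(0)->data.int8[i] = input[i];
  }
  if (interpreter.Invoke() != kTfLiteOk) return -1;

  // Return the index of the highest-scoring class.
  TfLiteTensor* out = interpreter.output(0);
  int best = 0;
  for (int i = 1; i < out->dims->data[1]; ++i) {
    if (out->data.int8[i] > out->data.int8[best]) best = i;
  }
  return best;
}

Registering only the operators a model needs (rather than an all-ops resolver) keeps flash usage down, which is also why the SDK examples are trimmed this way; see the related troubleshooting entry further down in this FAQ.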
How can I learn more about using eIQ Toolkit to generate a model?
There are hands-on eIQ Toolkit labs available that provide a step-by-step guide to get started with generating a vision-based model with eIQ Toolkit.

How can I learn more about using eIQ Time Series Studio to generate a time series model?
Use the hands-on eIQ Time Series Studio lab guides to get started with building time series models that can be deployed on microcontrollers.

What application notes are available to learn more about eIQ?
- Anomaly Detection App Note: https://www.nxp.com/docs/en/application-note/AN12766.pdf
- Handwritten Digit Recognition: https://www.nxp.com/docs/en/application-note/AN12603.pdf
- Datasets and Transfer Learning App Note: https://www.nxp.com/docs/en/application-note/AN12892.pdf
- Glow Memory Analysis App Note: https://www.nxp.com/docs/en/application-note/AN13001.pdf
- Security for Machine Learning Package: https://www.nxp.com/docs/en/application-note/AN12867.pdf
- i.MX 8M Plus NPU Warmup Time App Note: https://www.nxp.com/docs/en/application-note/AN12964.pdf

What is the advantage of using eIQ instead of using the open-source software directly from GitHub?
eIQ-supported inference engines work out of the box and are already tested and optimized, allowing for performance enhancements compared to the original code. eIQ also includes the software to capture camera or voice data from external peripherals. eIQ allows you to get up and running within minutes instead of weeks. As a comparison, rolling your own is like grinding your own flour to make a pizza from scratch, instead of just ordering a great pizza from your favorite pizza place.

Does eIQ include ML models? Do I use it to train a model?
eIQ is a collection of software that allows you to Bring Your Own Model (BYOM) and run it on NXP embedded devices. eIQ provides the ability to run your own specialized model on NXP's embedded devices. For those new to AI/ML, we also now offer eIQ Toolkit, which can be used to generate new vision-based AI models using images provided to the tool, as well as generate time series models using the included Time Series Studio software. The MCUXpresso SDK and the i.MX Linux releases come with several examples that use pre-created models, which can be used to get a sense of what is possible on our platforms, and it is very easy to substitute your own model into those examples.

I'm new to AI/ML and don't know how to create a model, what can I do?
A wide variety of resources are available for creating models, from labs and tutorials, to automated model generation tools like eIQ Toolkit, Google Cloud AutoML, Microsoft Azure Machine Learning, or Amazon ML Services, to 3rd-party partners like SensiML and Au-Zone that can help you define, enhance, and create a model for your specific application. Alternatively, if you have no interest in generating models yourself, NXP also offers several pre-built voice and facial recognition solutions that include the appropriate models already created for you. There are Alexa Voice Services, local voice control, and face and emotion recognition solutions available. Note that these solutions are different from eIQ, as they include the model as well as the appropriate hardware, and so those devices are sold as unique part numbers and have a cost-optimized BOM to directly use in your final product. eIQ is for those who want to use their own model or generate a model themselves using eIQ Toolkit.
The three solutions mentioned above are for those who want a full solution (including the model) already created for them for those specific applications.

I'm interested in anomaly detection or time series models on microcontrollers, where can I get started?
The eIQ Time Series Studio (TSS) tool, included as part of eIQ Toolkit, is perfect for getting started with time series or anomaly detection models. It allows you to import time series datasets, generate models, and deploy them to NXP microcontrollers. There is also the ML-based System State Monitor Application Software Pack, which provides an example of gathering time-series data, in this case vibrations picked up by an accelerometer, and includes Python scripts that use the collected data to generate a small model that can be deployed on many different microcontrollers (including i.MX RT1170, LPC55S69 and K66F) for anomaly detection. The same concepts and techniques can be used for any sort of time series data, like magnetometers, pressure, temperature, flow speed, and much more. This can simplify the work of coming up with a custom algorithm to detect the different states of whatever system you're interested in, as you can let the power of machine learning figure all that out for you. There is also an on-device trained anomaly detection model example that can be found on the Application Code Hub.

Troubleshooting:

Why do I get an error when running TensorFlow Lite Micro that it "Didn't find op for builtin opcode"?
The full error will look something like this:

Didn't find op for builtin opcode 'PAD' version '1'
Failed to get registration from op code ADD
Failed starting model allocation. AllocateTensors() failed
Failed initializing model

The reason is that in the MCUXpresso SDK, the TFLM examples have been optimized to only support the operators necessary for the default models. If you are using your own model, it may use extra types of operators. To fix this issue, add that operator to the MODEL_GetOpsResolver function found in source\model\model_name_ops_npu.cpp. Also make sure to increase the size of the static array s_microOpResolver to match the number of operators. An alternative method is also described in the TFLM Lab Guide on how to use the All Ops Resolver. Add the following header file:

#include "tensorflow/lite/micro/all_ops_resolver.h"

and then comment out the micro_op_resolver and use this instead:

//tflite::MicroOpResolver &micro_op_resolver =
//    MODEL_GetOpsResolver(s_errorReporter);
tflite::AllOpsResolver micro_op_resolver;

Why do I get the error "Internal Neutron NPU driver error 281b in model prepare!" or "Incompatible Neutron NPU microcode and driver versions!" when using the Neutron NPU?
The version of the eIQ Neutron Converter Tool needs to be compatible with the NPU libraries used by your project. See more details in this post on using custom models with the eIQ Neutron NPU.

Sometimes in eIQ Toolkit the Validation page hangs and stays stuck on "Converting Model". How do I work around this?
On the Validation section of the wizard, you will need to wait for the "Input Data Type" and "Output Data Type" selection boxes to be populated before clicking on the "Validation" button at the bottom. It may take a minute or two for those selection boxes to pop up on the left-hand side. Once they do, click on Validate and it should no longer hang.
How do I use my GPU when training with eIQ Toolkit?
eIQ Toolkit 1.10 only supports GPU training on Linux, because the latest TensorFlow versions no longer support GPU training on Windows.

Why do I get a blank or black LCD screen when I use the eIQ demos that have camera+LCD support on RT1170 or RT1160?
There are different versions of the LCD panel, so you need to make sure you have the software configured correctly for the LCD you have. See this post for more details on what to change.

There is a Javascript error in Time Series Studio when I start the training.
There is a bug where this error comes up if the eIQ Portal window is closed after opening Time Series Studio. Try relaunching Time Series Studio but keep the original eIQ Portal window open.

General AI/ML:

What is Artificial Intelligence, Machine Learning, and Deep Learning?
Artificial intelligence is the idea of using machines to do "smart" things like a human. Machine learning is one way to implement artificial intelligence, and is the idea that if you give a computer a lot of data, it can learn how to do smart things on its own. Deep learning is a particular way of implementing machine learning by using something called a neural network. It's one of the more promising subareas of artificial intelligence today. This video series on Neural Network basics provides an excellent introduction to what a neural network is and the basics of how one works.

What are some uses for machine learning on embedded systems?
- Image classification – identify what a camera is looking at
  - Coffee pods
  - Empty vs. full trucks
  - Factory defects on a manufacturing line
  - Produce on a supermarket scale
- Facial recognition – identifying faces for personalization without uploading that private information to the cloud
  - Home personalization
  - Appliances
  - Toys
  - Auto
- Audio analysis
  - Wake-word detection
  - Voice commands
  - Alarm analytics (breaking glass/crying baby)
- Anomaly detection
  - Identify factory issues before they become catastrophic
  - Motor analysis
  - Personalized health analysis

What is training and inference?
Machine learning consists of two phases: training and inference. Training is the process of creating and teaching the model. This occurs on a PC or in the cloud and requires a lot of data. eIQ is not used during the training process. Inference is using a completed and trained model to do predictions on new data. eIQ is focused on enhancing the inferencing of models on embedded devices.

What are the benefits of "on the edge" inference?
When inference occurs on the embedded device instead of the cloud, it's called "on the edge". The biggest advantage of on-the-edge inferencing is that the data being analyzed never goes anywhere except the local embedded system, providing increased security and privacy. It also saves BOM cost because there's no need for WiFi or BLE to get data up to the cloud, and there's no charge for the cloud compute costs to do the inferencing. It also allows for faster inferencing, since there's no latency from waiting for data to be uploaded and the answer to be received from the cloud.

What processor do I need to do inferencing of models?
Inferencing simply means doing millions of multiply-and-accumulate math calculations – the dominant operation when processing any neural network – which any MCU or MPU is capable of. There's no special hardware or module required to do inferencing. However, specialized ML hardware accelerators, high core clock speeds, and fast memory can drastically reduce inference time.
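To make the multiply-and-accumulate point concrete, the core of a fully connected layer is nothing more than nested MAC loops, which any MCU can execute, just at different speeds (a generic sketch, not eIQ code):

// Naive fully connected layer: output[j] = bias[j] + sum_i input[i] * weight[j][i]
// The multiply-accumulate inside the inner loop is the operation that
// dominates neural network inference time.
void DenseLayer(const float* input, const float* weights, const float* bias,
                float* output, int num_inputs, int num_outputs) {
  for (int j = 0; j < num_outputs; ++j) {
    float acc = bias[j];
    for (int i = 0; i < num_inputs; ++i) {
      acc += input[i] * weights[j * num_inputs + i];   // one MAC operation
    }
    output[j] = acc;
  }
}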
Determining whether a particular model can run on a specific device comes down to:
- How long will the inference take to run? The same model will take much longer to run on less powerful devices. The maximum acceptable inference time is dependent on your particular application.
- Is there enough non-volatile memory to store the weights, the model itself, and the inference engine?
- Is there enough RAM to keep track of the intermediate calculations and output?
As an example, the performance required for image recognition is very dependent on which model is being used to do the image recognition. This will vary depending on how many classes there are, what size of images are to be analyzed, whether multiple objects or just one will be identified, and how that particular model is structured. In general, image classification can be done on i.MX RT devices, while multiple object detection requires i.MX devices, as those models are significantly more complex. eIQ provides several examples of image recognition for i.MX RT and i.MX devices, and your own custom models can be easily evaluated using those example projects.

How is accuracy affected when running on slower/simpler MCUs?
The same model running on different processors will give the exact same result if given the same input. It will just take longer to run the inference on a slower processor. In order to get an acceptable inference time on a simpler MCU, it may be necessary to simplify the model, which will affect accuracy. How much the accuracy is affected is extremely model dependent and also very dependent on which techniques are used to simplify the model.

What are some ways models can be simplified?
- Quantization – transforming the model from its original 32-bit floating point weights to 8-bit fixed point weights. This requires ¼ the space for weights, and fixed point math is faster than floating point math. It often does not have much impact on accuracy, but that is model dependent.
- Fewer output classifications can allow for a simpler yet still accurate model.
- Decreasing the input data size (e.g. a 128x128 image input instead of 256x256) can reduce complexity, with a trade-off in accuracy due to the reduced resolution. How big that trade-off is depends on the model and requires experimentation to find.
- Software could rotate an image to a specific position using classic image manipulation techniques, which means the neural network for identification can be much smaller while maintaining good accuracy, compared to the case where the neural network has to analyze an image that could be in any possible orientation.

What is the difference between image classification, object detection, and instance segmentation?
Image classification identifies an entire image and gives a single answer for what it thinks it is seeing. Object detection is detecting one or more objects in an image. Instance segmentation is finding the exact outline of the objects in an image. Larger and more complex models are needed to do object detection or instance segmentation compared to image classification.

What is the difference between facial detection and facial recognition?
Facial detection finds any human face. Facial recognition identifies a particular human face. A model that does facial recognition will be more complex than a model that only does facial detection.

How come I don't see 100% accuracy on the data I trained my model on?
Models need to generalize the training data in order to avoid overfitting. This means a model will not always give 100% confidence, even on the data it was trained on.
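Relating to the quantization point above: the mapping between floating point values and 8-bit integers is a simple affine transform, and each 32-bit weight ends up stored in a single byte, which is where the 4x size reduction comes from. A generic sketch, not tied to any eIQ tool; the scale and zero point come from the converter during post-training quantization:

#include <algorithm>
#include <cmath>
#include <cstdint>

// Affine int8 quantization: real_value ≈ scale * (q - zero_point).
int8_t Quantize(float x, float scale, int zero_point) {
  int q = static_cast<int>(std::lround(x / scale)) + zero_point;
  return static_cast<int8_t>(std::min(127, std::max(-128, q)));  // clamp to int8 range
}

float Dequantize(int8_t q, float scale, int zero_point) {
  return scale * (q - zero_point);
}

// Example: with scale = 0.05 and zero_point = 0, Quantize(1.0f) == 20 and
// Dequantize(20) == 1.0f, so the value round-trips exactly; values that do not
// land on a multiple of the scale pick up a small rounding error instead.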
What are some resources to learn more about machine learning concepts?
- Video series on Neural Network basics
- Arm Embedded Machine Learning for Dummies
- Google TensorFlow Lab
- Google Machine Learning Crash Course
- Google Image Classification Practica
- YouTube series on the basics of ML and TensorFlow (ML Zero to Hero Series)
This lab will walk through how to use eIQ Time Series Studio (TSS), a new tool included as part of eIQ Toolkit for creating time series models for embedded microcontrollers. It covers how to import time series data, shows how the tool can generate multiple ML algorithms, and describes how to deploy those generated models to your development board. This lab uses the FRDM-MCXN947, but the same steps will apply to any of the devices supported by eIQ Time Series Studio:
MCX:
- FRDM-MCXA153
- FRDM-MCXN947
- MCX-N9XX-EVK
i.MX RT:
- MIMXRT1060-EVK
- MIMXRT1170-EVK
- MIMXRT1180-EVK
LPC:
- LPC55S69-EVK
Kinetis:
- FRDM-K66F
- FRDM-KV31F
- FRDM-K32L3A6
DSC:
- MC56F83000-EVK
- MC56F80000-EVK
Also check out the ML Universal Datalogger on the App Code Hub for a tool to collect sensor data that can be used with the Time Series Studio.
eIQ development software for i.MX RT devices can be downloaded from https://mcuxpresso.nxp.com
The current MCUXpresso SDK 2.16 release supports the following devices:
- MCX N
- i.MX RT500
- i.MX RT600
- i.MX RT1050
- i.MX RT1060
- i.MX RT1064
- i.MX RT1160
- i.MX RT1170
- i.MX RT1180
Full details on how to download eIQ and run it with MCUXpresso IDE, VS Code, IAR, or Keil MDK can be found in the attached Getting Started guide. For more information about eIQ and some hands-on labs for the i.MX RT family, see the following links:
- eIQ FAQ
- Getting Started with Time Series Studio
- Getting Started with MCX N Neutron NPU
- Getting Started with eIQ Toolkit
- Getting Started with TensorFlow Lite for Microcontrollers for i.MX RT
- Anomaly Detection App Note
- Handwritten Digit Recognition App Note
- Datasets and Transfer Learning App Note
- Security for Machine Learning Package
This lab will cover how to take an existing TensorFlow Lite model and run it on NXP MCU devices using the TensorFlow Lite for Microcontrollers inference engine. It will use the Flower model generated as part of the eIQ Toolkit lab as an example, but the same process can be used for other TFLite models. eIQ provides examples that incorporate an LCD and camera alongside the inference engine, so the EVK boards can be used to identify different types of flowers. This lab can also be used without a camera+LCD, but in that scenario the flower images will need to be converted to a C array and loaded at compile time.
Attached to this post you will find:
- Photos to test out the new model
- A lab document on how to do 'transfer learning' on a TensorFlow model and then run that TFLite model on the i.MX RT family using TensorFlow Lite for Microcontrollers. The use of the camera+LCD is optional.
  - If you have a camera+LCD, use: eIQ TensorFlow Lite for Microcontrollers for i.MX RT170 - With Camera.pdf
  - If you do not have a camera or LCD, use: eIQ TensorFlow Lite for Microcontrollers for i.MX RT170 - Without Camera.pdf
  - If using the RT685, use: eIQ TensorFlow Lite for Microcontrollers for i.MX RT685 - Without Camera.pdf
This lab supports the following boards:
- FRDM-MCXN947
- i.MX RT685-EVK
- i.MX RT1050-EVKB
- i.MX RT1060-EVK
- i.MX RT1064-EVK
- i.MX RT1160-EVK
- i.MX RT1170-EVK
- i.MX RT1180-EVK
Updated November 2024 for MCUXpresso SDK 2.16 and eIQ Toolkit 1.13.1
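For the camera-less path, the test images end up embedded in the application as C arrays. The (hypothetical) header below only shows the general shape; in practice a bin-to-C-array conversion utility generates the real pixel data for you, and only the array and length names matter to the application code:

// flower_image.h -- illustrative only; a real header would be generated from
// an actual image file and would contain the full pixel data rather than this
// tiny placeholder.
#ifndef FLOWER_IMAGE_H_
#define FLOWER_IMAGE_H_

static const unsigned char flower_image[] = {
    0x2a, 0x31, 0x1c, 0x2b, 0x33, 0x1e,  /* RGB pixel bytes continue... */
};
static const unsigned int flower_image_len = sizeof(flower_image);

#endif  // FLOWER_IMAGE_H_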
eIQ Toolkit enables machine learning development with an intuitive GUI (named eIQ Portal) and development workflow tools, along with command-line host tool options, as part of the eIQ ML software development environment. Developers can create, optimize, debug and export ML models, as well as import datasets and models, and rapidly train and deploy neural network models and ML workloads. The eIQ Portal outputs TensorFlow Lite models that seamlessly feed into eIQ inference engines like TensorFlow Lite and TensorFlow Lite for Microcontrollers. Using a tool called Model Runner, eIQ Toolkit can also generate runtime insights to help optimize neural network architectures on i.MX RT and i.MX devices. These labs go over how to use eIQ Portal. It is recommended to do them in the following order:
1. Data Import Lab
2. Model Runner Lab
The labs are written for the FRDM-MCXN947 and i.MX RT1170-EVK, but other eIQ supported devices can be used as well:
- MCX N
- i.MX RT1050
- i.MX RT1060
- i.MX RT1064
- i.MX RT1160
- i.MX RT1170
- i.MX RT1180
- i.MX RT500
- i.MX RT600
For details on the Time Series Studio tool included in eIQ Toolkit, please see the Time Series Studio lab guides.
The attached lab guide walks through step-by-step how to use the new Application Software Pack for the ML-based System State Monitor found on GitHub. This is related to AN13562 - Building and Benchmarking Deep Learning Models for Smart Sensing Appliances on MCUs. This lab guide was written for the FRDM-MCXN947, but the application software pack also supports RT1170, LPC55S69 and Kinetis K66F devices. It can also be ported to other MCX, i.MX RT, LPC, and Kinetis devices. There is also a document on dataset creation that goes into more detail on the considerations to make when gathering data. For more details, visit the ML-based System State Monitor website on NXP.com. The lab uses the FXLS8974CF accelerometer found on the ACCEL 4 Click board or the FRDM-STBI-A8974 board. The FXLS8974CF is the latest accelerometer from NXP and is the recommended one to use. The video below walks through the steps for the FRDM-STBC-AGM01, but there may be updated details in the lab guide, so follow the lab guide if there are any differences.
The attached labs provide a step-by-step guide on how to use the eIQ for Glow neural network compiler with a handwritten digit recognition model example. This compiler tool turns a model into a machine-executable binary for a targeted device. Both the model and the inference engine are compiled into the generated binary, which can decrease both inference time and memory usage. That binary can then be integrated into an MCUXpresso SDK software project.
- The eIQ Glow Lab for RT1170.pdf can be used with the i.MX RT1170, RT1160, RT1064, RT1060, and RT1050.
- The eIQ Glow Lab for RT685.pdf can be used with the RT685.
A step-by-step video is also available. You will need to download the Glow compiler tools package as well as the latest MCUXpresso SDK for the board you're using. More details on Glow can be found in the eIQ Glow Ahead of Time User Guide and on the Glow website.
Updated August 2023
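For orientation, application code calls a Glow-compiled bundle roughly as sketched below. The symbol and macro names follow the general pattern of Glow-generated bundle headers but are illustrative here; always check the header and weights file generated for your own model:

#include <stdint.h>
#include "model.h"   // generated by the Glow AOT compiler (hypothetical name)

// Memory regions sized by the macros emitted into the generated header
// (an alignment macro/constant from the header should also be applied).
static uint8_t mutable_weights[MODEL_MUTABLE_MEM_SIZE] __attribute__((aligned(64)));
static uint8_t activations[MODEL_ACTIVATIONS_MEM_SIZE] __attribute__((aligned(64)));
// Constant weights are typically linked in or loaded from the generated weights file.
extern const uint8_t model_constant_weights[];

int RunBundle(const int8_t* image, int image_len) {
  // Input and output live at fixed offsets inside the mutable region; the
  // MODEL_input / MODEL_output offset names depend on the model's own
  // placeholder names and are illustrative.
  int8_t* input  = (int8_t*)(mutable_weights + MODEL_input);
  int8_t* output = (int8_t*)(mutable_weights + MODEL_output);
  for (int i = 0; i < image_len; ++i) input[i] = image[i];

  // The bundle entry point runs the whole compiled network.
  model((uint8_t*)model_constant_weights, mutable_weights, activations);
  return output[0];
}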
Two new LCD panels for i.MX RT EVKs are now available. However, the new LCD panel is not supported by the i.MX RT1160/RT1170 eIQ demos in MCUXpresso SDK 2.11, so some changes will need to be made to use the new LCD panels.
For i.MX RT1050/RT1060/RT1064 eIQ demos in MCUXpresso SDK 2.11 that use the LCD: The eIQ demos are only configured for the original panel. However, because the eIQ demos do not use the touch controller, all eIQ demos for i.MX RT1050/1060/1064 will work fine with both the original and new LCD panels without any changes.
For i.MX RT1160/RT1170 eIQ demos in MCUXpresso SDK 2.11 that use the LCD: The eIQ demos are still only configured for the original panel. MCUXpresso SDK 2.12 will support both panels when it is released later this summer. In the meantime, for those who have the new LCD panel, some changes need to be made to the eIQ demos for i.MX RT1160/RT1170, otherwise you will just get a black or blank screen.
1. Unzip the MCUXpresso SDK if not already done.
2. Open an eIQ project.
3. Find the directory the eIQ project is located in by right-clicking on the project name and selecting Utilities->Open directory browser here.
4. Copy both the fsl_hx8394.c and fsl_hx8394.h files found in \SDK_2_11_1_MIMXRT1170-EVK\components\video\display\hx8394\ into your eIQ project. You can place them in the video folder, which would typically be located at C:\Users\username\Documents\MCUXpressoIDE_11.5.0_7232\workspace\eiq_demo_name\video
5. Overwrite the eiq_display_conf.c and eiq_display_conf.h files in the eIQ project with the updated versions attached to this post. Typically these files would be located at C:\Users\username\Documents\MCUXpressoIDE_11.5.0_7232\workspace\eiq_demo_name\source\video
6. Compile the project as normal and the eIQ demo will now work with the new LCD panel for RT1160/RT1170.
The eIQ Glow neural network compiler software for i.MX RT devices found in the MCUXpresso SDK package can be ported to other microcontroller devices in the RT family as well as to some LPC and Kinetis devices. Glow supports compiling machine learning models for Cortex-M4, Cortex-M7, and Cortex-M33 cores out of the box. Because inferencing simply means doing millions of multiply-and-accumulate math calculations – the dominant operation when processing any neural network – most embedded microcontrollers can support inferencing of a neural network model. There's no special hardware or module required to do the inferencing. However, specialized ML hardware accelerators, high core clock speeds, and fast memory can drastically reduce inference time. The minimum hardware requirements are also extremely dependent on the particular model being used. Determining whether a particular model can run on a specific device comes down to:
- How long will the inference take to run? The same model will take much longer to run on less powerful devices. The maximum acceptable inference time is dependent on your particular application and your particular model.
- Is there enough non-volatile memory to store the weights, the model itself, and the inference engine?
- Is there enough RAM to keep track of the model's intermediate calculations and output?
The minimum memory requirements for a particular model when using Glow can be found with a simple formula using numbers found in the Glow bundle header file after compiling your model:
Flash: Base Project + CONSTANT_MEM_SIZE + .o object file
RAM: Base Project + MUTABLE_MEM_SIZE + ACTIVATIONS_MEM_SIZE
More details can be found in the Glow Memory Usage app note.
The attached guide walks through how to port Glow to the LPC55S69 family, based on the Cortex-M33 core. Similar steps can be followed to port Glow to other NXP microcontroller devices. This guide is made available as a reference for users interested in exploring Glow on devices not currently supported in the MCUXpresso SDK. These other eIQ porting guides might also be of interest: TensorFlow Lite Porting Guide for RT685
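A sketch of how those header constants map onto the formula above (the macro names follow the usual Glow bundle header pattern but are illustrative; check the header generated for your own model):

#include <stdio.h>
#include "model.h"   // Glow-generated bundle header (hypothetical name)

// Rough memory budget for the model itself, on top of the base project:
//   Flash: constant weights (plus the generated model .o code)
//   RAM:   mutable (input/output) region + activations scratch region
void PrintModelMemoryBudget(void) {
  printf("Model flash (constant weights): %u bytes\n",
         (unsigned)MODEL_CONSTANT_MEM_SIZE);
  printf("Model RAM (mutable + activations): %u bytes\n",
         (unsigned)(MODEL_MUTABLE_MEM_SIZE + MODEL_ACTIVATIONS_MEM_SIZE));
}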
Convolutional Neural Networks are the most popular NN approach to image recognition. Image recognition can be used for a wide variety of tasks like facial recognition for monitoring and security, car vision for safety and traffic sign recognition or augmented reality. All of these tasks require low latency, great security, and privacy, which can’t be guaranteed when using Cloud-based solutions. NXP eIQ makes it possible to run Deep Neural Network inference directly on an MCU. This enables intelligent, powerful, and affordable edge devices everywhere.   As a case study about CNNs on MCUs, a handwritten digit recognition example was created. It runs on the i.MX RT1060 and uses an LCD touch screen as the input interface. The application can recognize digits drawn with a finger on the LCD.   Handwritten digit recognition is a popular “hello world” project for machine learning. It is usually based on the MNIST dataset, which contains 70000 images of handwritten digits. Many machine learning algorithms and techniques have been benchmarked on this dataset since its creation. Convolutional Neural Networks are among the most successful.   The code is also accompanied by an application note describing how it was created and explaining the technologies it uses. The note talks about the MNIST dataset, TensorFlow, the application’s accuracy and other topics.     Application note URL: https://www.nxp.com/docs/en/application-note/AN12603.pdf (can be found at the documentation page for the i.MX RT1060)   Application code is in the attached zip files: *_eiq_mnist is the basic application from the first image and *_eiq_mnist_lock is the extended version from the second image. The applications are provided in the form of MCUXpresso projects and require an existing installation of the i.MX RT1060/RT1170 SDK with the eIQ component included.   The software for this AN was also ported to CMSIS-NN with a Caffe version of the MNIST model in a follow up AN, which can be found here: https://www.nxp.com/docs/en/application-note/AN12781.pdf 
MCUXpresso SDK 2.10 for the RT1064 now includes eIQ projects for all eIQ inference engines, so this Knowledge Base article is now deprecated. The instructions are being left up in case any users on older versions of the SDK, before i.MX RT1064 eIQ was fully supported, need these steps in the future. Users with an i.MX RT1064 EVK should just use SDK 2.10 or later, which has all the eIQ projects natively for the i.MX RT1064.
1. Import an i.MX RT1060 project into the SDK. For this example, we'll use the Label Image demo.
2. Right-click on the project in the workspace and select Properties.
3. Open the C/C++ Build -> MCU Settings page.
4. Change the "Location" of the BOARD_FLASH parameter to 0x70000000, which is where the flash is located on the RT1064. Also adjust the size to be 0x400000. You will need to type it out.
5. Then change the "Driver" parameter so the debugger knows to use the flash algorithm for the RT1064 board. Click on that field and you will see a "..." icon come up. Click on it.
6. Change the flash driver to MIMXRT1064.cfx.
7. Click on OK to close the dialog box, then click on Apply and Close to close the Properties dialog box.
8. Next, we need to modify the MPU settings for the new flash address.
9. Open the board.c file. Modify the memory address and the memory size on lines 322 and 323 to start at 0x70000000 with a 4MB region size (a sketch of what this change typically looks like follows after these steps).
10. Next, modify the clock settings code to ensure that FlexSPI2 is enabled. The clock setup code in the RT1060 SDK disables FlexSPI2, so we need to comment out that code in order to run the example on the RT1064. Open the clock_config.c file and comment out lines 264, 266, and 268.
11. Finally, open the fsl_flexspi_nor_boot.h file and modify the FLASH_BASE define to use FlexSPI2_AMBA_BASE on line 103.
12. Compile and debug the project like normal and this project will now run on the RT1064 board.
Updated July 2021 for SDK 2.10 release.
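As a reference for step 9, the MPU region change in board.c typically ends up looking something like the sketch below, using the CMSIS-Core MPU helpers. The region number and attribute arguments are illustrative and may not match the actual file, so treat this only as a guide to what to look for:

#include "fsl_device_registers.h"   // pulls in the CMSIS-Core MPU helpers

// Inside BOARD_ConfigMPU() in board.c: the flash region is moved from the
// RT1060 FlexSPI address to the RT1064 internal FlexSPI2 flash at 0x70000000
// with a 4 MB region size.
void ConfigureFlexSpi2FlashRegion(void) {
  MPU->RBAR = ARM_MPU_RBAR(7, 0x70000000U);
  MPU->RASR = ARM_MPU_RASR(0, ARM_MPU_AP_RO, 2, 0, 0, 1, 0, ARM_MPU_REGION_SIZE_4MB);
}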
See the latest version of this document here: https://community.nxp.com/t5/eIQ-Machine-Learning-Software/eIQ-FAQ/ta-p/1099741 
This lab will take an existing TensorFlow image classification model and re-train it to categorize images of flowers. This is known as transfer learning. The updated model will then be converted into a TensorFlow Lite file. By using that file with the TensorFlow Lite inference engine that is part of NXP's eIQ package, the model can be run on an i.MX 8QM-MEK board. This lab can also be used with different i.MX 8 devices.
Please check the attached PDFs:
- 'eIQ Transfer Learning Lab iMX8.pdf': detailed lab instructions
- 'TransferLearningOverview.pdf': theoretical background of transfer learning and results for a customized data set
Please also check: eIQ Transfer Learning Lab with i.MX RT.
The attached file serves as a table of contents for the various collateral, documents, training, etc. that support eIQ software.
Transfer learning is one of the most important techniques in machine learning. It gives machine learning models the ability to apply past experience to quickly and more accurately learn to solve new problems. This approach is most commonly used in natural language processing and image recognition. However, even with transfer learning, if you don't have the right dataset, you will not get very far.
This application note aims to explain transfer learning and the importance of datasets in deep learning. The first part of the AN goes through the theoretical background of both topics. The second part describes a use case example based on the application from AN12603. It shows how a dataset of handwritten digits can be collected to match the input style of the handwritten digit recognition application. Afterwards, it illustrates how transfer learning can be used with a model trained on the original MNIST dataset to retrain it on the smaller custom dataset collected in the use case.
In the end, the AN shows that although handwritten digit recognition is a simple task for neural networks, it can still benefit from transfer learning. Training a model from scratch is slower and yields worse accuracy, especially if a very small number of examples is used for training.
Application note URL: https://www.nxp.com/docs/en/application-note/AN12892.pdf
The two attached demos use models that were compiled using the Glow AOT tools and use a camera connected to the i.MXRT1060-EVK to generate data for inferencing. The default MCUXpresso SDK Glow demos run inference on static images, and these demos expand the capability of those projects to do inferencing on camera data. Each demo uses the default model that is found in the SDK. A readme.txt file found in the /doc folder of each demo provides details for each demo, and there is a PDF available inside that same /doc folder with example images to point the camera at for inferencing.
Software requirements
====================
- MCUXpresso IDE 11.2.x
- MCUXpresso SDK for RT1060
Hardware requirements
=====================
- Micro USB cable
- Personal computer
- IMXRT1060-EVK board with included camera + RK043FN02H-CT LCD board
Board settings
==============
Camera and LCD connected to the i.MXRT1060-EVK
Prepare the demo
================
1. Install and open MCUXpresso IDE 11.2.
2. If not already done, import the RT1060 MCUXpresso SDK by dragging and dropping the zipped SDK file into the "Installed SDKs" tab.
3. Download one of the attached zip files and import the project using "Import project(s) from file system.." from the Quickstart panel. Use the "Archive" option to select the zip file that contains this project.
4. Build the project by clicking on "Build" in the Quickstart Panel.
5. Connect a USB cable between the host PC and the OpenSDA port (J41) on the target board.
6. Open a serial terminal with the following settings:
   - 115200 baud rate
   - 8 data bits
   - No parity
   - One stop bit
   - No flow control
7. Download the program to the target board by clicking on "Debug" in the Quickstart Panel, then click on the "Resume" button in the Debug perspective that comes up to run the demo.
Running the demo
================
For the CIFAR10 demo: Use the camera to look at images of airplanes, ships, deer, etc. that can be recognized by the CIFAR10 model. The included PDF can be used for example images.
For the MNIST demo: Use the camera to look at handwritten digits that can be recognized by the LeNet MNIST model. The included PDF can be used for example digits, or you can write your own.
For further details, see the readme.txt file found inside each demo in the /doc directory. Also see the Glow Lab for i.MX RT for more details on how to compile neural network models with Glow.
This Lab 3 explains how to get started with the OpenCV DNN application demos on an i.MX8 board using the eIQ™ ML Software Development Environment.
eIQ Sample Apps - Overview
eIQ Sample Apps - Introduction
Get the source code available on Code Aurora: OpenCV DNN example - File-Based and MIPI Camera
OpenCV Inference
OpenCV offers a unified solution for both neural network inference (DNN module) and classic machine learning algorithms (ML module). Moreover, it includes many computer vision functions, making it easier to build complex machine learning applications in a short amount of time and without having dependencies on other libraries. The OpenCV DNN module is basically an inference engine. It does not aim to provide any model training capabilities. For training, one should use dedicated solutions, such as machine learning frameworks. The inference engine from OpenCV supports a wide set of input model formats: TensorFlow, Caffe, Torch/PyTorch.
Comparison with Arm NN
Arm NN is a library deeply focused on neural networks. It offers acceleration for Arm Neon, while Vivante GPUs are not currently supported. Arm NN does not support classical non-neural machine learning algorithms. OpenCV is a more complex library focused on computer vision. Besides image and vision specific algorithms, it offers support for neural network machine learning, but also for traditional non-neural machine learning algorithms. OpenCV is the best choice in case your application needs a neural network inference engine but also other computer vision functionality.
Setting Up the Board
Step 1 - Create the following folders and grant them permissions as follows:
root@imx8mmevk:# mkdir -p /opt/opencv/model
root@imx8mmevk:# mkdir -p /opt/opencv/media
root@imx8mmevk:# chmod 777 /opt/opencv
Step 2 - To easily deploy the demos to the board, get the board's IP address using the ifconfig command, then set the IMX_INET_ADDR environment variable as follows:
$ export IMX_INET_ADDR=<imx_ip>
Step 3 - In the target device, export the required variables:
root@imx8mmevk:~# export LD_LIBRARY_PATH=/usr/local/lib
root@imx8mmevk:~# export PYTHONPATH=/usr/local/lib/python3.5/site-packages/
Setting Up the Host
Step 1 - Download the application from eIQ Sample Apps.
Step 2 - Get the models and dataset. The following command lines create the needed folder structure for the demos and retrieve all needed data and model files for the demo:
$ mkdir -p model
$ wget -qN https://github.com/diegohdorta/models/raw/master/caffe/MobileNetSSD_deploy.caffemodel -P model/
$ wget -qN https://github.com/diegohdorta/models/raw/master/caffe/MobileNetSSD_deploy.prototxt -P model/
Step 3 - Deploy the built files to the board:
$ scp -r src/* model/ media/ root@${IMX_INET_ADDR}:/opt/opencv
OpenCV DNN Applications
This application was based on: SSD: Single Shot MultiBox Detector and the Caffe SSD implementation.
1 - OpenCV DNN example: File-Based
The folder structure must be equal to:
├── file.py
├── camera.py
├── media
│   └── ...
├── model
│   ├── MobileNetSSD_deploy.caffemodel
│   └── MobileNetSSD_deploy.prototxt
This example runs on a single picture, but you can pass as many pictures as you want and save them inside the media/ folder. The application tries to recognize all the objects in the picture.
Step 1 - To copy new images to the media/ folder:
root@imx8mmevk:/opt/opencv/media# cp <path_to_image> .
Step 2 - Run the example image:
root@imx8mmevk:/opt/opencv# ./file.py
NOTE: If a GPU is available, the example shows: [INFO:0] Initialize OpenCL runtime
This demo runs the inference using a Caffe model to recognize a few types of objects in all the images inside the media/ folder. It adds labels for each recognized object in the input images. The processed images are available in the media-labeled/ folder. See before and after labeling:
Step 3 - Display the labeled image with the following line:
root@imx8mmevk:/opt/opencv/media-labeled# gst-launch-1.0 filesrc location=<image> ! jpegdec ! imagefreeze ! autovideosink
2 - OpenCV DNN example: MIPI Camera
This example is the same as above, except that it uses a camera input. It enables the MIPI camera and runs an inference on each captured frame, then displays it in a window interface in real time:
root@imx8mmevk:/opt/opencv# ./camera.py
3 - OpenCV DNN example: MIPI Camera improved
This example differs from the one above due to the additional GStreamer support applied to it. Using the leaky bucket algorithm idea, the GStreamer pipeline enables the camera to continue running its own thread (bucket overflow when full), even if a frame was not processed by the inference thread (bucket water capacity). As a result of this leaky bucket approach, this demo has smooth camera video at the expense of having some frames dropped in the inference process.
root@imx8mmevk:/opt/opencv# ./camera_improved.py
Go to the eIQ Sample Apps - Face Recognition using TF Lite.
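As a footnote to this lab: the demo scripts above are Python, but the same DNN module is available from C++. A minimal file-based sketch against the MobileNet-SSD Caffe model used here (file paths, input size and the confidence threshold are illustrative):

#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <cstdio>

int main() {
  // Model files retrieved in the host setup steps above.
  cv::dnn::Net net = cv::dnn::readNetFromCaffe(
      "model/MobileNetSSD_deploy.prototxt",
      "model/MobileNetSSD_deploy.caffemodel");

  cv::Mat img = cv::imread("media/example.jpg");   // illustrative file name
  // MobileNet-SSD preprocessing: 300x300 input, mean 127.5 per channel.
  cv::Mat blob = cv::dnn::blobFromImage(img, 1.0 / 127.5, cv::Size(300, 300),
                                        cv::Scalar(127.5, 127.5, 127.5));
  net.setInput(blob);
  cv::Mat det = net.forward();

  // SSD detections come back as a 1x1xNx7 blob: [id, class, conf, x1, y1, x2, y2].
  cv::Mat rows(det.size[2], det.size[3], CV_32F, det.ptr<float>());
  for (int i = 0; i < rows.rows; ++i) {
    float confidence = rows.at<float>(i, 2);
    if (confidence > 0.5f) {
      std::printf("class %d, confidence %.2f\n",
                  static_cast<int>(rows.at<float>(i, 1)), confidence);
    }
  }
  return 0;
}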
This Lab 2 explains how to get started with the MNIST handwritten digit application demo on an i.MX8 board using the eIQ™ ML Software Development Environment.
eIQ Sample Apps - Overview
eIQ Sample Apps - Introduction
Get the source code available on Code Aurora: Handwritten Digit Recognition
MNIST Handwritten Digits
MNIST is a large database of handwritten digits commonly used for training various image processing systems. This section provides a comparison of Caffe and TensorFlow models for handwritten digit recognition. The data set used for these applications is from Yann LeCun. This is an MNIST data set sample:
Setting Up the Board
Step 1 - Create the following folder and grant it permissions as follows:
root@imx8mmevk:~# mkdir -p /opt/mnist
root@imx8mmevk:~# chmod 777 /opt/mnist
Step 2 - To easily deploy the demos to the board, get the board's IP address using the ifconfig command, then set the IMX_INET_ADDR environment variable as follows:
$ export IMX_INET_ADDR=<imx_ip>
Setting Up the Host
Step 1 - Obtain the eIQ toolchain (3.2.9. Generating the Toolchain) from NXP eIQ(TM) Machine Learning Enablement.
Step 2 - Install the toolchain:
$ chmod +x <toolchain>.sh
$ ./<toolchain>.sh
This provides all the needed setup for building ARM64 applications on an x86 machine.
Step 3 - Download the application from eIQ Sample Apps.
Step 4 - Get the models and dataset. The following command lines create the needed folder structure for the demos and retrieve the MNIST dataset and the Caffe and TensorFlow models:
$ mkdir -p bin data model
$ wget -qN https://github.com/ARM-software/ML-examples/raw/master/armnn-mnist/data/t10k-images-idx3-ubyte -P data/
$ wget -qN https://github.com/ARM-software/ML-examples/raw/master/armnn-mnist/data/t10k-labels-idx1-ubyte -P data/
$ wget -qN https://github.com/ARM-software/ML-examples/raw/master/armnn-mnist/model/lenet_iter_9000.caffemodel -P model/
$ wget -qN https://github.com/ARM-software/ML-examples/raw/master/armnn-mnist/model/simple_mnist_tf.pb -P model/
$ wget -qN https://github.com/ARM-software/ML-examples/raw/master/armnn-mnist/model/simple_mnist_tf.prototxt -P model/
$ wget -qN https://github.com/ARM-software/Tool-Solutions/raw/master/ml-tool-examples/mnist-draw/model/optimized_mnist_tf.pb -P model/
Step 5 - Compile the source code using the eIQ toolchain:
$ ${CXX} -Wall -Wextra -O3 -std=c++14 caffe_inference.cpp -o caffe_inference -larmnn -larmnnCaffeParser
$ ${CXX} -Wall -Wextra -O3 -std=c++14 tensorflow_inference.cpp -o tensorflow_inference -larmnn -larmnnTfParser
Step 6 - Deploy the built files to the board:
$ scp -r caffe_inference tensorflow_inference data/ model/ root@${IMX_INET_ADDR}:/opt/mnist
Inference Comparison Applications
Step 1 - At user space, enter the mnist folder which holds the demo files:
root@imx8mmevk:/opt/mnist#
This is what the mnist folder structure should look like:
│...
├── caffe_inference
├── tensorflow_inference
├── data
│   ├── t10k-images-idx3-ubyte
│   └── t10k-labels-idx1-ubyte
├── model
│   ├── lenet_iter_9000.caffemodel
│   ├── optimized_mnist_tf.pb
│   ├── simple_mnist_tf.pb
│   └── simple_mnist_tf.prototxt
Step 2 - Run the applications:
NOTE: For running these applications, please provide the desired number of predictions, which can vary from 0 to 9999 since the dataset has 10K images.
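Before looking at the output, it helps to know that applications like caffe_inference and tensorflow_inference generally follow the same Arm NN flow: parse the model, optimize it for a backend, load it into the runtime, then enqueue input and output tensors. The sketch below uses the Arm NN 19.x C++ API; the layer/binding names and tensor shapes are illustrative and not taken from the actual sources:

#include "armnn/ArmNN.hpp"
#include "armnnTfParser/ITfParser.hpp"
#include <vector>

int main() {
  // Parse the frozen TensorFlow graph (layer names/shapes are illustrative).
  armnnTfParser::ITfParserPtr parser = armnnTfParser::ITfParser::Create();
  armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile(
      "model/simple_mnist_tf.pb",
      { { "Placeholder", armnn::TensorShape({1, 784}) } },
      { "Softmax" });
  auto inputBinding  = parser->GetNetworkInputBindingInfo("Placeholder");
  auto outputBinding = parser->GetNetworkOutputBindingInfo("Softmax");

  // Optimize for a CPU backend and load the network into the runtime
  // (CpuRef always works; CpuAcc/GpuAcc can be used when available).
  armnn::IRuntime::CreationOptions options;
  armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
  armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(
      *network, { armnn::Compute::CpuRef }, runtime->GetDeviceSpec());
  armnn::NetworkId netId;
  runtime->LoadNetwork(netId, std::move(optNet));

  // One 28x28 image flattened to 784 floats; output is 10 class scores.
  std::vector<float> input(784, 0.0f), output(10, 0.0f);
  armnn::InputTensors  in  = { { inputBinding.first,
      armnn::ConstTensor(inputBinding.second, input.data()) } };
  armnn::OutputTensors out = { { outputBinding.first,
      armnn::Tensor(outputBinding.second, output.data()) } };
  runtime->EnqueueWorkload(netId, in, out);
  return 0;
}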
1 - Handwritten Digit Recognition using Caffe
root@imx8mmevk:/opt/mnist# ./caffe_inference 10
[0] Caffe >> Actual: 7 Predict: 7 Time: 0.0336484s
[1] Caffe >> Actual: 2 Predict: 2 Time: 0.028399s
[2] Caffe >> Actual: 1 Predict: 1 Time: 0.0283713s
[3] Caffe >> Actual: 0 Predict: 0 Time: 0.0284133s
[4] Caffe >> Actual: 4 Predict: 4 Time: 0.0280637s
[5] Caffe >> Actual: 1 Predict: 1 Time: 0.0281574s
[6] Caffe >> Actual: 4 Predict: 4 Time: 0.0285136s
[7] Caffe >> Actual: 9 Predict: 9 Time: 0.0283779s
[8] Caffe >> Actual: 5 Predict: 5 Time: 0.0283902s
[9] Caffe >> Actual: 9 Predict: 9 Time: 0.0283282s
Total Time: 0.296081s
Sucessfull: 10 Failed: 0
2 - Handwritten Digit Recognition using TensorFlow
root@imx8mmevk:/opt/mnist# ./tensorflow_inference 10
[0] Tensor >> Actual: 7 Predict: 7 Time: 0.00670075s
[1] Tensor >> Actual: 2 Predict: 2 Time: 0.00377025s
[2] Tensor >> Actual: 1 Predict: 1 Time: 0.0036785s
[3] Tensor >> Actual: 0 Predict: 0 Time: 0.0036815s
[4] Tensor >> Actual: 4 Predict: 4 Time: 0.00372875s
[5] Tensor >> Actual: 1 Predict: 1 Time: 0.003669s
[6] Tensor >> Actual: 4 Predict: 4 Time: 0.00367825s
[7] Tensor >> Actual: 9 Predict: 9 Time: 0.0036955s
[8] Tensor >> Actual: 5 Predict: 6 Time: 0.00367488s FAILED
[9] Tensor >> Actual: 9 Predict: 9 Time: 0.0036025s
Total Time: 0.0414569s
Sucessfull: 10 Failed: 1
NOTE: The argument 10 refers to the number of predictions for each test. These tests run the inference on the input MNIST dataset images (Actual), showing the inference results (Predict) and how long it took to complete each prediction. The input images for this test are in binary form and can be found in the t10k-images-idx3-ubyte.gz package from Yann LeCun.
From the output results, you can see that the Caffe model is slower than the TensorFlow model, but it is also more accurate. Change the argument to compare further results between the two models.
Go to the eIQ Sample Apps - Object Recognition using OpenCV DNN.
This Lab 1 explains how to get started with the Arm NN application demo on an i.MX8 board using the eIQ™ ML Software Development Environment.
eIQ Sample Apps - Overview
eIQ Sample Apps - Introduction
Get the source code available on Code Aurora: Arm NN example - File-Based and MIPI Camera
Setting Up the Board
Step 1 - Create the following folders and grant them permissions as follows:
root@imx8mmevk:# mkdir -p /opt/armnn/model
root@imx8mmevk:# mkdir -p /opt/armnn/data
root@imx8mmevk:# chmod 777 /opt/armnn
Step 2 - To easily deploy the demos to the board, get the board's IP address using the ifconfig command, then set the IMX_INET_ADDR environment variable as follows:
$ export IMX_INET_ADDR=<imx_ip>
Setting Up Arm NN
Step 1 - Install TensorFlow on the host PC for preparing the model for inference:
$ apt-get install python-pip
$ pip install tensorflow
$ git clone https://github.com/tensorflow/tensorflow.git
NOTE: You may need root privileges (sudo) for running the apt-get command.
Step 2 - Generate the graph used to prepare the TensorFlow InceptionV3 model for inference:
$ mkdir checkpoints
$ git clone https://github.com/tensorflow/models.git
$ cd models/research/slim/
$ python export_inference_graph.py --model_name=inception_v3 --output_file=../../../checkpoints/inception_v3_inf_graph.pb
Step 3 - Download the pre-trained model and prepare it for inference with the generated graph:
$ cd ../../../checkpoints
$ wget http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz -qO- | tar -xvz # download pretrained model
$ python <path_to_tensorflow_repo>/tensorflow/python/tools/freeze_graph.py \
--input_graph=inception_v3_inf_graph.pb --input_checkpoint=inception_v3.ckpt \
--input_binary=true --output_graph=inception_v3_2016_08_28_frozen_transformed.pb \
--output_node_names=InceptionV3/Predictions/Reshape_1
NOTE: <path_to_tensorflow_repo> refers to the TensorFlow path cloned in Step 1.
Step 4 - Copy the prepared model inception_v3_2016_08_28_frozen_transformed.pb to /opt/armnn/model:
$ scp inception_v3_2016_08_28_frozen_transformed.pb root@<imx_ip>:/opt/armnn/model
Step 5 - Find three .jpg images on Google, one containing a dog, one with a cat and one with a shark. Rename them to Dog.jpg, Cat.jpg and shark.jpg accordingly (case sensitive) and copy them to the /opt/armnn/data folder on the device.
$ scp Dog.jpg Cat.jpg shark.jpg root@<imx_ip>:/opt/armnn/data
NOTE: For the modified demo, download it from eIQ Sample Apps and put it in the /opt/armnn folder.
1 - Arm NN example: File-Based
Step 1 - At user space, enter the armnn folder which holds the demo files:
root@imx8mmevk:~# cd /opt/armnn
root@imx8mmevk:/opt/armnn#
Here is what the armnn folder should look like:
│...
├── data
│   ├── Cat.jpg
│   ├── Dog.jpg
│   └── shark.jpg
├── model
│   └── inception_v3_2016_08_28_frozen_transformed.pb
│...
Step 2 - Run the demo:
root@imx8mmevk:/opt/armnn# TfInceptionV3-Armnn --data-dir=data --model-dir=model
= Prediction values for test #0
Top(1) prediction is 208 with confidence: 93.5791%
Top(2) prediction is 209 with confidence: 2.06653%
Top(3) prediction is 223 with confidence: 0.693557%
Top(4) prediction is 170 with confidence: 0.210818%
Top(5) prediction is 232 with confidence: 0.177887%
= Prediction values for test #1
Top(1) prediction is 283 with confidence: 72.4617%
Top(2) prediction is 282 with confidence: 22.5384%
Top(3) prediction is 286 with confidence: 0.838241%
Top(4) prediction is 288 with confidence: 0.0822042%
Top(5) prediction is 841 with confidence: 0.05987%
= Prediction values for test #2
Top(1) prediction is 3 with confidence: 62.0632%
Top(2) prediction is 4 with confidence: 12.8319%
Top(3) prediction is 5 with confidence: 1.25482%
Top(4) prediction is 154 with confidence: 0.177708%
Top(5) prediction is 149 with confidence: 0.116998%
Total time for 3 test cases: 2.369 seconds
Average time per test case: 789.765 ms
Overall accuracy: 1.000

The TfInceptionV3-Armnn demo runs inference on the three expected input images: one containing a dog, one with a cat and one with a shark. The output shows the top 5 inference results and their confidence percentages. The higher the confidence, the better the input image fits the expected content.

You may get the following result when running the demo:
Prediction for test case 0 ( x ) is incorrect (should be y)
One or more test cases failed
NOTE: ( x ) refers to the ID of the detected object and ( y ) to the ID of the expected object.

This is not an execution error. It occurs because the TfInceptionV3-Armnn test expects a specific type of dog, cat and shark to be found, so if a different type/breed of these animals is passed to the test, it reports a failed case. The expected inputs for this test are:

ID  | Label            | File Name
208 | Golden Retriever | Dog.jpg
283 | Tiger Cat        | Cat.jpg
3   | White Shark      | shark.jpg

The complete list of supported objects can be found here. Try passing different .jpg images to the test, both the expected types and other types, and watch the confidence percentage increase when you match the expected breeds. Remember to rename the images according to the expected input (Dog.jpg, Cat.jpg, shark.jpg, case sensitive). To rename a file, use the mv command:
root@imx8mmevk:/opt/armnn/data# mv <name>.jpg <new_name>.jpg

The next section shows how to modify this demo to identify any object.

2 - Arm NN example: MIPI Camera

This section shows how to use the TfInceptionV3-Armnn test from eIQ for general object recognition. The list of all objects supported by this model can be found here.

Step 1 - Enter the demo directory and run the demo:
root@imx8mmevk:/opt/armnn# python3 camera.py
This runs the TfInceptionV3-Armnn test and parses the inference results to return any recognized object, not only the three expected types of animals.

Step 2 - Show the provided flash cards to the camera and wait for the detection message: Image captured, wait. The flash cards should not be twisted or curved during this step.

Step 3 - After a few seconds, the demo returns the detected object.
NOTE: This can return False if the image was not correctly captured. In this case, try showing the flash card again.
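The camera.py script itself is provided in eIQ Sample Apps. As a rough, hypothetical illustration of the parsing idea it uses, the sketch below runs TfInceptionV3-Armnn as a subprocess and maps the Top(1) prediction ID back to a label; it is not the actual script, and the labels.txt file name is an assumption.

# Simplified sketch: run TfInceptionV3-Armnn and map the Top(1) prediction ID
# to a human-readable label. Not the actual camera.py from eIQ Sample Apps.
import re
import subprocess

result = subprocess.run(
    ["TfInceptionV3-Armnn", "--data-dir=data", "--model-dir=model"],
    stdout=subprocess.PIPE, universal_newlines=True)

# Output lines look like: "Top(1) prediction is 208 with confidence: 93.5791%"
match = re.search(r"Top\(1\) prediction is (\d+) with confidence: ([\d.]+)%",
                  result.stdout)
if match:
    class_id, confidence = int(match.group(1)), float(match.group(2))
    with open("labels.txt") as f:  # hypothetical label list, one label per line
        labels = [line.strip() for line in f]
    print("Detected:", labels[class_id], "(confidence %.1f%%)" % confidence)
else:
    print(False)  # image was not correctly captured; show the flash card again

Go to the next lab: eIQ Sample Apps - Handwritten Digit Recognition.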
Welcome to PyeIQ

PyeIQ gathers everything it needs by itself. It provides a simplified way to run ML applications, so the user does not have to spend time preparing the environment.

PyeIQ Version | Release Date     | Notes
tag_v1.0      | Apr 29, 2020     | -
tag_v2.0      | -                | Planned for June

i.MX Board | BSP Release | Building Status
8 QM       | 5.4.3_2.0.0 | passing
8 MPlus    | 5.4.3_2.0.0 | passing

Getting Started with PyeIQ

1. Easy Installation

If you prefer to build the package yourself, go to the Appendix Section or follow the README file in the PyeIQ repo.
1.1 Copy the PyeIQ pre-built package (attached to this post) to the board, and then install it by using the pip3 tool:
1.2 Check the installation by starting an interactive shell:
1.3 Import PyeIQ and check the version: The output is the latest PyeIQ version installed on the system.
(Optional) Install the following package to show the downloading status:

2. Easy Running

All demos and applications are automatically installed in /opt/eiq.
2.1 To run the demos:
2.2 To run the applications:
2.3 Use help if needed:

3. List of Available Demos and Applications

Demo/App Name            | Demo/App Type | i.MX Board | BSP Release | BSP Framework         | Inference | Status  | Notes
Label Image              | File Based    | QM, MPlus  | 5.4.3_2.0.0 | TensorFlow Lite 2.1.0 | GPU, NPU  | passing | -
Label Image Switch       | File Based    | QM, MPlus  | 5.4.3_2.0.0 | TensorFlow Lite 2.1.0 | GPU, NPU  | passing | -
Object Detection         | SSD/Camera    | QM, MPlus  | 5.4.3_2.0.0 | TensorFlow Lite 2.1.0 | GPU, NPU  | passing | Needs a better model.
Object Detection OpenCV  | SSD/Camera    | QM, MPlus  | 5.4.3_2.0.0 | TensorFlow Lite 2.1.0 | GPU, NPU  | passing | Needs a better model.
Object Detection N. GS.  | SSD/Camera    | QM, MPlus  | 5.4.3_2.0.0 | TensorFlow Lite 2.1.0 | GPU, NPU  | -       | Pending issues.
Object Detection Yolov3  | SSD/File      | QM, MPlus  | 5.4.3_2.0.0 | TensorFlow Lite 2.1.0 | GPU, NPU  | -       | Pending issues.
Object Detection Yolov3  | SSD/Camera    | QM, MPlus  | 5.4.3_2.0.0 | TensorFlow Lite 2.1.0 | GPU, NPU  | -       | Pending issues.
Fire Detection           | File Based    | QM, MPlus  | 5.4.3_2.0.0 | TensorFlow Lite 2.1.0 | GPU, NPU  | passing | -
Fire Detection           | Camera        | QM, MPlus  | 5.4.3_2.0.0 | TensorFlow Lite 2.1.0 | GPU, NPU  | passing | -
Fire Detection           | Camera        | -          | 5.4.3_2.0.0 | PyArmNN 19.08         | -         | -       | Requires 19.11
Coral Posenet            | Camera        | -          | -           | -                     | -         | -       | Ongoing
NEO DLR                  | Camera        | -          | -           | -                     | -         | -       | Ongoing

4. Examples

4.1 Fire Detection Image

4.1.1 Non-fire
Running Fire Detect Image:
Output:
INFO: Created TensorFlow Lite delegate for NNAPI.
Applied NNAPI delegate.
Inference time: 0:00:00.264853
Non-Fire

4.1.2 Fire
Running Fire Detect Image:
Output:
INFO: Created TensorFlow Lite delegate for NNAPI.
Applied NNAPI delegate.
Inference time: 0:00:00.193055
Fire

4.2 Fire Detection Camera
Running Fire Detect Camera:
Output:
PyeIQ also supports training for the Fire Detection demo; please refer to PyeIQ - Training and Conversion Support (Keras/TensorFlow Lite).
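Under the hood, the Fire Detection demos run a small TensorFlow Lite image classifier. The sketch below is a rough, hypothetical illustration of that flow using the TensorFlow Lite Python API; the model file name, input preprocessing and label order are assumptions, not the actual PyeIQ implementation.

# Minimal sketch of a fire/non-fire TensorFlow Lite classification, similar in
# spirit to the PyeIQ Fire Detection demo. Model file, preprocessing and label
# order are assumptions for illustration only.
from datetime import datetime

import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite  # on full TF images: from tensorflow import lite as tflite

interpreter = tflite.Interpreter(model_path="fire_detection.tflite")  # hypothetical model file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Resize the image to the height/width the model expects, e.g. (1, 128, 128, 3).
_, height, width, _ = inp["shape"]
image = Image.open("image.jpg").convert("RGB").resize((width, height))
data = np.expand_dims(np.asarray(image), axis=0).astype(inp["dtype"])
# A float model would additionally need the pixel values normalized (e.g. / 255.0).

start = datetime.now()
interpreter.set_tensor(inp["index"], data)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]
print("Inference time:", datetime.now() - start)

# Assumed label order: index 0 = Fire, index 1 = Non-Fire.
print("Fire" if int(np.argmax(scores)) == 0 else "Non-Fire")

The "Applied NNAPI delegate" lines in the outputs above indicate that on i.MX 8 boards the interpreter offloads the graph to the GPU/NPU through NNAPI; the sketch leaves delegate selection to the BSP defaults.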
4.3 Label Image Switch
Running Switch Label Image:
Output:

Cores Comparison (CPU, GPU and NPU)

Check the following graphical plot for the Switch Label Image demo:
Check the following graphical plot for the other demos:
We are currently working to reduce the inference time of the Fire Detection demos. (A minimal sketch of how this kind of per-core latency measurement can be taken is shown at the end of this post.)

Appendix Section

The procedures described in this document target the GNU/Linux distribution Ubuntu 18.04.

1. Software Requirements
1.1 Install the following packages in the GNU/Linux system:
1.2 Then, use the pip3 tool to install the virtualenv tool:

2. Building the PyeIQ Package
2.1 Clone the repository:
2.2 Use the virtualenv tool to create an isolated Python environment:
2.3 Generate the PyeIQ package:
2.4 Copy the package to the board:
2.5 To deactivate the virtual environment:

Contact

Feel free to contact us about any issue or bug you might have. Your feedback is very welcome so we can improve the next version.
Alifer Moraes
diegodorta
marcofranchi
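As referenced in the Cores Comparison section above, here is a minimal, hypothetical sketch of the kind of measurement behind such a comparison: average TensorFlow Lite inference latency over repeated runs after a warm-up. The model file name is a placeholder, and whether the CPU, GPU or NPU executes the graph depends on the delegate applied by the BSP, which is not shown here.

# Minimal sketch: average TensorFlow Lite inference latency after a warm-up run.
# The model file name is a placeholder; delegate/core selection is left to the BSP.
import time

import numpy as np
import tflite_runtime.interpreter as tflite  # on full TF images: from tensorflow import lite as tflite

RUNS = 50

interpreter = tflite.Interpreter(model_path="mobilenet_v1_1.0_224_quant.tflite")  # placeholder model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])

interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()  # warm-up run; the first inference includes graph preparation

start = time.monotonic()
for _ in range(RUNS):
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
print("average inference time: %.2f ms" % ((time.monotonic() - start) / RUNS * 1e3))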