eIQ Machine Learning Software Knowledge Base

All the required steps for getting a full eIQ image can be found in the following documentation: NXP eIQ(TM) Machine Learning Enablement. These labs can be applied to all i.MX 8 boards; this particular tutorial describes the i.MX 8M Mini EVK board.

Hardware Requirements
- i.MX 8MM EVK board
- USB cable (micro-B to standard-A)
- USB Type-C to A adapter
- USB Type-C 45W power delivery supply
- IMX-MIPI-HDMI daughter card
- MINISASTOCSI camera daughter card
- 2x Mini-SAS cables
- AI/ML BSP flashed onto the SD card
- Ethernet cable
- USB mouse
- HDMI cable
- Monitor

Software Requirements
- For GNU/Linux: minicom or screen.
- For Windows: PuTTY.

Preparing the Board
1. Connect the IMX-MIPI-HDMI daughter card to the Mini-SAS cable and into the connector labeled DSI MIPI (J801), then connect the HDMI monitor cable to it. Warning: Do not hot-plug the Mini-SAS cables and cards or the boards will be damaged! Remove power completely before connecting or disconnecting the Mini-SAS ends.
2. Connect the MINISASTOCSI camera daughter card to the Mini-SAS cable and into the connector labeled CSI MIPI (J802).
3. Connect the micro-B end of the supplied USB cable into the Debug UART port J901 and connect the other end of the cable to the host computer:
- For GNU/Linux: Configure minicom or screen with the /dev/ttyUSB port number and set the baud rate to 115200. The port number can be found by checking the /dev directory.
- For Windows: Configure PuTTY with the board COM port number and set the baud rate to 115200. The port number can be found in the Windows Device Manager. NOTE: This board mounts two device port numbers; use the highest one, which is the port used to communicate with the Cortex-A.
4. Connect the MicroSD card to the MicroSD card connector J701 on the back side of the board. In order to boot the board from the MicroSD card, change the boot switches SW1101 and SW1102 according to the table below:
BOOT Device        SW1101      SW1102
MicroSD / uSDHC2   0110110010  0001101000
NOTE: The boot device settings above apply to the revision C i.MX 8MM EVK board. Other revisions of the board may have a different number of boot mode switches and slightly different settings. Please follow the SW1101 and SW1102 values printed on your specific board for booting from the MicroSD card.
5. Connect the power supply cable to the power connector J302 and power on the board by flipping the switch SW101.
NOTE: For more details on the board peripherals, please consult the i.MX 8MM EVK Getting Started.
After these steps, everything is set to start with the eIQ Sample Apps. Go to eIQ Sample Apps - Object Recognition using Arm NN.
View full article
Convolutional Neural Networks are the most popular NN approach to image recognition. Image recognition can be used for a wide variety of tasks like facial recognition for monitoring and security, car vision for safety and traffic sign recognition, or augmented reality. All of these tasks require low latency, great security, and privacy, which can't be guaranteed when using Cloud-based solutions. NXP eIQ makes it possible to run Deep Neural Network inference directly on an MCU. This enables intelligent, powerful, and affordable edge devices everywhere.

As a case study about CNNs on MCUs, a handwritten digit recognition example was created. It runs on the i.MX RT1060 and uses an LCD touch screen as the input interface. The application can recognize digits drawn with a finger on the LCD.

Handwritten digit recognition is a popular "hello world" project for machine learning. It is usually based on the MNIST dataset, which contains 70,000 images of handwritten digits. Many machine learning algorithms and techniques have been benchmarked on this dataset since its creation. Convolutional Neural Networks are among the most successful.

The code is also accompanied by an application note describing how it was created and explaining the technologies it uses. The note talks about the MNIST dataset, TensorFlow, the application's accuracy, and other topics.

Application note URL: https://www.nxp.com/docs/en/application-note/AN12603.pdf (can be found at the documentation page for the i.MX RT1060)

Application code is in the attached zip files: *_eiq_mnist is the basic application from the first image and *_eiq_mnist_lock is the extended version from the second image. The applications are provided in the form of MCUXpresso projects and require an existing installation of the i.MX RT1060/RT1170 SDK with the eIQ component included.

The software for this AN was also ported to CMSIS-NN with a Caffe version of the MNIST model in a follow-up AN, which can be found here: https://www.nxp.com/docs/en/application-note/AN12781.pdf
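For readers who want to see what such a model looks like in code, below is a minimal sketch of a small Keras CNN trained on MNIST. It is an illustrative example only, not the exact network from AN12603:

# Minimal sketch (not the exact AN12603 model): a small CNN trained on MNIST
# with TensorFlow/Keras, of the kind later converted for MCU inference.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0   # 28x28x1, scaled to [0, 1]
x_test = x_test[..., None].astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),      # one output per digit 0-9
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))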
View full article
The OpenCV Deep Neural Network (DNN) module performs inference in deep networks. It is easy to use and a great way to get started with computer vision and inferencing. OpenCV DNN supports many frameworks, such as Caffe, TensorFlow, Torch, and Darknet, as well as models in ONNX format. For a simple object detection code with OpenCV DNN, please check Object Detection with OpenCV. For running OpenCV DNN inference with camera input, please refer to eIQ Sample Apps - OpenCV Lab 3. For exploring different models ready for OpenCV DNN inference, please refer to Caffe and TensorFlow Pretrained Models. For more information on the OpenCV DNN module, please check Deep Learning in OpenCV.
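As a quick illustration of how the module is used from Python, here is a minimal classification sketch; the model, prototxt, and image file names are placeholders you would substitute with your own:

# Minimal OpenCV DNN classification sketch; "deploy.prototxt"/"model.caffemodel"
# and "input.jpg" are placeholder file names for any Caffe model the DNN module supports.
import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "model.caffemodel")
image = cv2.imread("input.jpg")
# Preprocess: resize to the network input size and subtract the mean (model dependent).
blob = cv2.dnn.blobFromImage(image, scalefactor=1.0, size=(224, 224),
                             mean=(104, 117, 123))
net.setInput(blob)
output = net.forward()
print("Top class index:", int(np.argmax(output)))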
View full article
The attached file serves as a table of contents for the various collateral, documents, training, etc. that support eIQ software.
View full article
This demo shows a low-power smart door running eIQ heterogeneously on the i.MX 8M Mini:
- The Cortex-A53 performs face recognition using eIQ OpenCV.
- The Cortex-M4 performs keyword spotting using eIQ CMSIS-NN.
The demo application is built around the Django framework running on the board. It has two main usage scenarios. The first is to manage users and inspect the access logs through a dashboard, which is accessed from a web browser on the host PC. The second is the smart door application itself running on the board. The scenario is the following: the Cortex-A cores and connected peripherals stay in low-power mode while the Cortex-M is active, waiting for the keyword 'GO'. When the word is detected, the Cortex-M sends an MU interrupt to the Cortex-A and the system wakes up. The Cortex-A then performs face recognition and allows access for registered users. In addition to face recognition, the MPU runs a Django server to manage the user database and a QT5 application for the graphical interface, and performs training on the edge. The face recognition algorithm running on the Cortex-A and the keyword spotting algorithm running on the Cortex-M are both implemented using eIQ. For the MPU, eIQ support is integrated in Yocto. For the MCU, the support was ported to i.MX8 from the MCUXpresso SDK for RT for the purpose of this demo.

Software Environment
- Ubuntu 16 host PC
- SD card image with Yocto BSP 4.14.98/sumo 2.0.0 GA for the i.MX 8M Mini platform with eIQ OpenCV and the eIQ heterogeneous demo. See detailed steps in the Build Yocto image section.
- CMSIS-NN
- MCUXpresso SDK version 2.6.0 for i.MX 8M Mini (SDK_2.6.0_EVK-MIMX8MM-ARMGCC). See detailed build steps in the Build Cortex-M4 executable section.

HW Environment
- i.MX 8M Mini Kit
- Touch screen display (preferred resolution 1920x1080), tested with an HDMI connection to the board. NOTE: if the display does not support touch, a mouse can be connected to the board and used instead.
- MIPI-CSI camera module
- Microphone: Synaptics CONEXANT AudioSmart® DS20921
- Ribbon, 4 female-female wires and a 60-pin connector to connect the mic to the board
- Optional: headphones (used to test recording on the M4 - everything recorded by the mic will be played to the headphones)
- Host PC for remote access to the demo application (Chrome browser was used). NOTE: The board and the host PC must be in the same network to communicate.

Build Yocto image:
Step 1 – Project initialization:
$: mkdir imx-linux-bsp
$: cd imx-linux-bsp
$: repo init -u https://source.codeaurora.org/external/imx/imx-manifest -b imx-linux-sumo -m imx-4.14.98-2.0.0_machinelearning.xml
$: repo sync
Step 2 – Set up the project build:
$: MACHINE=imx8mmevk DISTRO=fsl-imx-xwayland source ./fsl-setup-release.sh -b bld-xwayland
Step 3 – Download the project layer into ${BSPDIR}/sources/:
$: git clone https://source.codeaurora.org/external/imxsupport/meta-eiq-heterogenous
Step 4 – Add the project layer into bblayers. Add the following line into ${BSPDIR}/sources/base/conf/bblayers.conf:
BBLAYERS += " ${BSPDIR}/sources/meta-eiq-heterogenous "
Step 5 – Enable eIQ and other dependencies.
Add the following lines into conf/local.conf:
EXTRA_IMAGE_FEATURES = " dev-pkgs debug-tweaks tools-debug \
    tools-sdk ssh-server-openssh"
IMAGE_INSTALL_append = " net-tools iputils dhcpcd which gzip \
    python3 python3-pip wget cmake gtest \
    git zlib patchelf nano grep vim tmux \
    swig tar unzip parted \
    e2fsprogs e2fsprogs-resize2fs"
IMAGE_INSTALL_append = " python3-pytz python3-django-cors-headers"
IMAGE_INSTALL_append = " opencv python3-opencv"
PACKAGECONFIG_append_pn-opencv_mx8 = " dnn python3 qt5 jasper \
    openmp test neon"
PACKAGECONFIG_remove_pn-opencv_mx8 = "opencl"
TOOLCHAIN_HOST_TASK_append = " nativesdk-cmake nativesdk-make"
PREFERRED_VERSION_opencv = "4.0.1%"
PREFERRED_VERSION_python3-django = "2.1%"
IMAGE_ROOTFS_EXTRA_SPACE = "20971520"
Step 6 – Bake the image:
$: bitbake image-eiq-hetero

Build Cortex-M4 executable
Download MCUXpresso SDK version 2.6.0 for i.MX 8M Mini (SDK_2.6.0_EVK-MIMX8MM-ARMGCC):
- OS: Linux, Toolchain: GCC ARM Embedded
- Components: Amazon-FreeRTOS, CMSIS DSP Library, multicore
- SDK Version: 2.6.0 (2019-06-14)
- SDK Tag: REL_2.6.0_REL10_RFP_RC3_4
Download CMSIS-NN and copy the "CMSIS\NN" folder to "$MCUXpressoSDK_ROOT\CMSIS".
Go to "$MCUXpressoSDK_ROOT\boards\evkmimx8mm\demo_apps\" and get the M4 app from CAF:
git clone https://source.codeaurora.org/external/imxsupport/eiq-heterogenous-cortexm4
[Win]: Open the ARM GCC console and go to "$MCUXpressoSDK_ROOT\boards\evkmimx8mm\demo_apps\eiq-heterogenous-cortexm4\armgcc\"
[Win]: Call "build_ddr_release.bat" to obtain "eiq-kws.bin".
Deploy "eiq-kws.bin" to the boot partition of the Yocto image.

Prepare the Demo
1. Connect the 12V power supply to the board and switch SW101 to power on the board.
2. Connect a USB cable between the host PC and the J901 USB port on the target board.
3. Open two serial terminals, one for the A53 core and one for the M4 core, with the following settings:
- 115200 baud rate
- 8 data bits
- No parity
- One stop bit
- No flow control
4. Connect a display to the board (a 1920x1080 HDMI display connected to the board with an IMX-MIPI-HDMI adapter was used). NOTE: depending on the display, you might want to change the config in "/etc/xdg/weston/weston.ini". The demo was tested by uncommenting the following section in this file:
[output]
name=HDMI-A-1
mode=1920x1080@60
transform=90
5. Connect the MIPI-CSI camera to the board.
6. Connect the Synaptics microphone to the board using the 60-pin connector with a ribbon. SAI3 is used for record and playback on the Cortex-M4. The following pins are used:
Pin 44 (connector) <-> I2S_TX_Data1 (mic board)
Pin 43 (connector) <-> I2S_TX_LRCLK (mic board)
Pin 41 (connector) <-> I2S_TX_CLK (mic board)
Pin 60 (connector) <-> GND (mic board)
7. Use U-Boot commands to run the demo .bin file. For details, please refer to "Getting Started with MCUXpresso SDK for i.MX 8M Mini.pdf".
8. After running the demo .bin, use the "boot" command to boot the kernel on the A core terminal.
9. After the kernel has booted, log in as "root".
10. After login, make sure the imx_rpmsg_pingpong kernel module is loaded (lsmod) or insert it (modprobe imx_rpmsg_pingpong).

Run the Demo
Start keyword spotting on the Cortex-M4. Stop in U-Boot and run the eiq-kws.bin executable in DDR:
u-boot=>fatload mmc 0 0x80000000 eiq-kws.bin
u-boot=>dcache flush
u-boot=>bootaux 0x80000000
u-boot=>boot
After the boot process succeeds, the ARM Cortex-M4 terminal displays the following information:
RPMSG Ping-Pong FreeRTOS RTOS API Demo...
RPMSG Share Base Addr is 0xb8000000
While the kernel boots, the ARM Cortex-M4 terminal displays the following information:
Link is up!
Nameservice announce sent.
Start face recognition on the Cortex-A. Insert the updated rpmsg driver:
$: modprobe imx_rpmsg_pingpong
After the Linux RPMsg pingpong module is installed, the ARM Cortex-M4 terminal displays the following information:
Looping forever...
Waiting for ping...
Sending pong...
96% go
First time only:
$: cd ~/eiq-heterogenous-cortexa
$: python3 wrap_migrate.py
$: python3 wrap_createsuperuser.py
Start:
$: cd ~/eiq-heterogenous-cortexa
$: python3 manage.py runserver 0.0.0.0:8000 --noreload &
$: /opt/src/bin/src
NOTE: the first command starts the Django server; the second shows the pin pad on the display.
Browser access from the host PC:
- http://$BOARD_IP:8000/dashboard/: dashboard for managing users and viewing access logs
- http://$BOARD_IP:8000/admin/: manage the users database
View full article
Two new LCD panels for i.MX RT EVKs are now available. However, these new panels are not supported by the i.MX RT1160/RT1170 eIQ demos in MCUXpresso SDK 2.11, so some changes need to be made to use them.

For i.MX RT1050/RT1060/RT1064 eIQ demos in MCUXpresso SDK 2.11 that use the LCD: The eIQ demos are only configured for the original panel. However, because the eIQ demos do not use the touch controller, all eIQ demos for i.MX RT1050/1060/1064 will work fine with both the original and new LCD panels without any changes.

For i.MX RT1160/RT1170 eIQ demos in MCUXpresso SDK 2.11 that use the LCD: The eIQ demos are still only configured for the original panel. MCUXpresso SDK 2.12 will support both panels when it is released later this summer. In the meantime, for those who have the new LCD panel, some changes need to be made to the eIQ demos for i.MX RT1160/RT1170, otherwise you will just get a black or blank screen.
1. Unzip the MCUXpresso SDK if not done so already.
2. Open an eIQ project.
3. Find the directory the eIQ project is located in by right-clicking on the project name and selecting Utilities->Open directory browser here.
4. Copy both the fsl_hx8394.c and fsl_hx8394.h files found in \SDK_2_11_1_MIMXRT1170-EVK\components\video\display\hx8394\ into your eIQ project. You can place them in the video folder, which would typically be located at C:\Users\username\Documents\MCUXpressoIDE_11.5.0_7232\workspace\eiq_demo_name\video
5. Overwrite the eiq_display_conf.c and eiq_display_conf.h files in the eIQ project with the updated versions attached to this post. Typically these files would be located at C:\Users\username\Documents\MCUXpressoIDE_11.5.0_7232\workspace\eiq_demo_name\source\video
6. Compile the project as normal and the eIQ demo will now work with the new LCD panel for RT1160/RT1170.
View full article
When comparing NPU with CPU performance on the i.MX 8M Plus, the NPU can at first appear to have a much longer inference time. This is because the ML accelerator spends additional time performing overall initialization steps. This initialization phase is known as warmup and is necessary only once, at the beginning of the application. After this step, inference is executed in a truly accelerated manner, as expected for a dedicated NPU. The purpose of this document is to clarify the impact of the warmup time on overall performance.
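A simple way to observe this effect is to time the first inference separately from the following ones. The sketch below is an illustration only; it assumes a TensorFlow Lite model and the TensorFlow Lite Python bindings (shown here as tflite_runtime), with any NPU delegate configured as described in the eIQ BSP documentation:

# Sketch: measure warmup (first inference) vs. steady-state inference time.
# "model.tflite" is a placeholder; on the NPU the first invoke() includes the
# one-time graph compilation/initialization, later invocations do not.
import time
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
data = np.zeros(inp["shape"], dtype=inp["dtype"])

interpreter.set_tensor(inp["index"], data)
start = time.monotonic()
interpreter.invoke()                      # warmup: includes accelerator initialization
print("warmup inference: %.1f ms" % ((time.monotonic() - start) * 1000))

times = []
for _ in range(10):
    interpreter.set_tensor(inp["index"], data)
    start = time.monotonic()
    interpreter.invoke()                  # steady state: accelerated inference
    times.append((time.monotonic() - start) * 1000)
print("average steady-state inference: %.1f ms" % (sum(times) / len(times)))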
View full article
This Docker container is designed to enable portability and leverage the i.MX 8 series GPUs. It adds flexibility on top of the Yocto image and allows the use of machine learning libraries available in Ubuntu, which is otherwise difficult. Using Docker, a user can develop and prototype GPU applications and then ship and run them anywhere using the container. This app note describes how to enable this Docker container. The container is a wrapper that provides an application with the necessary dependencies to execute code on the GPU. This has the potential to greatly simplify many customer developments for Linux: the Yocto BSP is kept intact, while customers can develop applications using widely available neural network frameworks and libraries and at the same time leverage the GPU without compromising on performance. It is not quite as straightforward as a full Debian environment, but it is still easy to adopt.
View full article
The eIQ demos for i.MX RT use arrays for input data for inferencing. The attached guide and scripts describe how to create custom input data (both images and audio) for use with the eIQ examples.
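The idea behind those scripts is that the input data is resized to the model's input dimensions and written out as a C array that can be compiled into the project. The following is a rough sketch of that conversion for an image input (it is not the attached script itself; file names, the array name, and the 128x128 RGB input size are placeholders):

# Sketch: turn an image into a C uint8 array for use as static input data.
# File names, the array name, and the input size are illustrative assumptions.
from PIL import Image
import numpy as np

img = Image.open("input.jpg").convert("RGB").resize((128, 128))
pixels = np.asarray(img, dtype=np.uint8).flatten()   # HWC order, one byte per channel

with open("input_data.h", "w") as f:
    f.write("static const unsigned char input_data[] = {\n")
    for i in range(0, len(pixels), 12):
        row = ", ".join(str(v) for v in pixels[i:i + 12])
        f.write("  " + row + ",\n")
    f.write("};\n")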
View full article
The attached project enables users to capture and save the camera data captured by an i.MXRT1060-EVK board onto a microSD card. This project does not do inferencing of a model. Instead, it is meant to generate images that can then be used on a PC for training a model. The images are saved in RGB NHWC format as binary files on the microSD card, and then a Python script running on the PC can convert those binary files into the PNG image format.

Software requirements
====================
- MCUXpresso IDE 11.2.x
- MCUXpresso SDK for RT1060

Hardware requirements
=====================
- Micro USB cable
- IMXRT1060-EVK board with included camera + RK043FN02H-CT LCD
- MicroSD card
- Personal computer (Windows)
- MicroSD card reader
- Python 3.x installed

Board settings
==============
Camera and LCD connected to i.MXRT1060-EVK

Prepare the demo
================
1. Insert a microSD card into the microSD card slot on the i.MXRT1060-EVK (J39).
2. Open MCUXpresso IDE 11.2 and import the project using "Import project(s) from file system.." from the Quickstart panel. Use the "Archive" option to select the zip file that contains this project.
3. If desired, two #define values can be modified in camera_capture.c to adjust the captured image size:
#define EXTRACT_HEIGHT 256 //Max EXTRACT_HEIGHT value possible is 271 (due to border drawing)
#define EXTRACT_WIDTH 256 //Max EXTRACT_WIDTH value possible is 480
4. Build the project by clicking on "Build" in the Quickstart Panel.
5. Connect a USB cable between the host PC and the OpenSDA port (J41) on the target board.
6. Open a serial terminal with the following settings:
- 115200 baud rate
- 8 data bits
- No parity
- One stop bit
- No flow control
7. Download the program to the target board by clicking on "Debug" in the Quickstart Panel.
8. Click on the "Resume" icon to begin running the demo.

Running the demo
================
The terminal will ask for a classification name. This will create a new directory on the SD card with that name. The name is limited to 5 characters because the FATFS file system supports only 8 characters in a file name, and three of those characters are used to number the images. For best results, the image should be centered in the selection rectangle and nearly (but not completely) fill up the whole rectangle. The camera should be stabilized with your finger or by some other means to prevent shaking. Also ensure the camera lens has been focused as described in the instructions for connecting the camera and LCD. While running the demo, press the 'c' key to enter a new classification name, or press 'q' to quit the program and remove the microSD card. Transfer the .bin files created on the SD card to your PC, into the same directory as the Python script found in the "scripts" directory, to convert the images to PNG format (a minimal sketch of this conversion is shown after the terminal output below). If the captured image is a square (width==height), the script can be called with:
python convert_image.py directory_name
which will convert all the .BIN files in the specified directory to PNG files. If the captured image is not a square, the width and height can be specified on the command line:
python convert_image.py directory_name width height

Terminal Output
==============
Camera SD Card Capture
Extracted Image: Height x Width: 256x256
Please insert a card into board.
Card inserted.
Mounting SD Card
Enter name of new class (must be less than 5 characters): test
Creating directory test......
Press any key to capture image.
Press 'c' to change class or 'q' to quit
Writing file /test/test001.bin......
Write Complete
Press any key to capture image.
Press 'c' to change class or 'q' to quit
Remove SD Card
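For reference, the PC-side conversion essentially reads the raw RGB bytes back into a height x width x 3 array and saves it as a PNG. The sketch below illustrates that single step under the assumption that each .bin file holds one RGB (HWC, uint8) frame; file names are placeholders and the real convert_image.py from the "scripts" directory should be used with the demo:

# Sketch: convert one raw RGB (HWC, uint8) .bin capture into a PNG file.
# Width/height must match the EXTRACT_WIDTH/EXTRACT_HEIGHT used on the board.
import numpy as np
from PIL import Image

width, height = 256, 256
raw = np.fromfile("test001.bin", dtype=np.uint8)
img = raw.reshape((height, width, 3))     # NHWC layout with a single image
Image.fromarray(img, "RGB").save("test001.png")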
View full article
The eIQ Glow neural network compiler software for i.MX RT devices found in the MCUXpresso SDK package can be ported to other microcontroller devices in the RT family, as well as to some LPC and Kinetis devices. Glow supports compiling machine learning models for Cortex-M4, Cortex-M7, and Cortex-M33 cores out of the box. Because inferencing simply means doing millions of multiply-and-accumulate math calculations - the dominant operation when processing any neural network - most embedded microcontrollers can support inferencing of a neural network model. There is no special hardware or module required to do the inferencing. However, specialized ML hardware accelerators, high core clock speeds, and fast memory can drastically reduce inference time. The minimum hardware requirements are also extremely dependent on the particular model being used. Determining if a particular model can run on a specific device comes down to:
- How long the inference will take to run. The same model will take much longer to run on less powerful devices. The maximum acceptable inference time depends on your particular application and your particular model.
- Whether there is enough non-volatile memory to store the weights, the model itself, and the inference engine.
- Whether there is enough RAM to keep track of the model's intermediate calculations and output.
The minimum memory requirements for a particular model when using Glow can be found with a simple formula using numbers found in the Glow bundle header file after compiling your model (a worked example follows below):
Flash: Base Project + CONSTANT_MEM_SIZE + .o object file
RAM: Base Project + MUTABLE_MEM_SIZE + ACTIVATIONS_MEM_SIZE
More details can be found in the Glow Memory Usage app note.
The attached guide walks through how to port Glow to the LPC55S69 family based on the Cortex-M33 core. Similar steps can be done to port Glow to other NXP microcontroller devices. This guide is made available as a reference for users interested in exploring Glow on other devices not currently supported in the MCUXpresso SDK.
These other eIQ porting guides might also be of interest: TensorFlow Lite Porting Guide for RT685
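Returning to the memory formula above, here is a worked example of the arithmetic; all the byte counts are purely illustrative values for a hypothetical model and base project, not measurements from a specific device:

# Sketch: estimate flash/RAM needs from the Glow bundle header constants.
# All byte counts below are hypothetical example values.
CONSTANT_MEM_SIZE = 428_096      # weights placed in flash by the Glow bundle
MUTABLE_MEM_SIZE = 4_096         # model inputs/outputs
ACTIVATIONS_MEM_SIZE = 98_304    # scratch memory for intermediate results

base_flash = 60 * 1024           # example: base project code size
base_ram = 20 * 1024             # example: base project RAM usage
object_file = 24 * 1024          # example: size of the compiled .o bundle

flash_needed = base_flash + CONSTANT_MEM_SIZE + object_file
ram_needed = base_ram + MUTABLE_MEM_SIZE + ACTIVATIONS_MEM_SIZE
print("Estimated flash: %d KB" % (flash_needed // 1024))
print("Estimated RAM:   %d KB" % (ram_needed // 1024))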
View full article
The two attached demos use models that were compiled using the Glow AOT tools and a camera connected to the i.MXRT1060-EVK to generate data for inferencing. The default MCUXpresso SDK Glow demos run inference on static images; these demos expand those projects to do inferencing on camera data. Each demo uses the default model found in the SDK. A readme.txt file found in the /doc folder of each demo provides details for each demo, and there is a PDF available inside that same /doc folder with example images to point the camera at for inferencing.

Software requirements
====================
- MCUXpresso IDE 11.2.x
- MCUXpresso SDK for RT1060

Hardware requirements
=====================
- Micro USB cable
- Personal computer
- IMXRT1060-EVK board with included camera + RK043FN02H-CT LCD

Board settings
==============
Camera and LCD connected to i.MXRT1060-EVK

Prepare the demo
================
1. Install and open MCUXpresso IDE 11.2.
2. If not already done, import the RT1060 MCUXpresso SDK by dragging and dropping the zipped SDK file into the "Installed SDKs" tab.
3. Download one of the attached zip files and import the project using "Import project(s) from file system.." from the Quickstart panel. Use the "Archive" option to select the zip file that contains this project.
4. Build the project by clicking on "Build" in the Quickstart Panel.
5. Connect a USB cable between the host PC and the OpenSDA port (J41) on the target board.
6. Open a serial terminal with the following settings:
- 115200 baud rate
- 8 data bits
- No parity
- One stop bit
- No flow control
7. Download the program to the target board by clicking on "Debug" in the Quickstart Panel and then click on the "Resume" button in the Debug perspective that comes up to run the demo.

Running the demo
================
For the CIFAR10 demo: Use the camera to look at images of airplanes, ships, deer, etc. that can be recognized by the CIFAR10 model. The included PDF can be used for example images.
For the MNIST demo: Use the camera to look at handwritten digits, which can be recognized by the LeNet MNIST model. The included PDF can be used for example digits, or you can write your own.
For further details see the readme.txt file found inside each demo in the /doc directory. Also see the Glow Lab for i.MX RT for more details on how to compile neural network models with Glow.
View full article
Caffe and TensorFlow provide a set of pretrained models ready for inference. They can be used as-is, or retrained with your own dataset. Please check: TensorFlow Hosted Models, TensorFlow Model Zoo, and Caffe Model Zoo.
View full article
Transfer learning is one of the most important techniques in machine learning. It gives machine learning models the ability to apply past experience to learn to solve new problems more quickly and accurately. This approach is most commonly used in natural language processing and image recognition. However, even with transfer learning, if you don't have the right dataset, you will not get very far.

This application note aims to explain transfer learning and the importance of datasets in deep learning. The first part of the AN goes through the theoretical background of both topics. The second part describes a use case example based on the application from AN12603. It shows how a dataset of handwritten digits can be collected to match the input style of the handwritten digit recognition application. Afterwards, it illustrates how transfer learning can be used with a model trained on the original MNIST dataset to retrain it on the smaller custom dataset collected in the use case.

In the end, the AN shows that although handwritten digit recognition is a simple task for neural networks, it can still benefit from transfer learning. Training a model from scratch is slower and yields worse accuracy, especially if a very small number of examples is used for training.

Application note URL: https://www.nxp.com/docs/en/application-note/AN12892.pdf
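As a minimal illustration of the transfer learning step described above (freeze the layers trained on MNIST and retrain only the classifier head on a small custom dataset), a Keras sketch might look like the following. The model file name, layer split, and the randomly generated "custom dataset" are assumptions for illustration, not the exact recipe from the AN:

# Sketch: transfer learning by freezing a pretrained MNIST model's feature
# extractor and retraining only the final layers on a small custom dataset.
import numpy as np
import tensorflow as tf

base = tf.keras.models.load_model("mnist_model.h5")   # placeholder: model trained on MNIST
for layer in base.layers[:-2]:
    layer.trainable = False                            # freeze the learned feature extractor

base.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
             loss="sparse_categorical_crossentropy",
             metrics=["accuracy"])

# Placeholder custom dataset: replace with the collected handwritten digits.
x_custom = np.random.rand(200, 28, 28, 1).astype("float32")
y_custom = np.random.randint(0, 10, size=(200,))
base.fit(x_custom, y_custom, epochs=10, validation_split=0.2)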
View full article
TensorFlow provides a set of pretrained models ready for inference; please check Caffe and TensorFlow Pretrained Models. You can use a model as it is, or you can retrain the model with your own data to detect specific objects for your custom application. This post shows some useful links that can help you with this task.
Inception models: Training Custom Object Detector — TensorFlow Object Detection API tutorial documentation. The link above also shows the steps needed to prepare your data for the retraining process (image labeling).
MobileNet models: Train your own model with SSD MobileNet · ichbinblau/tfrecord_generator Wiki · GitHub. For the link above, you also need to follow the steps to prepare your data for the retraining process, as in the Inception retraining tutorial. Make sure you exported the needed PYTHONPATH variable:
export PYTHONPATH=$PYTHONPATH:/path/to/tf_models/models/research:/path/to/tf_models/models/research/slim
A few tips:
- Retraining a model will be faster than training a model from the beginning, but it can still take a long time to complete. It depends on many factors, such as the number of steps defined in the model's *.config file. You need to be aware of overfitting your model if your dataset is too small and the number of steps is too large. Also, TensorFlow saves checkpoints during the retraining process, which you can prepare for inference and test before the retraining process is over, to check whether the model is already good enough for your application. Please check "Exporting a Trained Inference Graph" in the Inception retraining tutorial and keep in mind that you can follow these steps before the training process is complete. Of course, early checkpoints may not be well trained.
- If you are running OpenCV DNN inference, you may need to run the following command to get the *.pbtxt file, where X corresponds to the number of classes trained in your model and tf_text_graph_ssd.py is an OpenCV DNN script (a quick way to check the result is sketched below this list):
python tf_text_graph_ssd.py --input frozen_inference_graph.pb --output frozen_inference_graph.pbtxt --num_classes X
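Once the .pbtxt file has been generated, a short test such as the following can confirm that the retrained detector loads and runs under OpenCV DNN. The test image name and the 0.5 score threshold are placeholders for this sketch:

# Sketch: sanity-check a retrained SSD model with OpenCV DNN using the
# frozen graph and the .pbtxt produced by tf_text_graph_ssd.py.
import cv2

net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb",
                                    "frozen_inference_graph.pbtxt")
image = cv2.imread("test.jpg")                      # placeholder test image
blob = cv2.dnn.blobFromImage(image, size=(300, 300), swapRB=True)
net.setInput(blob)
detections = net.forward()                          # shape: [1, 1, N, 7]
for det in detections[0, 0]:
    class_id, score = int(det[1]), float(det[2])
    if score > 0.5:
        print("class %d detected with score %.2f" % (class_id, score))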
View full article
The NXP BSP currently does not support running a Keras application directly on i.MX. Customers that use this approach must convert their Keras model into one of the supported inference engines in eIQ. In this post we will cover converting a Keras model (.h5) to a TfLite model (.tflite). Install the TensorFlow version that matches the eIQ TfLite support (you can find this information in the Linux User's Guide). For L4.19.35_1.0.0 the TfLite version is v1.12.0.
$ pip3 install tensorflow==1.12.0
Run the following commands in a Python 3 environment to convert the .h5 model to a .tflite model:
>>> from tensorflow.contrib import lite
>>> converter = lite.TFLiteConverter.from_keras_model_file('model.h5') # path to your model
>>> tfmodel = converter.convert()
>>> open("model.tflite", "wb").write(tfmodel)
The model can then be deployed and used by the TfLite inference engine in eIQ.
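Before deploying to the board, the converted model can be given a quick host-side check with the TensorFlow Lite interpreter from the same tensorflow.contrib.lite module. This is an optional verification sketch, assuming the same TensorFlow 1.12 installation and a placeholder zero-valued input:

# Sketch: quick host-side check that model.tflite loads and runs.
import numpy as np
from tensorflow.contrib import lite

interpreter = lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

dummy = np.zeros(inp["shape"], dtype=inp["dtype"])   # placeholder input tensor
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
print("output shape:", interpreter.get_tensor(out["index"]).shape)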
View full article
Getting Started with eIQ Software for i.MX Applications Processors Getting Started with eIQ for i.MX RT
View full article
This Lab 4 explains how to get started with the TensorFlow Lite application demo on an i.MX 8 board using the inference engines for eIQ Software.
eIQ Sample Apps - Overview
eIQ Sample Apps - Introduction
Get the source code available on Code Aurora: TensorFlow Lite MobileFaceNets MIPI/USB Camera
Face Detection Using OpenCV
This application demo uses Haar feature-based cascade classifiers for real-time face detection. The pre-trained Haar feature-based cascade classifier for faces, provided as an XML file, is already contained in OpenCV. The XML file for faces is stored in the opencv/data/haarcascades/ folder. It is also available on Code Aurora. Read Face Detection using Haar Cascades for more details.
TensorFlow Lite implementation for MobileFaceNets
The MobileFaceNets model is re-trained with a smaller batch size and input size to get higher performance on a host PC. The trained model is loaded as a source file in this demo.
Setting Up the Board
Step 1 - Download the demo from eIQ Sample Apps and put it in the /opt/tflite folder. Then enter the src folder:
root@imx8mmevk:~# cd /opt/tflite/examples-tflite/1-example/src/
root@imx8mmevk:/opt/tflite/examples-tflite/1-example/src#
This folder should include these files:
.
├── face_detect_helpers.cpp
├── face_detect_helpers.h
├── face_detect_helpers_impl.h
├── face_recognition.cpp
├── face_recognition.h
├── haarcascade_frontalface_alt.xml
├── Makefile
├── mfn.h
├── profiling.h
└── ThreadPool.h
Step 2 - Compile the source code on the board:
root@imx8mmevk:/opt/tflite/examples-tflite/1-example/src# make
Step 3 - Run the demo:
root@imx8mmevk:/opt/tflite/examples-tflite/1-example/src# ./FaceRecognition -c 0 -h 0.85
NOTE: -c is used to specify the camera index. '0' means the MIPI/USB camera is mounted on /dev/video0. -h is a threshold for the prediction score.
Step 4 - Add a new person to the face data set.
When the demo is running, it detects the largest face in real time. Once a face is detected, you can use the keyboard on the right of the GUI to input the new person's name, then click 'Add new person' to add the face to the data set.
In brief:
1. Detect the face.
2. Input the new person's name.
3. Click 'Add new person'.
NOTE: Once new faces are added, a folder named 'data' is created in the current directory. If you want to remove a new face from the data set, just delete it in 'data'.
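For readers new to Haar cascades, the detection stage the C++ demo performs is conceptually the same as this short Python/OpenCV sketch (camera index and XML file name as in the demo above; this is an illustration, not the demo's own code):

# Sketch: Haar cascade face detection on camera frames, equivalent in spirit
# to the detection stage of the FaceRecognition demo.
import cv2

cascade = cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")
cap = cv2.VideoCapture(0)                 # camera index, as with "-c 0" above

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()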
View full article