1. INTRODUCTION

In this article we discuss the hardware and software prerequisites needed to complete this course successfully. By the end of this session, the hardware setup and software environment should be ready for running deep learning examples in simulation and on the real target, the NXP S32V234 vision processor. This article explains the following topics:
- How to configure the SBC-S32V234 Evaluation Board for machine vision and machine learning applications;
- How to install the NXP Vision Toolbox from the MATLAB Add-On Explorer via MathWorks File Exchange;
- How to generate a valid license from the NXP website, free of charge, to activate the NXP Vision Toolbox in MATLAB;
- How to install the NXP Vision SDK package, which contains the GCC and APU compilers, plus optimized kernels and functions used for HW acceleration on S32V processors;
- How to set up your PC environment for successful cross-compilation of the code generated from MATLAB;
- How to prepare an SD card with the NXP official pre-built u-boot and Linux images required to boot the HW platform using the Vision SDK for S32V;
- How to install and configure the additional MATLAB toolboxes that are required for this course.

Before we start with specific details, please make yourself familiar with the main features supported by the NXP Vision Toolbox.
The NXP Vision Toolbox for S32V234 is an NXP proprietary tool designed to help you:
- Test vision algorithms using NXP Vision SDK functions in the MATLAB environment for complete development, simulation, and execution on NXP targets by generating C++ code directly from m-scripts using nxpvt_codegen();
- Program the NXP APEX cores directly from the MATLAB environment using APEX Core Framework graphs;
- Configure NXP S32V targets to enable code deployment directly from the MATLAB environment and execute vision algorithms on NXP S32V evaluation boards;
- Quickly evaluate NXP solutions using ready-to-run examples derived from the MATLAB Computer Vision System Toolbox and Deep Learning Toolbox;
- Build examples using pretrained or retrained MATLAB convolutional neural networks and deploy them on NXP S32V234 boards with just a few lines of code.

2. SBC-S32V234 Hardware Overview

Throughout this course we are going to use the SBC variant of the S32V evaluation boards. If you have the more expensive S32V234EVB2 version you can still follow these articles; the only differences are in the HW setup, which is covered in the toolbox documentation and help (see the Quick Start Guide, also attached here for your reference). The SBC-S32V234 is a cost-competitive evaluation board and development platform engineered for high-performance, safe, computation-intensive front vision, surround vision, and sensor fusion applications. Developed in partnership with MicroSys and based on the Arm® Cortex®-A53 based S32V processors, it has an efficient form factor while covering most of the use cases available for the S32V234. The SBC-S32V234 is the recommended starting evaluation board for S32V.
The SBC-S32V234 main features are:
- Video input: 2 x MIPI-CSI2; video output: RGB-to-HDMI converter
- Communication: Gigabit Ethernet, 1 x PCIe 2.0, 2 x CAN, 1 x LIN, and 1 x UART
- 2 GiB DDR memory, plus an SD card slot and 16 GiB eMMC for NVM
- 10-pin JTAG debug connector
- 12 V power supply connector

For a more comprehensive view and a more detailed description of the hardware specifications, please check the SBC User Manual or visit this NXP webpage for the S32V Data Sheet. For a Quick Start Guide for the S32V234-SBC board please check this link: https://www.nxp.com/docs/en/quick-reference-guide/Quick-Start-Guide-SBC-S32V234.pdf

For this course on machine learning we are going to use the following peripherals and components, so make sure you have all of them available and are familiar with their intended purpose:
- At least a 4 GB class 10 micro SD card, which will be configured in the next section to boot and initialize the platform. This needs to be inserted into the micro SD card slot on the SBC. In addition, for the initial configuration you need an SD card reader to connect it to your PC;
- The S32V-SONYCAM camera, used for capturing the video frames for computer vision processing and deep learning applications. The camera needs to be inserted into the MIPI-A port on the SBC board;
- A CAT-5 Ethernet cable, used for downloading the application via TCP/IP or for streaming video frames from the S32V on-board camera to be processed in MATLAB;
- A micro USB cable that connects the SBC-S32V234 to your host PC, used for finding the board's IP address and for other checks over the UART terminal;
- An LCD monitor connected to the S32V234 SBC via an HDMI cable. The monitor will be used to display the computation results;
- A 12 V power supply.

For more details please review the SBC-S32V User Manual.

2.
Software Overview

Please read this section carefully, since it contains tips and tricks for getting a working setup that can generate code from MATLAB scripts for the NXP S32V and also lets you perform various interactions with the real target. Due to the complexity of the software used to configure and control this processor, we have to deal with three types of deliveries:

Software delivered by MathWorks:
- MATLAB (we assume you have already installed and configured this). As a hint, please make sure MATLAB is installed in a path without spaces;
- MATLAB Coder, the key component that allows us to generate the C++ code that will be cross-compiled and executed on the target;
- Image Processing Toolbox and Computer Vision System Toolbox, which allow us to perform various operations on the data captured from the camera;
- Deep Learning Toolbox, which allows us to use pre-trained networks or re-train and re-purpose them for other scenarios;
- various other support packages that are detailed in the next chapters.

Software delivered by NXP:
- NXP Vision Toolbox, the S32V embedded target support and plug-in for the MATLAB environment that enables code generation and deployment. Make sure you install this toolbox in MATLAB R2018b and in a path without spaces;
- NXP Vision SDK, the primary source of optimized functions, kernels, libraries, and cross compilers for S32V. Make sure you install this package in a path without spaces, otherwise the cross-compilation will fail.

Open source software:
- the ARM Compute Library;
- various programs required to perform generic tasks: a PuTTY UART terminal, an SD card formatter, etc.

By now you should have a better picture of why some manual steps are needed to configure the host PC to address all software dependencies. So, let's start the process...

2.1 Installation and Configuration for NXP Support Package for S32V234

For convenience, a step-by-step Installer Guide is available on MathWorks' File Exchange website.
Open MATLAB and select Get Add-Ons. Once the Add-On Explorer window opens, search for "nxp vision toolbox s32v". Select the NXP Support Package for S32V234 and click the Add button to install the Installer Guide into your MATLAB instance. Wait until the toolbox is installed, then click the Open Folder button. Run the NXP_Support_Package_S32V234 command in your MATLAB console to start the Installer Guide.

The NXP Support Package for S32V234 - Installer Guide user interface is started. The Installer Guide contains instructions for downloading, installing, and verifying all software components required for developing vision applications with MATLAB for NXP S32V234 automotive vision processors:
- Steps to download, install, and verify the NXP Vision Toolbox for S32V234
- Steps to generate, activate, and verify the license for the NXP Vision Toolbox for S32V234
- Steps to download and install the NXP Vision SDK package
- Steps to configure the software environment for code generation
- Steps to download additional software

There are two main advantages to using this Installer Guide:
- Each step's completion is automatically checked by the tool. If the action completes successfully, the tool marks it green. If a particular step cannot be verified, the tool issues a warning or error and highlights in red the step that needs more attention from the user's side.
- Future updates will be made available via this online toolbox. If you wish to keep your software up to date, install it via MATLAB Add-Ons, and once a new update is available your MATLAB instance will notify you.

The next screen capture shows how the Installer Guide notifies the user of successful or failed actions. At the end of the installation, all push buttons should be green.
You can obtain the NXP Vision Toolbox for S32V234 by:
- Using the Installer Guide "Go To NXP Download Site" button
- Going directly to your NXP Software Account and downloading the toolbox using this link

No matter which option is used, the NXP Vision Toolbox for S32V234 installation steps are similar: once you have the toolbox on your PC, double-click on the *.mltbx file to start the MATLAB Add-Ons installer, which automatically begins the installation process. You will be prompted with the following options:
- The NXP Vision Toolbox Installation Wizard dialog will appear. Click "Install" to proceed.
- Indicate acceptance of the NXP Software License Agreement by selecting "I agree to the terms of the license" to proceed. Click "OK" to start the MATLAB installation process.

The rest of the process is silent and under MATLAB's control. All the files will be automatically copied into the default Add-Ons folder within MATLAB. The default location can be changed prior to installation by changing the Add-Ons path in MATLAB Preferences. After a couple of seconds, the NXP Vision Toolbox should be visible as a new add-on. More details about the NXP Vision Toolbox can be found by clicking on View Details.

NXP Vision Toolbox documentation, help, and examples are fully integrated with the MATLAB development environment. Get more details by accessing the standard Help and the Supplemental Software section. In case you are using the Installer Guide, you can check whether the NXP Vision Toolbox is installed correctly in your MATLAB environment by simply clicking on the "Verify Vision Toolbox Installation" button. After this step, all buttons related to Vision Toolbox Step 1 should be green.

2.2 License Generation and Activation

The NXP Vision Toolbox for S32V234 is available free of charge; however, a valid license is required.
You can obtain the NXP Vision Toolbox for S32V234 license free of charge by:
- Using the Installer Guide "Generate License File" button
- Going directly to your NXP Software Account and generating the license using this link

Perform the following steps to obtain the NXP Vision Toolbox for S32V234 license (in this section we presume you already logged into your NXP account to download the toolbox prior to the license generation step):
- For the first-time log-in, the "Software Terms and Conditions" page will be displayed. Click on the "I agree" button to consent to the software license agreement.
- Click on the "License Keys" tab.
- Verify that the correct tool and version are identified, then check the box and click on "Generate".
- Select Disk Serial Number or Ethernet address as the "Node Host ID". If you know neither your Disk Serial Number nor the Ethernet address, check the link available on this page with details about license generation.
- Enter a name for the license to help manage licenses in case you need to use the Vision Toolbox on multiple computers. (Optional)
- Click on the "Generate" button to get the license.
- Verify that the information is correct: toolbox version, expiration date, Node Host ID.
- Either click on "Save All", or copy and paste the file into a text editor and save it as "license.dat" in the "Vision Toolbox installed directory\license" folder.

In case you are using the Installer Guide, you can save the license file anywhere and use the "Activate NXP Vision Toolbox" option to make sure the license is copied correctly to the appropriate toolbox location. Check whether the license file is installed correctly by using the "Verify Vision Toolbox License" button. If everything is OK, the Installer Guide will confirm the action. Alternatively, you can check from the command line whether the license for the NXP Vision Toolbox is activated. Run the command nxpvt_license_check. If there are issues with the license, this command will return the root cause.
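For reference, the command-line check is a single call (the exact output format depends on the toolbox version):

```matlab
% Check the NXP Vision Toolbox license status from the MATLAB prompt.
% On failure the command reports the root cause (for example a missing
% license.dat file or a host ID mismatch).
nxpvt_license_check
```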
2.3 Installation of NXP Vision SDK and Build Tools

All the code generated by the NXP Vision Toolbox is based on the S32V234 Vision SDK package. This software package is also free of charge and, apart from optimized kernels and libraries for the S32V automotive vision processors, it also contains the build tools to cross-compile the MATLAB-generated code for the Arm A53 and APEX cores. You can obtain the S32V234 Vision SDK free of charge by:
- Using the Installer Guide "Go To VSDK Download Site" button
- Going directly to the NXP website

Perform the following steps to obtain and install the S32V234 Vision SDK and NXP build tools:
- Download the Vision SDK RTM v1.3.0 to your PC. Due to the size of the package this might take a while.
- Once the VisionSDK_S32V2_RTM_1_3_0.exe download is finished, select the "Install VSDK and A53/APU Compilers" option in the Installer Guide UI. Select the exe file and wait for the Vision SDK InstallAnywhere installer to start.
- Make sure you follow all the steps and install:
  - NXP APU Compiler v1.0 - used to compile the generated code for the APEX vision accelerator
  - NXP ARM GNU Compilers - used to compile the generated code for the Arm A53
  - MSYS2 - used to configure the bootable Linux image and to download the actual vision application to the S32V234 Evaluation Board

2.4 Environment Setup

The last step required for the software configuration is to set two system or user environment variables, APU_TOOLS and S32V234_SDK_ROOT, that point to:

APU_TOOLS = C:/NXP/APU_Compiler_v1.0
S32V234_SDK_ROOT = C:/NXP/VisionSDK_S32V2_RTM_1_3_0/s32v234_sdk

Ensure the system or user environment variables corresponding to the compiler(s) you have installed are set to the compiler path values as shown below. The paths shown are for illustration; your installation path may differ. Once the environment variables are set up, you will need to restart MATLAB to pick them up.
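Before starting code generation, it can be worth confirming from MATLAB that the variables are actually visible. A minimal sketch (the paths in the comments are illustrative):

```matlab
% Sanity-check the cross-compilation environment from MATLAB.
% Remember that MATLAB must be restarted after changing the variables.
apu = getenv('APU_TOOLS');          % expected e.g. C:/NXP/APU_Compiler_v1.0
sdk = getenv('S32V234_SDK_ROOT');   % expected e.g. C:/NXP/VisionSDK_S32V2_RTM_1_3_0/s32v234_sdk
assert(~isempty(apu), 'APU_TOOLS is not set');
assert(~isempty(sdk), 'S32V234_SDK_ROOT is not set');
assert(~contains(sdk, ' '), 'S32V234_SDK_ROOT must not contain spaces');
```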
An alternative to setting the system paths manually is the "Set the environment variables" option in the NXP Vision Toolbox support package installer. If MATLAB is opened with administrator rights, "Set system wide" can be used to set the system variables. Otherwise (in most cases), use "Set user wide" to set the user environment variables.

In order to use convolutional neural networks, the ARM_COMPUTELIB variable should also be set to point to the top of the arm_compute installation. More on the ARM Compute Library installation in the following chapter.

2.5 SD-Card Configuration

The entire procedure for configuring and booting up the platform is described in the Vision SDK manuals. Unfortunately, not everyone has access to a host PC with a Linux OS to configure an SD card (formatting, u-boot, filesystem, Linux image copy). To do this from Windows, you must log into your NXP account at www.nxp.com and download the SD card images:
- From the My account page go to "Software Licensing and Support", then click on "View accounts" in the Software accounts panel
- Click on the "Automotive SW - Vision Software" link
- Select the latest SDK (at the time of writing, SW32V23-VSDK001-RTM-1.3.0)
- Agree with the terms and conditions
- Then click on the SD card image based on Yocto

Unpack the downloaded file; the 'build_content/v234_linux_build/s32v234sbc/' folder will contain the SBC SD card image, which can be written directly from MATLAB. Follow the next steps to create a bootable SD card for the S32V234 SBC evaluation board: begin by inserting a microSD card with at least 4 GB capacity into your host PC running Windows.
The Windows OS should recognize the SD card and assign it a drive letter (e.g. "D:"). From the MATLAB command window run the command:

nxpvt_create_target('sdcard-sbc.tar.bz2', 'D:');

This example assumes you have untarred the SD card archive downloaded from the NXP website and that you run the nxpvt_create_target command from the same directory as the sdcard-sbc.tar.bz2 image. This command will format the card and then copy all the required files from the *.bz2 image to the SD card for booting Linux on the S32V234 SBC. The copying process might take a while depending on the SD card class. During the process the following message will be shown on the screen. Wait until the copying process is finished and the "Image writing done" message is displayed at the MATLAB command prompt.

After the copying process is completed, you should see an additional drive mapped on your system (e.g. E:) that cannot be accessed, since it uses an ext3 file system. Check that the initially mapped drive (e.g. D:) contains the Image and s32v234-sbc files. Remove the SD card from the host PC and check the next section for details on how to boot up the S32V234 SBC Evaluation Board.

S32V234 Evaluation Board Configuration

Before running any example on the S32V234 SBC you need to perform the following steps:
- Insert the micro SD card that has been configured in the previous section into the micro SD card slot
- Insert the Sony camera into the MIPI-A port. The Sony camera is used for capturing the video frames used for computer vision processing
- Insert an Ethernet cable into the ETH port. This will be used for downloading the application via TCP/IP
- Connect the S32V234 SBC to your host PC via a micro USB cable. This is used for finding the IP address of the board.
- Connect an LCD monitor to the S32V234 SBC via an HDMI cable
- Power on the board

2.6 Setting Up Additional Toolboxes and Utilities

The ARM Compute Library is a collection of low-level functions optimized for Arm CPU and GPU architectures, targeted at image processing, computer vision, and machine learning. It is available free of charge under a permissive MIT open source license. The library's collection of functions includes:
- Basic arithmetic, mathematical, and binary operator functions
- Color manipulation (conversion, channel extraction, and more)
- Convolution filters (Sobel, Gaussian, and more)
- Canny edge, Harris corners, optical flow, and more
- Pyramids (such as Laplacians)
- HOG (Histogram of Oriented Gradients)
- SVM (Support Vector Machines)
- H/SGEMM (half- and single-precision general matrix multiply)
- Convolutional neural network building blocks (activation, convolution, fully connected, locally connected, normalization, pooling, soft-max)

Download the ARM Compute Library from this site: https://github.com/ARM-software/ComputeLibrary

The toolbox examples were built using version 18.03, so download this one to avoid any backward or forward compatibility issues - scroll down until you find the Binaries section, as in the image below. After downloading and unpacking the ARM Compute archive, the ARM_COMPUTELIB variable should point to the top of the installation folder. The ARM Compute Library should contain the linux-arm-v8a-neon folder with the correct libraries.

To be able to run the CNN examples in the toolbox, the following MATLAB Add-Ons should be installed:
- Deep Learning Toolbox
- Deep Learning Toolbox™ Model for GoogLeNet Network
- Deep Learning Toolbox™ Model for AlexNet Network
- Deep Learning Toolbox™ Model for SqueezeNet Network
- MATLAB Coder Interface for Deep Learning Libraries

3. Conclusions

At this point you should be able to run all examples in the NXP Vision Toolbox, including the ones containing convolutional neural networks.
Your setup should now be configured with:
- NXP Vision SDK package (libraries and compilers)
- NXP Vision Toolbox MATLAB add-on for the S32V processor
- MATLAB environment ready for CNN simulation and code generation
- SBC-S32V234 Evaluation Board ready to run applications from MATLAB
In this article we are going to discuss the following topics:
- how to use pre-trained CNNs in MATLAB
- how to build a simple program to classify objects using a CNN
- how to compare 3 types of CNN based on accuracy and speed
- how to use NXP's SBC-S32V234 Evaluation Board ISP camera to feed data into MATLAB simulations in real time

1. INTRODUCTION

The NXP Vision Toolbox offers support for integrating the following into your final application:
- CNNs designed and built from scratch in MATLAB;
- pre-trained CNNs provided by MATLAB;
- CNNs imported into MATLAB from other deep learning frameworks via the ONNX format.

The NXP Vision Toolbox has an intuitive m-script API and a custom built-in wrapper that enables both simulation and deployment of MATLAB-supported CNNs on the NXP S32V embedded processors, for rapid prototyping and evaluation purposes. As mentioned in the beginning, the main focus here will be on MATLAB simulation of CNN algorithms. The NXP Vision Toolbox API allows users to create a neural network that can then be executed in:
- MATLAB simulation;
- real time on NXP S32V hardware;

...using an easy syntax: nxpvt.CNN(...). For more information about the nxpvt.CNN object, type help nxpvt.CNN at the MATLAB command line. When you want to create a CNN object from a pre-saved .mat file that contains a MATLAB-formatted and MATLAB-supported neural network, you just need to specify the .mat file and the input size. You then need to load the class names (also stored in a .mat file). Currently, the NXP Vision Toolbox supports only 3-channel neural networks, but we are planning to add support for a variable number of channels in the future.
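Putting the above together, creating a network wrapper looks roughly like this (the constructor and method signatures are reconstructed from this article's description; check help nxpvt.CNN for the exact API):

```matlab
% Wrap a pre-saved, MATLAB-supported network for simulation or deployment.
% The .mat file names and the 227x227x3 input size (AlexNet) are examples.
cnn = nxpvt.CNN('alexnet.mat', [227 227 3]);  % saved network + input size
cnn.loadClassNames('alexnet_classes.mat');    % class names, also a .mat file
```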
You can find out the input size of a MATLAB CNN very easily by querying the network's first (image input) layer. Additional information about the CNN, such as the class names, can be found by looking at the last (classification) layer of the network. You then save the network to one .mat file and the class names to another. The result will be two .mat files in the current folder: alexnet.mat, which contains the actual network with all its layers, and alexnet_classes.mat, which contains the class names. As mentioned below, the toolbox also provides helper functions for achieving this.

2. USING PRE-TRAINED MODELS

You can find a number of CNN examples in the vision_toolbox\examples\cnn folder. The NXP Vision Toolbox has three CNN examples which can detect objects in an image, using the webcam on your laptop or by gathering images from the MIPI-CSI-attached camera on the S32V board. Below is a simple m-script that implements object classification based on the AlexNet CNN. In this example the input image is obtained from the webcam using the built-in NXP Vision Toolbox wrapper nxpvt.webcam(). Then, a new alxNet object is created based on the pre-trained alexnet CNN provided by MATLAB. The frames captured from the webcam are then processed one by one in an infinite loop, by feeding them to the predict() method associated with the object we created at the beginning of the program. The first five predictions are then displayed on top of the frame acquired from the webcam using another built-in NXP Vision Toolbox wrapper, nxpvt.imshow. Since this is simulation only, the program could be written without any nxpvt. prefixes, but in preparation for the code generation module we think it is better to get used to this notation as soon as possible.
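A sketch of the whole flow, assuming the AlexNet support package is installed (the inspection and save calls are standard Deep Learning Toolbox; the nxpvt method signatures and the predict output format are reconstructed from this article's description, so treat them as assumptions):

```matlab
% Step 1: inspect and export the pre-trained network
% (nxpvt.save_cnn_to_file achieves the same thing).
net = alexnet;                           % pre-trained AlexNet
inputSize = net.Layers(1).InputSize;     % first (image input) layer: [227 227 3]
classes = net.Layers(end).ClassNames;    % last (classification) layer
save('alexnet.mat', 'net');
save('alexnet_classes.mat', 'classes');

% Step 2: the simulation loop described above.
cam = nxpvt.webcam();                        % laptop webcam wrapper
cnn = nxpvt.CNN('alexnet.mat', inputSize);   % wrap the saved network
cnn.loadClassNames('alexnet_classes.mat');
while true
    img = cam.snapshot();                    % one frame from the webcam
    [classNames, scores] = cnn.predict(img); % assumed output signature
    nxpvt.imshow(img);                       % frame + top-5 predictions overlay
end
```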
These 20-something lines of code create an AlexNet object and represent all the code one has to write to run the algorithm in simulation and deploy it on the board. Before using the code, you should get hold of the pre-trained alexnet network and save it to a file using the nxpvt.save_cnn_to_file command. There are also three scripts that do this automatically: save_alexnet_to_file, save_googlenet_to_file, and save_squeezenet_to_file. The AlexNet CNN example is run by executing cnn_alexnet.m. After running the m-script, you should see the following information in your MATLAB figure. As can be seen, the algorithm is able to identify objects against the background with high confidence.

3. COMPARISON BETWEEN THE MODELS

As you can see by looking at the cnn_alexnet.m, cnn_googlenet.m, and cnn_squeezenet.m files, the same code is being used. Apart from the fact that you should use the appropriate .mat file for the CNN object, all other aspects are pretty much the same. You can also use cnn_alexnet_image.m, cnn_googlenet_image.m, and cnn_squeezenet_image.m. We tested the CNN predictions on our colleague mariuslucianandrei's dog and got different results. All of the networks were able to figure out that it's a dog, but the most accurate one was SqueezeNet, which predicted it's a Pekinese. DISCLAIMER: The dog is mixed-breed, so we can't really blame any of them :-)

In terms of speed and accuracy on frames from a webcam video in MATLAB, we measured the performance shown below. These performance differences aren't necessarily conclusive, since the results depend a lot on the lighting and shadows in the images. SqueezeNet was optimized for a small memory footprint, and its accuracy is comparable with AlexNet.

4. PROCESSOR IN THE LOOP SIMULATION

There is also the possibility of getting the images from the S32V234 board and running the algorithm in MATLAB. To do so, you have to use the S32V234 connectivity object.
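A sketch of this processor-in-the-loop setup (the object and method names follow the article's description of the s32v234 example; the IP address, port, and resolution value are illustrative):

```matlab
% Processor-in-the-loop: stream frames from the board's ISP camera
% into a MATLAB simulation over a TCP/IP socket.
s32obj = nxpvt.s32v234('192.168.1.100', 50000); % board IP + default port 50000
cam = nxpvt.cameraboard(s32obj, 1, [1280 720]); % index 1 = MIPI-A camera
% s32obj.cameraInUse is now true; driver and ISP setup happened automatically.
frame = cam.snapshot();                         % grab one frame from the board
```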
An example using SqueezeNet is provided in the vision_toolbox\examples\cnn\s32v234\ folder. To run it, you should first get your board's IP address. The example uses port 50000 by default, so make sure this port is free. The example runs SqueezeNet by using the already discussed nxpvt.CNN API to wrap the network, and creates an object for communication with the board through a TCP/IP socket. Getting frames from the camera follows the same approach as the Raspberry Pi MATLAB object, so if you're familiar with that, it should be really easy to set up. You simply need to create an nxpvt.s32v234 object that takes an IP address and a port. On top of that, in the same manner as you would for the camera board on the Raspberry Pi, you need another object specifying the camera index (1 for the MIPI-A connected camera, 2 for the MIPI-B connected camera) and the resolution. As you can see, s32obj.cameraInUse becomes true after the cameraboard object is connected to the S32V board, and the whole rigmarole of initializing and configuring the low-level drivers and the ISP-specific details is done automatically. Furthermore, getting a frame from the camera becomes as easy as calling the snapshot method on the cameraboard object. This communication object is used in the SqueezeNet demo below, which runs the object detection algorithm in MATLAB with frames from the attached camera. If we run the above m-script in MATLAB with data from the S32V Sony camera, we should obtain the result shown below.

5. CONCLUSIONS

Using these pre-trained models has the limitation of only being able to classify images based on the predefined classes the models were trained on. However, you can use transfer learning, which we will discuss in a future article, to train the network to recognize a set of user-defined custom classes.
In conclusion, simulation in the MATLAB environment with the NXP Vision Toolbox is pretty straightforward. The flexibility that MATLAB provides through the Deep Learning Toolbox and its pre-trained models has been fully integrated, giving MATLAB developers a friendly and familiar environment.
In this article we are going to discuss the following topics:
- how to configure the SBC-S32V234 board to be ready for CNN deployment
- what the host PC requirements are for CNN development
- how to use the pre-trained CNN models in MATLAB
- how to run a MATLAB CNN on the NXP S32V microprocessor

1. INTRODUCTION

Deploying MATLAB scripts on NXP boards can be done using the NXP Vision Toolbox. The utilities offered by the toolbox bypass all the hardware configuration and initialization that one would otherwise have to do, and provide MATLAB users with an easy way to start running their own code on the boards without having to know specific hardware and low-level software details. It also includes a module that supports convolutional neural network deployment, so users can:
- train their own CNN models developed within MATLAB;
- use pre-trained CNNs from MATLAB;
- adapt a pre-trained CNN model to recognize new objects by using transfer learning.

We will return to transfer learning and cover more of this topic in Course #5. The NXP Vision Toolbox uses MATLAB's capabilities to generate code for CNNs using Arm Neon technology, which can accelerate, to some extent, these computation-intensive algorithms. We are actively working on integration with NXP's next-generation machine learning software, AIRunner, which leverages the power of the integrated APEX vision accelerator, to offer support for deploying stable, real-time object detection, semantic segmentation, and other useful algorithms that can improve the driver's experience behind the wheel.

2. GETTING THINGS UP AND RUNNING

This course assumes you have already set up the MATLAB environment as described in C1: HW and SW Environment Setup.
The steps mentioned there are listed below for the completeness of this document:
- First, the NXP Vision Toolbox should be installed and the Linux Yocto distribution should be written onto the SD card (the SD card image can be downloaded from the NXP Vision SDK package here). A Debian version can also be used, but there are some extra things that need to be installed on the board, and the current version of the toolbox does not integrate with it when it comes to deploying scripts on the board. This will most likely change in our next release, which will have out-of-the-box support for Debian as well.
- Download the ARM Compute Library to the host PC. For more details about the ARM Compute Library, why it is needed, and what it can do, please refer to: https://community.arm.com/developer/tools-software/tools/b/tools-software-ides-blog/posts/getting-started-with-deep-learning-models-on-arm-cortex-a-with-matlab. The ARM Compute Library can be downloaded from https://github.com/ARM-software/ComputeLibrary. The NXP Vision Toolbox CNN examples were built using version 18.03, so download this one to avoid any backward or forward compatibility issues - scroll down until you find the Binaries section, as in the image below. After downloading and unpacking the ARM Compute archive, the ARM_COMPUTELIB system variable should point to the top of the installation folder. The ARM Compute Library should contain the linux-arm-v8a-neon folder with the correct libraries.
- To be able to run the CNN examples in the toolbox, the following MATLAB Add-Ons should be installed:
  - Deep Learning Toolbox
  - Deep Learning Toolbox™ Model for GoogLeNet Network
  - Deep Learning Toolbox™ Model for AlexNet Network
  - Deep Learning Toolbox™ Model for SqueezeNet Network
  - MATLAB Coder Interface for Deep Learning Libraries

3. RUNNING THE EXAMPLES

At this point, the NXP Vision Toolbox should be ready for deploying scripts on the board.
If you are not familiar with configuring the Linux OS that runs on the NXP SBC-S32V234 evaluation board, or with network configuration in Windows, please refer to this thread: https://community.nxp.com/docs/DOC-335345

In order to download any application generated and compiled in MATLAB to the NXP microprocessor, you simply need to set the configuration structure with the IP address of the board and then call the nxpvt_codegen script provided by the NXP Vision Toolbox. This is just a mechanism for automating the deployment on the board. A script can also be compiled without using the Deploy option in the configuration structure. In this case, the resulting .elf and the binaries generated by MATLAB for the neural network must be copied to the board manually. The executable (.elf file) can be found in ../codegen/exe/SCRIPT_NAME/build-v234ce-gnu-linux-o if the compilation was done with optimization (config.Optimize = true), or in ../codegen/exe/SCRIPT_NAME/build-v234ce-gnu-linux-d if the compilation was done with debug (config.Optimize = false).

Besides the executable, you can find the network binary files directly in the codegen/ folder. These files include the Makefile for compilation, the labels that the class supports, and the network implementation together with the network's layers, weights, and biases. The files you need to copy to the board are the weights, biases, and average binary files. If you use this manual deployment method, you also have to make sure that libarm_compute.so and libarm_compute_core.so are known to the loader via the LD_LIBRARY_PATH environment variable. You can either set it to point to a custom folder into which you copied the .so files, or simply copy them to the /lib/ folder.
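The automatic path can be sketched as follows (Optimize and Deploy are the options named in this article; the IP field name and the nxpvt_codegen argument list are assumptions, so check the shipped examples for the exact structure):

```matlab
% Automatic deployment (sketch): set the board's IP in the configuration
% structure and let nxpvt_codegen build, copy, and run everything.
config.Optimize = true;             % build in build-v234ce-gnu-linux-o
config.Deploy = true;               % copy the .elf and network binaries automatically
config.IpAddress = '192.168.1.100'; % hypothetical field name for the board IP
nxpvt_codegen('cnn_squeezenet.m', config);
```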
If this step is omitted, you will get an error stating that the arm_compute .so files are not found: Deploying with the nxpvt_codegen script takes care of all the copying for you and no extra steps are required. The same 3 ready-to-run examples from the previous course https://community.nxp.com/docs/DOC-343430 that are available in the NXP Vision Toolbox can be deployed with no extra changes to the scripts. You can click on the pictures below to zoom in and get an idea of how well the networks are doing in terms of accuracy and performance. AlexNet Object Detector deployed on the S32V board GoogLeNet Object Detector deployed on the S32V board SqueezeNet Object Detector deployed on the S32V board As expected, SqueezeNet is the fastest and delivers a pretty decent result in terms of accuracy. For a short presentation of these 3 pre-trained networks you can look into the second course of this tutorial, C2: Introduction to Deep Learning. 4. CODE INSIGHTS As shown in the previous articles of this course, writing 20-something lines of code (including displaying and adding annotations) to actually deploy a neural network to the board makes sense in the context of bringing the embedded world into MATLAB. We will use cnn_squeezenet.m to show how easy it is to use the toolbox to run things on the S32V. We first need to save the SqueezeNet network object from MATLAB to a .mat file to be able to generate code from it. Then we take a look at the cnn_squeezenet.m script. We start by creating an input object that reads from the MIPI-A attached camera, passing the 1 parameter to nxpvt.webcam at line 3. We create the CNN using the nxpvt.CNN object by passing the saved squeezenet.mat that represents the actual network, together with the input size that the network accepts. We load squeezenet_classes.mat using the loadClassNames method of the object. 
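Put together, the setup steps just described might look like the following sketch. The nxpvt function names are the ones used in this article, but the exact constructor arguments of nxpvt.CNN are an assumption here and may differ in your toolbox version:

```matlab
% Save the pretrained SqueezeNet network to .mat files for code generation.
net = squeezenet;
nxpvt.save_cnn_to_file(net);   % produces squeezenet.mat and squeezenet_classes.mat

% Create the input source: the 1 parameter selects the MIPI-A attached camera.
cam = nxpvt.webcam(1);

% Wrap the saved network; 227x227x3 is the input size SqueezeNet accepts.
% (Argument list assumed - check the toolbox help for the exact signature.)
cnn = nxpvt.CNN('squeezenet.mat', [227 227 3]);
cnn.loadClassNames('squeezenet_classes.mat');
```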
We then loop to get a continuous stream from the camera, grab images with the snapshot() method of the nxpvt.webcam object and run predict on each image. The predict method returns the classes together with the percentages, in descending order of the percentages. We display the top 5 classes and call nxpvt.toc() to determine how much time was needed to predict and display the image (computed relative to the nxpvt.tic call) in order to compute the frames per second. And we're done! 5. CONCLUSIONS The NXP Vision Toolbox eliminates all the hassle and the extra steps that would otherwise be necessary for deploying Convolutional Neural Networks on the target directly from MATLAB. It also allows custom networks that are supported by MATLAB to be run with ease, providing smooth integration and headache-free execution. In the next course, we will focus extensively on how to retrain networks with Transfer Learning.
1. INTRODUCTION In this article we are going to discuss the way Convolutional Neural Networks are designed and maintained in the NXP Vision Toolbox and in MATLAB, and offer a brief introduction to the concepts used in modern-day Machine Learning and Deep Neural Networks. This course will cover the following topics: CNN Architecture: concept, definition and implementation; Perceptron: short intro and the problems it solves; Multi-Layered Perceptrons; CNN training: various CNN pre-trained architectures. Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on the layers used in artificial neural networks. Learning can be supervised, semi-supervised or unsupervised. Deep learning is primarily a study of multi-layered neural networks, spanning a great range of model architectures such as deep neural networks, deep belief networks, recurrent neural networks and convolutional neural networks. It is not our intention to provide full coverage of the deep learning topic in this article - that would be unrealistic. If you are new to this topic then we advise you to start with: Short introduction - https://www.mathworks.com/discovery/deep-learning.html MATLAB Deep Learning training - https://www.mathworks.com/learn/tutorials/deep-learning-onramp.html 2. CNN Architecture A convolutional neural network (CNN or ConvNet) is one of the most popular algorithms for deep learning, a type of machine learning in which a model learns to perform classification tasks directly from images, video, text, or sound. CNNs are particularly useful for finding patterns in images to recognize objects, faces, and scenes. Such algorithms learn directly from image data, using patterns to classify images and eliminating the need for manual feature extraction. CNNs provide an optimal architecture for image recognition and pattern detection. 
Combined with advances in parallel computing, CNNs are a key technology underlying new developments in automated driving and facial recognition. 2.1 Feature Detection A convolutional neural network can have tens or hundreds of layers that each learn to detect different features of an image. Filters are applied to each training image at different resolutions, and the output of each convolved image is used as the input to the next layer. The filters can start as very simple features, such as brightness and edges, and increase in complexity to features that uniquely define the object. Like other neural networks, a CNN is composed of an input layer, an output layer, and many hidden layers in between. These layers perform operations that alter the data with the intent of learning features specific to the data. Three of the most common CNN layers are: Convolution: puts the input images through a set of convolutional filters, each of which activates certain features from the images. The convolution layer slides a filter matrix over the array of image pixels and performs the convolution operation to obtain a convolved feature map. Rectified linear unit (ReLU): allows for faster and more effective training by mapping negative values to zero and maintaining positive values. This is sometimes referred to as activation, because only the activated features are carried forward into the next layer. Pooling: simplifies the output by performing nonlinear downsampling, reducing the number of parameters that the network needs to learn. These operations are repeated over tens or hundreds of layers, with each layer learning to identify different features. After learning features in many layers, the architecture of a CNN shifts to classification. The next-to-last layer is a fully connected layer that outputs a vector of K dimensions, where K is the number of classes that the network will be able to predict. This vector contains the probabilities for each class of any image being classified. 
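The three common layer types above, followed by the classification stages, can be seen in a minimal MATLAB layer array. This is a generic toy illustration built with standard Deep Learning Toolbox layers, not one of the pretrained networks discussed in this article:

```matlab
% A minimal CNN: convolution -> ReLU -> pooling, then classification layers.
layers = [
    imageInputLayer([28 28 1])                    % e.g. 28x28 grayscale input
    convolution2dLayer(3, 16, 'Padding', 'same')  % 16 filters of size 3x3
    reluLayer                                     % map negative values to zero
    maxPooling2dLayer(2, 'Stride', 2)             % nonlinear downsampling
    fullyConnectedLayer(10)                       % K = 10 output classes
    softmaxLayer                                  % class probabilities
    classificationLayer];
```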
The final layer of the CNN architecture uses a classification layer such as softmax to provide the classification output. The softmax activation function is often placed at the output layer of a neural network. It's commonly used in multi-class learning problems where a set of features can be related to one-of-K classes. For example, in the CIFAR-10 image classification problem, given a set of pixels as input, we need to classify whether a particular sample belongs to one of ten available classes: i.e., cat, dog, airplane, etc. Its equation is simple: we just compute the normalized exponential function of all the units in the layer. Intuitively, what the softmax does is squash each element of a vector of size K between 0 and 1. Furthermore, because it is a normalization of the exponential, the sum of this whole vector equates to 1. We can then interpret the output of the softmax as the probabilities that a certain set of features belongs to a certain class. The classification part is done using a Multi-Layered Neural Network. A Multi-Layered Neural Network consists of a number of layers made up of Single Layer Perceptrons, which we will cover in the next paragraph. 2.2 Single Layer Perceptron CNNs, like neural networks, are made up of neurons with learnable weights and biases. Each neuron receives several inputs, takes a weighted sum over them, passes it through an activation function and responds with an output. The basic structures of neural networks are perceptrons. The perceptron is depicted in the figure below: The perceptron consists of weights (including a special weight called bias), a summation processor and an activation function. In some cases there is an extra input node called a bias. All the inputs are individually weighted, added together and passed into the activation function. Every activation function (or non-linearity) takes a single number and performs a certain fixed mathematical operation on it. 
There are several activation functions that are encountered in practice: Sigmoid: takes a real-valued input and squashes it to the range between 0 and 1: σ(x) = 1 / (1 + exp(−x)) tanh: takes a real-valued input and squashes it to the range [-1, 1]: tanh(x) = 2σ(2x) − 1 ReLU: ReLU stands for Rectified Linear Unit. It takes a real-valued input and thresholds it at zero (replaces negative values with zero): f(x) = max(0, x) The main function of the bias is to provide every node with a trainable constant value (in addition to the normal inputs that the node receives). In a nutshell, a perceptron is a very simple learning machine. It can take in a few inputs, each of which has a weight to signify how important it is, and generate an output decision of "0" or "1". More specifically, it is a linear classification algorithm, because it uses a line to determine an input's class. However, when combined with many other perceptrons, it forms an artificial neural network. 2.3 Multi Layer Perceptron A Multi Layer Perceptron (MLP) contains one or more hidden layers (apart from one input and one output layer). While a single layer perceptron can only learn linear functions, a multi layer perceptron can also learn non-linear functions. All connections have weights associated with them, and each layer has its own bias. The process by which a Multi Layer Perceptron learns is called the backpropagation algorithm. Initially all the edge weights are randomly assigned. For every input in the training dataset, the Neural Network is activated and its output is observed. This output is compared with the desired output that we already know, and the error is "propagated" back to the previous layer. This error is noted and the weights are "adjusted" accordingly. This process is repeated until the output error is below a predetermined threshold. The process of "adjusting" the weights makes use of the Gradient Descent algorithm for minimizing the error function, which we will not go into details about. 
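The activation functions and the weight-adjustment process described above can be sketched in a few lines of MATLAB. This is a toy single-perceptron illustration, not toolbox code:

```matlab
% The three activation functions mentioned above.
sigmoid = @(x) 1 ./ (1 + exp(-x));
tanhAct = @(x) 2 * sigmoid(2 * x) - 1;   % equivalent to tanh(x)
relu    = @(x) max(0, x);

% One gradient-descent step for a single perceptron with a sigmoid output:
% forward pass (weighted sum plus bias, then activation), compare with the
% desired output, then adjust the weights proportionally to the error.
x = [0.5; -1.2]; target = 1;             % one training input and its label
w = randn(2, 1); b = 0; lr = 0.1;        % random initial weights, learning rate
y = sigmoid(w' * x + b);                 % observed output
err = target - y;                        % error vs the desired output
w = w + lr * err * y * (1 - y) * x;      % weight update (sigmoid derivative)
b = b + lr * err * y * (1 - y);          % bias update
```

Repeating this update over the whole training set until the error drops below a threshold is exactly the loop described in the paragraph above.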
3. CNN implementation with MATLAB and NXP Vision Toolbox MATLAB provides a convenient and easy way to train Neural Networks from scratch using its Deep Learning Toolbox. However, this task may be daunting and may require a lot of computational resources and time. To train a deep network from scratch, you must gather a very large labeled data set and design a network architecture that will learn the features and model. This is good for new applications, or applications that will have a large number of output categories. This is a less common approach because, with the large amount of data and rate of learning, these networks typically take days or weeks to train. MATLAB also provides a series of ready-to-use pre-trained CNNs which can be customized and adapted through Transfer Learning, a topic we will cover in a chapter below. There are a few standard CNNs that one can use to classify a number of common objects (such as a cat, a dog, a screwdriver, an apple and so on). The NXP Vision Toolbox focuses on three of these pre-trained models for exemplification, but it also provides support for using others and for tailoring the above-mentioned ones. The NXP Vision Toolbox provides a way to create Convolutional Neural Networks using pre-trained models from MATLAB, allowing smooth and simple usage in simulation algorithms as well as straightforward deployment on the NXP S32V234 boards. There is also the possibility of running the CNNs in MATLAB to classify images taken directly from the MIPI-CSI attached camera on the board, needing only minimal configuration steps. One should only know the IP address of the board and assign a port for the connection to and from the PC. 
There are 3 easy ways to use the NXP Vision Toolbox for image classification using CNNs: Run the algorithms in simulation mode in MATLAB using the PC's webcam Run the algorithms in simulation mode in MATLAB using the camera attached to the S32V234 board Run the algorithms on the hardware We will get into the specifics of each of these interaction modes in a further document. In the next paragraphs we provide a short description of the models that are available in the toolbox as examples: 3.1 GoogLeNet - Pretrained CNN Google's GoogLeNet project was one of the winning teams in the 2014 ImageNet large-scale visual recognition challenge (ILSVRC), an annual competition to measure improvements in machine vision technology. GoogLeNet is a pretrained convolutional neural network that is 22 layers deep. GoogLeNet has been trained on over a million images and can classify images into 1000 object categories (such as keyboard, coffee mug, pencil, and many animals). The network has learned rich feature representations for a wide range of images. The network takes an image as input, and then outputs a label for the object in the image together with the probabilities for each of the object categories. The input size for the image is 224x224x3, but one can provide any image since the toolbox will convert it to the appropriate size. Using just the command in the above Command Window, we were able to get hold of the GoogLeNet pretrained model in MATLAB. One can inspect the network layers in detail by calling the analyzeNetwork function: As you can see, the network is made up of a number of layers connected as a DAGNetwork (Directed Acyclic Graph Network). DAGNetwork properties: Layers - The layers of a network Connections - The connections between the layers DAGNetwork methods: predict - Run the network on input data classify - Classify data with a network activations - Compute specific network layer activations. 
plot - Plot a diagram of the network The layers are pretty much standard, except that there are some Inception layers specific to GoogLeNet. Inception layers of GoogLeNet consist of six convolution layers with different kernel sizes and one pooling layer. To create a GoogLeNet convolutional neural network object with the NXP Vision Toolbox, one should get hold of the .mat file saved from the GoogLeNet object in MATLAB as well as the classes within it. This can be done using the nxpvt.save_cnn_to_file(cnnObj) wrapper provided in the toolbox: As you can see, the googlenet.mat and googlenet_classes.mat files are created, which can then be used to create the nxpvt.CNN wrapper object. The creation, simulation and deployment of these objects and algorithms will be discussed in detail in a future document. 3.2 AlexNet - Pretrained CNN In 2012, AlexNet significantly outperformed all the prior competitors and won the challenge by reducing the top-5 error from 26% to 15.3%. The second place top-5 error rate, which was not a CNN variation, was around 26.2%. The image input size for this network in MATLAB is 227x227x3. Creating a MATLAB-provided alexnet SeriesNetwork object is done with the following command: To take a peek at the network layers use the analyzeNetwork command as above. This will display the layers with their corresponding weights and biases: Unlike GoogLeNet, the AlexNet object is of type SeriesNetwork. A series network is one where the layers are arranged one after the other. There is a single input and a single output. SeriesNetwork properties: Layers - The layers of the network. SeriesNetwork methods: predict - Run the network on input data. classify - Classify data with a network. activations - Compute specific network layer activations. predictAndUpdateState - Predict on data and update network state. classifyAndUpdateState - Classify data and update network state. resetState - Reset network state. 
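Loading and inspecting the pretrained models described above takes only a couple of commands (these require the corresponding Deep Learning Toolbox model Add-Ons listed earlier in this course):

```matlab
% GoogLeNet is returned as a DAGNetwork, AlexNet as a SeriesNetwork.
gnet = googlenet;
anet = alexnet;

% Open the interactive layer-by-layer view, including weights and biases.
analyzeNetwork(anet);
```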
The CNN object is again created with the help of the generated .mat files: 3.3 SqueezeNet - Pretrained CNN SqueezeNet is the name of a deep neural network that was released in 2016. SqueezeNet was developed by researchers at DeepScale, University of California, Berkeley, and Stanford University. In designing SqueezeNet, the authors' goal was to create a smaller neural network with fewer parameters that can more easily fit into computer memory and can more easily be transmitted over a computer network. SqueezeNet achieves the same accuracy as AlexNet but with 50x fewer weights. To achieve that, SqueezeNet uses the following key ideas: Replace 3×3 filters with 1×1 filters: 1×1 filters have 9 times fewer parameters. Decrease the number of input channels to 3×3 filters: the number of parameters of a convolutional layer depends on the filter size, the number of channels, and the number of filters. Downsample late in the network so that convolution layers have large activation maps: this might sound counterintuitive, but since the model should be small, we need to make sure that we get the best possible accuracy out of it. The later we downsample the data (e.g. by using strides > 1), the more information is retained for the layers in between, which increases the accuracy. The building brick of SqueezeNet is called a fire module, which contains two layers: a squeeze layer and an expand layer. A SqueezeNet stacks a number of fire modules and a few pooling layers. The squeeze layer and expand layer keep the same feature map size, while the former reduces the depth to a smaller number and the latter increases it. The MATLAB implementation is a DAGNetwork, just like GoogLeNet. Using analyzeNetwork on a squeezenet object will describe its contents: The NXP Vision Toolbox CNN object over squeezenet is created using the .mat files generated with the nxpvt.save_cnn_to_file command. 
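Generating the .mat files and wrapping them into a toolbox CNN object, as described for the three networks above, might look like the sketch below. The nxpvt.save_cnn_to_file call is the one named in this article; the nxpvt.CNN constructor arguments are an assumption and should be checked against the toolbox help:

```matlab
% Save the MATLAB GoogLeNet object and its class names to .mat files.
net = googlenet;
nxpvt.save_cnn_to_file(net);   % creates googlenet.mat and googlenet_classes.mat

% Create the NXP Vision Toolbox wrapper used for simulation and deployment.
% 224x224x3 is the GoogLeNet input size; the signature here is assumed.
cnn = nxpvt.CNN('googlenet.mat', [224 224 3]);
cnn.loadClassNames('googlenet_classes.mat');
```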
This was a general introduction that highlights the models used by the NXP Vision Toolbox for Machine Learning on the NXP boards. To find out more about how to use the CNN examples provided in the NXP Vision Toolbox, please stay tuned for our next presentations. 4. CNN Comparison & Conclusions Pretrained networks have different characteristics that matter when choosing a network to apply to your problem. The most important characteristics are network accuracy, speed, and size. Choosing a network is generally a tradeoff between these characteristics. A network is Pareto efficient if there is no other network that is better on all the metrics being compared, in this case accuracy and prediction time. The set of all Pareto efficient networks is called the Pareto frontier. The Pareto frontier contains all the networks that are not worse than another network on both metrics. The plot connects the networks that are on the Pareto frontier in the plane of accuracy and prediction time. All networks except AlexNet, VGG-16, VGG-19, Xception, NASNet-Mobile, ShuffleNet, and DenseNet-201 are on the Pareto frontier. Since in this training we are going to deploy the CNNs on an embedded system, where we need to consider the limitations in terms of memory footprint and processing power, we've selected the 3 CNNs with the smallest size/prediction time requirements: AlexNet, SqueezeNet and GoogLeNet. For a more detailed comparison please visit: https://www.mathworks.com/help/deeplearning/ug/pretrained-convolutional-neural-networks.html
In this 5th module of the AI and Machine Learning with S32V and MATLAB Workshop we are going to discuss the following topics: How to retrain a CNN in MATLAB How to run a custom CNN on the NXP S32V microprocessor Brief introduction to training your own model from scratch 1. INTRODUCTION In practice, very few people train an entire Convolutional Neural Network from scratch (with random initialization), because it is relatively rare to have a dataset of sufficient size. Instead, it is common to pretrain a CNN on a very large dataset (e.g. ImageNet, which contains 1.2 million images with 1000 categories), and then use the CNN either as an initialization or as a fixed feature extractor for the task of interest. Transfer learning is the approach in which knowledge learned in one or more source tasks is transferred and used to improve the learning of a related target task. While most machine learning algorithms are designed to address single tasks, the development of algorithms that facilitate transfer learning is a topic of ongoing interest in the machine-learning community. Transfer learning is commonly used in deep learning applications. You can take a pretrained network and use it as a starting point to learn a new task. Fine-tuning a network with transfer learning is usually much faster and easier than training a network with randomly initialized weights from scratch. You can quickly transfer learned features to a new task using a smaller number of training images. To get all of the files needed by the example in this tutorial, you should run the vision_toolbox.exe archive in the root of the Vision Toolbox as in the window below and overwrite all files the prompt asks you about: You can also find the training set attached (the four .7z archives). They contain all the 64 classes used for training in this tutorial. 2. CUSTOMIZING THE MODEL This tutorial was done using MATLAB R2018b. 
The R2019a release introduces a couple of changes that will require some porting work. As soon as we update the toolbox to use the new version of MATLAB, we will update the tutorial as well. In order to retrain the model using an already trained network as a starting point, one can tweak the network's layers, augment the training dataset, change some of the learning parameters or use some of the other fine-tuning techniques out there. We are going to showcase the toolbox's capabilities with one example that is not yet available in the toolbox, but that will get posted here. We are going to build a Traffic Sign Recognition system using the Belgian traffic signs database, which one can obtain by downloading the training data from https://btsd.ethz.ch/shareddata/BelgiumTSC/BelgiumTSC_Training.zip and the testing data from https://btsd.ethz.ch/shareddata/BelgiumTSC/BelgiumTSC_Testing.zip. 2.1 CONFIGURING THE DATASET After downloading the dataset, we have renamed all the folders to the class they represent (traffic sign name). The raw data is divided into 62 classes (00000 - 00061): As you can see, we have also added the Cars and Trucks folders with our own data. The first thing we need to do is set the class names in a variable called categories that will hold the names of all folders in the training_set_signs folder. 2.2 AUGMENTING THE DATASET Before actually diving into (re)training a CNN, MATLAB provides a way to increase the input dataset on the fly, preventing over-fitting and allowing users to get the most out of their data. Over-fitting occurs when you achieve a good fit of your model on the training data, while it does not generalize well on new, unseen data. In other words, the model learned patterns specific to the training data, which are irrelevant in other data. In MATLAB, this step only boils down to configuring a set of preprocessing options for augmenting the images, using a special object. 
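A sketch of this preprocessing configuration, assuming the training_set_signs folder layout and the rotation/translation/scaling ranges used in this tutorial:

```matlab
% Build a datastore from the renamed class folders; folder names become labels.
imds = imageDatastore('training_set_signs', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');

% On-the-fly augmentation: random rotations, translations and scaling.
augmenter = imageDataAugmenter( ...
    'RandRotation',     [-30 30], ...    % degrees
    'RandXTranslation', [-50 50], ...    % pixels
    'RandYTranslation', [-50 50], ...
    'RandXScale',       [0.5 1.8], ...   % scaling/zooming
    'RandYScale',       [0.5 1.8]);

% Resize everything to the 227x227 input size that SqueezeNet expects.
augimds = augmentedImageDatastore([227 227], imds, ...
    'DataAugmentation', augmenter);
```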
An augmented image datastore transforms batches of training, validation, test, and prediction data, with optional preprocessing such as resizing, rotation, and reflection. It also allows resizing the available images to make them compatible with the input size of the deep learning network. For a complete overview of MATLAB's capabilities regarding this subject, visit https://www.mathworks.com/help/deeplearning/ug/preprocess-images-for-deep-learning.html#mw_ef499675-d7a0-4e77-8741-ea5801695193 . Augmenting the data in our example consists of applying: random rotations (-30 degrees to 30 degrees), random X and Y translations (-50 to +50 pixels) and scaling/zooming (0.5 to 1.8). The datastore automatically resizes the input images to 227x227x3, which is the format that SqueezeNet was trained on and the only image format it accepts: 2.3 REPLACING CLASSIFICATION LAYERS Going forward, we should first understand the layers of the SqueezeNet network. The convolutional layers of the network extract image features that the last learnable layer and the final classification layer use to classify the input image. These 2 layers (learnable and classification) contain information on how to combine the features that the network extracts into class probabilities, a loss value, and predicted labels. To retrain a pretrained network to classify new images, we need to replace these two layers with new layers adapted to the new data set. We first extract the layer graph from the trained network. If the network is a SeriesNetwork object, such as AlexNet, VGG-16, or VGG-19, then the list of layers in net.Layers must first be converted into a LayerGraph. In most networks, the last layer with learnable weights is a fully connected layer. Replace this fully connected layer with a new fully connected layer with the number of outputs equal to the number of classes in the new data set (5, in this example). 
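For a SeriesNetwork-style model, the replacement just described might look like the following sketch. The layer names 'fc8' and 'output' are illustrative placeholders; use the actual names reported by analyzeNetwork for your network:

```matlab
% Replace the last fully connected layer and the classification layer so the
% network outputs the new number of classes (5 in this example).
numClasses = 5;
lgraph = layerGraph(net.Layers);        % convert a SeriesNetwork's layer list

newFc = fullyConnectedLayer(numClasses, ...
    'Name', 'new_fc', ...
    'WeightLearnRateFactor', 10, ...    % learn faster in the new layer
    'BiasLearnRateFactor', 10);
lgraph = replaceLayer(lgraph, 'fc8', newFc);        % 'fc8' is illustrative

% New classification layer without class labels; trainNetwork sets them
% automatically at training time.
newOut = classificationLayer('Name', 'new_output');
lgraph = replaceLayer(lgraph, 'output', newOut);    % 'output' is illustrative
```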
In some networks, such as SqueezeNet, the last learnable layer is a 1-by-1 convolutional layer instead. In this case, replace the convolutional layer with a new convolutional layer with the number of filters equal to the number of classes. To learn faster in the new layer than in the transferred layers, increase the learning rate factors of the layer. Finding the layers to be replaced is done with the helper function findLayersToReplace, which is provided by MATLAB in one of their examples. You can find it attached here if you want to understand more about how it determines the layers that have to be replaced. The classification layer specifies the output classes of the network, thus we have to replace the classification layer with a new one without class labels. The trainNetwork function automatically sets the output classes of the layer at training time. To check that the new layers are connected correctly, we can plot the new layer graph and zoom in on the last layers of the network: The network is now ready to be retrained on the new set of images. Optionally, you can "freeze" the weights of earlier layers in the network by setting the learning rates in those layers to zero. During training, the trainNetwork function does not update the parameters of the frozen layers. Because the gradients of the frozen layers do not need to be computed, freezing the weights of many initial layers can significantly speed up network training. If the new data set is small, then freezing earlier network layers can also prevent those layers from over-fitting to the new data set. Extract the layers and connections of the layer graph and select which layers to freeze. We will make use of the supporting function createLgraphUsingConnections to reconnect all the layers in the original order. The new layer graph contains the same layers, but with the learning rates of the earlier layers set to zero. 3. RETRAINING THE MODEL The actual training is done in the next code snippet. 
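The snippet itself is shown as a screenshot in the original article; based on the SGDM settings this tutorial reports (mini-batches of 10 observations, initial learning rate 0.0003, 6 epochs), it likely resembles the sketch below. The variable names augimdsTrain, augimdsValidation and lgraph are assumptions standing in for the augmented datastores and modified layer graph prepared earlier:

```matlab
% Training options using the parameters described in this tutorial.
options = trainingOptions('sgdm', ...
    'MiniBatchSize',    10, ...
    'InitialLearnRate', 3e-4, ...
    'MaxEpochs',        6, ...
    'Shuffle',          'every-epoch', ...
    'ValidationData',   augimdsValidation, ...  % assumed validation datastore
    'Verbose',          false, ...
    'Plots',            'training-progress');

% Retrain the modified layer graph on the augmented traffic-sign datastore.
netTransfer = trainNetwork(augimdsTrain, lgraph, options);
```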
We provide the training options in a trainingOptions structure with a set of parameters (which, by the way, can be tweaked in an attempt to increase accuracy). We are using the SGDM (Stochastic Gradient Descent with Momentum) optimizer, mini-batches with 10 observations at each iteration, an initial learning rate of 0.0003 and a total of 6 epochs for the training process. Training with these parameters took 32 minutes with a single CPU hardware resource and achieved a pretty decent 84% validation accuracy level. The training duration can be dramatically improved by using a GPU and the Parallel Computing Toolbox from MATLAB. The last thing we want to do before actually testing all the work we've done is to save the classNames for the generated CNN: At this point you should be ready to test the network, if all went according to plan. 4. TESTING THE CUSTOMIZED NETWORK After training, the script saves the newly trained network in a .mat file, which should then be loaded in the same manner shown in the previous courses. You should run the self-extracting archive in the toolbox's root to get hold of the files in the example. To run the traffic sign detector example, just run the traffic_sign_detector() script. To run it on the target board, you should copy the input images to the board. To do that, you can use the helper set_config_for_traffic_sign_detector.m: You should now be getting the expected results from the board: 5. CONCLUSIONS AND FURTHER WORK The NXP Vision Toolbox represents a flexible addition that enables users to get things up and running in no time, and allows deployment of algorithms on the NXP boards, bringing the MATLAB world into the embedded one. It combines the versatility of MATLAB with the speed of the NXP boards by harnessing all the power the hardware accelerators provide. On top of that, it allows Convolutional Neural Network deployment. 
Additional ongoing development is happening to accelerate the neural networks (by running the computation-intensive neural network layers on the APEX accelerators) with an internal engine that is also under development. Stay tuned for updates in the near future.
Product Release Announcement Automotive Microcontrollers and Processors NXP Vision Toolbox for S32V234 – 1.2.0 Austin, Texas, USA October 14, 2019 The Automotive Microcontrollers and Processors Model-Based Design Tools Team at NXP Semiconductors is pleased to announce the release of the Vision Toolbox for S32V234 version 1.2.0 RFP. This release supports Computer Vision and Machine Learning application prototyping with MATLAB® for NXP's S32V234 Automotive Vision Processors. Download Location: http://www.nxp.com/webapp/swlicensing/sso/downloadSoftware.sp?catid=NXP_VISION_TOOLBOX Activation link: http://www.nxp.com/webapp/swlicensing/sso/downloadSoftware.sp?catid=NXP_VISION_TOOLBOX Technical Support NXP Vision Toolbox for S32V234 issues are tracked through the NXP Model-Based Design Tools Community space Release Content (updates relative to previous version) A quick guided tour of the Vision Toolbox main features supported in this release can be watched here: Machine Vision Algorithm development using Vision Toolbox | NXP Compatible with NXP Vision Software Development Kit for S32V2 RTM 1.4.0 libraries and build tools; Machine Learning support with MATLAB® pretrained CNN/Deep Learning Toolbox and code generation for the S32V234 ARM A53, now compatible with the latest release 2019b; Redesigned all ready-to-run examples that can be executed in MATLAB® simulation or directly on S32V234 HW to make them more user friendly: Faces / Pedestrians / Lanes Detection applications; CNN SqueezeNet / GoogLeNet / AlexNet applications; MATLAB® Integration The NXP Vision Toolbox extends the MATLAB® Computer Vision System, Image Processing and Deep Learning toolboxes experience by allowing customers to evaluate and use NXP's Vision SDK RTM 1.4.0 and NXP S32V evaluation boards (EVB and SBC) out-of-the-box with: NXP Support Package for S32V234 Online Installer Guide Add-on; NXP_Vision_Toolbox_for_S32V234 Package integrated with the MATLAB® environment in terms of installation, 
documentation, help and examples; Target Audience This release is intended for technology demonstration, evaluation purposes, and computer vision and machine learning prototyping with S32V234 microprocessors and S32V SBC & EVB boards. Useful Resources NXP Vision Toolbox Home page Other useful documents can be found on the Toolbox Home page Documentation
Single Layer Perceptron
Before getting into the hardware-specific details, this course will cover some basics of how CNNs work and what they are used for. CNNs are widely used in image and video recognition applications, so they have certainly stirred up interest in the automotive world. CNNs, like other neural networks, are made up of neurons with learnable weights and biases. Each neuron receives several inputs, takes a weighted sum over them, passes it through an activation function and responds with an output.

The basic building block of neural networks is the perceptron, depicted in the figure below. The perceptron consists of weights (including a special weight called the bias), a summation processor and an activation function. All the inputs are individually weighted, added together and passed into the activation function. Every activation function (or non-linearity) takes a single number and performs a certain fixed mathematical operation on it. Several activation functions are encountered in practice:
- Sigmoid: takes a real-valued input and squashes it to the range (0, 1): σ(x) = 1 / (1 + exp(−x))
- tanh: takes a real-valued input and squashes it to the range [−1, 1]: tanh(x) = 2σ(2x) − 1
- ReLU (Rectified Linear Unit): takes a real-valued input and thresholds it at zero (replaces negative values with zero): f(x) = max(0, x)
The main function of the bias is to provide every node with a trainable constant value (in addition to the normal inputs that the node receives).

Multi Layer Perceptron
A Multi Layer Perceptron (MLP) contains one or more hidden layers (apart from one input and one output layer). While a single layer perceptron can only learn linear functions, a multi layer perceptron can also learn non-linear functions. All connections have weights associated with them, and each layer has its own bias.
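The perceptron and the three activation functions above can be sketched in a few lines of plain Python. This is an illustrative sketch only, not NXP Vision Toolbox or MATLAB code; the function names are chosen for this example.

```python
import math

def sigmoid(x):
    # squashes a real value into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh_act(x):
    # squashes a real value into [-1, 1]; note tanh(x) = 2*sigmoid(2x) - 1
    return math.tanh(x)

def relu(x):
    # thresholds at zero: negative inputs become 0
    return max(0.0, x)

def perceptron(inputs, weights, bias, activation=sigmoid):
    # weighted sum of the inputs plus the trainable bias,
    # passed through the chosen activation function
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)
```

For example, `perceptron([1.0, 2.0], [0.5, -0.25], 0.1)` computes the weighted sum 0.5·1.0 − 0.25·2.0 + 0.1 = 0.1 and returns sigmoid(0.1) ≈ 0.525.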
The process by which a Multi Layer Perceptron learns is called the Backpropagation algorithm. Initially, all the edge weights are randomly assigned. For every input in the training dataset, the neural network is activated and its output is observed. This output is compared with the desired output that we already know, and the error is “propagated” back to the previous layer. This error is noted and the weights are “adjusted” accordingly. The process is repeated until the output error falls below a predetermined threshold. The “adjusting” of the weights makes use of the Gradient Descent algorithm to minimize the error function, which we will not go into detail about here.

Convolutional Neural Networks
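The training loop described above (forward pass, compare with the desired output, propagate the error back, adjust the weights, repeat until the error is small) can be sketched for the simplest possible case, a single sigmoid neuron, using gradient descent. This is an illustrative plain-Python sketch, not toolbox code; a real MLP repeats the same chain-rule step layer by layer.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(w, b, x, target, lr=0.5):
    # forward pass: weighted sum + bias through the activation
    y = sigmoid(w * x + b)
    # error between the actual output and the desired output
    err = y - target
    # backward pass: chain rule for L = 0.5*(y - target)^2 gives dL/dz
    grad = err * y * (1.0 - y)
    # gradient descent: adjust the weight and bias down the gradient
    w -= lr * grad * x
    b -= lr * grad
    return w, b

# repeat until the output error is below a threshold
# (a fixed iteration count is used here for simplicity)
w, b = 0.0, 0.0
for _ in range(2000):
    w, b = train_step(w, b, x=1.0, target=0.9)
# sigmoid(w*1.0 + b) now approaches the target 0.9
```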
Product Release Announcement
Automotive Microcontrollers and Processors
NXP Vision Toolbox for S32V234 – 1.1.0
Austin, Texas, USA
April 9, 2019

The Automotive Microcontrollers and Processors Model-Based Design Tools Team at NXP Semiconductors is pleased to announce the release of the Vision Toolbox for S32V234 1.1.0 RFP. This release supports Computer Vision and Machine Learning applications prototyping with MATLAB® for NXP’s S32V234 Automotive Vision Processors.

Download Location: http://www.nxp.com/webapp/swlicensing/sso/downloadSoftware.sp?catid=NXP_VISION_TOOLBOX
Activation link: http://www.nxp.com/webapp/swlicensing/sso/downloadSoftware.sp?catid=NXP_VISION_TOOLBOX

Technical Support
NXP Vision Toolbox for S32V234 issues are tracked through the NXP Model-Based Design Tools Community space.

Release Content (updates relative to previous version)
A quick guided tour of the Vision Toolbox main features supported in this release can be watched here: Machine Vision Algorithm development using Vision Toolbox | NXP
- Compatible with NXP Vision Software Development Kit for S32V2 RTM 1.3.0 libraries and build tools;
- Cascade classifiers support, trained from OpenCV using HAAR and LBP features;
- Kalman Filter support;
- Machine Learning support with MATLAB® pretrained CNN/Deep Learning Toolbox and code generation for S32V234 ARM A53;
- Ready-to-run examples that can be executed in MATLAB® simulation or directly on S32V234 HW:
  - Faces / Pedestrians / Lanes Detection applications;
  - CNN SqueezeNet / GoogLeNet / AlexNet applications;

MATLAB® Integration
The NXP Vision Toolbox extends the MATLAB® Computer Vision System, Image Processing and Deep Learning toolboxes experience by allowing customers to evaluate and use NXP’s Vision SDK RTM 1.3.0 and NXP S32V evaluation boards (EVB and SBC) solutions out-of-the-box with:
- NXP Support Package for S32V234 Online Installer Guide Add-on;
- NXP_Vision_Toolbox_for_S32V234 Package integrated with the MATLAB® environment in terms of installation, documentation, help and examples;

Target Audience
This release is intended for technology demonstration, evaluation purposes, and computer vision and machine learning prototyping with S32V234 microprocessors and S32V SBC & EVB boards.

Useful Resources
NXP Vision Toolbox Home page. Other useful documents can be found on the Toolbox Home page Documentation.
Product Release Announcement
Automotive Microcontrollers and Processors
NXP Vision Toolbox for S32V234 – 2018.R1
Austin, Texas, USA
November 19, 2018

The Automotive Microcontrollers and Processors Model-Based Design Tools Team at NXP Semiconductors is pleased to announce the release of the Vision Toolbox for S32V234 2018.R1. This release supports computer vision applications prototyping on MATLAB® for NXP’s S32V234 Automotive Vision Processors.

Download Location: http://www.nxp.com/webapp/swlicensing/sso/downloadSoftware.sp?catid=VISION-MATLAB_v2018.R1
Activation link: http://www.nxp.com/webapp/swlicensing/sso/downloadSoftware.sp?catid=VISION-MATLAB_v2018.R1

Technical Support
NXP Vision Toolbox for S32V234 issues are tracked through the NXP Model-Based Design Tools Community space.

Release Content
- Automatic C++ code generation from MATLAB® m-scripts for the S32V234 Automotive Vision Processor ARM® A53 and APU Embedded Processors;
- Support for APEX programming based on:
  - APEX Core Framework (ACF) using APEX Kernels m-script wrappers and graphs;
  - APEX Computer Vision (APEXCV) using m-script wrappers over VSDK classes;
- MEX support for APEX emulation in MATLAB®. All classes from VSDK BASE are supported in emulation to allow users fast prototyping in the MATLAB® simulation environment and bit-exact comparison against NXP hardware;
- Compatible with NXP Vision Software Development Kit for S32V2 RTM 1.2.0+HF1+HF2 libraries and build tools;
- Support for NXP SBC-S32V234 and S32V234-EVB. The generated code can be built, downloaded and run on the NXP targets directly from MATLAB®;
- Support for the NXP S32V ISP camera object in MATLAB®. Users can capture video frames in real time from the S32V ISP and process the data directly in MATLAB for various algorithm development;
- Ready-to-run examples from MATLAB® that can run in simulation or on S32V234 for:
  - Faces / Pedestrians / Lanes Detection applications;
  - APEX Kernels examples: e.g., Sobel, Gauss, Rotate, etc.;
  - APEX Computer Vision examples: e.g., remap, resize, rgb2gray, etc.;
  - IO examples: video input, video reader, S32V ISP camera;

MATLAB® Integration
The NXP Vision Toolbox extends the MATLAB® Computer Vision System and Image Processing toolboxes experience by allowing customers to evaluate and use NXP’s Vision SDK RTM 1.2.0 and NXP S32V evaluation boards (EVB and SBC) solutions out-of-the-box with:
- NXP Support Package for S32V234 Online Installer Guide Add-on;
- NXP_Vision_Toolbox_for_S32V234 Package integrated with the MATLAB® environment in terms of installation, documentation, help and examples;

Useful Resources
NXP Vision Toolbox Home page. Other useful documents can be found on the Toolbox Home page Documentation.