NXP Model-Based Design Tools for VISION Knowledge Base



1. INTRODUCTION

In this article we discuss the hardware and software prerequisites needed to complete this course successfully. At the end of this session the hardware setup and software environment should be ready for running deep learning examples in simulation and on the real target, the NXP S32V234 vision processor.

This article explains the following topics:
- How to configure the SBC-S32V234 Evaluation Board for machine vision and machine learning applications;
- How to install the NXP Vision Toolbox from the MATLAB Add-On Explorer via MathWorks File Exchange;
- How to generate a valid license from the NXP website, free of cost, to activate the NXP Vision Toolbox in MATLAB;
- How to install the NXP Vision SDK package that contains the GCC and APU compilers, optimized kernels and functions used for HW acceleration on S32V processors;
- How to set up your PC environment to perform a successful cross compilation of the code generated from MATLAB;
- How to prepare an SD card with the NXP official pre-built u-boot and Linux images required to boot up the HW platform using the Vision SDK for S32V;
- How to install and configure the additional MATLAB toolboxes that are required for this course.

Before we start with specific details, please make yourself familiar with the main features supported by the NXP Vision Toolbox.
The NXP Vision Toolbox for S32V234 is an NXP proprietary tool designed to help you:
- Test vision algorithms using NXP Vision SDK functions in the MATLAB environment for complete development, simulation and execution on NXP targets, generating C++ code directly from m-scripts with nxpvt_codegen();
- Program the NXP APEX cores directly from the MATLAB environment using APEX Core Framework graphs;
- Configure the NXP S32V targets to enable code deployment directly from the MATLAB environment and execute vision algorithms on NXP S32V evaluation boards;
- Quickly evaluate NXP solutions using ready-to-run examples derived from the MATLAB Computer Vision System Toolbox and Deep Learning Toolbox;
- Build examples using pretrained/retrained MATLAB convolutional neural networks and deploy them on NXP S32V234 boards with just a few lines of code.

2. SBC-S32V234 Hardware Overview

Throughout this course we are going to use the SBC type of S32V evaluation board. If you have the more expensive S32V234EVB2 version you can still follow these articles, the only difference being the HW setup, which is described in the toolbox documentation and help: the Quick Start Guide is also attached here for your reference. The SBC-S32V234 is a cost-competitive evaluation board and development platform engineered for high-performance, safe, computation-intensive front vision, surround vision, and sensor fusion applications. Developed in partnership with MicroSys and based on the Arm® Cortex®-A53 based S32V processors, it has an efficient form factor while covering most of the use cases available for the S32V234. The SBC-S32V234 is the recommended starting evaluation board for S32V.
The SBC-S32V234 main features are:
- Video input: 2 x MIPI-CSI2; video output: RGB-to-HDMI converter
- Communication: Gigabit Ethernet, 1 x PCIe 2.0, 2 x CAN, 1 x LIN and 1 x UART
- 2 GiB DDR memory, plus an SD card slot and 16 GiB eMMC for NVM
- 10-pin JTAG debug connector
- 12 V power supply connector

For a more comprehensive view and a more detailed description of the hardware specifications, please check the SBC User Manual or visit the NXP webpage for the S32V Data Sheet. The Quick Start Guide for the S32V234-SBC board is available here: https://www.nxp.com/docs/en/quick-reference-guide/Quick-Start-Guide-SBC-S32V234.pdf

For this course on machine learning we are going to use the following peripherals and components, so make sure you have all of them available and are familiar with their intended purpose:
- A class 10 micro SD card of at least 4 GB, which will be configured in the next section to boot up and initialize the platform. It needs to be inserted into the micro SD card slot on the SBC. For the initial configuration you also need an SD card reader to connect it to your PC;
- The S32V-SONYCAM camera, used for capturing the video frames for computer vision processing and deep learning applications. The camera needs to be inserted into the MIPI-A port on the SBC board;
- A CAT-5 Ethernet cable, used for downloading the application via TCP/IP or for getting video frames from the S32V on-board camera to be processed in MATLAB;
- A micro USB cable connecting the SBC-S32V234 to your host PC, used for finding the IP address of the board and other verifications over the UART terminal;
- An LCD monitor connected via HDMI cable to the S32V234 SBC, used to display the computation results;
- A 12 V power supply.

For more details please review the SBC-S32V User Manual.
2. Software Overview

Please read this section carefully, since it contains all sorts of tips and tricks for getting a working setup that can generate code from MATLAB scripts for the NXP S32V and interact with the real target. Due to the complexity of the software used to configure and control this processor, we have to deal with three types of deliveries.

Software delivered by MathWorks:
- MATLAB (we assume you have already installed and configured this). As a hint, please make sure MATLAB is installed in a path without spaces;
- MATLAB Coder, the key component that allows us to generate the C++ code that will be cross-compiled for execution on the target;
- Image Processing and Computer Vision System Toolboxes, which allow us to perform various operations on the data captured from the camera;
- Deep Learning Toolbox, which allows us to use pre-trained networks or re-train and re-purpose them for other scenarios;
- various other support packages that are detailed in the next chapters.

Software delivered by NXP:
- NXP Vision Toolbox, the S32V embedded target support plug-in for the MATLAB environment that enables code generation and deployment. Make sure you install this toolbox in MATLAB R2018b and in a path without spaces;
- NXP Vision SDK, the primary source of optimized functions, kernels, libraries and cross compilers for S32V. Make sure you install this package in a path without spaces, otherwise the cross compilation will fail.

Open-source software:
- the ARM Compute library;
- various programs required to perform generic tasks: the PuTTY UART terminal, an SD card formatter, etc.

By now you should have a better picture of why some manual steps are needed to configure the host PC to address all software dependencies. So let's start the process.

2.1 Installation and Configuration of the NXP Support Package for S32V234

For convenience, a step-by-step Installer Guide is available on MathWorks' File Exchange website.
Open MATLAB and select Get Add-Ons. Once the Add-On Explorer window opens, search for "nxp vision toolbox s32v". Select the NXP Support Package for S32V234 and click the Add button to install the Installer Guide into your MATLAB instance. Wait until the toolbox is installed and then click the Open Folder button. Run the NXP_Support_Package_S32V234 command in your MATLAB console to start the Installer Guide.

The NXP Support Package for S32V234 - Installer Guide user interface contains instructions for downloading, installing and verifying all software components required to develop vision applications with MATLAB for NXP S32V234 automotive vision processors:
- steps to download, install and verify the NXP Vision Toolbox for S32V234;
- steps to generate, activate and verify the license for the NXP Vision Toolbox for S32V234;
- steps to download and install the NXP Vision SDK package;
- steps to configure the software environment for code generation;
- steps to download additional software.

There are two main advantages to using this Installer Guide:
- Each step's completion is automatically checked by the tool. If the action completes successfully, the tool marks it green. If a particular step cannot be verified, the tool issues a warning or error and highlights in red the step that needs more attention from the user.
- Future updates will be made available via this online toolbox. If you wish to keep your software up to date, install it into your MATLAB Add-Ons; once a new update is available, your MATLAB instance will notify you.

The next screen capture shows how the Installer Guide notifies the user of successful or failed actions. At the end of the installation all push buttons should be green.
You can obtain the NXP Vision Toolbox for S32V234 by:
- using the Installer Guide "Go To NXP Download Site" button;
- going directly to your NXP Software Account and downloading the toolbox using this link.

No matter which option is used, the NXP Vision Toolbox for S32V234 installation steps are similar: once you have the toolbox on your PC, double-click the *.mltbx file to launch the MATLAB Add-Ons installer, which automatically starts the installation process. You will be prompted with the following options:
- The NXP Vision Toolbox Installation Wizard dialog will appear. Click "Install" to proceed.
- Indicate acceptance of the NXP Software License Agreement by selecting "I agree to the terms of the license" to proceed.
- Click "OK" to start the MATLAB installation process.

The rest of the process is silent and under MATLAB control. All the files will be automatically copied into the default Add-Ons folder within MATLAB. The default location can be changed prior to installation by changing the Add-Ons path in MATLAB Preferences. After a couple of seconds, the NXP Vision Toolbox should be visible as a new Add-On. More details about the NXP Vision Toolbox can be found by clicking on View Details.

The NXP Vision Toolbox documentation, help and examples are fully integrated with the MATLAB development environment. Get more details by accessing the standard Help and Supplemental Software section. If you are using the Installer Guide, you can check whether the NXP Vision Toolbox is installed correctly in your MATLAB environment by simply clicking the "Verify Vision Toolbox Installation" button. After this step all buttons related to Vision Toolbox Step 1 should be green.

2.2 License Generation and Activation

The NXP Vision Toolbox for S32V234 is available free of charge; however, a valid license is required.
You can obtain the NXP Vision Toolbox for S32V234 license free of charge by:
- using the Installer Guide "Generate License File" button;
- going directly to your NXP Software Account and generating the license using this link.

Perform the following steps to obtain the NXP Vision Toolbox for S32V234 license. (In this section we presume you already logged into your NXP account to download the toolbox prior to the license generation step.) For the first-time log-in, the "Software Terms and Conditions" page will be displayed; click the "I agree" button to consent to the software license agreement.
- Click the "License Keys" tab.
- Verify that the correct tool and version are identified, then check the box and click "Generate".
- Select Disk Serial Number or Ethernet address as the "Node Host ID". If you know neither your Disk Serial Number nor your Ethernet address, check the link available on this page with details about license generation.
- Enter a name for the license to help manage licenses in case you need to use the Vision Toolbox on multiple computers (optional).
- Click the "Generate" button to get the license.
- Verify that the information is correct: toolbox version, expiration date, Node Host ID.
- Either click "Save All", or copy and paste the text into a text editor and save the file as "license.dat" in the "Vision Toolbox installed directory\license" folder.

If you are using the Installer Guide, you can save the license file anywhere and use the "Activate NXP Vision Toolbox" option to make sure the license is copied correctly into the appropriate toolbox location. Check that the license file is installed correctly by using the "Verify Vision Toolbox License" button. If everything is OK, the Installer Guide will confirm the action. Alternatively, you can check from the command line whether the license for the NXP Vision Toolbox is activated by running the command nxpvt_license_check. If there are issues with the license, this command will report the root cause.
2.3 Installation of the NXP Vision SDK and Build Tools

All the code generated by the NXP Vision Toolbox is based on the S32V234 Vision SDK package. This software package is also free of charge and, apart from optimized kernels and libraries for the S32V automotive vision processors, it also contains the build tools to cross-compile the MATLAB-generated code for the Arm Cortex-A53 and APEX cores. You can obtain the S32V234 Vision SDK free of charge by:
- using the Installer Guide "Go To VSDK Download Site" button;
- going directly to the NXP website.

Perform the following steps to obtain and install the S32V234 Vision SDK and NXP build tools. Download the Vision SDK RTM v1.3.0 to your PC; due to the size of the package this might take a while. Once the VisionSDK_S32V2_RTM_1_3_0.exe download is finished, select the "Install VSDK and A53/APU Compilers" option in the Installer Guide UI. Select the exe file and wait for the Vision SDK installer to start. Make sure you follow all the steps and install:
- NXP APU Compiler v1.0 – used to compile the generated code for the APEX vision accelerator;
- NXP ARM GNU Compilers – used to compile the generated code for the Arm Cortex-A53;
- MSYS2 – used to configure the bootable Linux image and to download the actual vision application to the S32V234 evaluation board.

2.4 Environment Setup

The last step required for the software configuration is to set two system or user environment variables, APU_TOOLS and S32V234_SDK_ROOT, that point to the installed tools, e.g.:

APU_TOOLS = C:/NXP/APU_Compiler_v1.0
S32V234_SDK_ROOT = C:/NXP/VisionSDK_S32V2_RTM_1_3_0/s32v234_sdk

Ensure the system or user environment variables corresponding to the compiler(s) you have installed are set to the compiler path, as shown below. The paths shown are for illustration; your installation path may be different. Once the environment variables are set up, you will need to restart MATLAB to use them.
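The two conditions that most often break the build — a variable not being set at all, or a tool path containing spaces — can be checked with a short script. This is a hypothetical helper, not part of the toolbox; the variable names are the ones defined above:

```python
import os

def check_env(var_names):
    """Return a dict describing problems with the given environment variables.

    The cross-compilation fails when these variables are unset or when the
    paths they point to contain spaces, so both conditions are checked.
    """
    problems = {}
    for name in var_names:
        value = os.environ.get(name)
        if value is None:
            problems[name] = "not set"
        elif " " in value:
            problems[name] = "path contains spaces: %r" % value
    return problems

# Variables required for cross compilation, as set in section 2.4.
required = ["APU_TOOLS", "S32V234_SDK_ROOT"]
print(check_env(required))  # an empty dict means everything looks fine
```

Running it before starting MATLAB gives a quick confirmation that a later compilation failure is not caused by the environment.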
An alternative to setting the system paths manually is the "Set the environment variables" option from the NXP Vision Toolbox support package installer. If MATLAB is open with administrator rights, then "Set system wide" can be used to set the system variables. Otherwise (in most cases) use "Set user wide" to set up the environment variables. In order to use convolutional neural networks, the ARM_COMPUTELIB variable should also be set to point to the top of the arm_compute installation. More on the ARM Compute library installation in the following chapter.

2.5 SD Card Configuration

The entire procedure for configuring and booting up the platform is described in the Vision SDK manuals. Unfortunately, not everyone has access to a host PC with a Linux OS to configure an SD card (formatting, u-boot, filesystem, Linux image copy). To do this, you must log into your NXP account at www.nxp.com and download the SD card images:
- From the My account page go to "Software Licensing and Support", then click "View accounts" in the Software accounts panel.
- Click the "Automotive SW – Vision Software" link.
- Select the latest SDK (at the time of writing, SW32V23-VSDK001-RTM-1.3.0).
- Agree with the terms and conditions.
- Then click the SD card image based on Yocto.

Unpack the downloaded file; the 'build_content/v234_linux_build/s32v234sbc/' folder will contain the SBC SD card image, which can be written directly from MATLAB. Follow the next steps to create a bootable SD card for the S32V234 SBC evaluation board. Begin by inserting a micro SD card with at least 4 GB capacity into your host PC running Windows.
Windows should recognize the SD card and assign it a drive letter (e.g. "D:"). From the MATLAB command window run the command:

nxpvt_create_target('sdcard-sbc.tar.bz2', 'D:');

This example assumes you have unpacked the SD card archive downloaded from the NXP website and run the nxpvt_create_target command from the directory containing the sdcard-sbc.tar.bz2 image. This command will format the card and then copy all the files required for booting Linux on the S32V234 SBC from the *.bz2 image to the SD card. The copying process might take a while depending on the SD card class; a progress message is shown during the process. Wait until the copying is finished and the "Image writing done" message is displayed at the MATLAB command prompt. After the copying process is completed, you should see an additional drive mapped on your system (e.g. E:) that cannot be accessed, since it uses an ext3 file system. Check that the initially mapped drive (e.g. D:) contains the Image and s32v234-sbc files. Remove the SD card from the host PC and check the next section for details on how to boot up the S32V234 SBC evaluation board.

S32V234 Evaluation Board Configuration

Before running any example on the S32V234 SBC you need to perform the following steps:
- Insert the micro SD card configured in the previous section into the micro SD card slot.
- Insert the Sony camera into the MIPI-A port. The Sony camera is used for capturing the video frames used for computer vision processing.
- Insert an Ethernet cable into the ETH port. This will be used for downloading the application via TCP/IP.
- Connect the S32V234 SBC to your host PC via a micro USB cable. This is used for finding the IP address of the board.
- Connect an LCD monitor via HDMI cable to the S32V234 SBC.
- Power on the board.

2.6 Setting Up Additional Toolboxes and Utilities

The ARM Compute Library is a collection of low-level functions optimized for Arm CPU and GPU architectures, targeted at image processing, computer vision, and machine learning. It is available free of charge under a permissive MIT open-source license. The library's collection of functions includes:
- basic arithmetic, mathematical, and binary operator functions;
- color manipulation (conversion, channel extraction, and more);
- convolution filters (Sobel, Gaussian, and more);
- Canny edge, Harris corners, optical flow, and more;
- pyramids (such as Laplacians);
- HOG (Histogram of Oriented Gradients);
- SVM (Support Vector Machines);
- H/SGEMM (half- and single-precision general matrix multiply);
- convolutional neural network building blocks (activation, convolution, fully connected, locally connected, normalization, pooling, soft-max).

Download ARM Compute from this site: https://github.com/ARM-software/ComputeLibrary. The toolbox examples were built using version 18.03, so download this one to avoid any backward or forward compatibility issues; scroll down until you find the Binaries section as in the image below. After downloading and unpacking the ARM Compute archive, ARM_COMPUTELIB should point to the top of the installation folder. The ARM Compute library should contain the linux-arm-v8a-neon folder with the correct libraries.

To be able to run the CNN examples in the toolbox, the following MATLAB Add-Ons should be installed:
- Deep Learning Toolbox
- Deep Learning Toolbox™ Model for GoogLeNet Network
- Deep Learning Toolbox™ Model for AlexNet Network
- Deep Learning Toolbox™ Model for SqueezeNet Network
- MATLAB Coder Interface for Deep Learning Libraries

3. Conclusions

At this point you should be able to run all the examples in the NXP Vision Toolbox, including the ones containing convolutional neural networks.
Your setup should now be configured with:
- the NXP Vision SDK package (libraries and compilers);
- the NXP Vision Toolbox MATLAB add-on for the S32V processor;
- a MATLAB environment ready for CNN simulation and code generation;
- an SBC-S32V234 evaluation board ready to run applications from MATLAB.
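As a final sanity check of the ARM Compute setup from section 2.6, a short script can confirm that ARM_COMPUTELIB points at a plausible installation. This is a hypothetical helper; the lib/linux-arm-v8a-neon folder layout is assumed from the 18.03 binary archive and may differ in other releases:

```python
import os

def check_arm_compute(root=None):
    """Sanity-check the ARM Compute installation (hypothetical helper).

    ARM_COMPUTELIB should point to the top of the unpacked archive; the
    lib/linux-arm-v8a-neon subfolder is assumed from the 18.03 binaries.
    """
    root = root or os.environ.get("ARM_COMPUTELIB")
    if not root:
        return "ARM_COMPUTELIB is not set"
    neon_dir = os.path.join(root, "lib", "linux-arm-v8a-neon")
    if not os.path.isdir(neon_dir):
        return "missing " + neon_dir
    return "ok"

print(check_arm_compute())
```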
1. INTRODUCTION

In this article we discuss how convolutional neural networks are designed and maintained in the NXP Vision Toolbox and in MATLAB, and offer a brief introduction to the concepts used in modern-day machine learning and deep neural networks. This course will cover the following topics:
- CNN architecture: concept, definition and implementation
- Perceptron: a short introduction and the problems it solves
- Multi-layer perceptrons
- CNN training: various pre-trained CNN architectures

Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on the layers used in artificial neural networks. Learning can be supervised, semi-supervised or unsupervised. Deep learning is primarily the study of multi-layered neural networks, spanning a great range of model architectures such as deep neural networks, deep belief networks, recurrent neural networks and convolutional neural networks. It is not our intention to provide full coverage of deep learning in this article; that would be unrealistic. If you are new to this topic, we advise you to start with:
- Short introduction: https://www.mathworks.com/discovery/deep-learning.html
- MATLAB deep learning training: https://www.mathworks.com/learn/tutorials/deep-learning-onramp.html

2. CNN Architecture

A convolutional neural network (CNN or ConvNet) is one of the most popular algorithms for deep learning, a type of machine learning in which a model learns to perform classification tasks directly from images, video, text, or sound. CNNs are particularly useful for finding patterns in images to recognize objects, faces, and scenes. Such algorithms learn directly from image data, using patterns to classify images and eliminating the need for manual feature extraction. CNNs provide an optimal architecture for image recognition and pattern detection.
Combined with advances in parallel computing, CNNs are a key technology underlying new developments in automated driving and facial recognition.

2.1 Feature Detection

A convolutional neural network can have tens or hundreds of layers that each learn to detect different features of an image. Filters are applied to each training image at different resolutions, and the output of each convolved image is used as the input to the next layer. The filters can start as very simple features, such as brightness and edges, and increase in complexity up to features that uniquely define the object. Like other neural networks, a CNN is composed of an input layer, an output layer, and many hidden layers in between. These layers perform operations that alter the data with the intent of learning features specific to the data. Three of the most common CNN layers are:
- Convolution: puts the input images through a set of convolutional filters, each of which activates certain features from the images. The convolution layer slides a filter matrix over the array of image pixels and performs a convolution operation to obtain a convolved feature map.
- Rectified linear unit (ReLU): allows for faster and more effective training by mapping negative values to zero and maintaining positive values. This is sometimes referred to as activation, because only the activated features are carried forward into the next layer.
- Pooling: simplifies the output by performing nonlinear downsampling, reducing the number of parameters that the network needs to learn.

These operations are repeated over tens or hundreds of layers, with each layer learning to identify different features. After learning features in many layers, the architecture of a CNN shifts to classification. The next-to-last layer is a fully connected layer that outputs a vector of K dimensions, where K is the number of classes that the network will be able to predict. This vector contains the probabilities for each class of any image being classified.
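The three layer types just described can be sketched in pure Python on a tiny grayscale image. This is a minimal illustration of the math, not the toolbox implementation; the image and the vertical-edge kernel are made up for the example (like most CNN frameworks, the "convolution" is actually a cross-correlation):

```python
def conv2d(image, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in CNN layers):
    slide the kernel over the image, accumulating element-wise products
    into a feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            out[i][j] = acc
    return out

def relu(fmap):
    """Rectified linear unit: negative activations are mapped to zero."""
    return [[max(0.0, v) for v in row] for row in fmap]

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keep only the largest activation in
    each size x size window, shrinking the feature map."""
    return [[max(fmap[i + di][j + dj]
                 for di in range(size) for dj in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

# A 5x5 image with a bright-to-dark vertical edge, filtered with a
# vertical-edge kernel, rectified, then pooled.
image = [[1, 1, 1, 0, 0]] * 5
kernel = [[1, 0, -1],
          [1, 0, -1],
          [1, 0, -1]]
fmap = max_pool(relu(conv2d(image, kernel)))
```

The convolution responds strongly where the edge is, ReLU keeps only the positive responses, and pooling condenses the 3x3 feature map into a single activation.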
The final layer of the CNN architecture uses a classification layer such as softmax to provide the classification output. The softmax activation function is often placed at the output layer of a neural network. It is commonly used in multi-class learning problems where a set of features can be related to one of K classes. For example, in the CIFAR-10 image classification problem, given a set of pixels as input, we need to classify whether a particular sample belongs to one of ten available classes: cat, dog, airplane, etc. Its equation is simple: we just compute the normalized exponential function of all the units in the layer. Intuitively, the softmax squashes a vector of size K to values between 0 and 1. Furthermore, because it is a normalization of the exponential, the sum of the whole vector equals 1. We can then interpret the output of the softmax as the probabilities that a certain set of features belongs to a certain class. The classification part is done using a multi-layer neural network, which consists of a number of layers made up of single-layer perceptrons, covered in the next paragraph.

2.2 Single-Layer Perceptron

CNNs, like neural networks, are made up of neurons with learnable weights and biases. Each neuron receives several inputs, takes a weighted sum over them, passes it through an activation function and responds with an output. The basic building block of neural networks is the perceptron, depicted in the figure below. The perceptron consists of weights (including a special weight called the bias), a summation processor and an activation function; in some formulations the bias is drawn as an extra input node. All the inputs are individually weighted, added together and passed into the activation function. Every activation function (or non-linearity) takes a single number and performs a certain fixed mathematical operation on it.
There are several activation functions encountered in practice:
- Sigmoid: takes a real-valued input and squashes it to the range (0, 1): σ(x) = 1 / (1 + exp(−x))
- tanh: takes a real-valued input and squashes it to the range [-1, 1]: tanh(x) = 2σ(2x) − 1
- ReLU (Rectified Linear Unit): takes a real-valued input and thresholds it at zero, replacing negative values with zero: f(x) = max(0, x)

The main function of the bias is to provide every node with a trainable constant value (in addition to the normal inputs that the node receives). In a nutshell, a perceptron is a very simple learning machine. It can take in a few inputs, each of which has a weight to signify how important it is, and generate an output decision of "0" or "1". More specifically, it is a linear classification algorithm, because it uses a line to determine an input's class. However, when combined with many other perceptrons, it forms an artificial neural network.

2.3 Multi-Layer Perceptron

A multi-layer perceptron (MLP) contains one or more hidden layers (apart from one input and one output layer). While a single-layer perceptron can only learn linear functions, a multi-layer perceptron can also learn non-linear functions. All connections have weights associated with them, and each layer has its own bias. The process by which a multi-layer perceptron learns is called the backpropagation algorithm. Initially all the edge weights are randomly assigned. For every input in the training dataset, the neural network is activated and its output is observed. This output is compared with the desired output that we already know, and the error is "propagated" back to the previous layer. The error is noted and the weights are "adjusted" accordingly. This process is repeated until the output error is below a predetermined threshold.
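The activation functions listed above, together with the softmax from section 2.1, can be sketched in plain Python. This is only an illustration of the formulas; note that tanh is computed through the identity tanh(x) = 2σ(2x) − 1 given above:

```python
import math

def sigmoid(x):
    # squashes any real input into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # squashes any real input into (-1, 1); uses tanh(x) = 2*sigmoid(2x) - 1
    return 2.0 * sigmoid(2.0 * x) - 1.0

def relu(x):
    # thresholds at zero: negative inputs become 0
    return max(0.0, x)

def softmax(scores):
    # Normalized exponential: exponentiate each score, then divide by the
    # sum so the outputs are non-negative and add up to 1, making them
    # interpretable as class probabilities.  Shifting by max(scores)
    # avoids overflow without changing the result.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

For example, softmax([2.0, 1.0, 0.1]) yields a probability vector that sums to 1, with the largest probability assigned to the largest score.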
The process of "adjusting" the weights makes use of the gradient descent algorithm for minimizing the error function, which we will not go into in detail.

3. CNN Implementation with MATLAB and NXP Vision Toolbox

MATLAB provides a convenient and easy way to train neural networks from scratch using its Deep Learning Toolbox. However, this task may be daunting and may require a lot of computational resources and time. To train a deep network from scratch, you must gather a very large labeled data set and design a network architecture that will learn the features and model. This is good for new applications, or applications that will have a large number of output categories. It is a less common approach because, with the large amount of data and the rate of learning, these networks typically take days or weeks to train. MATLAB also provides a series of ready-to-use pre-trained CNNs which can be customized and adapted through transfer learning, a topic we will cover in a later chapter. There are a few standard CNNs that one can use to classify a range of common objects (such as a cat, a dog, a screwdriver, an apple and so on). The NXP Vision Toolbox focuses on three of these pre-trained models for exemplification, but it also provides support for using others and for tailoring the above-mentioned ones. The NXP Vision Toolbox provides a way to create convolutional networks using pre-trained models from MATLAB, allowing smooth and simple usage in simulation algorithms as well as straightforward deployment on the NXP S32V234 boards. It is also possible to run the CNNs in MATLAB to classify images taken directly from the MIPI-CSI camera attached to the board, with minimal configuration steps: one only needs to know the IP address of the board and assign a port for the connection to and from the PC.
There are three easy ways to use the NXP Vision Toolbox for image classification using CNNs:
- run the algorithms in simulation mode in MATLAB using the PC's webcam;
- run the algorithms in simulation mode in MATLAB using the camera attached to the S32V234 board;
- run the algorithms on the hardware.

We will get into the specifics of each of these interaction modes in a further document. In the next paragraphs we provide a short description of the models that are available in the toolbox as examples.

3.1 GoogLeNet - Pretrained CNN

Google's GoogLeNet project was one of the winning teams in the 2014 ImageNet Large-Scale Visual Recognition Challenge (ILSVRC), an annual competition to measure improvements in machine vision technology. GoogLeNet is a pretrained convolutional neural network that is 22 layers deep. GoogLeNet has been trained on over a million images and can classify images into 1000 object categories (such as keyboard, coffee mug, pencil, and many animals). The network has learned rich feature representations for a wide range of images. The network takes an image as input, and then outputs a label for the object in the image together with the probabilities for each of the object categories. The input size for the image is 224x224x3, but one can provide any image, since the toolbox will convert it to the appropriate size. Using just the command in the above Command Window, we were able to get hold of the GoogLeNet pretrained model in MATLAB. One can inspect the network layers in detail by calling the analyzeNetwork function. As you can see, the network is made up of a number of layers connected as a DAGNetwork (Directed Acyclic Graph Network).
DAGNetwork properties:
    Layers          - The layers of the network
    Connections     - The connections between the layers

DAGNetwork methods:
    predict         - Run the network on input data
    classify        - Classify data with a network
    activations     - Compute specific network layer activations
    plot            - Plot a diagram of the network

The layers are pretty much standard, except for some Inception layers specific to GoogLeNet. The Inception layers of GoogLeNet consist of six convolution layers with different kernel sizes and one pooling layer. To create a GoogLeNet convolutional neural network object with the NXP Vision Toolbox, one should obtain the .mat file saved from the GoogLeNet object in MATLAB, as well as the classes within it. This can be done using the nxpvt.save_cnn_to_file(cnnObj) wrapper provided in the toolbox. As you can see, the googlenet.mat and googlenet_classes.mat files are created, which can then be used to create the nxpvt.CNN wrapper object. The creation, simulation and deployment of these objects and algorithms will be discussed in detail in a future document.

3.2 AlexNet - Pretrained CNN

In 2012, AlexNet significantly outperformed all prior competitors and won the ILSVRC challenge by reducing the top-5 error from 26% to 15.3%. The second-place top-5 error rate, which was not achieved by a CNN variation, was around 26.2%. The image input size for this network in MATLAB is 227x227x3. Creating the MATLAB-provided alexnet SeriesNetwork object is done with the following command. To take a peek at the network layers, use the analyzeNetwork command as above; this will display the layers with their corresponding weights and biases. Unlike GoogLeNet, the AlexNet object is of type SeriesNetwork. A series network is one where the layers are arranged one after the other, with a single input and a single output.
SeriesNetwork properties:
    Layers                  - The layers of the network

SeriesNetwork methods:
    predict                 - Run the network on input data
    classify                - Classify data with a network
    activations             - Compute specific network layer activations
    predictAndUpdateState   - Predict on data and update network state
    classifyAndUpdateState  - Classify data and update network state
    resetState              - Reset network state

The nxpvt.CNN object is again created with the help of the generated .mat files.

3.3 SqueezeNet - Pretrained CNN

SqueezeNet is a deep neural network released in 2016, developed by researchers at DeepScale, the University of California, Berkeley, and Stanford University. In designing SqueezeNet, the authors' goal was to create a smaller neural network with fewer parameters that can more easily fit into computer memory and be transmitted over a computer network. SqueezeNet achieves the same accuracy as AlexNet with 50x fewer weights. To achieve that, SqueezeNet relies on the following key ideas:
- Replace 3×3 filters with 1×1 filters: a 1×1 filter has 9 times fewer parameters than a 3×3 one.
- Decrease the number of input channels to the 3×3 filters: the number of parameters of a convolutional layer depends on the filter size, the number of channels, and the number of filters.
- Downsample late in the network so that convolution layers have large activation maps: this might sound counterintuitive, but since the model should be small, we need to get the best possible accuracy out of it. The later we downsample the data (e.g., by using strides > 1), the more information is retained for the layers in between, which increases accuracy.

The building block of SqueezeNet is called a fire module, which contains two layers: a squeeze layer and an expand layer. A SqueezeNet stacks a series of fire modules and a few pooling layers.
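The first two ideas follow directly from the weight count of a convolution layer, k·k·c_in per filter times c_out filters. A quick sketch of the arithmetic in Python (channel counts are illustrative):

```python
def conv_params(k, c_in, c_out, bias=True):
    """Weights of a k-by-k convolution layer: k*k*c_in per filter,
    c_out filters, plus one bias per filter if present."""
    return k * k * c_in * c_out + (c_out if bias else 0)

# Ignoring biases, swapping 3x3 filters for 1x1 cuts the weights by 9x,
# and halving the input channels of a 3x3 layer halves its weights.
w3 = conv_params(3, 64, 64, bias=False)
w1 = conv_params(1, 64, 64, bias=False)
print(w3, w1, w3 // w1)                       # 36864 4096 9
print(conv_params(3, 32, 64, bias=False))     # 18432
```

This is exactly the saving the fire module exploits: the squeeze layer's 1×1 filters shrink the channel count that the expand layer's 3×3 filters have to see.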
The squeeze layer and the expand layer keep the same feature map size; the former reduces the depth to a smaller number while the latter increases it. The MATLAB implementation is a DAGNetwork, just like GoogLeNet. Using analyzeNetwork on a squeezenet object will describe its contents. The NXP Vision Toolbox CNN object over squeezenet is created using the .mat files generated with the nxpvt.save_cnn_to_file command.

This was a general introduction highlighting the models used by the NXP Vision Toolbox for Machine Learning on the NXP boards. To find out more about how to use the CNN examples provided in the NXP Vision Toolbox, please stay tuned for our next presentations.

4. CNN Comparison & Conclusions

Pretrained networks have different characteristics that matter when choosing a network to apply to your problem. The most important characteristics are network accuracy, speed, and size, and choosing a network is generally a tradeoff between them. A network is Pareto efficient if no other network is better on all the metrics being compared, in this case accuracy and prediction time. The set of all Pareto-efficient networks is called the Pareto frontier: it contains all the networks that are not worse than some other network on both metrics. In the MathWorks comparison plot of accuracy versus prediction time, all networks except AlexNet, VGG-16, VGG-19, Xception, NASNet-Mobile, ShuffleNet, and DenseNet-201 are on the Pareto frontier. Since in this training we are going to deploy the CNN on an embedded system, where memory footprint and processing power are limited, we've selected the three CNNs with the smallest size and prediction time requirements: AlexNet, SqueezeNet and GoogLeNet. For a more detailed comparison please visit: https://www.mathworks.com/help/deeplearning/ug/pretrained-convolutional-neural-networks.html
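The Pareto-frontier selection above is easy to reproduce programmatically. A minimal sketch in Python; the accuracy and timing numbers are purely illustrative, not measured values (see the MathWorks comparison page for real figures):

```python
def pareto_frontier(nets):
    """Keep the networks that no other network dominates, where dominating
    means at least as good on both metrics (higher accuracy, lower
    prediction time) and strictly better on at least one."""
    def dominated(a, b):  # True if b dominates a; points are (acc, time)
        return (b[0] >= a[0] and b[1] <= a[1]) and (b[0] > a[0] or b[1] < a[1])
    return {n for n, p in nets.items()
            if not any(dominated(p, q) for m, q in nets.items() if m != n)}

# Illustrative (accuracy, prediction-time) pairs, NOT benchmark results.
nets = {"squeezenet": (0.58, 4.0), "googlenet": (0.66, 12.0),
        "alexnet":    (0.57, 6.0), "vgg16":     (0.70, 60.0),
        "resnet50":   (0.75, 25.0)}
print(sorted(pareto_frontier(nets)))
# -> ['googlenet', 'resnet50', 'squeezenet']
```

With these made-up numbers, alexnet drops out because squeezenet is both more accurate and faster, and vgg16 drops out because resnet50 dominates it; every remaining network represents a genuine accuracy/speed tradeoff.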
Single Layer Perceptron

Before getting into the hardware-specific details, this course will cover a bit of info about how CNNs work and what they are used for. CNNs are widely used in image and video recognition applications, so they have certainly stirred up interest in the automotive world. CNNs, like all neural networks, are made up of neurons with learnable weights and biases. Each neuron receives several inputs, takes a weighted sum over them, passes it through an activation function and responds with an output.

The basic building block of neural networks is the perceptron, depicted in the figure below. The perceptron consists of weights, a summation processor and an activation function. There is also an extra input node called the bias, whose function is to provide every node with a trainable constant value (in addition to the normal inputs that the node receives). All the inputs are individually weighted, added together and passed into the activation function. Every activation function (or non-linearity) takes a single number and performs a certain fixed mathematical operation on it. Several activation functions are encountered in practice:
- Sigmoid: takes a real-valued input and squashes it to the range between 0 and 1: σ(x) = 1 / (1 + exp(−x))
- tanh: takes a real-valued input and squashes it to the range [-1, 1]: tanh(x) = 2σ(2x) − 1
- ReLU (Rectified Linear Unit): takes a real-valued input and thresholds it at zero, replacing negative values with zero: f(x) = max(0, x)

Multi Layer Perceptron

A Multi Layer Perceptron (MLP) contains one or more hidden layers (apart from one input and one output layer). While a single layer perceptron can only learn linear functions, a multi layer perceptron can also learn non-linear functions.
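The perceptron described above fits in a few lines of code. A minimal sketch in Python; the weights, bias and inputs are made up for illustration:

```python
import math

def sigmoid(x):
    # squashes any real input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh_via_sigmoid(x):
    # uses the identity tanh(x) = 2*sigmoid(2x) - 1 from the list above
    return 2.0 * sigmoid(2.0 * x) - 1.0

def relu(x):
    # thresholds at zero: negative values are replaced with zero
    return max(0.0, x)

def perceptron(inputs, weights, bias, activation=sigmoid):
    """Weighted sum of the inputs plus the bias, passed through an
    activation function: the single-neuron forward pass."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(s)

out = perceptron([1.0, 0.5], weights=[0.4, -0.2], bias=0.1, activation=relu)
print(out)  # max(0, 1.0*0.4 + 0.5*(-0.2) + 0.1) = 0.4
```

Stacking such neurons into layers, each layer feeding the next, gives exactly the multi layer perceptron structure discussed next; only the weights and biases differ from neuron to neuron.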
All connections have weights associated with them, and each layer has its own bias. The process by which a Multi Layer Perceptron learns is called the Backpropagation algorithm. Initially, all the edge weights are randomly assigned. For every input in the training dataset, the neural network is activated and its output is observed. This output is compared with the desired output that we already know, and the error is "propagated" back to the previous layer, where the weights are "adjusted" accordingly. This process is repeated until the output error falls below a predetermined threshold. The "adjusting" of the weights makes use of the Gradient Descent algorithm for minimizing the error function, which we will not go into details about.

Convolutional Neural Networks
Product Release Announcement
Automotive Microcontrollers and Processors
NXP Vision Toolbox for S32V234 – 1.1.0

Austin, Texas, USA, April 9th, 2019

The Automotive Microcontrollers and Processors Model-Based Design Tools Team at NXP Semiconductors is pleased to announce the release of the Vision Toolbox for S32V234 1.1.0 RFP. This release supports Computer Vision and Machine Learning application prototyping with MATLAB® for NXP's S32V234 Automotive Vision Processors.

Download Location: http://www.nxp.com/webapp/swlicensing/sso/downloadSoftware.sp?catid=NXP_VISION_TOOLBOX

Activation link: http://www.nxp.com/webapp/swlicensing/sso/downloadSoftware.sp?catid=NXP_VISION_TOOLBOX

Technical Support
NXP Vision Toolbox for S32V234 issues are tracked through the NXP Model-Based Design Tools Community space.

Release Content (updates relative to previous version)
A quick guided tour of the Vision Toolbox main features supported in this release can be watched here: Machine Vision Algorithm development using Vision Toolbox | NXP
- Compatible with NXP Vision Software Development Kit for S32V2 RTM 1.3.0 libraries and build tools;
- Cascade classifiers support, trained from OpenCV using HAAR and LBP features;
- Kalman filter support;
- Machine Learning support with MATLAB® pretrained CNN/Deep Learning Toolbox and code generation for the S32V234 ARM A53;
- Ready-to-run examples that can be executed in MATLAB® simulation or directly on S32V234 HW:
  - Faces / Pedestrians / Lanes Detection applications;
  - CNN SqueezeNet / GoogLeNet / AlexNet applications;

MATLAB® Integration
The NXP Vision Toolbox extends the MATLAB® Computer Vision System, Image Processing and Deep Learning toolboxes experience by allowing customers to evaluate and use NXP's Vision SDK RTM 1.3.0 and NXP S32V evaluation boards (EVB and SBC) out-of-the-box with:
- NXP Support Package for S32V234 Online Installer Guide Add-on;
- NXP_Vision_Toolbox_for_S32V234 Package integrated with the MATLAB® environment in terms of installation, documentation, help and examples;

Target Audience
This release is intended for technology demonstration, evaluation purposes, and computer vision and machine learning prototyping with S32V234 microprocessors and S32V SBC & EVB boards.

Useful Resources
NXP Vision Toolbox Home page
Other useful documents can be found on the Toolbox Home page Documentation
Product Release Announcement
Automotive Microcontrollers and Processors
NXP Vision Toolbox for S32V234 - 2018.R1

Austin, Texas, USA, November 19, 2018

The Automotive Microcontrollers and Processors Model-Based Design Tools Team at NXP Semiconductors is pleased to announce the release of the Vision Toolbox for S32V234 2018.R1. This release supports computer vision application prototyping in MATLAB® for NXP's S32V234 Automotive Vision Processors.

Download Location: http://www.nxp.com/webapp/swlicensing/sso/downloadSoftware.sp?catid=VISION-MATLAB_v2018.R1

Activation link: http://www.nxp.com/webapp/swlicensing/sso/downloadSoftware.sp?catid=VISION-MATLAB_v2018.R1

Technical Support
NXP Vision Toolbox for S32V234 issues are tracked through the NXP Model-Based Design Tools Community space.

Release Content
- Automatic C++ code generation from MATLAB® m-scripts for the S32V234 Automotive Vision Processor ARM® A53 and APU embedded processors;
- Support for APEX programming based on:
  - APEX Core Framework (ACF) using APEX Kernels m-script wrappers and graphs;
  - APEX Computer Vision (APEXCV) using m-script wrappers over VSDK classes;
- MEX support for APEX emulation in MATLAB®. All classes from VSDK BASE are supported in emulation to allow users fast prototyping in the MATLAB® simulation environment and bit-exact comparison against NXP hardware;
- Compatible with NXP Vision Software Development Kit for S32V2 RTM 1.2.0+HF1+HF2 libraries and build tools;
- Support for NXP SBC-S32V234 and S32V234-EVB. The generated code can be built, downloaded and run on the NXP targets directly from MATLAB®;
- Support for the NXP S32V ISP camera object in MATLAB®. Users can capture video frames in real time from the S32V ISP and process the data directly in MATLAB for various algorithm development;
- Ready-to-run examples from MATLAB® that can run in simulation or on the S32V234:
  - Faces / Pedestrians / Lanes Detection applications;
  - APEX Kernels examples: e.g., Sobel, Gauss, Rotate, etc.;
  - APEX Computer Vision examples: e.g., remap, resize, rgb2gray, etc.;
  - IO examples: video input, video reader, S32V ISP camera;

MATLAB® Integration
The NXP Vision Toolbox extends the MATLAB® Computer Vision System and Image Processing toolboxes experience by allowing customers to evaluate and use NXP's Vision SDK RTM 1.2.0 and NXP S32V evaluation boards (EVB and SBC) out-of-the-box with:
- NXP Support Package for S32V234 Online Installer Guide Add-on;
- NXP_Vision_Toolbox_for_S32V234 Package integrated with the MATLAB® environment in terms of installation, documentation, help and examples;

Useful Resources
NXP Vision Toolbox Home page
Other useful documents can be found on the Toolbox Home page Documentation