eIQ Sample Apps - Object Recognition using Arm NN

Created by Diego Dorta on Jun 19, 2019. Last modified by Vanessa Maegima on Aug 8, 2019.

This Lab 1 explains how to get started with the Arm NN application demo on an i.MX8 board using the eIQ ML Software Development Environment.

Get the source code, available on Code Aurora:

 

Setting Up the Board

Step 1 - Create the following folders and set their permissions as follows:

root@imx8mmevk:# mkdir -p /opt/armnn/model
root@imx8mmevk:# mkdir -p /opt/armnn/data
root@imx8mmevk:# chmod 777 /opt/armnn

 

Step 2 - To easily deploy the demos to the board, get the board's IP address using the ifconfig command, then set the IMX_INET_ADDR environment variable as follows:

$ export IMX_INET_ADDR=<imx_ip>
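
The variable can then stand in for the literal IP address in later commands. For example, a quick reachability check (a minimal sketch, assuming the board and host PC are on the same network):

$ ping -c 1 ${IMX_INET_ADDR}
$ scp <some_file> root@${IMX_INET_ADDR}:/opt/armnn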

 

Setting Up Arm NN

Step 1 - Install TensorFlow on the host PC to prepare the model for inference:

$ apt-get install python-pip
$ pip install tensorflow
$ git clone https://github.com/tensorflow/tensorflow.git

NOTE: You may need root privileges (sudo) to run the apt-get command.
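
To confirm that the installation worked, you can print the installed version (a quick sanity check; the steps below assume a TensorFlow 1.x release):

$ python -c "import tensorflow as tf; print(tf.__version__)"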

 

Step 2 - Generate the graph used to prepare the TensorFlow InceptionV3 model for inference:

$ mkdir checkpoints
$ git clone https://github.com/tensorflow/models.git
$ cd models/research/slim/
$ python export_inference_graph.py --model_name=inception_v3 --output_file=../../../checkpoints/inception_v3_inf_graph.pb
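
If the export succeeds, the inference graph appears in the checkpoints folder created above; a quick way to verify:

$ ls -lh ../../../checkpoints/inception_v3_inf_graph.pb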

 

Step 3 - Download the pre-trained model and prepare it for inference with the generated graph:

$ cd ../../../checkpoints
$ wget http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz -qO- | tar -xvz # download pretrained model
$ python <path_to_tensorflow_repo>/tensorflow/python/tools/freeze_graph.py \
--input_graph=inception_v3_inf_graph.pb --input_checkpoint=inception_v3.ckpt \
--input_binary=true --output_graph=inception_v3_2016_08_28_frozen_transformed.pb \
--output_node_names=InceptionV3/Predictions/Reshape_1

NOTE: <path_to_tensorflow_repo> refers to the cloned TensorFlow path from Step 1.
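
To verify that freezing kept the expected output node, you can list the last few node names in the frozen graph. This is a minimal sketch using the TensorFlow 1.x GraphDef API (matching the pip-installed version above):

$ python - <<'EOF'
import tensorflow as tf

# Parse the frozen graph and print the final node names; the last one
# should be InceptionV3/Predictions/Reshape_1.
graph_def = tf.GraphDef()
with open('inception_v3_2016_08_28_frozen_transformed.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
print([node.name for node in graph_def.node][-3:])
EOF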

 

Step 4 - Copy the prepared model inception_v3_2016_08_28_frozen_transformed.pb to /opt/armnn/model on the board:

$ scp inception_v3_2016_08_28_frozen_transformed.pb root@<imx_ip>:/opt/armnn/model

 

Step 5 - Find three .jpg images on Google: one containing a dog, one with a cat, and one with a shark. Rename them to Dog.jpg, Cat.jpg, and shark.jpg accordingly (case sensitive) and copy them to the /opt/armnn/data folder on the board:

$ scp Dog.jpg Cat.jpg shark.jpg root@<imx_ip>:/opt/armnn/data
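
If an image fails to load or gives odd results, resizing it on the host to InceptionV3's 299x299 input size can help (a hedged example, assuming ImageMagick's convert tool is installed):

$ convert <original>.jpg -resize 299x299! Dog.jpg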

 

NOTE: Download the modified demo from eIQ Sample Apps and put it in the /opt/armnn folder.

 

1 - Arm NN example: File-Based

Step 1 - From user space, enter the armnn folder, which holds the demo files:

root@imx8mmevk:~# cd /opt/armnn
root@imx8mmevk:/opt/armnn#

Here is what the armnn folder should look like:

│...
├── data
│   ├── Cat.jpg
│   ├── Dog.jpg
│   └── shark.jpg
├── model
│   └── inception_v3_2016_08_28_frozen_transformed.pb
│...

 

Step 2 - Run the demo:

root@imx8mmevk:/opt/armnn# TfInceptionV3-Armnn --data-dir=data --model-dir=model
= Prediction values for test #0
Top(1) prediction is 208 with confidence: 93.5791%
Top(2) prediction is 209 with confidence: 2.06653%
Top(3) prediction is 223 with confidence: 0.693557%
Top(4) prediction is 170 with confidence: 0.210818%
Top(5) prediction is 232 with confidence: 0.177887%
= Prediction values for test #1
Top(1) prediction is 283 with confidence: 72.4617%
Top(2) prediction is 282 with confidence: 22.5384%
Top(3) prediction is 286 with confidence: 0.838241%
Top(4) prediction is 288 with confidence: 0.0822042%
Top(5) prediction is 841 with confidence: 0.05987%
= Prediction values for test #2
Top(1) prediction is 3 with confidence: 62.0632%
Top(2) prediction is 4 with confidence: 12.8319%
Top(3) prediction is 5 with confidence: 1.25482%
Top(4) prediction is 154 with confidence: 0.177708%
Top(5) prediction is 149 with confidence: 0.116998%
Total time for 3 test cases: 2.369 seconds
Average time per test case: 789.765 ms
Overall accuracy: 1.000

The TfInceptionV3-Armnn demo runs inference on the three expected input images: one containing a dog, one with a cat, and one with a shark. The output shows the top 5 inference results with their confidence percentages. The higher the confidence, the better the input image fits the expected content.

 

You may get the following result when running the demo:

Prediction for test case 0 ( x ) is incorrect (should be y)
One or more test cases failed

NOTE: ( x ) refers to the ID of the detected object; ( y ) refers to the ID of the expected object.

 

This is not an execution error. It occurs because the TfInceptionV3-Armnn test expects a specific type of dog, cat, and shark to be found, so if a different type/breed of these animals is passed to the test, it reports a failed case.


The expected inputs for this test are:

 

ID    Label             File Name
208   Golden Retriever  Dog.jpg
283   Tiger Cat         Cat.jpg
3     White Shark       shark.jpg

 

The complete list of supported objects can be found here.
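
Each prediction ID from the demo output can be checked against that list. As a sketch, assuming the list is saved locally as labels.txt with the background class on line 1 (so class ID n sits on line n + 1):

$ awk 'NR == 209' labels.txt   # label for ID 208: golden retriever
$ awk 'NR == 284' labels.txt   # label for ID 283: tiger cat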

 

Try passing different .jpg images to the test, including the expected types as well as other types, and see the confidence percentage increase when you match the expected breeds. Remember to rename the images according to the expected input (Dog.jpg, Cat.jpg, shark.jpg; case sensitive).

 

To rename a file, use the mv command:

root@imx8mmevk:/opt/armnn/data# mv <name>.jpg <new_name>.jpg

The next section shows how to modify this demo to identify any object.

 

2 - Arm NN example: MIPI Camera

This section shows how to use the TfInceptionV3-Armnn test from eIQ for general object detection. The list of all objects supported by this model can be found here.

 

Step 1 - Enter the demo directory and run the demo:

root@imx8mmevk:/opt/armnn# python3 camera.py

This runs the TfInceptionV3-Armnn test and parses the inference results to return any recognized object, not only the three expected types of animals.
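
Since camera.py drives the same test binary, the core of its parsing step can be sketched with a one-liner that extracts only the top prediction from the test output (a minimal illustration, not the script's actual code):

root@imx8mmevk:/opt/armnn# TfInceptionV3-Armnn --data-dir=data --model-dir=model 2>&1 | grep "Top(1)"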

 

Step 2 - Show the provided flash cards to the camera and wait for the detection message: Image captured, wait. The flash cards should not be twisted or curved during this step.

 

Step 3 - After a few seconds, the demo returns the detected object.

 

Figure 6. Captured Flash Card

 

NOTE: This can return False if the image was not correctly captured. In this case, try showing the flash card again.

 

 

Go to the next lab: eIQ Sample Apps - Handwritten Digit Recognition.
