eIQ Sample Apps - Object Recognition using Arm NN


This lab explains how to get started with the Arm NN application demo on an i.MX 8 board using the eIQ ML Software Development Environment.

Get the source code available on Code Aurora:

Setting Up the Board

Step 1 - Create the following folders and grant them permissions as follows:

root@imx8mmevk:~# mkdir -p /opt/armnn/model
root@imx8mmevk:~# mkdir -p /opt/armnn/data
root@imx8mmevk:~# chmod -R 777 /opt/armnn

Step 2 - To easily deploy the demos to the board, get the board's IP address using the ifconfig command, then set the IMX_INET_ADDR environment variable as follows:

$ export IMX_INET_ADDR=<imx_ip>
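
Once set, the variable can be used in place of the literal address in the scp commands from the following sections, for example (the file name here is just a placeholder):

$ scp <some_file> root@${IMX_INET_ADDR}:/opt/armnn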

Setting Up Arm NN

Step 1 - Install TensorFlow on host PC for preparing the model for inference:

$ apt-get install python-pip
$ pip install tensorflow
$ git clone https://github.com/tensorflow/tensorflow.git

NOTE: You may need root privileges (sudo) for running the apt-get command.

Step 2 - Generate the graph used to prepare the TensorFlow InceptionV3 model for inference:

$ mkdir checkpoints
$ git clone https://github.com/tensorflow/models.git
$ cd models/research/slim/
$ python export_inference_graph.py --model_name=inception_v3 --output_file=../../../checkpoints/inception_v3_inf_graph.pb

 

Step 3 - Download the pre-trained model and prepare it for inference with the generated graph:

$ cd ../../../checkpoints
$ wget http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz -qO- | tar -xvz # download pretrained model
$ python <path_to_tensorflow_repo>/tensorflow/python/tools/freeze_graph.py \
--input_graph=inception_v3_inf_graph.pb --input_checkpoint=inception_v3.ckpt \
--input_binary=true --output_graph=inception_v3_2016_08_28_frozen_transformed.pb \
--output_node_names=InceptionV3/Predictions/Reshape_1

NOTE: <path_to_tensorflow_repo> refers to the cloned TensorFlow path from Step 1.
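
Optionally, you can sanity-check the frozen graph before copying it to the board. This is a minimal sketch, assuming a TensorFlow 1.x install on the host; it only confirms that the expected output node survived freezing:

import tensorflow as tf  # TensorFlow 1.x assumed

graph_def = tf.GraphDef()
with open("inception_v3_2016_08_28_frozen_transformed.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# The output node passed to freeze_graph.py should be present
print(any(n.name == "InceptionV3/Predictions/Reshape_1" for n in graph_def.node))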

Step 4 - Copy the prepared model inception_v3_2016_08_28_frozen_transformed.pb to /opt/armnn/model on the board:

$ scp inception_v3_2016_08_28_frozen_transformed.pb root@<imx_ip>:/opt/armnn/model

Step 5 - Find three .jpg images on Google: one containing a dog, one a cat, and one a shark. Rename them to Dog.jpg, Cat.jpg, and shark.jpg accordingly (case sensitive) and copy them to the /opt/armnn/data folder on the device:

$ scp Dog.jpg Cat.jpg shark.jpg root@<imx_ip>:/opt/armnn/data

NOTE: Download the modified demo from eIQ Sample Apps and put it in the /opt/armnn folder.

1 - Arm NN example: File-Based

Step 1 - From user space, enter the /opt/armnn folder, which holds the demo files:

root@imx8mmevk:~# cd /opt/armnn
root@imx8mmevk:/opt/armnn#

Here is what the /opt/armnn folder should look like:

│...
├── data
│   ├── Cat.jpg
│   ├── Dog.jpg
│   └── shark.jpg
├── model
│   └── inception_v3_2016_08_28_frozen_transformed.pb
│...

Step 2 - Run the demo:

root@imx8mmevk:/opt/armnn# TfInceptionV3-Armnn --data-dir=data --model-dir=model
= Prediction values for test #0
Top(1) prediction is 208 with confidence: 93.5791%
Top(2) prediction is 209 with confidence: 2.06653%
Top(3) prediction is 223 with confidence: 0.693557%
Top(4) prediction is 170 with confidence: 0.210818%
Top(5) prediction is 232 with confidence: 0.177887%
= Prediction values for test #1
Top(1) prediction is 283 with confidence: 72.4617%
Top(2) prediction is 282 with confidence: 22.5384%
Top(3) prediction is 286 with confidence: 0.838241%
Top(4) prediction is 288 with confidence: 0.0822042%
Top(5) prediction is 841 with confidence: 0.05987%
= Prediction values for test #2
Top(1) prediction is 3 with confidence: 62.0632%
Top(2) prediction is 4 with confidence: 12.8319%
Top(3) prediction is 5 with confidence: 1.25482%
Top(4) prediction is 154 with confidence: 0.177708%
Top(5) prediction is 149 with confidence: 0.116998%
Total time for 3 test cases: 2.369 seconds
Average time per test case: 789.765 ms
Overall accuracy: 1.000

The TfInceptionV3-Armnn demo runs the inference on the three expected input images: one containing a dog, one with a cat and one with a shark. The output shows the top 5 inference results and their confidence percentage. The higher the confidence, the better the input image fits the expected content.
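
For reference, the Top(5) list is just the five largest values of the model's softmax output. A minimal post-processing sketch (illustrative only; the probabilities below are randomly generated, not real inference results):

import numpy as np

def top_k(probabilities, k=5):
    indices = np.argsort(probabilities)[::-1][:k]  # largest first
    return [(int(i), 100.0 * float(probabilities[i])) for i in indices]

probabilities = np.random.dirichlet(np.ones(1001))  # fake 1001-class softmax output
for rank, (class_id, confidence) in enumerate(top_k(probabilities), 1):
    print("Top(%d) prediction is %d with confidence: %g%%" % (rank, class_id, confidence))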

You may get the following result when running the demo:

Prediction for test case 0 ( x ) is incorrect (should be y)
One or more test cases failed

NOTE: ( x ) refers to the ID of the detected object; ( y ) refers to the ID of the expected object.

This is not an execution error. It occurs because the TfInceptionV3-Armnn test expects a specific type of dog, cat, and shark to be found, so if a different type/breed of these animals is passed to the test, it reports a failed case.


The expected inputs for this test are:

ID    Label             File Name
208   Golden Retriever  Dog.jpg
283   Tiger Cat         Cat.jpg
3     White Shark       shark.jpg

The complete list of supported objects can be found here.

Try passing different .jpg images to the test, including the expected types as well as others, and watch the confidence percentage increase when you match the expected breeds. Remember to rename the images according to the expected input (Dog.jpg, Cat.jpg, shark.jpg; case sensitive).

To rename a file, use the mv command:

root@imx8mmevk:/opt/armnn/data# mv <name>.jpg <new_name>.jpg

The next section shows how to modify this demo to identify any object.

2 - Arm NN example: MIPI Camera

This section shows how to use the TfInceptionV3-Armnn test from eIQ for general object recognition. The list of all objects supported by this model can be found here.

Step 1 - Enter the demo directory and run the demo:

root@imx8mmevk:/opt/armnn# python3 camera.py

This runs the TfInceptionV3-Armnn test and parses the inference results to return any recognized object, not only the three expected types of animals.
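
As a rough illustration of that parsing idea (this is not the actual camera.py; the labels file, its format, and the paths are assumptions), the Top(1) line can be matched with a regular expression and mapped to a human-readable label:

import re
import subprocess

# Hypothetical labels file: one ImageNet label per line, line index == class ID
with open("labels.txt") as f:
    labels = [line.strip() for line in f]

result = subprocess.run(
    ["TfInceptionV3-Armnn", "--data-dir=data", "--model-dir=model"],
    capture_output=True, text=True)

match = re.search(r"Top\(1\) prediction is (\d+) with confidence: ([\d.]+)%",
                  result.stdout)
if match:
    class_id, confidence = int(match.group(1)), float(match.group(2))
    print("Detected: %s (%.1f%%)" % (labels[class_id], confidence))
else:
    print(False)  # capture/parsing failed; try showing the flash card again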

Step 2 - Show the provided flash cards to the camera and wait for the detection message: Image captured, wait. The flash cards should not be twisted or curved during this step.

Step 3 - After a few seconds, the demo returns the detected object.

NOTE: This can return False if the image was not correctly captured. In this case, try showing the flash card again.


Go to the next lab: eIQ Sample Apps - Handwritten Digit Recognition.

Comments

Step 3 returns an error:

Traceback (most recent call last):
  File "../tensorflow/tensorflow/python/tools/freeze_graph.py", line 60, in <module>
    from tensorflow.python.training import py_checkpoint_reader
ImportError: cannot import name 'py_checkpoint_reader'

Hi adeokar@iu.edu,

I will take a look at this, but could you please tell me which TensorFlow version you have on your host machine?

If you're targeting L4.14.98, you should use TensorFlow 1.12; for L4.19.35, you should use TensorFlow 1.13.
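
To check which version is installed on the host:

$ python -c "import tensorflow as tf; print(tf.__version__)"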

diegodorta FYI.

Hi,

Is there a similar application for TensorFlow Lite? I am using imx8mqevk 4.19.35.

Regards

Hi Vanessa,

Is there any example Qt-based video app available for the eIQ ML applications?

Hi Manivannan,

We only use Qt windowing support with OpenCV for showing images or camera playback; this is done for most of our demos. The current list of supported demos can be found at https://community.nxp.com/docs/DOC-343785 .

Thanks,
Vanessa

Hi Vanessa,

I have built the imx-gpu-sdk for imx8mqevk, and in /opt/ I find:

imx8mqevk:/opt/imx-gpu-sdk# ls
Console GLES2 GLES3 OpenCL OpenVG Window

Do all these apps run on the GC7000 GPU (Vivante driver)? How can I verify that these apps are running on the GPU?

I want a reference to any app that opens the camera on imx8mqevk and runs on the GPU, irrespective of the graphics library it uses. Can you share a link to such an app to test on imx8mqevk, or share the closest available app that I can use as a reference to develop my own?

Regards

Mani

Hi manizillion@gmail.com,

You can use gputop to check whether your application is running on the GPU. For any other questions about GPU support on i.MX, please post to the i.MX Processors community.

Thanks,
Vanessa

Thanks Vanessa,

There is no gputop command in my image:

imx8mqevk:/opt/imx-gpu-sdk/OpenCL/SoftISP# gputop
-sh: gputop: command not found

How do I add it to the image? Also, can you please suggest a profiling tool for imx8mqevk?

Regards

Mani

You need to add the gputop package to your Yocto build. For profiling, please post your question to https://community.nxp.com/community/imx 
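
For reference, a minimal sketch of that change, assuming a pre-Kirkstone Yocto release (underscore _append syntax), added to conf/local.conf before rebuilding the image:

IMAGE_INSTALL_append = " gputop"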

@diego_dorta  @vanessa_maegima 

I have an issue with “Setting Up Arm NN” Step 2 - it gives an error.

This is the command:

$ python export_inference_graph.py --model_name=inception_v3 --output_file=../../../checkpoints/inception_v3_inf_graph.pb

On the first try, I got an error about the missing tf_slim package, so I installed it with apt install and then ran the command again:

$ python export_inference_graph.py --model_name=inception_v3 --output_file=../../../checkpoints/inception_v3_inf_graph.pb

Now I get the following error:

ati@ati-VirtualBox:~/models/research/slim$ python export_inference_graph.py --model_name=inception_v3 --output_file=../../../checkpoints/inception_v3_inf_graph.pb
Traceback (most recent call last):
  File "export_inference_graph.py", line 162, in <module>
    tf.app.run()
  File "/home/ati/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/ati/.local/lib/python2.7/site-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/home/ati/.local/lib/python2.7/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "export_inference_graph.py", line 128, in main
    FLAGS.dataset_dir)
  File "/home/ati/models/research/slim/datasets/dataset_factory.py", line 59, in get_dataset
    reader)
  File "/home/ati/models/research/slim/datasets/imagenet.py", line 187, in get_split
    labels_to_names = create_readable_names_for_imagenet_labels()
  File "/home/ati/models/research/slim/datasets/imagenet.py", line 96, in create_readable_names_for_imagenet_labels
    assert num_synsets_in_ilsvrc == 1000
AssertionError

 

Looking further, I think I have found something. export_inference_graph.py tries to download this file:

https://raw.githubusercontent.com/tensorflow/models/master/research/inception/inception/data//imagen...

which is not available anymore. See models/research/slim/datasets/imagenet.py, line 93.

I found a copy of the requested files elsewhere. If I change base_url to 'https://git.byr.ac.cn/fdt/models/-/raw/2164c8dbbfdba1e0e27703f84bf5cf995b044d79/inception/inception/...' the Python code will create checkpoints/inception_v3_inf_graph.pb.
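
In other words, the workaround is a one-line edit (illustrative; <mirror_url> stands for the truncated mirror address quoted above):

# models/research/slim/datasets/imagenet.py, around line 93
base_url = '<mirror_url>'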

 

Now if I continue with the how-to, the next Python command exits with this error:

ImportError: cannot import name 'py_checkpoint_reader'

So I tried something else. In the "tensorflow" folder, one should check out branch "r1.9" or "r2.0":

$ cd tensorflow
$ git checkout r2.0

The "r2.1" and later branches didn't work for me.

Now this very long command succeeded for me:

$ cd …/checkpoints
$ python ../tensorflow/tensorflow/python/tools/freeze_graph.py --input_graph=inception_v3_inf_graph.pb --input_checkpoint=inception_v3.ckpt --input_binary=true --output_graph=inception_v3_2016_08_28_frozen_transformed.pb --output_node_names=InceptionV3/Predictions/Reshape_1

It created inception_v3_2016_08_28_frozen_transformed.pb.

 

I don't have my board with me to test right now, but it may be that this is now resolved. I will test on Monday.
