eIQ Toolkit provision for target device selection (MCU/CPU/GPU/NPU) while creating a model


_asif_muhammed_
Contributor III

Hi All,

I am experimenting with the eIQ Toolkit to learn about machine learning and about the capabilities of both the i.MX 8M Plus and the eIQ Toolkit itself.

I have created three different models, selecting CPU, GPU, and NPU respectively as the target, to understand the difference in performance between them. All other settings, such as the dataset, training epochs, and the remaining configuration, were kept identical.

I am using an i.MX 8M Plus based development board supplied by TechNexion. I tried to benchmark these three models, but I am getting the same inference time for every model when running on the CPU and on the GPU.

When run on the CPU, benchmarking gave 35467.5 microseconds as the average inference time for each model.

When run on the NPU, benchmarking gave 134796 microseconds as the average inference time for each model.

While searching for an answer, I came across a document, a snippet of which I am attaching below.

 

snippet.png

 

This document is dated June 2022. So is the target device selection feature available now, or am I doing something wrong?

Thank you in advance.

 

PS: The benchmarking tool I used is the one supplied by TensorFlow Lite; it came with the Yocto build (/usr/bin/tensorflow-lite-2.9.1/examples/benchmark_model).
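
For reference, my benchmark invocations were roughly along the following lines (a sketch only; the model file names are placeholders, and /usr/lib/libvx_delegate.so is the VX delegate path that I believe the NXP Yocto image installs for offloading to the GPU/NPU):

# CPU run (default, no delegate)
/usr/bin/tensorflow-lite-2.9.1/examples/benchmark_model --graph=model_cpu.tflite --num_runs=50

# NPU run via the external VX delegate (delegate path assumed from the Yocto BSP)
/usr/bin/tensorflow-lite-2.9.1/examples/benchmark_model --graph=model_npu.tflite --num_runs=50 --external_delegate_path=/usr/lib/libvx_delegate.so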

eIQ Toolkit 1.5.2 with eIQ Portal 2.6.10

@anthony_huereca Can you please shed some light on this?
