imx8mp: NPU / tensorflow-lite in new BSP (6.1) slower 3.8 vs 2.5ms



1,506 Views
arno_0
Contributor III

For me it seems that the TensorFlow Lite example delegated to the NPU has gotten slower.

With the latest BSP (kernel 6.1) it takes around 50% longer (3.8 ms vs. 2.5 ms). Any reason?
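For anyone wanting to reproduce the comparison between BSPs, a small timing helper like the one below can be used. The helper itself is plain Python; the commented usage is only a sketch and assumes the usual `tflite_runtime` API and VX external delegate path on the i.MX 8M Plus (paths are illustrative, not verified):

```python
import time
import statistics

def median_latency_ms(run_once, warmup=5, iters=50):
    """Time a single-inference callable; return the median latency in ms.

    Warm-up runs are discarded so one-time costs (e.g. graph compilation
    on the NPU) do not skew the comparison between BSP releases.
    """
    for _ in range(warmup):
        run_once()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        run_once()
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(samples)

# Hypothetical usage on the board (delegate path and model name assumed):
#
#   from tflite_runtime.interpreter import Interpreter, load_delegate
#   vx = load_delegate("/usr/lib/libvx_delegate.so")
#   interp = Interpreter(model_path="mobilenet_v1_1.0_224_quant.tflite",
#                        experimental_delegates=[vx])
#   interp.allocate_tensors()
#   print(median_latency_ms(interp.invoke))
```

Running the same script on the 5.15 and 6.1 images makes the 2.5 ms vs. 3.8 ms difference directly comparable.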

7 Replies

1,309 Views
arno_0
Contributor III

There is an update to L6.1.1_1.0.1 - maybe it contains a fix. If you test this, let us know.


1,482 Views
arno_0
Contributor III

OK, found this in the i.MX Machine Learning User's Guide:

"Known issue: Decreased performance on MobilenetV1, MobileNetV2, VGG16, VGG19, and NasNet Mobile"

But why?


1,469 Views
Zhiming_Liu
NXP TechSupport

Hi @arno_0 

Maybe the VX delegate hasn't been updated to be compatible with the new features.

You can use L5.15.x for now; we will fix this known issue in the next release.

1,321 Views
lfant
Contributor II

@Zhiming_Liu : do you have any updates on:

1) the root cause of this performance regression

2) any potential workarounds, if kernel 6.1 needs to be used

3) timeline for a fix of the performance regression


1,175 Views
lfant
Contributor II

I'm answering my own question: a new release (LF6.1.22_2.0.0) is out that claims to include a fix for the decreased performance (I haven't yet verified whether it actually does).

@Zhiming_Liu : thanks for the support, and quick response.


1,440 Views
lfant
Contributor II

Hello,

I will also have to use L6.1.x together with a performance-critical application that uses the NPU: can you please elaborate on what "the VX delegate hasn't been updated" means exactly?

When will the release containing the fix that you mentioned be available?


1,460 Views
arno_0
Contributor III
Is mixed operation possible, i.e. using the 6.1 kernel together with just the older delegates and libraries?

I need the 6.1 kernel for other things.
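On the mixing question: the VX delegate userspace (libvx_delegate.so and the Vivante driver libraries) generally has to match the galcore kernel driver shipped with the BSP, so swapping only the libraries across major releases is risky. A first sanity check is to see which galcore version the 6.1 kernel actually reports. A small helper to pull that out of dmesg output, assuming the typical Vivante driver log line (the exact format is an assumption):

```python
import re

def parse_galcore_version(dmesg_text):
    """Extract the galcore (GPU/NPU kernel driver) version from dmesg output.

    The Vivante driver typically logs a line like
    "Galcore version 6.4.3.p4.398061" at boot; the VX delegate userspace
    must match this driver version, so comparing it between BSPs is the
    first step before attempting any mixed setup.
    """
    m = re.search(r"[Gg]alcore version (\S+)", dmesg_text)
    return m.group(1) if m else None

# Hypothetical usage on the board:
#   import subprocess
#   print(parse_galcore_version(subprocess.run(
#       ["dmesg"], capture_output=True, text=True).stdout))
```

If the versions differ between the 5.15 and 6.1 images, the older delegate libraries are unlikely to work unmodified on the 6.1 kernel.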