imx8mp: NPU / tensorflow-lite in new BSP (6.1) slower: 3.8 ms vs 2.5 ms

1,942 Views
arno_0
Contributor III

It seems to me that the TensorFlow Lite example delegated to the NPU has become slower.

With the latest BSP (kernel 6.1) it takes around 50% longer (3.8 ms vs. 2.5 ms). Any reason why?
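In case it helps, this is roughly how I measure it with the Python tflite_runtime API and the VX external delegate (a minimal sketch; the model file and delegate path are assumptions and need to be adjusted to the actual image):

```python
# Minimal latency check on the i.MX8MP NPU via the VX external delegate.
# MODEL and DELEGATE are assumptions; adjust them to your BSP image.
import time
import numpy as np
import tflite_runtime.interpreter as tflite

MODEL = "mobilenet_v1_1.0_224_quant.tflite"   # example model
DELEGATE = "/usr/lib/libvx_delegate.so"       # VX delegate shipped with the BSP

vx = tflite.load_delegate(DELEGATE)
interpreter = tflite.Interpreter(model_path=MODEL, experimental_delegates=[vx])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))

interpreter.invoke()  # warm-up run; includes NPU graph compilation, so exclude it

runs = 100
start = time.perf_counter()
for _ in range(runs):
    interpreter.invoke()
print(f"average inference: {(time.perf_counter() - start) * 1000 / runs:.2f} ms")
```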

7 Replies

1,745 Views
arno_0
Contributor III

There is an update to L6.1.1_1.0.1; maybe it contains a fix. If you test it, let us know.


1,918 Views
arno_0
Contributor III

OK, found it in the i.MX Machine Learning User's Guide:

"Known issue: Decreased performance on MobilenetV1, MobileNetV2, VGG16, VGG19, and NasNet Mobile"

But why?

 


1,905 Views
Zhiming_Liu
NXP TechSupport

Hi @arno_0 

Maybe the VX delegate hasn't been updated to be compatible with the new features.

You can use L5.15.X for now; we will fix this known issue in the next release.
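As a way to check whether the slowdown really sits in the delegate path rather than in a CPU fallback, the same model can be timed once on the CPU and once with the VX delegate (a sketch along the lines of the snippet above; model and delegate paths are assumptions):

```python
# A/B check: time the same model on the CPU and with the VX delegate to see
# where the regression sits. Model and delegate paths are assumptions.
import time
import numpy as np
import tflite_runtime.interpreter as tflite

def average_latency_ms(model_path, delegate_path=None, runs=50):
    delegates = [tflite.load_delegate(delegate_path)] if delegate_path else []
    interpreter = tflite.Interpreter(model_path=model_path,
                                     experimental_delegates=delegates)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
    interpreter.invoke()  # warm-up (includes NPU graph compilation on the delegate run)
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.invoke()
    return (time.perf_counter() - start) * 1000 / runs

model = "mobilenet_v1_1.0_224_quant.tflite"          # example model
print("CPU :", average_latency_ms(model))
print("NPU :", average_latency_ms(model, "/usr/lib/libvx_delegate.so"))
```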

1,757 Views
lfant
Contributor II

@Zhiming_Liu : do you have any updates on:

1) the root cause of this performance regression

2) any potential workarounds, if kernel 6.1 needs to be used

3) timeline for a fix of the performance regression


1,611 Views
lfant
Contributor II

I'm answering my own question: a new release (LF6.1.22_2.0.0) is out which claims to include a fix for the decreased performance (not yet verified whether that is actually the case).

@Zhiming_Liu: thanks for the support and the quick response.


1,876 Views
lfant
Contributor II

Hello,

I will also have to use L6.1.x with a performance-critical application that uses the NPU. Can you please elaborate on what "maybe the VX delegate hasn't been updated" exactly means?

When will the release containing the fix that you mentioned be available?


1,896 Views
arno_0
Contributor III
Is a mixed setup possible, i.e. use the 6.1 kernel but keep the older delegates and libs? I need the 6.1 kernel for other things.
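To illustrate what I mean by a mixed setup (purely a sketch; the path for the older libraries is made up, and whether an older delegate build even loads against the 6.1 user space and galcore driver is exactly my question):

```python
# Sketch of the "mixed" idea: point TFLite at an older VX delegate build kept in
# a separate directory on a kernel 6.1 image. The path is hypothetical, and the
# delegate also depends on matching OpenVX user-space libs and the galcore
# driver, so simply swapping the .so may not work at all.
import tflite_runtime.interpreter as tflite

OLD_DELEGATE = "/opt/old-ml-stack/libvx_delegate.so"   # hypothetical location
old_vx = tflite.load_delegate(OLD_DELEGATE)

interpreter = tflite.Interpreter(
    model_path="mobilenet_v1_1.0_224_quant.tflite",    # example model
    experimental_delegates=[old_vx],
)
interpreter.allocate_tensors()
```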