Layer-by-Layer inference latency for VGG16 on i.MX 8M with NPU


1,453 Views
niliev77
Contributor I

Hi,
I would like to ask about published performance data for VGG16 inference (per-layer latency, with any dataset) on the commercially available i.MX 8M with NPU.
All VGG16 layers, both the convolutional and the fully connected layers, are of interest.

Is this per-layer VGG16 data already published in IEEE, ACM, or other journals and/or conference proceedings?
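
For context: I can measure end-to-end latency on the board myself; it is the published per-layer breakdown I am after. Below is a minimal sketch of that end-to-end measurement, assuming NXP's eIQ TensorFlow Lite runtime with the VX delegate (the model file name and the delegate path are placeholders, not official paths, and may differ per BSP image):

# A minimal sketch, not an official NXP example. Assumes the
# tflite_runtime package from NXP's eIQ image on an i.MX 8M Plus.
# MODEL and DELEGATE below are assumptions; adjust for your setup.
import time
import numpy as np
import tflite_runtime.interpreter as tflite

MODEL = "vgg16_int8.tflite"              # hypothetical quantized VGG16 model
DELEGATE = "/usr/lib/libvx_delegate.so"  # VX delegate path; may vary per BSP

interpreter = tflite.Interpreter(
    model_path=MODEL,
    experimental_delegates=[tflite.load_delegate(DELEGATE)],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])

# Warm-up run: the first invoke on the NPU includes graph compilation,
# so it is not representative of steady-state latency.
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

# Steady-state end-to-end latency over repeated runs.
times_ms = []
for _ in range(50):
    interpreter.set_tensor(inp["index"], dummy)
    t0 = time.perf_counter()
    interpreter.invoke()
    times_ms.append((time.perf_counter() - t0) * 1e3)

print(f"median end-to-end latency: {np.median(times_ms):.2f} ms")

For a per-op breakdown, TFLite's benchmark_model tool accepts --enable_op_profiling=true and --external_delegate_path, which prints a per-op timing table; I am asking here because I am looking for numbers that have been published and peer reviewed, not only my own measurements.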

Thank you,
Nick Iliev, Ph.D.
Research Associate
AEON lab ECE
Univ. Illinois Chicago

0 Kudos
Reply
3 Replies

1,444 Views
Bio_TICFSL
NXP TechSupport

Hello niliev77,

Yes, many layer-by-layer inference results for the i.MX 8M Plus have been published, but they depend on the publisher; NXP itself does not have any publication to share.

Regards
0 Kudos
Reply

1,441 Views
niliev77
Contributor I

Hi,

Thank you for the prompt response. Can you share the link (or links) to where this layer-by-layer latency data for the i.MX 8M Plus has been published?

Thanks and regards,

niliev77

0 Kudos
Reply

1,400 Views
niliev77
Contributor I

 

Hi,

Your last reply stated: "... many layer-by-layer inference results for the i.MX 8M Plus have been published ...", but I can't find any. Can you send me a link to one such published result?

Thanks and regards,

niliev77

0 Kudos
Reply