Layer-by-Layer inference latency for VGG16 on i.MX 8M with NPU

キャンセル
次の結果を表示 
表示  限定  | 次の代わりに検索 
もしかして: 

Layer-by-Layer inference latency for VGG16 on i.MX 8M with NPU

1,456件の閲覧回数
niliev77
Contributor I

Hi ,
I want to ask a question about published performance data for VGG16 processing (per layer latency for  inferencing with any dataset) for the commercially available i.MX 8M with NPU.
All VGG16 layers, Convolutional and Fully Connected layers, are of interest.

Is this per VGG16 layer data already published  in IEEE, ACM, or other journals and/or conferences ?

Thank you,
Nick Iliev, Ph.D.
Research Associate
AEON lab ECE
Univ. Illinois Chicago

0 件の賞賛
返信
3 返答(返信)

1,447件の閲覧回数
Bio_TICFSL
NXP TechSupport
NXP TechSupport

Hello niliev77,

Yes, it has published many layer to layer interfaces on MX8MPlus but it rely on the publisher, nxp does not have any publication to be shared.

Regards

 

 

0 件の賞賛
返信

1,444件の閲覧回数
niliev77
Contributor I

Hi,

 thank you for the prompt response. Can you share the link (links) to MX8MPlus (forum?) where this layer-to-layer latency data has been published ?

 

thanks and regards,

niliev77

0 件の賞賛
返信

1,403件の閲覧回数
niliev77
Contributor I

 

Hi, 

 

your last reply stated : 

" ... has published many layer to layer interfaces on MX8MPlus ... "  but I can't find one.  Can you send me a link to one such published result ?

 

thanks and regards,

niliev77

0 件の賞賛
返信