Layer-by-Layer inference latency for VGG16 on i.MX 8M with NPU

niliev77
Contributor I

Hi,
I would like to ask about published performance data for VGG16 inference on the commercially available i.MX 8M with NPU, specifically per-layer latency measured on any dataset. All VGG16 layers, both convolutional and fully connected, are of interest.

Has this per-layer VGG16 data already been published in IEEE, ACM, or other journals and/or conferences?
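
For context, the measurement I have in mind is something like the sketch below: a minimal example, assuming NXP's eIQ TensorFlow Lite runtime with the VX NPU delegate on the i.MX 8M Plus. The delegate path and the quantized VGG16 model file are placeholders, and this only times whole-network inference; a true per-layer breakdown would come from something like the TFLite benchmark_model tool with op profiling enabled.

import time
import numpy as np
import tflite_runtime.interpreter as tflite

# Assumption: NXP eIQ images ship the NPU delegate as libvx_delegate.so;
# the exact path may differ per BSP release.
delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")

# Assumption: a quantized VGG16 TFLite model converted beforehand.
interpreter = tflite.Interpreter(
    model_path="vgg16_quant.tflite",
    experimental_delegates=[delegate],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])

# Warm-up run: the first invoke typically includes NPU graph compilation.
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

# Timed runs (end-to-end latency only, not per-layer).
times_ms = []
for _ in range(50):
    interpreter.set_tensor(inp["index"], dummy)
    t0 = time.perf_counter()
    interpreter.invoke()
    times_ms.append((time.perf_counter() - t0) * 1e3)

print(f"median inference latency: {np.median(times_ms):.2f} ms")

What I am looking for beyond this is the per-layer split of that total, for every convolutional and fully connected layer.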

Thank you,
Nick Iliev, Ph.D.
Research Associate
AEON lab ECE
Univ. Illinois Chicago


Bio_TICFSL
NXP TechSupport

Hello niliev77,

Yes, layer-by-layer results for the i.MX 8M Plus have been published, but those publications belong to their respective publishers; NXP itself does not have a publication to share.

Regards


niliev77
Contributor I

Hi,

Thank you for the prompt response. Can you share the link (or links), perhaps on the MX8MPlus forum, to where this layer-by-layer latency data has been published?

Thanks and regards,

niliev77


niliev77
Contributor I

Hi,

Your last reply stated that layer-by-layer results for the i.MX 8M Plus have been published, but I cannot find any such publication. Can you send me a link to one such published result?

Thanks and regards,

niliev77
