Hi,
I have a question about published performance data for VGG16 inference (per-layer latency, measured with any dataset) on the commercially available i.MX 8M with NPU.
All VGG16 layers, both the convolutional and the fully connected layers, are of interest.
Has this per-layer VGG16 data already been published in IEEE, ACM, or other journals and/or conferences?
Thank you,
Nick Iliev, Ph.D.
Research Associate
AEON lab ECE
Univ. Illinois Chicago