My Environment
Hardware: NXP i.MX8MP EVK A01
Software: Android version 10
I am using NNAPI with TensorFlow Lite to load ssd_mobilenet and run inference on Android. After the model is loaded, inference on the first frame takes much longer; from the second frame onward, the inference time is normal.
Is this reasonable?
// Calculate inference time (wall-clock milliseconds)
long startMs = System.currentTimeMillis();
mTflite.runForMultipleInputsOutputs(inputs, outputs); // inputs: Object[], outputs: Map<Integer, Object>
long endMs = System.currentTimeMillis();
long inferenceTimeMs = endMs - startMs;
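For context, here is a sketch of how the inputs and outputs for runForMultipleInputsOutputs are typically wired up for an SSD-style detector. The 300x300 float32 input and the four output tensors with 10 detections are assumptions based on the standard ssd_mobilenet TFLite model, not details from my code; check your own model with mTflite.getOutputTensor(i).shape().

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.HashMap;
import java.util.Map;

// Assumed 1x300x300x3 float32 input (4 bytes per value).
ByteBuffer inputBuffer = ByteBuffer.allocateDirect(1 * 300 * 300 * 3 * 4)
        .order(ByteOrder.nativeOrder());
// ... fill inputBuffer with normalized RGB pixel data from the Bitmap ...

float[][][] locations = new float[1][10][4]; // bounding boxes
float[][] classes = new float[1][10];        // class indices
float[][] scores = new float[1][10];         // confidence scores
float[] numDetections = new float[1];        // number of valid detections

Object[] inputs = {inputBuffer};
Map<Integer, Object> outputs = new HashMap<>();
outputs.put(0, locations);
outputs.put(1, classes);
outputs.put(2, scores);
outputs.put(3, numDetections);

mTflite.runForMultipleInputsOutputs(inputs, outputs);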
inference time       | ssd_mobilenet.tflite | ssd_mobilenet_quant.tflite
CPU (1 thread)       | 361 ms               | 307 ms
CPU (4 threads)      | 140 ms               | 112 ms
NNAPI (first frame)  | 2234 ms              | 12722 ms
NNAPI (second frame) | 1101 ms              | 27 ms
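For reference, NNAPI is normally enabled by attaching the TFLite NNAPI delegate to the interpreter. A minimal sketch, assuming the standard org.tensorflow.lite Java API (this setup is not shown above; modelBuffer stands in for the mapped .tflite file):

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.nnapi.NnApiDelegate;

// Route supported ops through the NNAPI accelerator.
NnApiDelegate nnApiDelegate = new NnApiDelegate();
Interpreter.Options options = new Interpreter.Options();
options.addDelegate(nnApiDelegate);
Interpreter mTflite = new Interpreter(modelBuffer, options);

// When finished with the interpreter, release the delegate:
// nnApiDelegate.close();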
@Geo,
There is an app note on warm-up time on NXP.com:
https://www.nxp.com/docs/en/application-note/AN12964.pdf
Check it out for more detail.
-Manish
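The first NNAPI run includes a one-time graph compilation for the accelerator, which is the warm-up time the app note describes, so a long first frame is expected. A common workaround is to run one throwaway inference on dummy data right after loading the model, so the first real frame already sees steady-state latency. A minimal sketch, reusing the (assumed) input size and outputs map from the earlier example:

// Warm-up pass: absorbs the one-time NNAPI compilation cost at load time.
ByteBuffer dummyInput = ByteBuffer.allocateDirect(1 * 300 * 300 * 3 * 4) // assumed input shape
        .order(ByteOrder.nativeOrder());
mTflite.runForMultipleInputsOutputs(new Object[]{dummyInput}, outputs);
// Later real frames now measure the normal ("second frame") inference time.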