We are developing an application that uses the NNAPI delegate of TensorFlow Lite 2.2.0 on an i.MX8QM (Linux 5.4.47_2.2.0).
In this application, inference runs separately for each of four cameras.
The problem is that the first Invoke() after initialization is very slow (about 2 to 3 seconds). With four cameras, it therefore takes about 10 seconds in total before inference is up and running.
To hide this cost, we tried running the first Invoke() for each camera on separate threads, but in rare cases inference failed, so we fell back to running them sequentially.
Is there any way to speed up the first Invoke()?