Hi,
I am a bit confused about DeepViewRT. What is eval_time? Is it really the run time of the model?
Code: "put_time, post_time, eval_time = modelclient.get_timing_info()"
And what are put_time / post_time? Are they the model loading time, or the warm-up time?
Weilly
Hi, @FelipeGarcia
My platform is the NXP i.MX 8M Plus.
Python Code :
results = modelclient.run(inputs, outputs)
put_time, post_time, eval_time = modelclient.get_timing_info()
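For context, here is a minimal sketch of how these two calls could be combined with a wall-clock measurement. The `run` and `get_timing_info` calls are taken from the snippet above; the stub class below is purely hypothetical and stands in for a real DeepViewRT ModelClient so the example is self-contained:

```python
import time

class StubModelClient:
    """Hypothetical stand-in for DeepViewRT's ModelClient (illustration only)."""
    def run(self, inputs, outputs):
        time.sleep(0.01)  # simulate ~10 ms of inference
        # Dummy (put_time, post_time, eval_time) in ms, not real measurements.
        self._timing = (1.2, 11.5, 10.3)
        return outputs
    def get_timing_info(self):
        return self._timing

modelclient = StubModelClient()
start = time.perf_counter()
results = modelclient.run(["input_tensor"], ["output_tensor"])
wall_ms = (time.perf_counter() - start) * 1000.0
put_time, post_time, eval_time = modelclient.get_timing_info()

# eval_time reports only the model execution inside modelrunner;
# the wall-clock time also includes client-side overhead.
print(f"wall={wall_ms:.1f} ms, put={put_time}, post={post_time}, eval={eval_time}")
```

This makes it easy to see how much of the wall-clock latency is the model itself versus transport and client overhead.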
So, is DeepViewRT faster than TensorFlow Lite?
Weilly
Hello Weily_li,
Regarding 'get_timing_info' - please check {eIQ_Toolkit_Install}\docs\DeepViewRT_User_Manual.pdf -> A.2 ModelClient
Returns:
- put_time: the time it took to send the most recent RTM to the modelrunner application
- post_time: the total time to run the model (includes the POST request and evaluation)
- eval_time: the time it took for the modelrunner application to run the model
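To make the breakdown concrete, a small sketch with dummy millisecond values (illustrative only, not real measurements), assuming post_time includes eval_time as described above:

```python
# Dummy timings in milliseconds (illustrative values, not real measurements).
put_time = 1.4    # sending the RTM model to the modelrunner application
post_time = 12.0  # full POST request, including the evaluation
eval_time = 10.5  # pure model execution inside modelrunner

# Since post_time includes the evaluation, the HTTP/transport overhead is:
overhead = post_time - eval_time
print(f"transport overhead: {overhead:.1f} ms")  # prints "transport overhead: 1.5 ms"
```

So eval_time is the number to look at for the model's own run time; the difference between post_time and eval_time is communication overhead, not inference.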
Can you share the model you used?
I assume the error you see appeared on the host PC. Any message shown on the board?
Please share the logs from %appdata%\eIQ Portal\logs\
Regards