MODEL_RunInference in evkmimxrt1060_tensorflow_lite_kws example


681 views
nahan_trogn
Contributor III

Hi,

I am running the evkmimxrt1060_tensorflow_lite_kws example.

After preprocessing the audio, I see two functions called to run inference and produce the output: MODEL_RunInference() and MODEL_ProcessOutput(outputData, &outputDims, outputType, endTime - startTime).
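For reference, my understanding of the call sequence around those two functions is roughly the following; the output-tensor accessor and timer helper names are my assumptions from the demo structure, so they may differ in the SDK sources:

/* Rough sketch of the call order around inference in the kws example.
 * MODEL_GetOutputTensorData() and TIMER_GetTimeInUS() are assumed names
 * based on the demo structure and may differ in your SDK version. */
tensor_dims_t outputDims;
tensor_type_t outputType;
uint8_t *outputData = MODEL_GetOutputTensorData(&outputDims, &outputType); /* assumed accessor */

uint32_t startTime = TIMER_GetTimeInUS(); /* assumed timing helper */
MODEL_RunInference();                     /* run the loaded TFLite graph on the prepared input */
uint32_t endTime = TIMER_GetTimeInUS();

/* Post-process the output tensor; the last argument is the inference time. */
MODEL_ProcessOutput(outputData, &outputDims, outputType, endTime - startTime);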

But the example model uses softmax regression:

[Image: nahan_trogn_0-1618472173929.png]

And my model uses logistic regression:

[Image: nahan_trogn_1-1618472364781.png]

My question is: can I re-use the two functions above for my model? If not, could you suggest something to run inference for my model?
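To make the difference concrete, here is a plain-C sketch (dummy values, not the example's actual MODEL_ProcessOutput() code) of how I interpret the two kinds of output: the softmax model gives one score per keyword and the label is the arg-max, while my logistic regression model gives a single sigmoid probability that is compared against a threshold:

#include <stdio.h>

/* Softmax-style output: one score per class; the predicted label is the arg-max. */
static int argmax(const float *scores, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++) {
        if (scores[i] > scores[best]) {
            best = i;
        }
    }
    return best;
}

int main(void)
{
    /* Dummy softmax output for a 4-keyword model. */
    const float softmaxOut[4] = {0.05f, 0.80f, 0.10f, 0.05f};
    printf("softmax model: predicted class %d\n", argmax(softmaxOut, 4));

    /* Dummy logistic-regression output: a single sigmoid probability,
     * decided against a fixed 0.5 threshold. */
    const float sigmoidOut = 0.73f;
    printf("logistic model: class %d (p = %.2f)\n", sigmoidOut >= 0.5f ? 1 : 0, sigmoidOut);

    return 0;
}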

0 Kudos
Reply
3 Replies

664 views
nahan_trogn
Contributor III

I can use those two functions to predict the output. Thanks for your reply.

0 Kudos
Reply

662 views
kerryzhou
NXP TechSupport

Hi nahan_trogn,

Glad to hear you can reuse them on your side.

If you still have questions about this case, just let me know.

For any new issues, you are welcome to create a new case.

Best Regards,

Kerry

0 Kudos
Reply

673 views
kerryzhou
NXP TechSupport

Hi nahan_trogn,

Did you use the DeepView tool for your pictures?

Regarding your logistic regression model, I think you can try calling those two functions after your model and test it on your side.

If you meet any issues, just let me know.

Best Regards,

Kerry

0 Kudos
Reply