Maximum size of model


1,454 Views
nahan_trogn
Contributor III
Hi, I use the MIMXRT1060-EVK to run the example evkmimxrt1060_tensorflow_lite_kws, and it runs successfully. Then I replaced model_data.h with my own model, which is about 250 KB, but the kit cannot run this new model even though I increased the heap size to 15 MB. So my question is: what is the maximum model size that my kit can support? Thanks in advance.
0 Kudos
Reply
1 Solution
1,425 Views
nahan_trogn
Contributor III

Oh, I have solved this error by adding the necessary operators in MODEL_RegisterOps.

nahan_trogn_0-1617937288500.png

 

View solution in original post

0 Kudos
Reply
4 Replies
1,426 Views
nahan_trogn
Contributor III

Oh, I have solved this error by adding the necessary operators in MODEL_RegisterOps.

nahan_trogn_0-1617937288500.png

 

0 Kudos
Reply
1,419 Views
jeremyzhou
NXP Employee

Hi,
Thanks for your reply and I'm glad to hear that your issue is solved.
TIC

-------------------------------------------------------------------------------
Note:
- If this post answers your question, please click the "Mark Correct" button. Thank you!

 

- We follow threads for 7 weeks after the last post; later replies are ignored.
Please open a new thread and refer to the closed one if you have a related question at a later point in time.
-------------------------------------------------------------------------------

0 Kudos
Reply
1,436 Views
jeremyzhou
NXP Employee

Hi,

Thank you for your interest in NXP Semiconductor products and for the opportunity to serve you.
1) What is the maximum size of model that my kit can support?
-- In my opinion, the maximum size of a runnable model is determined by the memory resources of the MCU (or GPU), not by the TensorFlow Lite library.
You can use the model.summary() command to list the model's weights and estimate the tensor arena size. To be honest, different model architectures have different sizes and numbers of input, output, and intermediate tensors, so it is difficult to know in advance how much memory will be needed. The number does not need to be exact: we can reserve more memory than we need, and this is usually done through trial and error.
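The trial-and-error estimate described above can be sketched on the host before touching the firmware. Below is a minimal plain-Python sketch; the tensor names, shapes, element size, and the 2x safety factor are all illustrative assumptions, not values from this thread (take the real shapes from model.summary()):

```python
from functools import reduce

# Hypothetical tensor shapes (batch, h, w, channels) for an imagined small
# model; replace these with the shapes reported by model.summary().
TENSOR_SHAPES = {
    "input":    (1, 49, 10, 1),
    "conv_out": (1, 25, 5, 64),
    "fc_out":   (1, 12),
}
BYTES_PER_ELEMENT = 1  # int8 after full-integer quantization; 4 for float32

def tensor_bytes(shape, elem_size=BYTES_PER_ELEMENT):
    """Bytes needed to hold one tensor of the given shape."""
    return reduce(lambda a, b: a * b, shape, 1) * elem_size

def estimate_arena_bytes(shapes, safety_factor=2.0):
    """Very rough tensor-arena estimate: sum all tensors, then pad,
    since the real arena also holds scratch buffers and alignment gaps."""
    total = sum(tensor_bytes(s) for s in shapes.values())
    return int(total * safety_factor)

if __name__ == "__main__":
    for name, shape in TENSOR_SHAPES.items():
        print(f"{name}: {tensor_bytes(shape)} bytes")
    print("suggested arena:", estimate_arena_bytes(TENSOR_SHAPES), "bytes")
```

In practice you would start from such an estimate, size the firmware's arena buffer accordingly, and grow it until tensor allocation succeeds, which matches the trial-and-error approach described above.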
TIC


1,433 Views
nahan_trogn
Contributor III

Hi, thanks for the quick response.

I received this error when trying to embed my model into the NXP kit. Is it caused by my configuration or by my TensorFlow model? Or do I need to configure a maximum model size or a number of outputs?

Thanks

nahan_trogn_0-1617877740082.png

 

 

My model:

nahan_trogn_1-1617877959948.png
nahan_trogn_2-1617877992828.png

 

0 Kudos
Reply