Maximum size of model


nahan_trogn
Contributor III
Hi, I use the MIMXRT1060-EVK to run the example evkmimxrt1060_tensorflow_lite_kws, and it runs successfully. Then I replaced model_data.h with my own model, about 250 KB in size, but the kit can't run this new model, even though I increased the heap size to 15 MB. So the question is: what is the maximum model size that my kit can support? Thanks in advance.
1 Solution
nahan_trogn
Contributor III

Oh, I have solved this error by adding the necessary MODEL_RegisterOps calls.

[image: nahan_trogn_0-1617937288500.png]

 


4 Replies
jeremyzhou
NXP Employee

Hi,
Thanks for your reply, and I'm glad to hear that your issue is solved.
TIC

-------------------------------------------------------------------------------
Note:
- If this post answers your question, please click the "Mark Correct" button. Thank you!

 

- We follow threads for 7 weeks after the last post; later replies are ignored.
Please open a new thread and refer to the closed one if you have a related question at a later point in time.
-------------------------------------------------------------------------------

jeremyzhou
NXP Employee

Hi,

Thank you for your interest in NXP Semiconductor products and for the opportunity to serve you.
1) What is the maximum size of model that my kit can support?
-- In my opinion, the maximum runnable model size is determined by the memory resources of the MCU, not by the TensorFlow Lite library.
You can use the model.summary() command to list the model's weights and estimate the tensor arena size. To be honest, different model architectures have different sizes and numbers of input, output, and intermediate tensors, so it's difficult to know in advance how much memory you'll need. The number doesn't need to be exact: you can reserve more memory than necessary, and the size is usually tuned through trial and error.
TIC


nahan_trogn
Contributor III

Hi, thanks for the quick response.

I received this error when trying to embed my model into the NXP kit. Is this caused by my configuration or by my TensorFlow model? Or do I need to configure a maximum model size or number of outputs?

Thanks

[image: nahan_trogn_0-1617877740082.png]

 

 

My model:

[images: nahan_trogn_1-1617877959948.png, nahan_trogn_2-1617877992828.png]

 
