Request for features in KV and i.MX RT Toolboxes


923 Views
Maciek
Contributor V

I have 2 questions regarding features of the KV and i.MX RT toolboxes:

1. Is it possible to add CAN support in the next release of the Kinetis KV toolbox? (The KV4 and KV5 have a CAN peripheral.)

2. Is it possible to add support for the eIQ libraries/workflow in the i.MX RT toolbox?
This would be very important - it's one of the most interesting features of this family. Having at least one option (there are three workflows available in eIQ) for incorporating a classifier into a Simulink project (to simulate and generate code) would give us an alternative to the Machine Learning/Deep Learning Toolboxes from MathWorks. The availability of the eIQ library for a cheap (and easy-to-use) MCU family was the primary reason we started looking at the i.MX RT processors. Many companies use Model-Based Design and are at the same time familiar with the popular open-source machine learning frameworks.

Please let us know what your plans are. We are moving into machine learning solutions more and more...

Regards

Maciek

0 Kudos
4 Replies
884 Views
nxa11767
NXP Employee
NXP Employee

Hi Maciek,

 

Q1: Yes, we will add CAN support in the next release of the Kinetis toolbox.

Q2: The next i.MX RT release will be in mid-April, and we have a couple of features to cover until then, but we can try to squeeze in eIQ support as well (I have logged it in our system and planned it for the next release). Could you please detail your usage of this library (e.g., what you expect to be able to use from the toolbox)? It will help us focus on those aspects when implementing the support.

Thank you,

Alexandra

0 Kudos
871 Views
Maciek
Contributor V

Hi @nxa11767 ,

the design and training of the classifier is done outside MATLAB and the NXP toolbox (using, for example, Keras/TensorFlow...). Having a ready-to-use (trained) model, it would be best to be able to do both:
- test the classifier in simulation: test the behavior of the whole application designed in Simulink (the application that contains the classifier)
- generate code for the whole application (including the classifier) and load it into the MCU

In the MathWorks workflow we can design an ANN (using a MATLAB script or graphical tools, in a similar way to Keras/TensorFlow) and then 'export' it as a single block into a Simulink model. Then it can be used in simulation and code generation like any other supported block. I can imagine a similar solution in which the NXP toolbox supplies a generic 'classifier' block: we click on it and choose a TensorFlow model saved on disk (in one form or another). After that, the block shows inputs and outputs according to the chosen classifier and can be used in simulation and code generation.
But really, any way that enables us to incorporate a trained classifier for simulation and code generation would be fine.
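
To make this concrete, below is a minimal sketch (in Python, assuming a standard TensorFlow 2.x installation; the network size, the random training data and the file name are purely illustrative) of the kind of trained-model artifact such a 'classifier' block would have to load:

import numpy as np
import tensorflow as tf

# toy training data: 4 input features -> 3 classes (illustrative only)
x_train = np.random.rand(1000, 4).astype(np.float32)
y_train = np.random.randint(0, 3, size=(1000,))

# small classifier designed and trained outside MATLAB/Simulink
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, verbose=0)

# export the trained model to a TensorFlow Lite file; the TensorFlow Lite
# workflow in eIQ consumes this kind of file on the target
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("classifier.tflite", "wb") as f:
    f.write(converter.convert())

A block that takes such a classifier.tflite file, exposes the corresponding inputs and outputs in Simulink, and maps onto the eIQ inference engine during code generation would cover both points above.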

Regards
Maciek

0 Kudos
859 Views
brianmckay
Contributor III

Hi Maciek,

I suspect the best (maybe only) way to deploy this on a processor today is via manual integration in the IDE - NN + algorithm. You need C code for each piece and then have to connect them together.

MathWorks is actively working on building a more cohesive workflow for both simulation and code generation of NNs + algorithms. It is still a work in progress, but we will take some strides forward in our 21a and 21b releases. I think MathWorks is probably the right entity to provide the foundational pieces (simulation, codegen); once that is done, the NXP MBD Toolboxes can leverage them for deployment, as they do today for algorithms from MATLAB, Simulink, and Stateflow. Obviously NXP will also, in parallel, have their own workflows for customers who don't use MathWorks tools - which will likely still require some level of manual IDE integration.

At least that is my take, based on what I know.

@Maciek  Can I ask what application you are running, and how big your NN is for KVx class processors?  I'd like to feed these requirements to MathWorks teams working on this topic. If you'd prefer to reach out directly, that would be great - we can exchange contact info and set up a discussion.  

Cheers,

-Brian

 

0 Kudos