I need to convert a TensorFlow model so that it will run on an LPC43S57 chip using CMSIS-NN.
What I have so far:
Created a small, simple network using Python 3.7.9 and Keras 2.4.3, and converted it to ONNX using onnxmltools 1.7.0.
I downloaded the "Glow Installer for Windows" from your website, installed it, and ran the model-compiler command; I received the output folder as expected.
I have two questions:
To summarize what we discussed via messenger: Glow does not convert a model into CMSIS-NN code. Instead, it compiles a model into a machine-executable binary, and as part of that process the compiler can, if the -use-cmsis argument is specified, make use of the CMSIS-NN library to speed up execution. The end result of running the Glow compiler is a binary file that is executed as part of an MCUXpresso SDK project.
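A hypothetical model-compiler invocation for this flow might look like the following; -use-cmsis is the option mentioned above, while the other flag names and the cortex-m4 target are assumptions you should check against your Glow release's user guide:

```shell
# Sketch of an ahead-of-time Glow compile with CMSIS-NN enabled.
# Flag spellings and target names may vary between Glow releases.
model-compiler \
    -model=model.onnx \
    -backend=CPU \
    -target=arm -mcpu=cortex-m4 \
    -use-cmsis \
    -emit-bundle=bundle_out
# bundle_out/ then contains the compiled object file and headers
# that get linked into an MCUXpresso SDK project.
```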
If the model is not doing image classification, the Glow model-profiler tool (see the "Compile a bundle for a quantized model" section) can be used to generate a quantization profile. This tool is not included in the current NXP Glow release, but will be part of the next NXP Glow release early next year.
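The profile-then-compile flow could be sketched as follows; the exact -input-dataset format and the profile file name are assumptions based on Glow's documented tools, so verify them against the user guide for your release:

```shell
# Hypothetical two-step quantization flow with Glow's tools.
# 1) Run the model over representative input data and record a
#    quantization profile:
model-profiler \
    -model=model.onnx \
    -input-dataset=input,rawbin,file,dataset.txt \
    -dump-profile=profile.yml

# 2) Compile a quantized bundle using the recorded profile:
model-compiler \
    -model=model.onnx \
    -load-profile=profile.yml \
    -emit-bundle=bundle_out
```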
I should note that ARM provides scripts (code_gen.py) that can convert a Caffe model into CMSIS-NN API calls.
Finally, it may be possible to run your model using Glow on an LPC53S57 chip. A Glow porting guide is available that should help with the porting process. The key constraint is the size of your model, which determines whether it fits within the memory limits of that device.