Hello, I am developing an application on the i.MX 93 EVK (dual-core, 1.7 GHz).
I am looking for recommendations for on-device ASR, TTS, and LLM models with relatively low compute requirements.
Hello @QuantumPath
On i.MX93, we have enabled Whisper ASR (tiny, base, small) and Moonshine ASR (tiny and base).
We will deliver Whisper ASR first, as a Voice plugin through GStreamer, by mid-July.
For TTS, we have enabled VITS. For LLMs, we can run small models such as Danube 0.5B.
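As a rough illustration (not the eIQ deliverable itself), a Danube-class 0.5B model quantized to GGUF can be run on the two Cortex-A55 cores with a generic runtime such as llama-cpp-python; the model file name, quantization level, and context size below are placeholders, not a validated configuration:

```python
# Illustrative sketch: running a small GGUF-quantized LLM (e.g. Danube 0.5B)
# on the i.MX 93 Cortex-A55 cores with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="h2o-danube-0.5b-chat.Q4_K_M.gguf",  # placeholder file name
    n_ctx=1024,    # small context window to keep RAM usage low
    n_threads=2,   # i.MX 93 has two Cortex-A55 cores
)

out = llm("Q: Name one planet in the solar system.\nA:", max_tokens=32, stop=["\n"])
print(out["choices"][0]["text"].strip())
```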
In parallel, we have a complete eIQ Gen AI flow pipeline (Wake word, ASR, LLM, RAG, TTS) running on i.MX95 here: https://github.com/nxp-appcodehub/dm-eiq-genai-flow-demonstrator?tab=readme-ov-file
Hey Laurent_P, can you share how you implemented the Whisper tiny.en TFLite model on the NPU of the i.MX 93? I've been looking for this for ages, and it would really help me with development. I was able to convert the model to TFLite INT8, but the NPU doesn't fully support all Whisper operations, so I have to use the float32 model on the CPU. Is it even possible to convert the model and run it on the NPU?
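For context, my conversion looks roughly like the sketch below: a standard post-training INT8 quantization, where the SavedModel export of Whisper tiny.en and the representative log-mel input are simplifications of my setup, not an official recipe:

```python
# Minimal post-training INT8 conversion sketch (my setup; paths, shapes and
# the calibration data are placeholders).
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Whisper tiny takes 80-bin log-mel features over 3000 frames (30 s of audio);
    # random tensors here only stand in for real calibration audio.
    for _ in range(100):
        yield [np.random.randn(1, 80, 3000).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("whisper_tiny_en_savedmodel")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer ops so the model is at least eligible for the NPU;
# I relax this when the converter rejects unsupported ops.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("whisper_tiny_en_int8.tflite", "wb") as f:
    f.write(converter.convert())
```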
Thank you!
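P.S. In case it clarifies the question, below is how I would expect the NPU path to look on the i.MX 93, i.e. loading the quantized model through the Ethos-U external delegate shipped in the BSP. The delegate library path is taken from my image and may differ on other builds; is this the right approach?

```python
# Sketch of the NPU path I expect (not working for me yet); the delegate
# library path comes from my eIQ/Yocto image and is an assumption.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

ethosu = load_delegate("/usr/lib/libethosu_delegate.so")  # path may differ per BSP

interpreter = Interpreter(
    model_path="whisper_tiny_en_int8.tflite",
    experimental_delegates=[ethosu],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
# Dummy input only to exercise the graph; real use would feed log-mel features.
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
print(interpreter.get_output_details()[0]["shape"])
```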