Hi @flee-elemind,
It sounds like you're running into a common issue when combining real-time scheduling with on-device inference. A Glow AOT-compiled bundle executes the whole network as one long function call with no internal scheduling hooks, so the inference task holds the CPU until the call returns, and tasks at the same or lower priority are starved for the duration.
One possible solution is to manually insert task yield points into the Glow-generated code, as sketched below. This lets the FreeRTOS scheduler switch to other ready tasks between layers instead of waiting for the whole inference to finish. The downside is that you have to modify (and re-modify, after every re-export) the generated code, which might not be ideal.
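Here is a minimal sketch of that idea. It assumes the bundle is compiled from C and that you can identify the per-layer sections in the generated source; the function and layer names (`model`, `layer_conv2d_0`, etc.) are placeholders, not actual Glow output.

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"

/* Placeholder prototypes standing in for the per-layer routines inside the
 * generated bundle; real Glow output names and signatures will differ. */
extern void layer_conv2d_0(uint8_t *constantWeight, uint8_t *mutableWeight,
                           uint8_t *activations);
extern void layer_relu_1(uint8_t *mutableWeight, uint8_t *activations);
extern void layer_fc_2(uint8_t *constantWeight, uint8_t *mutableWeight,
                       uint8_t *activations);

int model(uint8_t *constantWeight, uint8_t *mutableWeight,
          uint8_t *activations)
{
  layer_conv2d_0(constantWeight, mutableWeight, activations);
  taskYIELD();  /* hand the CPU to other ready tasks of equal priority */

  layer_relu_1(mutableWeight, activations);
  taskYIELD();

  layer_fc_2(constantWeight, mutableWeight, activations);
  /* ... remaining layers, with a yield after each expensive one ... */
  return 0;
}
```

Note that taskYIELD() only switches to ready tasks of *equal* priority; if the tasks being starved run at a lower priority than the inference task, you would need a blocking call such as vTaskDelay() instead, or lower the inference task's priority.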
Another approach is to split the inference into smaller units of work that the scheduler can interleave with your other tasks, for example by running one layer (or a small group of layers) per scheduling slice and blocking in between, as in the sketch below. This reduces the blocking effect, but it will increase end-to-end inference time because of the added context-switch overhead.
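A rough sketch of that structure is shown here. It assumes you can expose per-layer entry points from the bundle (Glow does not emit a `layer_fn` table by default, so this table, `NUM_LAYERS`, and `activations` are hypothetical), and it drives the network one layer at a time from a dedicated FreeRTOS task.

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"

#define NUM_LAYERS 12  /* hypothetical layer count */

/* Hypothetical table of per-layer entry points; the bundle would have to be
 * generated or wrapped to provide something like this. */
typedef void (*layer_fn_t)(uint8_t *activations);
extern layer_fn_t layer_fn[NUM_LAYERS];
extern uint8_t activations[];

static void vInferenceTask(void *pvParameters)
{
  (void)pvParameters;
  for (;;) {
    /* Sleep until an ISR or another task requests an inference. */
    ulTaskNotifyTake(pdTRUE, portMAX_DELAY);

    for (int i = 0; i < NUM_LAYERS; ++i) {
      layer_fn[i](activations);
      /* Block for one tick between layers so lower-priority tasks also get
       * CPU time; this is the task-switching overhead mentioned above. */
      vTaskDelay(pdMS_TO_TICKS(1));
    }
  }
}
```

Keeping the inference task at a low priority and letting it block between layers is usually simpler than creating one task per layer, and it avoids the memory cost of many extra task stacks.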
Lastly, you could consider using a separate core for the inference task, if your hardware supports it. This would allow the inference to run in parallel with your other tasks. I understand you're trying to avoid using the HiFi core due to power consumption, but it might be worth considering if the blocking issue is severe.
I hope this helps!