Hi everyone,
I’m working on an edge AI project using NXP hardware, specifically the i.MX 8 series boards, and I’ve run into an interesting challenge. While developing models on my AI laptop, I’ve noticed significant discrepancies in model performance when transitioning to the edge device. This is likely due to the hardware constraints of the board, such as limited memory and processing power.
Has anyone developed an efficient workflow for optimizing AI models on an AI laptop while ensuring they run seamlessly on NXP boards? I’ve tried TensorFlow Lite and quantization techniques, but I’m looking for additional insights to improve deployment speed and accuracy.
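For context, here is roughly the post-training full-integer quantization flow I’m running on the laptop before copying the .tflite file over to the board (the model file name, input shape, and sample count are just placeholders for my actual setup):

import numpy as np
import tensorflow as tf

# Load the trained Keras model (file name is a placeholder).
model = tf.keras.models.load_model("my_model.h5")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Calibration data for full-integer quantization; I feed a few hundred
# real preprocessed samples here, the random data is only for illustration.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())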
Also, are there any recommended tools for profiling and debugging models directly on NXP boards? I’m exploring DeepView and eIQ but would appreciate real-world feedback.
Looking forward to your expertise!
Hi,
Please refer to the "i.MX Machine Learning User's Guide", chapters 2.7 and 2.8, which describe how to run the TensorFlow Lite example and the benchmark application on the i.MX 8 EVK board.
The guide is included in the documentation of the latest Embedded Linux for i.MX software release; the latest version is L6.6.36_2.1.0.
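Besides the prebuilt benchmark binary, you can also do a quick sanity check from Python on the board using the tflite_runtime package included in the BSP image. A minimal sketch, assuming the VX delegate is used for acceleration (the delegate library path and the model file name are assumptions, please verify them against the guide for your release):

# Runs on the i.MX 8 board with the Python tflite_runtime from the BSP image.
import numpy as np
import tflite_runtime.interpreter as tflite

# NPU/GPU acceleration through the VX delegate; the library path below is an
# assumption, check the ML User's Guide for your exact BSP release.
delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")

interpreter = tflite.Interpreter(
    model_path="model_int8.tflite",
    experimental_delegates=[delegate],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the quantized model's input tensor.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))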
Hope it helps.
Mike