TensorFlow® Lite, ArmNN, and GLOW are popular open-source machine learning inference frameworks for mobile and IoT devices. In this session, you’ll learn how to use TensorFlow Lite and ArmNN on NXP i.MX 8 MPU-class devices running Linux, and how to take advantage of not only Arm® Cortex®-A CPU cores but also the dedicated on-chip GPU and NPU accelerators. For NXP i.MX RT MCU-class devices, we will introduce two approaches: 1) TensorFlow Lite for Microcontrollers with CMSIS-NN kernel implementations optimized for Cortex-M cores, and 2) GLOW, a neural network compiler that generates code ahead of time for Cortex-M cores and the on-chip DSP.
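To give a flavor of the MPU-side workflow covered in the session, the sketch below shows how TensorFlow Lite inference is typically run from Python on an i.MX 8 Linux board, with an optional hardware delegate to offload work from the Cortex-A cores to the GPU/NPU. The model path and the delegate library path are placeholders for illustration (`libvx_delegate.so` is assumed here as the accelerator delegate shipped in NXP's Linux BSP); this is a minimal example, not the exact code used in the session.

```python
import os
import numpy as np
import tflite_runtime.interpreter as tflite

MODEL_PATH = "model.tflite"                  # placeholder model file
DELEGATE_PATH = "/usr/lib/libvx_delegate.so"  # assumed delegate .so on the target

# Load the accelerator delegate if present; otherwise fall back to the
# CPU reference/optimized kernels on the Cortex-A cores.
delegates = []
if os.path.exists(DELEGATE_PATH):
    delegates.append(tflite.load_delegate(DELEGATE_PATH))

interpreter = tflite.Interpreter(
    model_path=MODEL_PATH,
    experimental_delegates=delegates,
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a zero-filled tensor matching the model's expected input shape/dtype.
dummy_input = np.zeros(input_details[0]["shape"],
                       dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()

result = interpreter.get_tensor(output_details[0]["index"])
print("output shape:", result.shape)
```

The same script runs unchanged on CPU or accelerator; only the delegate list decides where the supported operators execute, which is the portability point the session demonstrates.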
Presenter:
Robert Kalmar, Machine Learning SW Engineer, NXP