Accelerating Machine Learning on i.MX 8 Microprocessors & Crossover MCUs


TensorFlow® Lite, ArmNN, and GLOW are popular open-source machine-learning inference frameworks for mobile and IoT devices. In this session, you’ll learn how to use TensorFlow Lite and ArmNN on NXP i.MX 8 MPU-class devices under Linux, and how to take advantage not only of the Arm® Cortex®-A CPU cores but also of the dedicated on-chip GPU and NPU accelerators. For NXP i.MX RT MCU-class devices, we will introduce two approaches: 1) TensorFlow Lite for Microcontrollers with CMSIS-NN kernel implementations optimized for Cortex-M cores, and 2) GLOW, a neural-network compiler that generates code “Ahead of Time” for Cortex-M cores and DSPs.
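On the i.MX 8 Linux side, offloading inference to the GPU/NPU is typically done by attaching an external delegate to the TensorFlow Lite interpreter. The sketch below is illustrative, not from the session: the delegate library path (`/usr/lib/libvx_delegate.so`) and model filename are assumptions based on typical NXP BSP layouts, and the helper names are hypothetical.

```python
# Hedged sketch: TFLite inference on an i.MX 8 board, optionally offloaded
# to the GPU/NPU via an external delegate. Paths/names are assumptions.
import numpy as np

try:
    # NXP eIQ Linux images ship the lightweight tflite_runtime package
    from tflite_runtime import interpreter as tflite
except ImportError:
    try:
        import tensorflow.lite as tflite  # fall back to full TensorFlow
    except ImportError:
        tflite = None  # neither runtime installed; functions below will raise


def make_interpreter(model_path, delegate_path=None):
    """Create a TFLite interpreter, optionally loading an accelerator delegate.

    delegate_path: e.g. '/usr/lib/libvx_delegate.so' (assumed location on
    i.MX 8 BSPs) to route supported ops to the on-chip GPU/NPU; pass None
    to run entirely on the Cortex-A CPU cores.
    """
    if tflite is None:
        raise RuntimeError("no TensorFlow Lite runtime available")
    delegates = []
    if delegate_path:
        delegates.append(tflite.load_delegate(delegate_path))
    interp = tflite.Interpreter(model_path=model_path,
                                experimental_delegates=delegates)
    interp.allocate_tensors()
    return interp


def run_inference(interp, input_data):
    """Feed one input tensor through the model and return the first output."""
    inp = interp.get_input_details()[0]
    out = interp.get_output_details()[0]
    interp.set_tensor(inp["index"], input_data.astype(inp["dtype"]))
    interp.invoke()
    return interp.get_tensor(out["index"])
```

Usage would look like `run_inference(make_interpreter("mobilenet_v1.tflite", "/usr/lib/libvx_delegate.so"), image)`; dropping the second argument keeps inference on the CPU, which is a convenient way to compare accelerated and non-accelerated latency.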

Presenter:

Robert Kalmar, Machine Learning SW Engineer, NXP

Last update: 04-08-2021 11:43 AM