The “Hello World” of TensorFlow Lite

Our goal is to train a model that can take a value, x, and predict its sine, y. In a real-world application, if you needed the sine of x, you could just calculate it directly. However, by training a model to approximate the result, we can demonstrate the basics of machine learning.
TensorFlow and Keras
TensorFlow is a set of tools for building, training, evaluating, and deploying machine learning models. Originally developed at Google, TensorFlow is now an open-source project built and maintained by thousands of contributors across the world. It is the most popular and widely used framework for machine learning. Most developers interact with TensorFlow via its Python library. TensorFlow does many different things. In this post, we’ll use Keras, TensorFlow’s high-level API that makes it easy to build and train deep learning networks.
To enable TensorFlow on mobile and embedded devices, Google developed the TensorFlow Lite framework. It gives these computationally restricted devices the ability to run inference on pre-trained TensorFlow models that were converted to TensorFlow Lite. These converted models cannot be trained any further but can be optimized through techniques like quantization and pruning.
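To make the quantization step above concrete, here is a minimal sketch of requesting post-training (dynamic-range) quantization at conversion time, assuming TensorFlow 2.x is installed. The tiny untrained model exists only to give the converter something to work on; in practice you would convert your trained model.

```python
import tensorflow as tf

# A throwaway model standing in for a trained one (illustrative only).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Request post-training quantization when converting to TensorFlow Lite.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant = converter.convert()  # a bytes flatbuffer, ready to embed
```

The resulting flatbuffer is typically dumped to a C array (for example with `xxd -i`) and compiled into the firmware, which is how headers like the `Sine_mode.h` used later in this post are commonly produced.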
Building the Model
To build the model, we follow these steps:
  1. Obtain a simple dataset.
  2. Train a deep learning model.
  3. Evaluate the model’s performance.
  4. Convert the model to run on-device.
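The four steps above can be sketched end to end in Python. This is a hedged approximation of what the Colab notebook does, assuming TensorFlow 2.x; the layer sizes, epoch count, and dataset size are illustrative rather than the notebook's exact values.

```python
import numpy as np
import tensorflow as tf

# 1. Obtain a simple dataset: x in [0, 2*pi], y = sin(x) with a little noise.
rng = np.random.default_rng(seed=1)
x = rng.uniform(0.0, 2.0 * np.pi, size=(1000, 1)).astype(np.float32)
y = (np.sin(x) + 0.1 * rng.standard_normal(x.shape)).astype(np.float32)

# 2. Train a small fully connected regression model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=10, batch_size=64, verbose=0)

# 3. Evaluate the model's performance on unseen inputs.
x_test = np.linspace(0.0, 2.0 * np.pi, 100, dtype=np.float32).reshape(-1, 1)
mse = model.evaluate(x_test, np.sin(x_test), verbose=0)

# 4. Convert the trained model to a TensorFlow Lite flatbuffer.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
```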
Open the notebook directly in Colab by navigating to the URL in your browser. The notebook demonstrates the process of creating a TensorFlow model and converting it for use with TensorFlow Lite.
Deploy the model to the RT MCU
  • Hardware Board: MIMXRT1050 EVK Board


Fig 1 MIMXRT1050 EVK Board

  • Template demo code: evkbimxrt1050_tensorflow_lite_cifar10
/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.
   Copyright 2018 NXP. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

#include "board.h"
#include "pin_mux.h"
#include "clock_config.h"
#include "fsl_debug_console.h"

#include <iostream>
#include <string>
#include <vector>
#include "timer.h"

#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/optional_debug_tools.h"
#include "tensorflow/lite/string_util.h"

#include "Sine_mode.h"

int inference_count = 0;
// This is a small number so that it's easy to read the logs
const int kInferencesPerCycle = 30;
const float kXrange = 2.f * 3.14159265359f;

#define LOG(x) std::cout

void RunInference()
{
    std::unique_ptr<tflite::FlatBufferModel> model;
    std::unique_ptr<tflite::Interpreter> interpreter;
    model = tflite::FlatBufferModel::BuildFromBuffer(sine_model_quantized_tflite, sine_model_quantized_tflite_len);
    if (!model) {
        LOG(FATAL) << "Failed to load model\r\n";
    }

    tflite::ops::builtin::BuiltinOpResolver resolver;

    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    if (!interpreter) {
        LOG(FATAL) << "Failed to construct interpreter\r\n";
    }

    // Index of the model's single input tensor
    int input = interpreter->inputs()[0];

    if (interpreter->AllocateTensors() != kTfLiteOk) {
        LOG(FATAL) << "Failed to allocate tensors!\r\n";
    }

    // Calculate an x value to feed into the model. We compare the current
    // inference_count to the number of inferences per cycle to determine
    // our position within the range of possible x values the model was
    // trained on, and use this to calculate a value.
    float position = static_cast<float>(inference_count) /
                     static_cast<float>(kInferencesPerCycle);
    float x_val = position * kXrange;
    float* input_tensor_data = interpreter->typed_tensor<float>(input);
    *input_tensor_data = x_val;

    // Run inference, and report any error
    TfLiteStatus invoke_status = interpreter->Invoke();
    if (invoke_status != kTfLiteOk) {
        LOG(FATAL) << "Failed to invoke tflite!\r\n";
    }

    // Read the predicted y value from the model's output tensor
    float* y_val = interpreter->typed_output_tensor<float>(0);

    PRINTF("\r\n x_value: %f, y_value: %f \r\n", x_val, y_val[0]);

    // Increment the inference_counter, and reset it if we have reached
    // the total number per cycle
    inference_count += 1;
    if (inference_count >= kInferencesPerCycle) inference_count = 0;
}

/*
 * @brief Application entry point.
 */
int main(void)
{
    /* Init board hardware (standard MCUXpresso SDK init calls for this board) */
    BOARD_ConfigMPU();
    BOARD_InitPins();
    BOARD_BootClockRUN();
    BOARD_InitDebugConsole();

    std::cout << "The hello_world demo of TensorFlow Lite model\r\n";

    for (;;) {
        RunInference();
    }
}
Test result
On the MIMXRT1050 EVK Board, we log the input data (x_value) and the inferred output data (y_value) via the serial port.
Fig 2 Received data
In an endless loop, the application runs inference for a progression of x values in the range 0 to 2π and then repeats. On each pass, a new x value is calculated, inference is run, and the result is output.
Fig 3 Test result
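The progression logic can be mirrored in a few lines of plain Python; the constants below match the kInferencesPerCycle and kXrange values in the C++ code above.

```python
import math

K_INFERENCES_PER_CYCLE = 30   # kInferencesPerCycle in the demo
K_X_RANGE = 2.0 * math.pi     # kXrange in the demo

def next_x(inference_count):
    """Map the inference counter onto the model's trained range [0, 2*pi)."""
    position = inference_count / K_INFERENCES_PER_CYCLE
    return position * K_X_RANGE

# One full cycle of x values; on the device the counter wraps back to zero.
xs = [next_x(i % K_INFERENCES_PER_CYCLE) for i in range(60)]
print(round(xs[0], 4), round(xs[15], 4))  # 0.0 at the start, pi at mid-cycle
```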
Further, we use Excel to plot the received data against the actual values, as the figure below shows.
Fig 4 Dot Plot
You can see that, for the most part, the dots representing predicted values form a smooth sine curve along the center of the distribution of actual values. In general, our network has learned to approximate a sine curve.
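As a quick sanity check before (or instead of) charting in Excel, the serial log lines can be parsed and compared against math.sin directly. The two sample lines below are illustrative stand-ins, not actual captured device output.

```python
import math
import re

# Illustrative stand-ins for captured serial output (not real device data).
log_lines = [
    " x_value: 0.000000, y_value: 0.048000 ",
    " x_value: 1.570796, y_value: 0.960000 ",
]

pattern = re.compile(r"x_value:\s*([-\d.]+),\s*y_value:\s*([-\d.]+)")
errors = []
for line in log_lines:
    m = pattern.search(line)
    x, y = float(m.group(1)), float(m.group(2))
    errors.append(abs(y - math.sin(x)))

# A loose tolerance: the quantized model only approximates sine.
print(all(err < 0.2 for err in errors))
```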

Last update: 09-10-2020