Goal
Our goal is to train a model that can take a value, x, and predict its sine, y. In a real-world application, if you needed the sine of x, you could just calculate it directly. However, by training a model to approximate the result, we can demonstrate the basics of machine learning.
TensorFlow and Keras
TensorFlow is a set of tools for building, training, evaluating, and deploying machine learning models. Originally developed at Google, TensorFlow is now an open-source project built and maintained by thousands of contributors across the world. It is one of the most popular and widely used frameworks for machine learning. Most developers interact with TensorFlow via its Python library. TensorFlow does many different things; in this post, we'll use Keras, TensorFlow's high-level API that makes it easy to build and train deep learning networks.
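To make this concrete, here is a rough sketch of the kind of small Keras network that can learn this task. The layer sizes and optimizer are illustrative assumptions, not necessarily the exact choices used in the notebook.

import tensorflow as tf

# A small fully connected network that maps one input value (x)
# to one output value (the predicted sine). The layer sizes here
# are illustrative assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(1,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1),
])

# Mean squared error is a natural loss for this regression task.
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
model.summary()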
To enable TensorFlow on mobile and embedded devices, Google developed the TensorFlow Lite framework. It gives these computationally restricted devices the ability to run inference on pre-trained TensorFlow models that have been converted to TensorFlow Lite. These converted models cannot be trained any further, but they can be optimized through techniques like quantization and pruning.
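For reference, converting a trained Keras model with post-training quantization typically looks like the sketch below. The stand-in model and the output file name are assumptions for illustration; in the real flow you would convert the trained sine model from the notebook.

import tensorflow as tf

# Stand-in for the trained Keras model (an assumption for illustration).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(1,)),
    tf.keras.layers.Dense(1),
])

# Convert the Keras model to the TensorFlow Lite flat buffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)

# Enable post-training quantization to shrink the model for the MCU.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open('sine_model_quantized.tflite', 'wb') as f:
    f.write(tflite_model)

On a development machine, the resulting .tflite file can then be turned into a C array (like the sine_model_quantized_tflite array used in the code below) with a tool such as xxd -i sine_model_quantized.tflite.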
To build the model, we follow the steps below (a short Python sketch of the first three steps follows the list).
- Obtain a simple dataset.
- Train a deep learning model.
- Evaluate the model’s performance.
- Convert the model to run on-device.
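As promised above, here is a hedged sketch of the first three steps. The dataset size, noise level, and training hyperparameters are assumptions chosen for illustration.

import numpy as np
import tensorflow as tf

# 1. Obtain a simple dataset: random x values in [0, 2*pi] and their
#    sines, with a little noise so the model has to generalize.
x = np.random.uniform(low=0.0, high=2 * np.pi, size=1000).astype(np.float32)
y = (np.sin(x) + 0.1 * np.random.randn(*x.shape)).astype(np.float32)

# Hold out part of the data for evaluation.
x_train, x_test = x[:800], x[800:]
y_train, y_test = y[:800], y[800:]

# 2. Train a small deep learning model (the same kind of network sketched earlier).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(1,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
model.fit(x_train, y_train, epochs=500, batch_size=64, verbose=0)

# 3. Evaluate the model's performance on unseen data.
loss, mae = model.evaluate(x_test, y_test, verbose=0)
print('test loss:', loss, 'test mae:', mae)

Step 4, converting the model to run on-device, was sketched in the previous section.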
Please navigate to the URL in your browser to open the notebook directly in Colab. The notebook demonstrates the process of creating a TensorFlow model and converting it for use with TensorFlow Lite.
Deploy the model to the RT MCU
- Hardware Board: MIMXRT1050 EVK Board

Fig 1 MIMXRT1050 EVK Board
- Template demo code: evkbimxrt1050_tensorflow_lite_cifar10
Code
#include "board.h"
#include "pin_mux.h"
#include "clock_config.h"
#include "fsl_debug_console.h"
#include <cstdlib>
#include <iostream>
#include <memory>
#include <string>
#include <vector>
#include "timer.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/optional_debug_tools.h"
#include "tensorflow/lite/string_util.h"
#include "Sine_mode.h"

int inference_count = 0;
// Number of inferences per full 0..2*pi sweep of the input.
const int kInferencesPerCycle = 30;
const float kXrange = 2.f * 3.14159265359f;

#define LOG(x) std::cout

void RunInference()
{
    std::unique_ptr<tflite::FlatBufferModel> model;
    std::unique_ptr<tflite::Interpreter> interpreter;

    // Map the model from the C array generated from the .tflite file.
    model = tflite::FlatBufferModel::BuildFromBuffer(sine_model_quantized_tflite,
                                                     sine_model_quantized_tflite_len);
    if (!model) {
        LOG(FATAL) << "Failed to load model\r\n";
        exit(-1);
    }
    model->error_reporter();

    // Build an interpreter backed by the built-in operator resolver.
    tflite::ops::builtin::BuiltinOpResolver resolver;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    if (!interpreter) {
        LOG(FATAL) << "Failed to construct interpreter\r\n";
        exit(-1);
    }

    // Index of the model's single input tensor (an int index, not a float).
    int input = interpreter->inputs()[0];
    if (interpreter->AllocateTensors() != kTfLiteOk) {
        LOG(FATAL) << "Failed to allocate tensors!\r\n";
        exit(-1);
    }

    while (true)
    {
        // Step the input through one full cycle of 0 to 2*pi.
        float position = static_cast<float>(inference_count) /
                         static_cast<float>(kInferencesPerCycle);
        float x_val = position * kXrange;

        // Write the x value into the model's input tensor.
        float* input_tensor_data = interpreter->typed_tensor<float>(input);
        *input_tensor_data = x_val;

        Delay_time(1000);

        // Run inference and read back the predicted sine value.
        TfLiteStatus invoke_status = interpreter->Invoke();
        if (invoke_status != kTfLiteOk)
        {
            LOG(FATAL) << "Failed to invoke tflite!\r\n";
            return;
        }
        float* y_val = interpreter->typed_output_tensor<float>(0);
        PRINTF("\r\n x_value: %f, y_value: %f \r\n", x_val, y_val[0]);

        inference_count += 1;
        if (inference_count >= kInferencesPerCycle) inference_count = 0;
    }
}

int main(void)
{
    // Board hardware initialization: MPU, pins, clocks, debug console, timer.
    BOARD_ConfigMPU();
    BOARD_InitPins();
    BOARD_InitDEBUG_UARTPins();
    BOARD_BootClockRUN();
    BOARD_InitDebugConsole();
    NVIC_SetPriorityGrouping(3);
    InitTimer();

    std::cout << "The hello_world demo of TensorFlow Lite model\r\n";
    RunInference();
    std::flush(std::cout);
    for (;;) {}
}
Test result
On the MIMXRT1050 EVK board, we log the input data (x_value) and the inferred output data (y_value) via the serial port.
Fig 2 Received data
Inside the while loop, the demo runs inference for a progression of x values in the range 0 to 2π, then repeats. Each time through the loop, a new x value is calculated, inference is run, and the result is printed.
Fig 3 Test result
Going further, we use Excel to plot the received data against the actual sine values, as the figure below shows.
Fig 4 Dot Plot
You can see that, for the most part, the dots representing predicted values form a smooth sine curve along the center of the distribution of actual values. In general, our network has learned to approximate a sine curve.
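As an alternative to Excel, a small Python script can produce the same comparison plot from the captured serial output. This is a hedged sketch: the capture-file name log.txt is an assumption, and the parsing matches the x_value/y_value format printed by the firmware above.

import re
import numpy as np
import matplotlib.pyplot as plt

# Parse "x_value: %f, y_value: %f" lines captured from the serial port.
# "log.txt" is an assumed capture-file name.
pattern = re.compile(r'x_value:\s*([-\d.]+),\s*y_value:\s*([-\d.]+)')
xs, ys = [], []
with open('log.txt') as f:
    for line in f:
        m = pattern.search(line)
        if m:
            xs.append(float(m.group(1)))
            ys.append(float(m.group(2)))

xs = np.array(xs)
# Plot the board's predictions against the true sine curve.
plt.plot(xs, np.sin(xs), 'b.', label='actual sin(x)')
plt.plot(xs, ys, 'r.', label='predicted')
plt.legend()
plt.show()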