What and where are the minimum dependencies to build a custom C++ app using tensorflow-lite?

rbanks
Contributor III

I've been trying to build a simple C++ app that uses TensorFlow Lite, but I have been unable to get the app to utilize the NPU when linking against the tensorflow-lite.a static library.

My system is the following:
i.MX8MPlus
Hardknott 5.10.52

Can someone please provide the minimum set of files required to link and build an application with TensorFlow Lite, as well as the code needed to set up any flags, delegates, or other hidden configuration that can't be easily determined?

Ideally I would be able to take these dependencies, copy them to the device, and build on the device itself.

I have already gone through the "examples" and references. They are of no help, since everything is obfuscated behind build configurations and embedded #ifdef statements.

10 Replies
manish_bajaj
NXP Employee

@rbanks,

Our latest i.MX_Machine_Learning_User's_Guide.pdf provides steps on how to build an application on a host machine. We will try to share steps on how to build on the target itself.


-Manish

Conductor
Contributor II

I looked in that document, and it wasn't very helpful for understanding the requirements, at least not to me.


My boss noticed that kirkstone has a number of commits to fix breakage in tensorflow-lite, so we are thinking it might not be possible on the earlier releases.


Would it be possible for you to just tell us what version is needed? I have posted a message that has not been answered yet, and I would really like to understand what I need to do in order to get tensorflow-lite working for the apps folks.

Is it at all possible to get tensorflow-lite working correctly on either hardknott (5.10.52) or honister (5.10.72), or should we focus on moving everything to kirkstone (5.15.xx)? I saw a reference to 5.15.32 for tensorflow-lite. Please see this post: https://community.nxp.com/t5/i-MX-Solutions/Is-it-possible-to-use-nnstreamer-tensorflow-lite-on-5-10...

Regards,

Alan

rbanks
Contributor III

All of those steps resulted in build failures on Hardknott 5.10.52, even for the demo. Additionally, the primary dependency is buried in source code.

I finally managed to figure it out. Here is the snippet of code that loads a model and then sets the interpreter delegate to utilize the NPU. (This is what is buried in the example code and selected by a flag at runtime.)

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/tools/gen_op_registration.h"
#include "tensorflow/lite/delegates/nnapi/nnapi_delegate.h"

model = tflite::FlatBufferModel::BuildFromFile(file.c_str());
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
std::map<std::string, tflite::Interpreter::TfLiteDelegatePtr> delegates;
auto delegate = tflite::Interpreter::TfLiteDelegatePtr(tflite::NnApiDelegate(), [](TfLiteDelegate*) {});
delegates.emplace("NNAPI", std::move(delegate));
for (const auto& delegate : delegates) {
	interpreter->ModifyGraphWithDelegate(delegate.second.get());
}

The makefile needs to link the tensorflow-lite static library and add an include path to the tensorflow folder that was built for the i.MX. I had to use an SDK that was built for Zeus, since Hardknott kept failing to build the SDK.
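For reference, a minimal makefile might look something like the sketch below. The paths are hypothetical and depend on where your Yocto build staged the tensorflow source tree and tensorflow-lite.a; depending on the BSP you may also need an include path for the flatbuffers headers.

# Sketch only: TFLITE_ROOT is a placeholder for wherever the Yocto build
# staged the tensorflow sources and the static library.
TFLITE_ROOT = /opt/tensorflow-imx
TFLITE_LIB  = $(TFLITE_ROOT)/tensorflow-lite.a

app: main.cpp
	$(CXX) -std=c++14 main.cpp -I$(TFLITE_ROOT) $(TFLITE_LIB) -lpthread -ldl -lm -o app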

Conductor
Contributor II

Did you need to move to kirkstone (5.15.xx) to get tensorflow-lite working?

We seem to be coming to that conclusion, but this seems like something NXP should know, if someone reads this.

> I had to use an SDK that was built for Zeus, since Hardknott kept failing to build the SDK.

What did you finally build on? Did you get it running on Hardknott even though the examples wouldn't build? That would be a good route for us to take, as I don't think it's going to be easy to get my boss off 5.10.52; he just got our camera working on it. Now we're trying to figure out whether to go to honister or to skip it and go straight to kirkstone. Getting tensorflow-lite working on 5.10.52, if possible, would be ideal right now.

What version of Ubuntu/RedHat are you cross-compiling on? I'm on 20.04, which coincides with WSL on Win10.

Thanks for your reply, @rbanks; it gives me some hope.

Alan

rbanks
Contributor III

If I remember correctly, I did a Yocto build with Zeus with the SDK settings set in the .conf file. Then I took the static library output, something like tensorflow-lite.a, put it on my end device, and built my application directly on the device. I have had a lot of trouble with cross-compiling, especially from Windows, so compiling directly on the device eliminated any issues there. I would assume that the tensorflow-lite static library should work for any version, as long as the physical hardware supported by Zeus is the same as on whatever newer version you use.
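The .conf additions were along these lines; I'm going from memory, so treat the package names as assumptions and verify them against the meta-imx ML layer in your BSP:

# Hypothetical local.conf additions (Zeus still uses the underscore
# override syntax). Pulls TFLite into the image and the SDK toolchain.
IMAGE_INSTALL_append = " tensorflow-lite tensorflow-lite-dev tensorflow-lite-staticdev"
TOOLCHAIN_TARGET_TASK_append = " tensorflow-lite-staticdev"

After that, bitbake <image> -c populate_sdk produces an SDK whose sysroot contains the headers and the static library.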

Conductor
Contributor II

@rbanks 

I was thinking that if I could build it on the device (I seem to have the GNU tools plus cmake) and get the binary, I could just link it in, as long as I had the headers.

Unfortunately I don't have any of the nnstreamer/tensorflow-lite declarations in the gstreamer/gst includes. With the .a archive I could link it in directly or put it into a shared library.
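Something like the following is what I have in mind; the paths are made up, and repackaging as a shared library only works if the archive's objects were compiled with -fPIC:

# Hypothetical: link the static archive straight into the app...
g++ -std=c++14 main.cpp -I/opt/tensorflow-imx /opt/tensorflow-imx/tensorflow-lite.a -lpthread -ldl -o app

# ...or repackage it as a shared library first:
g++ -shared -o libtensorflow-lite.so -Wl,--whole-archive /opt/tensorflow-imx/tensorflow-lite.a -Wl,--no-whole-archive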

The other thing I was thinking is that I could get the developer working on Ubuntu using nnstreamer, but I'm not certain whether the API is the same and/or compatible with i.MX on the EVK.

Alan

manish_bajaj
NXP Employee

@rbanks @Conductor ,


Here is a guide on how to build an application with the TensorFlow Lite C++ API on i.MX8MP, taking BSP 5.10.72 as an example.

To reduce the size of the rootfs on the target, some of the dependencies needed to build the ML application are not included in the target rootfs. To generate the dependencies, follow the i.MX Yocto Project User's Guide to build the TensorFlow Lite libraries, or alternatively build the full image. Then copy the dependencies to the target. Finally, you can build the application on the target.
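At a high level, the flow looks like the sketch below; the exact recipe and image names depend on your BSP release, so take them as illustrative:

# Build just the TensorFlow Lite packages...
bitbake tensorflow-lite
# ...or build the full image, which includes the ML packages:
bitbake imx-image-full
# Then copy the headers and libraries from the build sysroot
# (under tmp/work/ or tmp/sysroots/) to the target and build there.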

You can also build the C++ ML application with the Yocto SDK. See details in the i.MX Machine Learning User's Guide.

-Manish

Conductor
Contributor II

@manish_bajaj @rbanks 

I was able to get the kirkstone image that NXP built, L5.15.32_2.0.0_MX8MP.

I see the shared library for tensorflow2-lite in /usr/lib/nnstreamer/filters/libnnstreamer_filter_tensorflow2-lite.so

A couple of differences are that we are building ocr-image-dev, which uses busybox and doesn't have the init directories; it only has an /etc/init.d/rc.local that the rcS file runs at startup.

I am looking to move from ocr-image-dev to imx-image-full.

I need to add a couple of services, so I would like it to work with what we have today and with what we will have when we get to kirkstone. Is it safe to assume we can remain on ocr-image-dev with plans to release on ocr-image-core, rather than imx-image-full?

Are you going to be in Silicon Valley in a couple of weeks? I signed up to go, but I'm not sure my work will give me the time for it, as I'm a consultant.

rbanks
Contributor III

I haven't worked with nnstreamer on this device, and I haven't been keeping up to date on all the new releases that NXP is doing. Manish may be in Silicon Valley, but I won't be. I'm also a consultant like you, and we've just about wrapped up our project.

Conductor
Contributor II

@rbanks 

Thanks for your reply. I thought I had responded to it, but I don't see it. This forum is challenging for me, as I had two different accounts and am trying to consolidate them into this one.


I might be able to get to the conference, but if so I want to make sure I have plans to address something I need; otherwise I will just work on it. We're getting our hardware soon, which is based on the EVK. Ultimately everything needs to run on it.

Your patches for honister (5_10_72) are very much appreciated, although I do want to leapfrog to kirkstone if possible; at least it leaves us the option. However, I am staying on hardknott (5_10_52) until someone (probably me) puts the patches into bitbake recipes. Currently they are applied manually.

I have a couple of services to start and was looking at getting the SysV-style init directories onto my ocr-image-dev, but I think I see a way to just add them to the /etc/init.d/rc.local file and leave it under busybox. The NXP image for kirkstone has the full /etc/rcX.d directories and all the update-rc.d machinery.
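Something like this is what I'm picturing for rc.local; the service binaries are placeholders for illustration:

#!/bin/sh
# Hypothetical additions to /etc/init.d/rc.local under busybox init.
# Start our services in the background at boot.
/usr/bin/ocr-service &
/usr/bin/camera-daemon &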

Alan
