I'm trying to benchmark a custom TensorFlow Lite model using the latest Yocto BSP (5.10.35_2.0.0) and I'm hitting a segfault in the nnrt::OvxlibDelegate::process() method (see below for the backtrace from GDB). I didn't see this problem with the previous version of the BSP (5.10.9_1.0.0).
The problem appears to be in the NNAPI delegate: the segfault does not occur if I disable NNAPI (--use_nnapi=false). I also see no issue when benchmarking the stock MobileNet v1 model, which is smaller than my custom model. Is this a known issue? If so, is there an ETA for a fix?
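For reference, the steps I'm using to reproduce and isolate the crash look roughly like this (the model path and thread count are placeholders for my setup; the flags are the standard ones accepted by the TensorFlow Lite benchmark_model tool):

```shell
# Run the TFLite benchmark with the NNAPI delegate enabled -- this segfaults.
# /path/to/custom_model.tflite is a placeholder for my actual model file.
./benchmark_model \
    --graph=/path/to/custom_model.tflite \
    --num_threads=4 \
    --use_nnapi=true

# Same benchmark with NNAPI disabled (CPU only) -- this runs cleanly,
# which is why I suspect the delegate rather than the model itself.
./benchmark_model \
    --graph=/path/to/custom_model.tflite \
    --num_threads=4 \
    --use_nnapi=false
```

Only the first invocation (with --use_nnapi=true) crashes; the CPU-only run and the NNAPI run with the MobileNet v1 example both complete normally.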
#0 0x0000fffff7743f9c in nnrt::OvxlibDelegate::process(nnrt::Model*, _vsi_nn_context_t*) () from /usr/lib/libnnrt.so.1
#1 0x0000fffff7734908 in nnrt::PreparedModel::prepare() () from /usr/lib/libnnrt.so.1
#2 0x0000fffff7726dbc in nnrt::Compilation::prepareModel(int*, std::vector<std::shared_ptr<nnrt::ExecutionIO>, std::allocator<std::shared_ptr<nnrt::ExecutionIO> > > const&, std::shared_ptr<_vsi_nn_context_t>&) () from /usr/lib/libnnrt.so.1
#3 0x0000fffff772cb24 in nnrt::Execution::compute() () from /usr/lib/libnnrt.so.1
#4 0x0000fffff7d517ec in tflite::delegate::nnapi::NNAPIDelegateKernel::Invoke(TfLiteContext*, TfLiteNode*, int*) () from /usr/lib/libtensorflow-lite.so.2.4.1
#5 0x0000fffff7d4aba4 in tflite::Subgraph::Invoke() () from /usr/lib/libtensorflow-lite.so.2.4.1
#6 0x0000fffff7ecb324 in tflite::Interpreter::Invoke() () from /usr/lib/libtensorflow-lite.so.2.4.1
#7 0x0000aaaaaaaaa5dc in ?? ()
#8 0x0000aaaaaaaad824 in ?? ()
#9 0x0000aaaaaaaa8d68 in ?? ()
#10 0x0000fffff78d9994 in __libc_start_main (main=0xaaaaaaaa8820, argc=3, argv=0xfffffffffad8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>,
stack_end=<optimized out>) at ../csu/libc-start.c:332
#11 0x0000aaaaaaaa8ab8 in ?? ()