tflite Conv2D error for iMX RT685

t-arakawa
Contributor II

I am trying to import my own custom-built TFLite Micro model into the i.MX RT685 AUD EVK, but I encountered the following error:

tensorflow/lite/micro/kernels/xtensa/conv_hifi.cc:201 xa_nn_conv2d_std_per_chan_sym8sxsym16s( p_out_temp, &input_data[batch * input_height * input_width * input_depth], const_cast<int8_t*>(filter_data), bias_data, input_height, input_width, input_depth, filter_height, filter_width, output_depth, stride_width, stride_height, pad_width, pad_height, output_height, output_width, 0, data.reference_op_data.per_channel_output_multiplier, data.reference_op_data.per_channel_output_shift, 0, output_data_format, static_cast<void*>(p_scratch)) != 0 (-1 != 0)
Node CONV_2D (number 1) failed to invoke with status 1

The model contains a single simple Conv1D layer (which the TFLite converter lowers to CONV_2D), and it was converted with the tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8 option:

import tensorflow as tf
import numpy as np

x_train = np.random.rand(100, 10, 1).astype(np.float32)
y_train = np.random.rand(100, 1).astype(np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(filters=16, kernel_size=3, activation='relu', input_shape=(10, 1)),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1)
])

model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(x_train, y_train, epochs=5)
model.save('conv1d_model.h5')

# Representative dataset for quantization calibration
# (the definition was missing from the snippet above; a minimal stand-in)
def representative_dataset():
    for sample in x_train:
        yield [np.expand_dims(sample, axis=0)]

# Create the converter from the Keras model (this line was missing from the snippet)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8]
converter.target_spec.supported_types = [tf.int16]
converter._experimental_disable_per_channel = False
converter.unfold_batchmatmul = False
converter.inference_input_type = tf.int16
converter.inference_output_type = tf.int16
converter.experimental_new_converter = True
converter._experimental_full_integer_quantization_bias_type = tf.int64
tflite_model_16bit = converter.convert()

with open('model_16bit.tflite', 'wb') as f:
    f.write(tflite_model_16bit)
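
As a quick host-side sanity check (a minimal sketch using the standard tf.lite.Interpreter API, reusing the variables from the code above), the converted model can be invoked once before deploying it to the board:

interpreter = tf.lite.Interpreter(model_content=tflite_model_16bit)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# inference_input_type is tf.int16, so feed int16 data of the expected shape
x = np.zeros(inp['shape'], dtype=np.int16)
interpreter.set_tensor(inp['index'], x)
interpreter.invoke()
print(interpreter.get_tensor(out['index']))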


I used SDK_2_15__MIMXRT685-AUD-EVK.

I found that the 64-bit alignment check on the bias data is what produces this error message.

Is there any information on importing custom-built tflite micro models?

5 Replies
Sam_Gao
NXP Employee

Hi,

The error message indicates that the xa_nn_conv2d_std_per_chan_sym8sxsym16s function in conv_hifi.cc is failing with status -1. This typically means there is a problem with the alignment of the bias data or with one of the other parameters passed to the function.

Could you please give me some more details? Which example are you using, and what does the model look like?

t-arakawa
Contributor II

I used the attached model. It was generated with TensorFlow 2.17.0 and flatbuffers 24.3.25.

I checked the xa_nn_conv2d_std_per_chan_sym8sxsym16s function, and this check is what returns the error:

XA_NNLIB_ARG_CHK_ALIGN(p_bias, sizeof(WORD64), -1);

The size looks fine, but the addresses look wrong:

sizeof bias_data=4
sizeof bias_data[0]=8
bias_data[0]=0 0x13dc04
bias_data[1]=0 0x13dc0c
bias_data[2]=0 0x13dc14
bias_data[3]=0 0x13dc1c
bias_data[4]=0 0x13dc24

Every address ends in 0x4 or 0xC, so the 8-byte bias entries are only 4-byte aligned, while the check requires 8-byte (WORD64) alignment.
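
The arithmetic is easy to confirm:

for addr in (0x13dc04, 0x13dc0c, 0x13dc14, 0x13dc1c, 0x13dc24):
    print(hex(addr), 'mod 8 =', addr % 8)  # prints 4 for every address, i.e. misaligned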

 


I also tried another tflite model from the official TFLM GitHub repository:
https://github.com/tensorflow/tflite-micro/tree/main/tensorflow/lite/micro/integration_tests/seanet/...

That model works fine, and I am not sure what the difference is.
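
One way to compare the two models is to check where each buffer lands inside the .tflite file; if the file image is loaded at an aligned address, these offsets determine the alignment the kernels see. A sketch, assuming the community flatbuffer bindings for the TFLite schema (pip install tflite; the package name and generated accessors such as BuffersLength/DataAsNumpy come from that package, not from the SDK):

import numpy as np
import tflite  # community-generated TFLite schema bindings

with open('model_16bit.tflite', 'rb') as f:
    buf = f.read()

base = np.frombuffer(buf, dtype=np.uint8).ctypes.data  # address of the file image
model = tflite.Model.GetRootAsModel(buf, 0)
for i in range(model.BuffersLength()):
    b = model.Buffers(i)
    if b.DataLength():
        # DataAsNumpy() is a view into `buf`, so the pointer difference is
        # the buffer's byte offset within the file
        offset = b.DataAsNumpy().ctypes.data - base
        print(f'buffer {i}: file offset {offset} (mod 8 = {offset % 8})')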

Sam_Gao
NXP Employee

Hi @t-arakawa 

Apologies for the slight delay. I would like to share some possible causes and solutions from my side.

In short: ensuring that p_bias is 8-byte aligned avoids the alignment error returned by XA_NNLIB_ARG_CHK_ALIGN; checking the model conversion tools and the memory allocation method is also important. I hope these methods help you solve the alignment problem.

1. Pointer Alignment Issue: The p_bias pointer might not be 8-byte aligned. Even if the bias_data array itself is 8-byte aligned, the check will fail whenever p_bias points to an address that is not.

  • Ensure that p_bias is 8-byte aligned. You can use the alignas keyword for a static buffer, or aligned_alloc for a dynamic one:

alignas(8) int64_t bias_data[4];  /* static buffer forced to an 8-byte boundary */

int64_t *bias_data = (int64_t *)aligned_alloc(8, 4 * sizeof(int64_t));  /* or a dynamic allocation */

 

2. Model Conversion Issue: The model might have been converted in a way that leaves the bias data misaligned. Check the configuration and version of the model conversion tools to ensure they are consistent with those used for the official model.

3. Memory Allocation Issue: The way memory is allocated might result in p_bias not being correctly aligned. Ensure you use appropriate alignment when allocating memory, and add debug output to print the address and alignment of p_bias:

printf("p_bias address: %p\n", p_bias);
printf("p_bias alignment: %zu\n", (uintptr_t)p_bias % 8);

 

t-arakawa
Contributor II

Thank you for your reply.

I finally found a tool in the tflite-micro GitHub repository that fixes the memory alignment:
https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/tools/tflite_flatbuffer_a...
https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/tools/tflite_flatbuffer_a...

Inside the source code there is a description of the tool:

PYBIND11_MODULE(tflite_flatbuffer_align_wrapper, m) {
  m.doc() = "tflite_flatbuffer_align_wrapper";
  m.def("align_tflite_model", &align_tflite_model,
        "Aligns the tflite flatbuffer to (16), by unpacking and repacking via "
        "the flatbuffer C++ API.",
        py::arg("input_file_name"), py::arg("output_file_name"));
}
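
After building the Python extension, usage would presumably look like this (module and function names are taken from the snippet above; the exact import path depends on where the extension is built):

import tflite_flatbuffer_align_wrapper as aligner

# repack the flatbuffer so tensor buffers land on aligned boundaries
aligner.align_tflite_model('model_16bit.tflite', 'model_16bit_aligned.tflite')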

This is exactly the tool I was looking for.

The build steps are quite easy:

git clone https://github.com/tensorflow/tflite-micro.git
cd tflite-micro
bazel build //tensorflow/lite/micro/tools:tflite_flatbuffer_align

bazel-bin/tensorflow/lite/micro/tools/tflite_flatbuffer_align <input.tflite> <output.tflite>
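
As a final check (a sketch; 'model_16bit.tflite' follows the conversion code earlier in the thread, and 'model_16bit_aligned.tflite' is assumed to be the output of the command above), the realigned model can be compared against the original on the host:

import numpy as np
import tensorflow as tf

def run(path, x):
    interp = tf.lite.Interpreter(model_path=path)
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    out = interp.get_output_details()[0]
    interp.set_tensor(inp['index'], x)
    interp.invoke()
    return interp.get_tensor(out['index'])

x = np.zeros([1, 10, 1], dtype=np.int16)  # int16 input matching the converted model
print(np.array_equal(run('model_16bit.tflite', x),
                     run('model_16bit_aligned.tflite', x)))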


Thanks a lot.

Sam_Gao
NXP Employee

You are welcome! Great to get the update and to see that you found the root cause (memory alignment)!