Hi team,
I'm running Linux 6.6.36 on the imx95-19x19-lpddr5-evk board.
I ran a Python script to compare inference performance across platforms, using a simple ResNet50 model converted to TensorFlow Lite.
I followed the i.MX Machine Learning User's Guide, which says that to run inference on the GPU/NPU hardware accelerator you need the --external_delegate_path switch:
• For VX Delegate on i.MX 8: --external_delegate_path=/usr/lib/libvx_delegate.so
• For Ethos-U Delegate on i.MX 93: --external_delegate_path=/usr/lib/libethosu_delegate.so
• For Neutron Delegate on i.MX 95: --external_delegate_path=/usr/lib/libneutron_delegate.so
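
For reference, the three paths above can be kept in a small lookup table when the delegate is selected from Python. This is just a sketch; the dict, keys, and function name are my own, not from the guide:

```python
# Map each i.MX platform to its external-delegate library, per the
# i.MX Machine Learning User's Guide. Names here are hypothetical,
# for illustration only.
DELEGATE_LIBS = {
    "imx8": "/usr/lib/libvx_delegate.so",        # VX Delegate
    "imx93": "/usr/lib/libethosu_delegate.so",   # Ethos-U Delegate
    "imx95": "/usr/lib/libneutron_delegate.so",  # Neutron Delegate
}

def delegate_lib_for(platform):
    """Return the delegate .so path for a platform, or raise if unknown."""
    if platform not in DELEGATE_LIBS:
        raise ValueError(f"no external delegate known for {platform!r}")
    return DELEGATE_LIBS[platform]
```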
That example runs fine, but my script loads the external delegate in a different way than the example code does.
Here is the relevant part of my script:
import os

import tensorflow as tf


class VelaInference(InferenceBase):
    def __init__(self, model_loader, model_path, debug_mode=False):
        """
        Initialize the VelaInference object.

        :param model_loader: Object responsible for loading the model and categories.
        :param model_path: Path to the tflite model.
        :param debug_mode: If True, print additional debug information.
        """
        super().__init__(model_loader, vela_path=model_path, debug_mode=debug_mode)

    def load_model(self):
        num_threads = os.cpu_count()
        print("num_threads =", num_threads)
        delegates = []
        for _ in range(2):
            try:
                self.interpreter = tf.lite.Interpreter(
                    model_path=self.vela_path,
                    num_threads=num_threads,
                    experimental_delegates=delegates,
                )
                self.interpreter.allocate_tensors()
                break  # interpreter is ready; do not rebuild it on the next pass
            except RuntimeError as re:
                if len(delegates) == 0 and "Encountered unresolved custom op: ethos-u." in str(re):
                    # Retry once with the Ethos-U delegate.
                    print("using libethosu_delegate")
                    delegates = [tf.lite.experimental.load_delegate("/usr/lib/libethosu_delegate.so")]
                    continue
                raise re

        input_details = self.interpreter.get_input_details()
        output_details = self.interpreter.get_output_details()
        self.input_index = input_details[0]["index"]
        self.output_0_index = output_details[0]["index"]
        print("input: ", self.input_index)
        print("output: ", self.output_0_index)
        print("VelaInference init done.")

    def predict(self, input_data, is_benchmark=False):
        ...
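As a side note, the retry condition inside load_model can be factored into a small pure function, which makes the fallback rule easier to see: retry at most once, and only when the interpreter reports the Ethos-U custom op. This is only a sketch of the logic already in my script; the function name is hypothetical:

```python
# Error signature the script keys on (taken from the script above).
ETHOSU_SIGNATURE = "Encountered unresolved custom op: ethos-u."

def delegate_retry_path(err_msg, delegates_tried):
    """Return the delegate .so path to retry with, or None to re-raise.

    Mirrors the fallback in load_model: if a delegate was already tried,
    or the error is not the Ethos-U custom-op message, give up.
    Hypothetical helper, for illustration only.
    """
    if delegates_tried:
        return None  # already retried once; propagate the error
    if ETHOSU_SIGNATURE in err_msg:
        return "/usr/lib/libethosu_delegate.so"
    return None
```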
This way of loading the delegate works well on i.MX 93, but when I change the library path to the i.MX 95 one (/usr/lib/libneutron_delegate.so), I get:
num_threads= 6
Traceback (most recent call last):
  File "/home/root/resnet50.py", line 498, in <module>
    tflite_vela_inference = VelaInference(
                            ^^^^^^^^^^^^^^
  File "/home/root/resnet50.py", line 290, in __init__
    super().__init__(model_loader, vela_path=model_path, debug_mode=debug_mode)
  File "/home/root/resnet50.py", line 80, in __init__
    self.model = self.load_model()
                 ^^^^^^^^^^^^^^^^^
  File "/home/root/resnet50.py", line 316, in load_model
    raise re
  File "/home/root/resnet50.py", line 300, in load_model
    self.interpreter = tf.lite.Interpreter(
                       ^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/tensorflow/lite/python/interpreter.py", line 513, in __init__
    self._interpreter.ModifyGraphWithDelegate(
RuntimeError: Caught an unknown exception!
What is the difference between the i.MX 93 and i.MX 95 delegate libraries?
Why does the i.MX 95 delegate library have to be loaded in the specific way shown in the example script?