Hi @brian14,

thank you for the quick reply.

I don't have a GStreamer pipeline implemented. For the video stream I currently use Aravis; Pylon/pypylon and GenICam Harvesters show the same behavior. It is really just a basic loop:
```python
import gi
import signal

gi.require_version('Aravis', '0.8')
from gi.repository import Aravis

class SIGINT_handler():
    def __init__(self):
        self.SIGINT = False

    def signal_handler(self, signal, frame):
        print('You pressed Ctrl+C!')
        self.SIGINT = True

handler = SIGINT_handler()
signal.signal(signal.SIGINT, handler.signal_handler)

camera = Aravis.Camera.new(None)
if not camera:
    raise IOError("No camera found.")

camera.set_region(0, 0, 2160, 1620)
camera.set_frame_rate(20.0)

stream = camera.create_stream()
payload = camera.get_payload()

# Allocate 10 buffers
for i in range(10):
    stream.push_buffer(Aravis.Buffer.new(payload))

camera.start_acquisition()
while not handler.SIGINT:
    buffer = stream.timeout_pop_buffer(1000000)  # timeout in microseconds
    if buffer is not None:
        print("Buffer {0}x{1} {2}".format(buffer.get_image_width(), buffer.get_image_height(), buffer))
        stream.push_buffer(buffer)
    else:
        print("buffer is None")
camera.stop_acquisition()
```
For the inference task I also have just a basic program:
```python
import signal, time
import numpy as np
import tflite_runtime.interpreter as tfl

class SIGINT_handler():
    def __init__(self):
        self.SIGINT = False

    def signal_handler(self, signal, frame):
        print('You pressed Ctrl+C!')
        self.SIGINT = True

handler = SIGINT_handler()
signal.signal(signal.SIGINT, handler.signal_handler)

interpreter = tfl.Interpreter(
    "mobilenet_v1_1.0_224_quant.tflite",
    experimental_delegates=[tfl.load_delegate("/usr/lib/libvx_delegate.so")]
)
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.allocate_tensors()
images_shape = input_details[0]['shape']

def process_image():
    dummy_data = np.random.uniform(0.0, 255.0, images_shape).astype('uint8')
    interpreter.set_tensor(input_details[0]['index'], dummy_data)
    interpreter.invoke()
    for od in output_details:
        trash = interpreter.get_tensor(od['index'])

# warm up
process_image()

while not handler.SIGINT:
    time.sleep(0.2)
    process_image()
```
It uses the model from the label_image.py example that ships with TensorFlow Lite.
The output of the camera thread with `ARV_DEBUG="all:2"` is:
```
[01:56:13.568] 🅸 stream> SIRM_INFO = 0x02000000
[01:56:13.568] 🅸 stream> SIRM_REQ_PAYLOAD_SIZE = 0x00000000000d5930
[01:56:13.568] 🅸 stream> SIRM_REQ_LEADER_SIZE = 0x00000400
[01:56:13.568] 🅸 stream> SIRM_REQ_TRAILER_SIZE = 0x00000400
[01:56:13.568] 🅸 stream> Required alignment = 4
[01:56:13.570] 🅸 stream> SIRM_PAYLOAD_SIZE = 0x000d5930
[01:56:13.570] 🅸 stream> SIRM_PAYLOAD_COUNT = 0x00000001
[01:56:13.570] 🅸 stream> SIRM_TRANSFER1_SIZE = 0x00000000
[01:56:13.570] 🅸 stream> SIRM_TRANSFER2_SIZE = 0x00000000
[01:56:13.570] 🅸 stream> SIRM_MAX_LEADER_SIZE = 0x00000400
[01:56:13.570] 🅸 stream> SIRM_MAX_TRAILER_SIZE = 0x00000400
[01:56:13.571] 🅸 stream-thread> Start async USB3Vision stream thread
[01:56:13.614] 🅸 device> [[UvDevice::write_memory] Try 1/5: unexpected answer (0x0000)
[01:56:16.939] 🆆 stream-thread> Payload transfer failed (LIBUSB_TRANSFER_STALL)
[01:56:16.939] 🆆 stream-thread> Trailer transfer failed (LIBUSB_TRANSFER_ERROR)
[01:56:16.940] 🆆 stream-thread> Leader transfer failed (LIBUSB_TRANSFER_ERROR)
[01:56:16.941] 🆆 stream-thread> Payload transfer failed (LIBUSB_TRANSFER_ERROR)
[01:56:16.941] 🆆 stream-thread> Trailer transfer failed (LIBUSB_TRANSFER_ERROR)
[01:56:16.942] 🆆 stream-thread> Leader transfer failed (LIBUSB_TRANSFER_ERROR)
[01:56:16.942] 🆆 stream-thread> Payload transfer failed (LIBUSB_TRANSFER_ERROR)
```
More about the error messages from pypylon and the usbmon output can be found in this issue.
When I reduce the image resolution, the error doesn't occur immediately. For example, at 300x300 @ 20 fps it takes about 30 seconds until we receive a USB transfer stall while the inference thread is running with its 200 ms sleep. When we reduce the size further (100x100 @ 20 fps), we can run both tasks for 10 minutes without the error.
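For context on why I suspect bandwidth-related timing, here is a back-of-the-envelope estimate of the stream rates at those resolutions. This assumes an 8-bit monochrome pixel format (1 byte/pixel); the numbers scale accordingly for other formats.

```python
# Rough USB payload bandwidth for a raw stream, assuming 1 byte per pixel (Mono8).
def stream_mbytes_per_s(width, height, fps, bytes_per_pixel=1):
    return width * height * bytes_per_pixel * fps / 1e6

print(stream_mbytes_per_s(2160, 1620, 20))  # ~70 MB/s -> fails immediately
print(stream_mbytes_per_s(300, 300, 20))    # ~1.8 MB/s -> fails after ~30 s
print(stream_mbytes_per_s(100, 100, 20))    # ~0.2 MB/s -> runs for >10 min
```

So the time until the stall roughly tracks the bus load, which is what makes me think the NPU workload is starving the USB transfers rather than the camera itself misbehaving.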
I believe this is a timing issue or some resource contention between the USB and NPU drivers. How can I get more debugging information out of those drivers?
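For reference, this is what I have been using to capture the traces so far: a higher Aravis debug level, libusb's own logging, and a raw usbmon capture (the script name `camera_loop.py` is just a placeholder for the loop above). I'm hoping there is something comparable for the NPU side.

```shell
# Raise Aravis logging from level 2 to 3 (debug) for all domains
ARV_DEBUG=all:3 python3 camera_loop.py

# Enable libusb's own log output (4 = debug); applies to any libusb-based backend
LIBUSB_DEBUG=4 python3 camera_loop.py

# Capture raw USB traffic via usbmon ("0u" traces all buses)
sudo modprobe usbmon
sudo cat /sys/kernel/debug/usb/usbmon/0u > usb_trace.txt
```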
Thanks and regards.