Couldn't find a better place to post this question -- hope it's OK.
We have a smartNIC whose core processor is an LX2160A -- a PCIe Gen3 x8 adapter configured as a PCIe endpoint (EP). The NIC has two 25G interfaces.
We are developing a packet forwarding application that takes packets in from the two 25G interfaces and forwards them across the PCIe bus to user space on the x86 host, which is configured as the PCIe root complex (RC).
We are using DPDK: several CPUs are set up as Rx cores, each RSS'd from its assigned interface. The packets then move through a pipeline and eventually reach a Tx thread (on its own CPU) to be forwarded to the host.
On the host we have our own ring descriptors and packet buffers allocated in a number of huge pages. Before the forwarder runs, we provide the address information for those pages, through PCIe config space, to the forwarder app on the LX2160A. The ring and buffer huge pages are mapped into user space.
So we use DPDK to bring packets in from the network interfaces, and from the Tx thread/CPU we want to use QDMA to forward the packets to the huge pages on the host.
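To make the intent concrete, here is a rough sketch of what we'd like the Tx thread to do, written against DPDK's generic dmadev API (which, as far as we can tell, the dpaa2-qdma driver plugs into in recent mainline DPDK). The device id, vchan, host ring layout, `HOST_BUF_SZ`, and the one-slot-per-packet scheme are our assumptions, not a known-good NXP recipe:

```c
/* Sketch: Tx-thread DMA of received mbufs into host huge-page buffers
 * using DPDK's dmadev API. Assumes the dmadev has already been
 * configured, a vchan set up, and rte_dma_start() called. */
#include <rte_dmadev.h>
#include <rte_mbuf.h>

#define HOST_BUF_SZ 2048   /* assumed per-slot buffer size on the host */
#define BURST       32

static void
tx_thread_iteration(int16_t dma_dev_id, uint16_t vchan,
                    rte_iova_t host_ring_base, uint16_t host_ring_slots,
                    struct rte_mbuf **pkts, uint16_t nb_pkts)
{
    static uint16_t slot;       /* next free slot in the host ring */
    uint16_t i, done;
    bool error = false;

    for (i = 0; i < nb_pkts; i++) {
        struct rte_mbuf *m = pkts[i];
        rte_iova_t dst = host_ring_base +
                         (rte_iova_t)(slot % host_ring_slots) * HOST_BUF_SZ;

        /* Enqueue one copy descriptor: NIC-local mbuf data -> host page. */
        if (rte_dma_copy(dma_dev_id, vchan,
                         rte_mbuf_data_iova(m), dst,
                         rte_pktmbuf_data_len(m), 0) < 0)
            break;              /* descriptor ring full; retry later */
        slot++;
    }

    rte_dma_submit(dma_dev_id, vchan);  /* doorbell the queued copies */

    /* Reap completions so mbufs (and host slots) can be recycled. */
    done = rte_dma_completed(dma_dev_id, vchan, BURST, NULL, &error);
    for (i = 0; i < done; i++)
        ; /* free the corresponding mbufs / advance the host ring here */
}
```

Updating the host-side ring descriptors (so the host knows a slot is filled) would presumably be a second small DMA or write after the payload copy completes; that part is omitted here.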
Our questions are:
1. Does NXP support a user-space QDMA library that lets us run several channels' worth of parallel DMA transactions? (We can't afford to do the DMA from the kernel.)
2. Does NXP's DPDK support for the LX2160A include PCIe as an interface? If so, could we use that to forward the packets -- i.e., does it let us supply the physical/bus addresses to burst packets and descriptors to?
Any pointers to documents, git-based source code, or other information would surely be appreciated. Thanks ahead of time for all your help.