Following on from all the work we did in librpmsg_lite-imx part 2, I am trying to maximise the performance of the communication between the i.MX8M Mini Cortex-M4 (bare metal) and a QNX application running on the Cortex-A53, using the librpmsg_lite-imx library.
Some questions...
I am currently sending a 212-byte message every 10 ms from the firmware running on the Cortex-M4 (the remote end of the connection). I'm not seeing any send errors, but is that rate actually sustainable, or could I be losing data without noticing, bearing in mind this is bare metal with no queue? What is the maximum theoretical throughput?
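For reference, here is roughly what the send path looks like on the M4 side (a minimal sketch; the instance and endpoint handles and REMOTE_EPT_ADDR are placeholders for whatever my application actually uses):

    /* Minimal sketch of the periodic send, using the standard rpmsg_lite API.
     * `instance`, `ept` and REMOTE_EPT_ADDR are placeholders. */
    #include "rpmsg_lite.h"

    #define REMOTE_EPT_ADDR (30U)      /* placeholder destination address */

    static uint8_t msg[212];           /* the 212-byte payload */

    void send_sample(struct rpmsg_lite_instance *instance,
                     struct rpmsg_lite_endpoint *ept)
    {
        /* RL_BLOCK waits for a free TX buffer; a non-RL_SUCCESS return value
         * is the only indication that the message could not be queued. */
        int32_t status = rpmsg_lite_send(instance, ept, REMOTE_EPT_ADDR,
                                         (char *)msg, sizeof(msg), RL_BLOCK);
        if (status != RL_SUCCESS)
        {
            /* record the failure rather than dropping data silently */
        }
    }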
In the multicore examples in the MIMX8MM SDK, both of the remote examples have files called rsc_table.[c|h] which define a remote resource table. What is this used for? As I increase the number of buffers defined, do I need to increase VRING_SIZE?
To give myself some headroom for increasing the size of the data being sent, I have RL_BUFFER_PAYLOAD_SIZE set to 496 and RL_BUFFER_COUNT set to 2048. Does changing these have an impact on the definition of VRING_SIZE?
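For context, these are the overrides in my rpmsg_config.h, together with my own back-of-the-envelope estimate of the shared-memory footprint (the constraints in the comments are how I read the stock header, so please correct me if I have them wrong):

    /* rpmsg_config.h overrides -- sketch of my current configuration.
     * As I read the stock header, the payload size must be 2^n - 16
     * (240, 496, 1008, ...) and the buffer count must be a power of two. */
    #define RL_BUFFER_PAYLOAD_SIZE (496U)
    #define RL_BUFFER_COUNT        (2048U)

    /* Rough estimate of the shared memory needed for the buffers alone:
     * each buffer carries a 16-byte rpmsg header on top of the payload,
     * and there is one set of buffers per direction, so
     *   2 * RL_BUFFER_COUNT * (RL_BUFFER_PAYLOAD_SIZE + 16)
     *   = 2 * 2048 * 512 = 2 MiB, plus the vring descriptor/ring overhead. */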
-Andy.
Hi @AldoG
I'm not sure I understand what you have said.
I thought that RPMsg Lite already used a shared memory area in DDR (in my imx_init_raminfo.c I have reserved the last 8 MiB of the 1 GiB of memory installed in the system). And the comments in the header file about RL_BUFFER_PAYLOAD_SIZE and RL_BUFFER_COUNT don't seem to suggest that a message should only carry a few flags.
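For what it's worth, this is roughly how I bring the remote side up against that reserved region (the base address and link ID below are only illustrative, not necessarily what the SDK resource table or my linker configuration actually uses):

    /* Sketch of the remote-side init against the reserved DDR region.
     * 0x7F800000 is only illustrative (last 8 MiB of 1 GiB of DDR starting
     * at 0x40000000); the real address comes from imx_init_raminfo.c and
     * the linker configuration. The link ID macro is a placeholder for
     * whatever the SDK platform layer defines. */
    #include "rpmsg_lite.h"

    #define SHMEM_BASE ((void *)0x7F800000U)

    static struct rpmsg_lite_instance *instance;

    void comms_init(void)
    {
        instance = rpmsg_lite_remote_init(SHMEM_BASE,
                                          RL_PLATFORM_IMX8MM_M4_USER_LINK_ID,
                                          RL_NO_FLAGS);

        /* Spin until the master (A53/QNX) side reports the link as up. */
        while (!rpmsg_lite_is_link_up(instance))
        {
        }

        /* endpoint creation and name-service announcement follow here */
    }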
Can you provide a GitHub link to the example you mentioned? I'll take a look at it to see if it provides any inspiration.
Thanks,
-Andy.
Hello,
Please note that it is not recommended to use such large messages; the message should only carry a few flags to let the other processor know that data is ready.
For example, in our i.MX MScale SDK we have a demo in which the A core decodes music data, puts it into a DDR buffer, and informs the M core with the related information.
The M core then takes ownership of consuming the buffer and copies it from DDR to TCM.
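A minimal sketch of that pattern (the names, sizes and addresses here are only illustrative, not the actual SDK demo code): the A core fills the DDR buffer and sends a small descriptor over RPMsg, and the M core copies the described region from DDR into TCM before processing it.

    /* Illustrative only -- not the actual SDK demo. The RPMsg message carries
     * just a small descriptor; the bulk data stays in the shared DDR buffer. */
    #include <stdint.h>
    #include <string.h>

    typedef struct
    {
        uint32_t ddr_offset;  /* offset of the data inside the shared DDR buffer */
        uint32_t length;      /* number of valid bytes */
        uint32_t flags;       /* e.g. "buffer ready", "last block" */
    } data_desc_t;

    #define DDR_DATA_BASE ((const uint8_t *)0x7F900000U)  /* placeholder address */

    static uint8_t tcm_work_buf[4096];                    /* placeholder TCM buffer */

    /* Called on the M core when the small descriptor message arrives. */
    void on_descriptor(const data_desc_t *desc)
    {
        /* Clamp to the TCM buffer size, then take ownership of the region
         * and pull it into TCM for fast access. */
        uint32_t len = (desc->length <= sizeof(tcm_work_buf))
                           ? desc->length
                           : (uint32_t)sizeof(tcm_work_buf);
        memcpy(tcm_work_buf, DDR_DATA_BASE + desc->ddr_offset, len);

        /* process tcm_work_buf, then tell the A core the buffer is free */
    }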
Best regards/Saludos,
Aldo.