How long does it take to respond to the ENET receive interrupt?


1,965 Views
hubo1
Contributor I

I am using the RT1064, but I don't know how long the ENET takes to respond to the receive interrupt. I am measuring the delay between send_time and recv_time over the ENET, and I find that the delay decreases as the number of received frames increases. I have disabled interrupt coalescing (ENETx_RXIC). I want to know how to set the interrupt response time.
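For illustration only (not part of the original post): a minimal sketch of taking per-node timestamps with the Cortex-M7 DWT cycle counter, one way to record send_time and recv_time on the RT1064. Comparing timestamps across two nodes additionally requires synchronized clocks, which comes up later in this thread.

#include "fsl_device_registers.h"

/* Enable the free-running DWT cycle counter (some Cortex-M7 parts also
   require unlocking DWT->LAR; a debugger usually does this for you). */
static void timestamp_init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk; /* enable trace/DWT */
    DWT->CYCCNT = 0U;
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;
}

/* Microseconds since timestamp_init(); SystemCoreClock is 600 MHz on a
   default RT1064 clock setup, so this wraps after roughly 7 seconds. */
static uint32_t timestamp_us(void)
{
    return DWT->CYCCNT / (SystemCoreClock / 1000000U);
}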

11 Replies

1,746 Views
hubo1
Contributor I

What is the purpose of this line: "AT_NONCACHEABLE_SECTION_ALIGN(static enet_rx_bd_struct_t g_rxBuffDescrip_0[ENET_RXBD_NUM], FSL_ENET_BUFF_ALIGNMENT)"? I am using the latest SDK, 2.8.6. Does changing ENET_RXBD_NUM from 4 to 1 affect reception? In my tests there is already a buffered frame when the receive interrupt is entered, so the message I read is not the most recent one; with ENET_RXBD_NUM set to 1 I always get the latest message. Will this modification affect any other mechanisms?


1,635 Views
victorjimenez
NXP TechSupport

Hello, 

I apologize for the delayed response; this case was mishandled and I didn't see your reply in time. Regarding your questions, please see my comments below.

"I confirm that the message has been sent, but the message is received a long time later. Does the receive-side PTP timestamping mechanism affect the subsequent reception of other data messages and increase the delay?"
Yes. Processing the PTP protocol consumes additional CPU time, which adds delay, and the CPU is also servicing other interrupts and running the TCP/IP stack tasks on top of that. Keep in mind that the purpose of PTP is synchronization rather than reducing reception delay: PTP ensures that master and slave share the same time base; it does not speed up packet delivery.

"What is the purpose of this line: AT_NONCACHEABLE_SECTION_ALIGN(static enet_rx_bd_struct_t g_rxBuffDescrip_0[ENET_RXBD_NUM], FSL_ENET_BUFF_ALIGNMENT)?"
This line allocates the buffer descriptors in RAM and asks the compiler to place them in a non-cacheable section, so the data is never held back in the cache. It also aligns the memory to 64 bytes, which avoids the typical memory-alignment issues and ensures no data is lost.
It is possible to reduce the number of receive buffer descriptors from 4 to 1, but the cost is more CPU work: the driver moves one buffer at a time instead of batching several, so overall you spend more time servicing that single buffer. The advantage is that you save RAM. You may also create a bottleneck, because there is only one buffer to store the data received by the FEC. The number of interrupts stays the same, but you will see more interrupt nesting, since the single buffer must be emptied before the next frame can be stored into it. The practical effect is a lot of retransmissions, and you may lose some packets. In the end, these are ring buffers used to pass the data up to the application.
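For reference, the line under discussion looks like this in the SDK Ethernet examples (a sketch: ENET_RXBD_NUM and the _0 suffix come from the example code, the macro itself lives in fsl_common.h, and FSL_ENET_BUFF_ALIGNMENT is the example port layer's name for the 64-byte ENET buffer alignment). Note the comma between the variable and the alignment argument:

#include "fsl_enet.h"

#define ENET_RXBD_NUM 4 /* example default; 1 also works, with the trade-offs described above */

/* Place the RX buffer descriptor ring in a non-cacheable RAM section,
   aligned to FSL_ENET_BUFF_ALIGNMENT (64 bytes), so the ENET DMA and the
   CPU always see a coherent view of the descriptors. */
AT_NONCACHEABLE_SECTION_ALIGN(static enet_rx_bd_struct_t g_rxBuffDescrip_0[ENET_RXBD_NUM],
                              FSL_ENET_BUFF_ALIGNMENT);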
 
Regards, 
Victor 
 

1,944 Views
victorjimenez
NXP TechSupport

Hello, 

I'm not sure I correctly understood the behavior you are facing; could you please clarify? Also, are you using the Ethernet examples that we provide within the SDK? If so, could you please tell me which examples and which version of the SDK?

Regards, 

Victor 


1,936 Views
hubo1
Contributor I

I am using SDK version 2.7 and AN12449.


1,921 Views
victorjimenez
NXP TechSupport

Hello, 

I think there's a misunderstanding: application note AN12449 covers sensor data protection with the SE050 and uses a Kinetis K64, not an RT device. Are you using an example from the RT SDK? If so, could you please tell me how I can reproduce the behavior you mentioned on the RT1064-EVK with the SDK examples? Also, you are using an old version of the SDK; I highly recommend migrating to the newest one.

 

Regards, 

Victor 


1,914 Views
hubo1
Contributor I

I am using AN12149, not AN12449; that was my mistake. I am using the RT1064 to test the end-to-end latency of Ethernet, with SDK version 2.7. There are 20 nodes in the network, and all nodes are time-synchronized. Each node sends multicast data periodically, recording the send time just before transmission and the receive time in the receive interrupt. We found that the end-to-end delay is at least 5 ms, which feels too high.

Is it possible that the receive interrupt only occurs once the received frames reach a certain count, or after a certain time? We did not configure interrupt coalescing.


1,908 Views
victorjimenez
NXP TechSupport

Thanks for providing more information. Measuring the latency inside the interrupt handler increases the latency of the TCP protocol itself. This is not a valid approach, since you are interfering with the timing of the communication; hence your latency measurements are not correct. The correct way to measure this would be to use a trace analyzer along with a hub.

Regarding your question about when the interrupt is triggered: by default, our SDK examples are configured to trigger an interrupt on every frame.
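A sketch of what that per-frame default corresponds to in the driver configuration; the intCoalesceCfg member is only present on parts with the interrupt-coalescing feature, so verify the field name against your fsl_enet.h:

#include "fsl_enet.h"

/* Leaving intCoalesceCfg at NULL -- which ENET_GetDefaultConfig() already
   does -- keeps ENETx_RXIC disabled (ICEN = 0), so the RXF interrupt fires
   for every received frame instead of after a frame-count or time threshold. */
void enet_per_frame_interrupts(enet_config_t *config)
{
    ENET_GetDefaultConfig(config);
    config->intCoalesceCfg = NULL; /* no RX/TX interrupt coalescing */
}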


1,900 Views
hubo1
Contributor I

We are not using the TCP protocol; we send Ethernet frames directly, without a TCP/IP stack. And we are not measuring interrupt latency but the end-to-end latency from the sender to the receiver. On the RT1064 platform the measured delay is 5 ms, while on other platforms we have measured only 200 µs.


1,877 Views
victorjimenez
NXP TechSupport

The interrupt handler is not the best place to measure Ethernet frame or TCP/IP packet delays. You would be better off using a network analysis tool such as Wireshark, which gives you the exact times, and the differences between them, at which Ethernet frames arrive at and leave a device.

I ran a couple of tests on my end and did not get the results you mention. For the tests I used the lwip_dhcp_bm example from the SDK and measured the times with Wireshark, getting around 600 µs. See the image below; IP .125 corresponds to a PC and .112 to the RT.

[Image: Wireshark capture (victorjimenez_0-1605746299100.png) showing frame times of around 600 µs between the PC (.125) and the RT (.112)]

What times do you get if you measure as I suggested above?



1,777 Views
hubo1
Contributor I

I have now switched to SDK version 2.8.6. I use two network ports for redundant sending and receiving. If I do not receive PTP packets, the network delay is 400 µs, but after receiving PTP packets and retaining the timestamp (modified according to AN12149), the delay increases to 100 ms. I have confirmed that the message is sent, but it is received much later. Does the receive-side PTP timestamping mechanism affect the subsequent reception of other data messages and increase the delay?
Let me describe my application scenario. I have two slave nodes that synchronize their time with a master node through PTP messages. Each slave node has two network ports for sending and receiving, but only one port receives PTP packets from the master; both ports receive data packets from the other slave node. The delay on the port that receives the master's PTP packets is now much greater than the delay on the other port, which receives no PTP packets.
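One thing worth checking here (an assumption on my part, not something established in this thread) is whether the PTP bookkeeping added per AN12149 runs inside the receive interrupt, since that would hold up every frame queued behind a PTP packet. A minimal sketch of keeping that path short by only capturing the 1588 time and deferring the protocol work to a task; ENET_Ptp1588GetTimer() and enet_ptp_time_t come from the SDK's 1588 API (built with ENET_ENHANCEDBUFFERDESCRIPTOR_MODE), while queue_ptp_event() is a hypothetical application hand-off:

#include "fsl_enet.h"

extern enet_handle_t g_handle;                         /* driver handle from ENET_Init() */
extern void queue_ptp_event(const enet_ptp_time_t *t); /* hypothetical: hand off to a task */

/* Called from the RX interrupt path: read the current 1588 time
   (.second / .nanosecond) and return quickly, so the heavy PTP math
   runs later in task context instead of delaying other receptions. */
void on_ptp_frame_received(void)
{
    enet_ptp_time_t now;
    ENET_Ptp1588GetTimer(ENET, &g_handle, &now);
    queue_ptp_event(&now);
}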
