LCP54606 SDK Ethernet interrupt issue


LCP54606 SDK Ethernet interrupt issue

1,052 Views
manu1
Contributor I

I am using LPC54606 MCU and am trying to get the `lwip_udpecho_bm` sample program (from the SDK) to work using Ethernet RX_DMA interrupts. It works without any modifications to the code, using polling. I am also able to get the `lpcxpresso54608_enet_txrx_transfer_rxinterrupt` sample program to work (it uses interrupts), without any problems.

However, when I enable the RX DMA interrupt in the `lwip_udpecho_bm` program using

`ENET_EnableInterrupts(ethernetif->base, kENET_DmaTx | kENET_DmaRx);` [file: enet_ethernetif_lpc.c, line: 326]
and invoke the following function from the callback

    ...
    else if (event == kENET_RxIntEvent)
    {
        ethernetif_input(netif);
    }
    ...

[file: enet_ethernetif_lpc.c, code from line: 140]

the RX DMA stops working after `ENET_RXBD_NUM` interrupts (I tried changing the value from 5, but without success).

I tried to find where the descriptors are updated after each DMA transfer. It seems the descriptor buffer is wrapped into a pbuf that is passed to the lwIP stack, but I cannot find where this pbuf is released and descriptor ownership is handed back to the DMA (the OWN bit, bit 31 of RDES3). Is this intentional?

I also tried updating the descriptors to release the buffers, but that did not help. Is there a fix for this?

Additionally, the driver functions used to read frames differ between `lpcxpresso54608_enet_txrx_transfer_rxinterrupt` (which uses `ENET_ReadFrame()`) and `lwip_udpecho_bm` (which uses `ethernetif_read_frame()`). Is there a reason for this?

1 Reply

1,003 Views
manu1
Contributor I

Fixed the issue by modifying the code as recommended in the lwIP documentation, "Mainloop mode ("NO_SYS")".
