LPC54606 SDK Ethernet interrupt issue



807 views
manu1
Contributor I

I am using LPC54606 MCU and am trying to get the `lwip_udpecho_bm` sample program (from the SDK) to work using Ethernet RX_DMA interrupts. It works without any modifications to the code, using polling. I am also able to get the `lpcxpresso54608_enet_txrx_transfer_rxinterrupt` sample program to work (it uses interrupts), without any problems.

However, when I enable the RX DMA interrupt in the `lwip_udpecho_bm` program using

ENET_EnableInterrupts(ethernetif->base, kENET_DmaTx | kENET_DmaRx); [file: enet_ethernetif_lpc.c, line: 326]

and invoke the following function from the callback

...
else if (event == kENET_RxIntEvent)
{
    ethernetif_input(netif);
}
...

[file: enet_ethernetif_lpc.c, code from line: 140]

the RX DMA stops working after `ENET_RXBD_NUM` interrupts (I tried changing the value from the default of 5, but without any success).

I tried looking for the place where the descriptors are updated after each DMA transfer. However, it seems that the descriptor buffer is wrapped into a pbuf which is passed to the lwIP stack. I am not able to find where this pbuf is released and ownership of the descriptor is handed back to the DMA (bit 31, OWN, in RDES3). Is this intentional?

I also tried updating the descriptors to release the buffers, but that did not help. Is there a fix for this?

Additionally, the driver functions used to read a frame differ between `lpcxpresso54608_enet_txrx_transfer_rxinterrupt` (which uses `ENET_ReadFrame()`) and `lwip_udpecho_bm` (which uses `ethernetif_read_frame()`). Is there a reason for this?

Labels (2)
0 Kudos
1 Reply

758 views
manu1
Contributor I

Fixed the issue by modifying the code as recommended in the lwIP documentation on "Mainloop mode ("NO_SYS")".

0 Kudos