Thanks for the reply and apologies for my slow follow-up.
On your questions:
- Double buffer feature: We're using the NXP SDK 2.0 code. As far as I can tell, the code accounts for it but doesn't attempt to make use of it.
- Decreasing the baud rate: this could help alleviate the problem, but it doesn't really address the fundamental issue.
- Errata: that doesn't really seem directly relevant.
This issue is continuing to pose a problem for us. I was wrong in my assumption that just using the interrupt-based I2C_MasterTransferNonBlocking() interface would provide a good workaround.
The fundamental issue seems to be that, while on the surface the I2C controller appears to have an interaction model driven by the CPU (which would let the CPU proceed at its own pace, free of timing constraints), it actually places service-timing requirements on the CPU. This happens whether the transaction is polling-, interrupt- or DMA-based.
If I put a 25 usec delay in any of the polling loop, the I2C interrupt handler or the DMA completion handler, the transaction fails every time (a sketch of how I inject the delay follows the list below). Could you confirm this is expected/acceptable behavior? If so, it places quite tight restrictions on the CPU software. With us already running I2C at the highest IRQ priority, I can think of the following options to work around the problem, but none of them are great for us:
- Spend time ensuring global interrupts are never disabled for longer than 20 usec (at the 400 kHz I2C speed) anywhere in the software.
- Use the slower 100 kHz I2C speed to lower the chance of the issue occurring (this has a major impact for us due to data length / time).
- Use polling-based transfers with global interrupts disabled (not viable for us due to data length / time).
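For reference, this is roughly how I reproduce the failure in the interrupt case (a rough sketch, not our production code: it assumes the usual SDK 2.x startup arrangement where the weak I2C0_IRQHandler forwards to the driver's I2C0_DriverIRQHandler, and delay_us() is just a crude busy-wait helper I've named for illustration):

```c
#include <stdint.h>
#include "fsl_device_registers.h"   /* SystemCoreClock and __NOP() via CMSIS */

extern void I2C0_DriverIRQHandler(void);  /* SDK 2.0 handler in fsl_i2c.c */

/* Crude busy-wait, roughly calibrated from the core clock (illustrative only). */
static void delay_us(uint32_t us)
{
    uint32_t cycles = (SystemCoreClock / 1000000U) * us;
    for (volatile uint32_t i = 0U; i < cycles; i++)
    {
        __NOP();
    }
}

/* Override the weak IRQ handler from the startup code so servicing of the
 * I2C event is artificially delayed by ~25 usec before the SDK runs. */
void I2C0_IRQHandler(void)
{
    delay_us(25U);             /* simulate the CPU being busy elsewhere */
    I2C0_DriverIRQHandler();   /* then let the SDK service the event */
}
```

Putting the same delay at the top of the SDK polling loop or in the DMA completion callback gives the same failure.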
Reading the reference manual and looking at the SDK I2C polling and interrupt code, I can't see where this service-timing dependency comes from. With the I2C DMA code, however, a timing dependency would make sense if the DMA transfer were kicking off the last byte on the I2C bus: the completion handler would then need to set the NACK for the last byte read before that byte completed. But that would mean TXAK applied to the byte currently in transfer, which is not consistent with the manual.
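To make sure we're talking about the same ordering, here is my reading of the manual's TXAK behavior, written as a simplified polled receive loop along the lines of the SDK's blocking read (register and bit names per the reference manual; this is a sketch for discussion, not our exact code). The NACK is armed one byte ahead, before the data-register read that launches the final byte:

```c
#include <stddef.h>
#include <stdint.h>
#include "fsl_device_registers.h"   /* I2C_Type and the I2C_C1/S bit masks */

/* Simplified polled receive, illustrating where TXAK has to be set relative
 * to the byte actually on the wire. */
static void i2c_polled_read(I2C_Type *base, uint8_t *rxBuff, size_t rxSize)
{
    /* Switch to receive mode with ACK enabled. */
    base->C1 &= ~(uint8_t)(I2C_C1_TX_MASK | I2C_C1_TXAK_MASK);

    if (rxSize == 1U)
    {
        base->C1 |= I2C_C1_TXAK_MASK;   /* single-byte read: NACK it up front */
    }

    (void)base->D;                      /* dummy read starts the first byte */

    while (rxSize--)
    {
        /* Wait for the byte currently on the bus to finish. */
        while (0U == (base->S & I2C_S_IICIF_MASK)) { }
        base->S = I2C_S_IICIF_MASK;     /* clear IICIF */

        if (rxSize == 1U)
        {
            /* Arm the NACK now: reading D below launches the last byte,
             * and TXAK must already be set while that byte is received. */
            base->C1 |= I2C_C1_TXAK_MASK;
        }
        else if (rxSize == 0U)
        {
            /* Issue STOP before pulling the last byte out of D. */
            base->C1 &= ~(uint8_t)I2C_C1_MST_MASK;
        }

        *rxBuff++ = base->D;            /* reading D kicks off the next byte */
    }
}
```

If DMA is the agent performing that read of D for the second-to-last byte, then the completion interrupt would indeed have to set TXAK before the final byte finishes, which is where a tight service-time requirement could come from, but that ordering doesn't match what the manual describes for TXAK.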
Would you be able to provide any insight into this?
Thanks.