Hi Mark,
Can you explain when a stop condition is generated, with and without the stop hold enable bit set? I'd like to understand whether our driver should be using that bit. As our code stands, SHEN in the FLT register is 0, the default.
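For reference, here is roughly what enabling it would look like; a minimal sketch assuming the Kinetis device-header names (I2C_Type, I2C_FLT_SHEN_MASK, the MK21F12.h header), which may not match our actual driver:

    /* Hypothetical sketch: set SHEN so the peripheral holds off further
     * bus activity until the stop condition actually completes. */
    #include "MK21F12.h"

    static void i2c_enable_stop_hold(I2C_Type *base)
    {
        /* Note: STOPF in FLT is write-1-to-clear, so this read-modify-write
         * would also clear a pending stop flag; acceptable at init time. */
        base->FLT |= I2C_FLT_SHEN_MASK;
    }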
We captured an event and I am trying to understand how the peripheral could fail to generate a stop condition.
What we are trying to send: start, 00010000 (address 0x08, write), 00000000 (0x00), 00010111 (0x17), 01101110 (0x6e), stop.
But the slave is confused and out of sync. We don't know how it got confused; that's a question for the slave's manufacturer. The slave doesn't ack on the 9th bit, but it is obviously acking on the 5th bit of the 0x00 data byte (the scope shows SDA at a different low level for one bit time; see the cursors in the 2nd image), on the 5th bit of the 0x17 data byte (again, SDA looks extra low), and perhaps on the 5th bit of the last data byte, changing it from 0x6e to 0x67.
Clearly our driver should stop sending when it detects a nack rather than plowing ahead, but what I'm interested in at this point is correct stop condition generation. When we clear MST in C1, why does the K21 not bring SDA low to generate a valid stop?
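For context, our transmit path is essentially the following; a simplified polled sketch with a hypothetical helper name (i2c_write_byte) and Kinetis device-header register names, not our literal driver code:

    /* Simplified polled master-transmit step, Kinetis-style registers. */
    #include <stdbool.h>
    #include <stdint.h>
    #include "MK21F12.h"

    static bool i2c_write_byte(I2C_Type *base, uint8_t byte)
    {
        base->D = byte;                           /* start shifting the byte out */
        while (!(base->S & I2C_S_IICIF_MASK)) { } /* wait for the interrupt flag */
        base->S = I2C_S_IICIF_MASK;               /* IICIF is write-1-to-clear */
        return !(base->S & I2C_S_RXAK_MASK);      /* false if the slave nacked */
    }

On a nack the loop would bail out and clear MST right away, e.g. if (!i2c_write_byte(I2C0, data)) I2C0->C1 &= (uint8_t)~I2C_C1_MST_MASK; which is exactly the stop generation I'm asking about.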
Secondarily, how does the peripheral react when it sees a mismatch between the data it is transmitting and the bus state (sending 0x6e but the bus says 0x67)? A few bits after the mismatch occurred, why would it not transmit a 0 for the LSB of 0x6e? The ARBL (arbitration lost) bit was not set. But is it possible that the data mismatch sets IICIF before all the data bits are transmitted, leading to the driver clearing MST too soon, so that, without SHEN enabled, stop condition generation gets messed up? Is the logic different for the arbitration-lost cause of IICIF versus the ARBL bit itself? FWIW, SSIE is 0 and the SMB, SLTH, and SLTL registers are all 0, so they aren't affecting IICIF.
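To make the question concrete, this is the shape of the handling I'm asking about; a hypothetical ISR sketch with Kinetis device-header names, not our code:

    /* Hypothetical IRQ handler showing the ARBL-vs-IICIF distinction. */
    #include "MK21F12.h"

    void I2C0_IRQHandler(void)
    {
        uint8_t status = I2C0->S;
        I2C0->S = I2C_S_IICIF_MASK;          /* IICIF is write-1-to-clear */

        if (status & I2C_S_ARBL_MASK) {
            I2C0->S = I2C_S_ARBL_MASK;       /* ARBL is also write-1-to-clear */
            /* arbitration lost: the module reverts to slave mode,
             * so clearing MST here would not produce a stop */
            return;
        }

        if (status & I2C_S_TCF_MASK) {
            /* byte genuinely complete: safe to load the next byte into D,
             * or clear MST in C1 to generate the stop */
        }
    }

If a data mismatch can set IICIF mid-byte without setting ARBL, a handler like this would clear MST while the shifter is still running, which would seem to match what we captured.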


Regards,
Debora