Kl17 I2C Slave using transactional API


901 Views
dougbaker
Contributor III

We are using a KL17 as an I2C slave with the high-level transactional API callback interface, built with GNU compiler version 6.3.1.

 

The master sends the KL17 two bytes of data, and the KL17 is expected to respond with 3 bytes of data back to the master.  The KL17 is the slave, and we have another ARM CPU as the I2C master.  A scope with an I2C analyzer shows that the 2 bytes the master sends always look good, but sometimes the KL17 does not receive them properly.  The KL17 will sometimes (say 1 in 100 transfers) not receive the second byte [rx2] properly in the callback.

 

The scope with an I2C bus analyzer will show: 

Note: the slave address is 50; S = Start, R = Repeat Start, E = Stop

S[50] [rx1] [rx2] R [51]  [tx1] [tx2] [tx3] E

 

 

Pseudocode:

Callback
{
    switch (event)
    {
    case RX Event:
        xfer->data = (unsigned char *)&rxbuff[0];
        xfer->dataSize = 2;
        break;

    case TX Event:
        /* Check rx data here; this is where the incorrect data [rx2] is seen. */
        xfer->data = (unsigned char *)&txbuff[0];
        xfer->dataSize = 3;
        break;

    case Complete Event:
        break;
    }
}

 

The callback code flow is RX Event followed by a TX Event and finally the Complete Event.
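
Concretely, with the KSDK 2.x fsl_i2c slave driver the callback sketched above looks roughly like this (a minimal sketch, not our exact code; the event enums and the i2c_slave_transfer_t type come from fsl_i2c.h):

#include "fsl_i2c.h"

static uint8_t rxbuff[2];
static uint8_t txbuff[3];

static void i2c_slave_callback(I2C_Type *base, i2c_slave_transfer_t *xfer, void *userData)
{
    switch (xfer->event)
    {
    case kI2C_SlaveReceiveEvent:
        /* Master write phase: point the driver at the 2-byte receive buffer. */
        xfer->data     = rxbuff;
        xfer->dataSize = 2U;
        break;

    case kI2C_SlaveTransmitEvent:
        /* Master read phase: rxbuff should hold both received bytes by now. */
        xfer->data     = txbuff;
        xfer->dataSize = 3U;
        break;

    case kI2C_SlaveCompletionEvent:
        /* The whole write + repeated-start read transfer has finished. */
        break;

    default:
        break;
    }
}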

 

Question 1: Is this the expected callback sequence for a 2-byte I2C write followed by a 3-byte read?

 

Question 2: Would we ever expect the callback to come back from the first 2-byte receive if it had not received the two bytes?

 

Question 3: What do we expect xfer->transferredCount to be each time we get the callback?

3 Replies

671 Views
dougbaker
Contributor III

Just to add some more information: we found that slowing the I2C bus from 400 kbit/s to 100 kbit/s seems to have made the problem go away, or at least greatly reduced it.  This is not the solution we are looking for, because we want to run the bus at 400 kbit/s, but it may help in understanding what the problem is.
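
(For scale: at 400 kbit/s each byte on the wire, 8 bits plus ACK, takes roughly 22.5 µs, versus roughly 90 µs at 100 kbit/s, so slowing the bus quadruples the latency budget the slave has to service each byte.)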


671 Views
jorge_a_vazquez
NXP Employee

Hi Doug Baker

According to your description, this seems to be a hardware-related problem. Please consider that as your speed increases, your SDA and SCL lines need to be shorter; I2C needs short lines because long lines can cause noise and problems in the communication.

Also, could you clarify what you mean by that?

Answering your questions:

Question 1: Is this the expected callback sequence for a 2-byte I2C write followed by a 3-byte read?

If you are asking whether it is possible, then yes; there shouldn't be any problem in receiving 2 bytes and sending 3.

Question 2: Would we ever expect the callback to come back from the first 2-byte receive if it had not received the two bytes?

The slave callback is set up with kI2C_SlaveReceiveEvent, so if the master is sending data, the callback will be called, and inside the receive event you have to specify how many bytes you will save and the address where you are going to save them.
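
For reference, a minimal sketch of how such a callback is registered with this driver (I2C0 and the handle name are placeholders; I2C_SlaveInit() is assumed to have run already):

i2c_slave_handle_t g_slaveHandle;

/* Register the callback and start listening. The receive/transmit events
 * are always delivered; optional events such as kI2C_SlaveCompletionEvent
 * must be enabled through the event mask. */
I2C_SlaveTransferCreateHandle(I2C0, &g_slaveHandle, i2c_slave_callback, NULL);
I2C_SlaveTransferNonBlocking(I2C0, &g_slaveHandle, kI2C_SlaveCompletionEvent);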

 

Question 3: What do we expect xfer->transferredCount to be each time we get the callback?

transferredCount is a variable used by the I2C driver to keep count of the bytes that have already been transferred; inside the callback you will not see any value of this variable other than 0. If you check the I2C driver you will find this code:

/* Receive data. */
*handle->transfer.data++ = data;
handle->transfer.dataSize--;
xfer->transferredCount++;
which is executed on every read of the D register.

I hope this information helps you.
Have a great day,
Jorge Alcala



671 Views
dougbaker
Contributor III

A solution to this problem was found, so now the I2C works 100% correctly at 400 kbit/s.

If I set the I2C IRQ priority to the highest level (0 is the highest priority on Cortex-M) using:

NVIC_SetPriority(I2C0_IRQn, 0);

and I also set the priority of the other IRQs I use (RTC and Timer 0) to a lower priority, using for example NVIC_SetPriority(RTC_IRQn, 3), then

it works 100% correctly.
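
Consolidated, the priority setup is just the following (TPM0_IRQn is a guess at what "Timer 0" maps to on the KL17; use whichever timer IRQ you actually enable):

/* 0 is the highest urgency on Cortex-M: the I2C slave ISR must be able to
 * preempt everything else so each byte is serviced before the next arrives. */
NVIC_SetPriority(I2C0_IRQn, 0);
/* Demote the other interrupts in use. TPM0_IRQn is an assumption for
 * "Timer 0"; substitute the IRQ of the timer actually used. */
NVIC_SetPriority(RTC_IRQn, 3);
NVIC_SetPriority(TPM0_IRQn, 3);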

 

I do not understand why this fixed the problem.  Without changing the priorities, the system works correctly 99 out of 100 times.  In the 1-in-100 case where it fails, we get a callback where we requested 2 bytes of data to be received, the callback indicates we only got 1 byte, and the second byte we receive is not correct.  It acts like the I2C callback comes early, before the Kinetis actually receives the 2 bytes of data.  I would like to be able to explain why changing the priorities fixed this, because I do not understand how it would.  Also, FYI, the IRQ callback from Timer 0 only cleared the interrupt and returned, so I made it as short as possible, and the I2C would still sometimes fail and get only 1 byte of data when we requested 2.

 

I did check that the CPU clock is running as fast as it can by verifying that SystemCoreClock is set to 48000000.

 

void BOARD_BootClockRUN(void)
{
    /* Set safe SIM dividers before switching clocks, then apply the
       MCG-Lite and SIM configurations for RUN mode. */
    CLOCK_SetSimSafeDivs();
    CLOCK_SetMcgliteConfig(&g_defaultClockConfigRun.mcgliteConfig);
    CLOCK_SetSimConfig(&g_defaultClockConfigRun.simConfig);
    SystemCoreClock = g_defaultClockConfigRun.coreClock;
}
