MKE04Z SPI Slave 1st Byte Wrong.


531 Views
dave_harmonjr
Contributor III

I am trying to configure an MKE04Z device as a SPI slave. It has CS/MISO/MOSI lines connected. Several other devices are also connected to the master, an MK41Z, and they work perfectly fine using the serial manager library / FreeRTOS on the master side. The slave additionally drives a data-ready GPIO output to signal the master to read from it on demand when it has an important message. The messages are 16 bytes in length.

 

The basic flow at startup: the slave boots, initializes the SPI, then starts an asynchronous slave transfer using the MKE04Z drivers, with a custom protocol I defined, to request that the master send some data over. A callback function sets a bool slaveFinished = true to signify that the master has read the message from the slave. After the master reads that message, the slave loads some dummy data via the slave async transfer function, because the master needs to do another "read" to send its response over after recognizing the command.

 

After the slave gets its response back, it loads more dummy data via SPI_SlaveTransferNonBlocking, because a SPI_SlaveTransferNonBlocking call must already have loaded the SPI handle's buffer; otherwise there is apparently no way for an interrupt to fire in response to the master asserting CS and clocking in a message. At least, I have been unable to find another way.

 

The problem I had with this: when you run SPI_SlaveTransferNonBlocking, it automatically loads 2 bytes into the SPI TX buffer, because the TX-empty interrupt (SPTEF) fires as soon as the function enables SPI interrupts. So even if I call the abort function, those 2 preloaded bytes still go out of the SPI TX buffer first, which offsets any replacement message's data. As shown below, the abort does nothing to clear the buffer; it just clears the SPI handle's count of remaining bytes and disables the interrupt. That's it. FYI, FSL_FEATURE_SPI_HAS_FIFO is set to 0 for this chip.

void SPI_MasterTransferAbort(SPI_Type *base, spi_master_handle_t *handle)
{
    assert(handle != NULL);
    uint32_t mask = (uint32_t)kSPI_TxEmptyInterruptEnable;

    /* Stop interrupts */
#if defined(FSL_FEATURE_SPI_HAS_FIFO) && FSL_FEATURE_SPI_HAS_FIFO
    if (handle->watermark > 1U)
    {
        mask |= (uint32_t)kSPI_RxFifoNearFullInterruptEnable | (uint32_t)kSPI_TxFifoNearEmptyInterruptEnable;
    }
    else
    {
        mask |= (uint32_t)kSPI_RxFullAndModfInterruptEnable | (uint32_t)kSPI_TxEmptyInterruptEnable;
    }
#else
    mask |= (uint32_t)kSPI_RxFullAndModfInterruptEnable | (uint32_t)kSPI_TxEmptyInterruptEnable;
#endif

    SPI_DisableInterrupts(base, mask);
    /* Transfer finished, set the state to Done*/
    handle->state = (uint32_t)kSPI_Idle;

    /* Clear the internal state */
    handle->rxRemainingBytes = 0;
    handle->txRemainingBytes = 0;
}

 

My solution was to have the MKE04Z keep track of when it had loaded dummy data while waiting for a message to arrive, so the master could read that dummy message out first and then read again for the real message. This sort of worked, but after 2 transfers the first byte in the RX buffer on the master side would be wrong. This really confused me, so I set a breakpoint in the slave-side SPI driver's interrupt code, which keeps loading data into the data register whenever the TX-empty interrupt fires, to see whether the first byte was being loaded correctly.

            /* As a slave, send data until SPTEF cleared */
            while (((base->S & SPI_S_SPTEF_MASK) != 0U) && (handle->txRemainingBytes >= bytes))
            {
                SPI_WriteNonBlocking(base, handle->txData, bytes);

                /* Update handle information */
                if (handle->txData != NULL)
                {
                    handle->txData += bytes;
                }
                handle->txRemainingBytes -= bytes;
            }

The breakpoint triggered twice and loaded both bytes correctly, so I am not sure what is going on. I don't understand how only the first byte could be wrong rather than both bytes, because the TX path loads 2 bytes to fill the buffer, as the reference manual says it should.

[Screenshot attachment: dave_harmonjr_0-1685641944647.png]

 

I made a workaround by changing my protocol to completely ignore the first byte, and that works fine. But I want to know why this is happening. I also thought that flushing the TX buffer between transfers would fix it, but I cannot figure out how to do that, or whether it's even possible.

 

If anyone has any ideas for troubleshooting it would be appreciated. Thanks!
