When configured for slave mode, my testing indicates that the ECSPI ignores the burst length and always transfers 32 bits to/from the FIFO. I notice the manual only mentions master mode when it discusses the burst length.
Is this by design? Anyone have it working for 1 byte per word?
I set the burst length to 7 (8 - 1), so the control register was loaded with 0x0070e301 and the config register with 0. I load the FIFO with 1 byte per word. On a scope I see each byte sent followed by 3 zero bytes; I expected the bytes to be sent consecutively.
Works great with 32 bits per word, other than the known SS termination issue.
Commenting on my own post.
I notice the Linux driver in the BSP (master mode only) configures the burst length to be the number of bits per word, but the response I received from tech support case #00163683 leads me to believe that the burst length should be set to the number of bits in the entire transfer (at least in slave mode). If that's true, then either burst length has different definitions in master and slave mode -or- the Linux BSP should not be configuring it as bits per word -or- I'm very confused.
For slave mode, the maximum length of a single SPI burst is defined by the FIFO size
or by SS negation. But due to erratum ERR009535, the slave must configure the burst length
to match the total number of bits sent by the master in a single transfer (burst).