It is exasperating... the ECSPI-1 is not working in slave mode as it should.
I'm using the BSP L2.6.35_11.09.01_ER_source_bundle with the 2.6.35 kernel sources and the imx_11.09.01 patch included in the bundle.
I configured the SPI controller for the following operation mode.
# SPI mode 0 (POL = 0, PHA = 0)
# Slave mode
# BURST LENGTH = 7 --> 8-bit transfers only
# SMC bit = 0, according to the i.MX53 errata
# no DMA
The IOMUX is configured for ECSPI-1 operation.
I have a small coprocessor communicating with the i.MX535 via SPI. The coprocessor is the master, configured in SPI mode 0. The baud rate is quite low (329 kHz).
I want to transmit 64 bytes at once (the size of the MX53 RXFIFO), so the chip select stays low until all 64 bytes have been transmitted.
If I understand it correctly, the MX53 should raise an interrupt for every received byte (BURST LENGTH = 7) if the RREN bit is set in the INTREG.
The problem is that the interrupt occurs only once, after all of the bytes have been transmitted and the chip select is inactive again.
Moreover, the time from the rising edge on chip select to the call of the interrupt handler is about 70 ms! That is really strange, as interrupts should be handled almost immediately.
Furthermore, the TXFIFO is filled with a counter value (1, 2, 3, ..., 64), but the values are not transmitted correctly. It seems that only the 4th byte contains valid data, and that the data stays the same for the whole transfer --> the counting sequence is not received.
Do I need to reassert the chip select between each transmitted byte?
Why is calling the interrupt handler so slow?
Do I need to set the clock divider when operating in slave mode?
Attached is a simple driver, including the configuration of the SPI controller.
Any advice would be very helpful. Thanks in advance!
Original Attachment has been moved to: 56-stm32_spi_slave.c
I switched to master mode, but the behaviour is still very strange.
The new driver uses RX and TX DMA to transfer data via the SPI interface; 16384 bytes are transmitted per transfer, which corresponds to 4096 32-bit bursts. The chip select is low during the complete transfer and is not controlled by the SPI controller but driven manually via the corresponding GPIO pin.
If the CPU is kept busy by a user-space program, everything is fine: the transfer completes at once, in about 5.5 ms for the 16384 bytes. If the CPU is idle, the DMA transfer seems to enter some kind of sleep mode: the 16384 bytes are not transferred at once but block by block, which can take several seconds!
See attached the oscilloscope screenshots of the transmission.
Does anybody have an idea why this strange behaviour occurs? Any suggestions for solving it? Do I need to change something in the kernel configuration?
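One configuration-level check that may be worth trying (this is my assumption, not something confirmed for this board): "busy CPU = fast DMA" is the classic symptom of cpufreq lowering the CPU and bus clocks when the system idles. Pinning the governor to performance is a quick way to rule that out; the path assumes the standard cpufreq sysfs layout:

```shell
# Quick diagnostic: keep the CPU at full speed and retry the transfer.
# Adjust the path if your kernel config differs.
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
```

If the transfer is fast with the performance governor, the slowdown is a clock/idle issue rather than an SPI or SDMA bug.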
I am using the i.MX53 QSB and seem to be seeing the same strange behavior in my ECSPI slave-mode driver with DMA Rx. The DMA engine seems to respond slowly, but is much improved if I run a background user-space task to keep Linux busy.
Did you ever find the solution to this problem?