I have determined where this clock glitch is coming from, but I do not yet know how to resolve it.
The glitch happens inside each call to Lpspi_Ip_AsyncTransmit when it calls Lpspi_TransmitTxInit.
Inside Lpspi_TransmitTxInit, the code updates the CCR register -- technically only when FirstCmd == TRUE, but in the non-DMA path I never see FirstCmd set to anything other than TRUE. To update CCR, it first disables the SPI module:
Base->CR &= ~LPSPI_CR_MEN_MASK;
This line makes the SCK signal go low, which I think is wrong -- my SPI mode idles SCK high (SCK is active low), so disabling the SPI shouldn't make the clock look active.
A few lines later, we write to the TCR:
Base->TCR = TransferCommand;
This line makes the SCK go high again, but only momentarily, because this is the command that starts the I/O.
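Condensed, the sequence inside Lpspi_TransmitTxInit looks roughly like this. This is a sketch from my reading of the code, not the actual driver source: the register struct, the mask value, and the assumption that the module is re-enabled between the CCR and TCR writes are all simplified/assumed for illustration.

```c
#include <stdint.h>

/* Simplified register model for illustration only -- not the real
   LPSPI_Type layout from the NXP headers. */
typedef struct {
    volatile uint32_t CR;   /* control register: MEN = module enable */
    volatile uint32_t CCR;  /* clock configuration register */
    volatile uint32_t TCR;  /* transmit command register; writing it starts I/O */
} Lpspi_Sketch_Type;

#define LPSPI_CR_MEN_MASK 0x1u  /* assumed bit position for the sketch */

/* Roughly what Lpspi_TransmitTxInit does when FirstCmd == TRUE */
static void TransmitTxInit_Sketch(Lpspi_Sketch_Type *Base,
                                  uint32_t ClockConfig,
                                  uint32_t TransferCommand)
{
    Base->CR &= ~LPSPI_CR_MEN_MASK;  /* disable module -> SCK drops low (the glitch) */
    Base->CCR = ClockConfig;         /* CCR can only be written while disabled */
    Base->CR |= LPSPI_CR_MEN_MASK;   /* re-enable the module */
    Base->TCR = TransferCommand;     /* SCK returns to idle-high; transfer starts */
}
```

The glitch window is the span between the first and last lines: SCK is driven to its active-looking level for the entire disable/reconfigure/enable sequence.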
I've tried a couple of things. First, I skipped the disable/enable sequence when the CCR/CCR1 registers weren't actually changing (a little tricky, since CCR1 is really derived from the low 16 bits of CCR), but that did not work -- the I/O never starts unless we go through a disable/enable sequence. I also tried configuring the SCK pin with a pull-up, hoping that would hold the line high while the SPI was disabled, but it did not change the behavior either.
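For reference, the skip-if-unchanged check I attempted looked roughly like this. The helper name and the bit masks are hypothetical, and the split comparison reflects my (possibly wrong) understanding that the hardware's CCR1 holds values derived from the low 16 bits of the CCR that was written:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical check: is the clock configuration already in effect?
   A naive read-back compare of CCR alone isn't enough if CCR1 carries
   the derived low-half values, so compare both views (masks assumed). */
static bool ClockConfigUnchanged(uint32_t HwCcr, uint32_t HwCcr1, uint32_t NewCcr)
{
    bool HighHalfSame = ((HwCcr  ^ NewCcr) & 0xFFFF0000u) == 0u;
    bool LowHalfSame  = ((HwCcr1 ^ NewCcr) & 0x0000FFFFu) == 0u;
    return HighHalfSame && LowHalfSame;
}
```

Even when this check correctly detected "no change" and the code skipped the disable/enable sequence, the transfer never started, which is why I abandoned the approach.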
I definitely need a way to get rid of this clock glitch. My peripheral device does not use the CS signal -- it assumes it is always the selected SPI peripheral -- and uses the CLK signal to control I/O. With a glitching clock signal, there is no way to stay synchronized with just 3 signals.