
Processor Expert Asynchronous Serial Race Condition?

Question asked by Mark Wyman on Oct 14, 2016
Latest reply on Oct 17, 2016 by Mark Wyman

Again I am running into nuisance problems in Processor Expert, this time with the Serial_LDD component on the MKV31F512VLL12 processor.

 

I have the OnBlockReceived, OnBlockSent, and OnTxComplete events all enabled. In each of these event handlers I set a volatile flag to TRUE when the event fires.
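The handlers themselves are nothing more than flag setters, roughly like this (the globals structure matches my code below; the receive-flag name is just illustrative):

/* Events.c -- each Serial_LDD handler just raises a volatile flag. */
void AS1_OnBlockReceived(LDD_TUserData *UserDataPtr)
{
    globals.AS1_DataReceived = TRUE;  //Flag name assumed for this sketch.
}

void AS1_OnBlockSent(LDD_TUserData *UserDataPtr)
{
    globals.AS1_DataSent = TRUE;      //Last byte handed off to the hardware.
}

void AS1_OnTxComplete(LDD_TUserData *UserDataPtr)
{
    globals.AS1_TXComplete = TRUE;    //Hardware reports transmission complete.
}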

 

The problem shows up when I make multiple calls to the following routine:

/************************************************
* Send a string to the console. If entering here
* before a send is complete, this will block until
* the previous send has completed.
*/
void AS1_SendString(char *null_term_string)
{
    uint16_t len;
    LDD_TError Error;

    if (mySerialPtr == NULL) return;

    len = strlen(null_term_string);
    //Cap length, these are strings after all, not full data.
    if (len > MAX_SEND_STRING_LEN) len = MAX_SEND_STRING_LEN;

    //Wait for the previous send to fully complete.
    while (globals.AS1_TXComplete == FALSE)
    {
    }

    Error = ERR_BUSY;
    //Wait for all bytes to be put into the queue.
    while (Error == ERR_BUSY)
    {
        //Wait until completely sent to begin another.
        Error = AS1_SendBlock(mySerialPtr, null_term_string, len);
        //Flag that the new block has not yet been sent.
        globals.AS1_DataSent = FALSE;
        globals.AS1_TXComplete = FALSE;
    }
}
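Back-to-back calls like these are what trigger it (the strings are just examples):

AS1_SendString("First line\r\n");
AS1_SendString("Second line\r\n"); //Last byte of the first line gets clipped here.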

It trashes the completion of the previous send: the last byte of the previous send gets aborted. My assumption was that the OnTxComplete() event should fire after the last bit has gone out of the hardware, at which point AS1_SendBlock() would also no longer report busy. Instead, it appears that AS1_OnTxComplete() fires at the start of the last byte's transmission; the subsequent call to AS1_SendBlock() then thinks all is ready (as there are no bytes left in the buffer), and setting up the next send trashes the last byte still in progress.

 

I am able to add a delay to make it work, allowing that last byte to complete, but of course that is certainly not ideal on a high-performance MCU; the less blocking, the better.
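The stop-gap looks roughly like this; the 100 us figure is a guess tied to my baud rate, and WAIT1 here is a Processor Expert Wait component:

//Stop-gap: after TXComplete is flagged, spin a little longer so the
//last byte can actually leave the shift register. At 115200 baud one
//character time is ~87 us; both numbers here are assumptions.
while (globals.AS1_TXComplete == FALSE)
{
}
WAIT1_Waitus(100);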

 

Any suggestions for a workaround that avoids spinning in a delay here? I am trying to dump data arrays to a console for analysis and am losing characters. I suppose I could set up a massive buffer and dump the whole thing at once to avoid the problem, but I would like to get to the bottom of this one.
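For completeness, the brute-force version I have in mind would look something like this; the buffer size, format string, and DumpArray name are placeholders:

#include <stdio.h>

//Brute force: format the whole dump into one large buffer, then push
//it out with a single send so the last-byte race is hit only once.
static char dump_buf[4096];

void DumpArray(const uint16_t *data, uint16_t count)
{
    uint16_t i;
    size_t pos = 0;

    for (i = 0; i < count; i++)
    {
        int n = snprintf(&dump_buf[pos], sizeof(dump_buf) - pos,
                         "%u\r\n", (unsigned)data[i]);
        if (n < 0 || (size_t)n >= sizeof(dump_buf) - pos) break; //Buffer full.
        pos += (size_t)n;
    }
    //One block send for the whole dump; bypasses the per-string cap.
    while (AS1_SendBlock(mySerialPtr, dump_buf, (uint16_t)pos) == ERR_BUSY)
    {
    }
}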
