SCI interrupt driven or polled?

FC
Contributor III
I have written a routine using SCI interrupts for transmit and receive.  Transmitting data using interrupts is way more complicated than polling.  What is the advantage of interrupt-driven transmit?  Assuming the baud rate is fast, the polling wait time will be small.
9 Replies

admin
Specialist II

I just like to run my serial comm on an interrupt basis for the simplicity and the cleanness of the code.  Not to pimp someone else's stuff, but I found a very slick, efficient interrupt-based serial comm driver from one of Freescale's distinguished competitors.  I've attached it for those interested.

-Tomahawk


glork
Contributor I


FC wrote:
I have written a routine using SCI interrupts for transmit and receive. Transmitting data using interrupts is way more complicated than polling. What is the advantage of interrupt-driven transmit? Assuming the baud rate is fast, the polling wait time will be small.





I do a lot of serial comm. I always use interrupt mode for receiving and almost never use interrupts for transmitting.

If my (foreground) program can be suspended long enough to transmit the entire longest-possible message in one go, then it doesn't need to be interrupt-driven. Otherwise it does. It really just comes down to that. And it is much simpler to code a polled transmit routine than an interrupt-driven one.
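To make that concrete, a polled transmit really is only a couple of lines per byte. A minimal sketch in C (the SCS1/SCTE/SCDR names and addresses are the HC08 SCI registers as I recall them, so treat them as placeholders and check the datasheet for your derivative):

#define SCS1  (*(volatile unsigned char *)0x0016)   /* SCI status register 1 (address illustrative) */
#define SCDR  (*(volatile unsigned char *)0x0018)   /* SCI data register (address illustrative) */
#define SCTE  0x80                                   /* transmit-data-register-empty flag */

/* Polled transmit: spin until there is room for a byte, then write it.
   Blocks the caller for the duration of the whole message. */
void sci_send_polled(const char *msg)
{
    while (*msg != '\0') {
        while ((SCS1 & SCTE) == 0)
            ;                        /* wait for the data register to empty */
        SCDR = *msg++;               /* writing SCDR starts the next byte */
    }
}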
ron

rocco
Senior Contributor II
At my highest baud rate, which is 500 kilobaud, it takes the SCI 20 microseconds to transmit one byte (ten bits per frame). That is 160 program cycles on the HC08GP32 with an 8 MHz bus clock. A ten-byte message would take 200 microseconds, mostly spent spinning in a loop. A lower baud rate would make things worse: at 9600 baud, you would spend over a millisecond (8000 cycles!) waiting for each byte.

For my applications, this is an unacceptable waste of CPU cycles. It also will result in an inordinate delay in scheduling the next task.

So my transmit routine just copies the data into a buffer, and then lets the SCI transmitter interrupts feed the data out the port. Even if the CPU has nothing else to do, I would rather have it sitting in a WAIT instruction than spinning in a loop, wasting power.
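In outline, that transmit entry point can be very small. A rough C illustration (all of the names here are invented, and the interrupt-enable step is left as a comment because the register and bit names depend on the derivative):

/* Copy the message into a RAM buffer and let the SCI TX-empty interrupt
   clock the bytes out one at a time.  Assumes len <= sizeof tx_msg. */
static unsigned char tx_msg[32];
static volatile unsigned char tx_len, tx_pos;

void sci_send(const unsigned char *data, unsigned char len)
{
    unsigned char i;
    for (i = 0; i < len; i++)
        tx_msg[i] = data[i];         /* copy, so the caller's buffer can be reused */
    tx_pos = 0;
    tx_len = len;
    /* enable the SCI transmit interrupt here (SCTIE on the HC08); each interrupt
       then writes tx_msg[tx_pos++] to the data register until tx_pos == tx_len */
    /* the main loop is now free to execute WAIT instead of spinning */
}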

glork
Contributor I
Hi Rocco.
Your comment 'It also will result in an inordinate delay in scheduling the next task' points to an exception in my normal firmware architecture which I should have made clearer: if your application is based upon a task-scheduler architecture, then you probably shouldn't even consider polled transmit mode. It's basically incompatible with that type of firmware.

I think I've probably written a scheduled framework at some point, but in general I don't need that complexity. My normal framework is what I call 'background/foreground'. With this technique, anything that is timed or time-critical is managed by the general 'heartbeat' ISR or by a more specific timed ISR (background). Communications receiving is also always handled by an ISR (also background). Anything else is done in a foreground routine, typically called main or some such.

This is simpler than a task-scheduler architecture and is inherently more efficient for the kind of work I do.
ron

rocco
Senior Contributor II
Hi, Ron:
I hear ya. I'm certainly not going to argue against simplicity.

I also hesitate to call the code I use a "Scheduler". It is really just a main loop that runs subroutines if the bit associated with each subroutine is set. There is simply a table of eight subroutine addresses, each associated with a bit in a page-zero byte. The priority is fixed, higher-order bits having higher priority, because I simply scan left to right. An ISR or another routine can request a "task" to be run with a simple BSET instruction. If no bits are set, the loop executes a WAIT instruction.
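For anyone curious, a rough C rendering of the idea might look like the following (the original is assembler; the function and variable names below are purely illustrative, and the WAIT is left as a comment because the intrinsic is compiler-specific):

/* Eight "tasks": one subroutine per bit, scanned from the high bit down.
   An ISR or another routine requests a task by setting its bit in task_flags
   (a single BSET on a page-zero byte in assembler). */
typedef void (*task_fn)(void);

static void task_stub(void) {}                   /* placeholder task bodies */

static const task_fn task_table[8] = {
    task_stub, task_stub, task_stub, task_stub,  /* bit 7 (highest priority) ... */
    task_stub, task_stub, task_stub, task_stub   /* ... bit 0 (lowest priority) */
};

static volatile unsigned char task_flags;        /* one request bit per task */

void dispatcher(void)
{
    for (;;) {
        unsigned char mask  = 0x80;
        unsigned char index = 0;

        while (mask != 0) {
            if (task_flags & mask) {             /* highest-order set bit wins */
                task_flags &= (unsigned char)~mask;
                task_table[index]();             /* run the requested task */
                break;                           /* then rescan from the top: fixed priority */
            }
            mask >>= 1;
            index++;
        }

        if (task_flags == 0) {
            /* nothing pending: execute a WAIT instruction here (compiler intrinsic
               or inline asm) so the CPU sleeps until the next interrupt */
        }
    }
}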

I originally wrote this for the MC68701 twenty years ago, and have used it in every project since. It is an easy, debugged starting point for any project. It's also only 65 bytes of program space (including the dispatch table) and 1 byte of RAM. I can post it, if anyone is interested.

Of course, as you pointed out, it forces most of my I/O routines to be interrupt driven. That's a price I no longer hesitate to pay.

glork
Contributor I


rocco wrote:
Hi, Ron:
I hear ya. I'm certainly not going to argue against simplicity.

I also hesitate to call the code I use a "Scheduler". It is really just a main loop that runs subroutines if the bit associated with each subroutine is set. There is simply a table of eight subroutine addresses, each associated with a bit in a page-zero byte. The priority is fixed, higher-order bits having higher priority, because I simply scan left to right. An ISR or another routine can request a "task" to be run with a simple BSET instruction. If no bits are set, the loop executes a WAIT instruction.

I originally wrote this for the MC68701 twenty years ago, and have used it in every project since. It is an easy, debugged starting point for any project. It's also only 65 bytes of program space (including the dispatch table) and 1 byte of RAM. I can post it, if anyone is interested.

Of course, as you pointed out, it forces most of my I/O routines to be interrupt driven. That's a price I no longer hesitate to pay.




Rocco
Ah, I originally thought you were talking about a time slicer. The thing you describe sounds pretty useful, and if you originally developed it for a 68701 then it has a lot of miles on it. Some say 'if'n it ain't broke, don't fix it', whereas I say 'if your wheel is already round, don't invent another one'.
It actually sounds like your framework is somewhat similar to mine.
ron

peg
Senior Contributor IV

Hi all,

On the receive side you often use interrupts because you can't really implement byte-by-byte flow control, so you MUST be ready to process each byte as it comes or you will drop bytes.
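In practice that usually means the receive ISR does nothing but stuff each byte into a small FIFO for the main loop to drain later. A rough C sketch (the SCS1/SCRF/SCDR names, bit value and addresses are the HC08 SCI registers as I remember them, so verify them against the datasheet):

#define SCS1  (*(volatile unsigned char *)0x0016)   /* SCI status register 1 (address illustrative) */
#define SCDR  (*(volatile unsigned char *)0x0018)   /* SCI data register (address illustrative) */
#define SCRF  0x20                                   /* receive-data-register-full flag */

static unsigned char rx_buf[32];                     /* power-of-two size so the index wrap is safe */
static volatile unsigned char rx_in, rx_out;

/* Receive ISR: grab the byte immediately so the next one can never overwrite it. */
void sci_rx_isr(void)
{
    unsigned char status = SCS1;                     /* reading status then data clears the flag */
    unsigned char c = SCDR;

    if (status & SCRF) {
        rx_buf[rx_in & (sizeof rx_buf - 1)] = c;
        rx_in++;
    }
}

/* Non-blocking read for the main loop: returns -1 when the FIFO is empty. */
int sci_getc(void)
{
    if (rx_in == rx_out)
        return -1;
    return rx_buf[rx_out++ & (sizeof rx_buf - 1)];
}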

However, on the transmit side YOU are in control. This means that all that will happen if you don't service it in time is that you will generate an idle-line condition for some time between the bytes. So the only reason you NEED to use interrupts on TX is to maximise the actual transmit rate in a system that would otherwise be too busy to achieve that using polling. The only other way to guarantee this is to go into a dedicated poll loop until the message is sent, but this would be at the expense of response time for other non-interrupt tasks.

David

 


bigmac
Specialist III

Hello all,

Another instance where I have found interrupts useful for the sending of SCI data is where it is required to echo characters as they are received, when commands are entered via terminal emulation.  If non-interrupt processes are lengthy, the character echo delay can be quite noticeable to the user.  Of course it is possible to simply send the echo character within the SCI receive ISR, but this cannot be done if rudimentary editing of the entered command is required.  For example, a destructive backspace would require three characters to be echoed (BS+space+BS).

For the receive ISR, some applications will require a FIFO (circular) buffer, perhaps with hardware and/or software flow control, and others may be able to use a simple line buffer, where processing of the line entry commences when CR is received.  In the latter case, usually applicable to manual entry, flow control would not be necessary if a "prompt" character is sent when processing has been completed.

For a send ISR (when used), the complexity of a FIFO buffer shouldn't be necessary.  A technique I have previously used is to set up a table identifying the start of the data for each message string; the strings are mostly located in PROM or flash memory.  However, I also allow for a small send buffer in RAM to be included in the table.  It then becomes a relatively simple process (at least in assembler) to identify the required string by an index value, and then point to the start of the required null-terminated data.  The send ISR then sends the message, a single character for each interrupt, and sets a control flag when the null character is reached.
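A rough C equivalent of that scheme might look like the following (the names are invented for illustration, and the actual register accesses are left as comments since they depend on the derivative):

/* Table of null-terminated messages, selected by an index value; entry 0 points
   at a small RAM buffer for ad-hoc text, the rest live in PROM/flash. */
static char ram_msg[16];
static const char * const msg_table[] = {
    ram_msg,                                     /* index 0: RAM send buffer */
    "Ready\r\n",                                 /* fixed strings in flash */
    "Error\r\n",
};

static const char *tx_ptr;                       /* next character to send */
static volatile unsigned char tx_done = 1;       /* set when the null terminator is reached */

/* Select a message by index and start the transmitter. */
void sci_start_message(unsigned char index)
{
    tx_ptr  = msg_table[index];
    tx_done = 0;
    /* enable the SCI transmit interrupt here */
}

/* Called from the TX-empty interrupt: one character per interrupt. */
void sci_msg_tx_isr(void)
{
    char c = *tx_ptr++;
    if (c != '\0') {
        /* write c to the SCI data register here */
    } else {
        tx_done = 1;
        /* disable the transmit interrupt here so it stops firing */
    }
}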

Regards,
Mac

 


tonyp
Senior Contributor II

Interrupt-driven RX is almost always a necessity (especially as the baud rate increases), unless your application spends most of its time waiting for commands from the SCI and returns an acknowledgement to the other side before the next command is sent.

Interrupt-driven TX, on the other hand, is not a necessity (especially as the baud rate increases), but it provides smoother transmission between the characters of a single message/packet/whatever and, depending on the size of the queue used, can free up the main program immediately when it reaches a 'print' statement.  This may speed up the program's overall responsiveness and, if dealing with a user terminal under an RTOS, the output will appear to flow smoothly rather than arriving in irregular-sized bursts of text.

Now, if your baud rate is such that by the time you fetch the next byte the previous one has already been sent (or thereabouts), then there is no significant advantage in using interrupts.  Polled mode may actually be a better choice, since the coding is usually smaller (and easier).

Nevertheless, if you need an example for int-driven RX/TX have a look inside:

http://aspisys.com/asm11d84.zip

Look for the file SCI_INT.MOD (68HC11 assembly language, very similar to HC08), from which you can derive a flowchart to use with your programming language (if not assembly).  The key is turning off TX interrupts from within the interrupt handler when the TX queue is empty, so it stops firing, and turning them back on whenever a byte is added to the TX queue.  See the code for more.
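The same idea expressed as a C sketch rather than assembler (the queue names are made up; SCC2/SCTIE/SCDR are the HC08 control register, TX-interrupt-enable bit and data register as I recall them, so verify them against your part):

#define SCC2   (*(volatile unsigned char *)0x0014)   /* SCI control register 2 (address illustrative) */
#define SCDR   (*(volatile unsigned char *)0x0018)   /* SCI data register (address illustrative) */
#define SCTIE  0x80                                   /* TX-interrupt-enable bit */

static unsigned char txq[64];                         /* power-of-two queue size */
static volatile unsigned char txq_in, txq_out;

/* Queue one byte and make sure the TX interrupt is running. */
void sci_putc(unsigned char c)
{
    txq[txq_in & (sizeof txq - 1)] = c;
    txq_in++;
    SCC2 |= SCTIE;                                    /* turn TX interrupts (back) on */
}

/* TX-empty ISR: feed the next byte, or switch the interrupt off when the queue empties. */
void sci_tx_isr(void)
{
    if (txq_in != txq_out) {
        SCDR = txq[txq_out & (sizeof txq - 1)];
        txq_out++;
    } else {
        SCC2 &= (unsigned char)~SCTIE;                /* queue empty: stop the interrupt firing */
    }
}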
