Possible scheduler bug in MQX 3.7 for Kinetis

konrada_anton
Contributor III

Hello all,


The following describes what I believe to be a bug in the scheduler of MQX for Kinetis.

The scenario involves two tasks, two ISRs and the PendSV handler. In my example the tasks have (Cortex) priorities 0xC0 (LowTask) and 0x60 (HighTask), and the interrupts have priorities 0xA0 (LowInt) and 0x80 (HighInt). The sequence of events is as follows (a short C sketch of the set_pend_sv logic involved appears after the list):

 

  1. We start with LowTask running with BASEPRI=0xC0.
  2. LowInt fires. The LowInt handler requests the scheduler to run (set_pend_sv), e.g. by setting an event.
  3. set_pend_sv notices that the PendSV-Pending flag is 0, computes the PendSV priority as BASEPRI-0x10=0xB0, and sets PendSV-Pending to 1.
  4. LowInt returns.
  5. PendSV fires immediately afterwards.
  6. PendSV is interrupted by HighInt.
  7. The HighInt handler sets another event, which calls set_pend_sv.
  8. set_pend_sv notices that the PendSV-Pending flag is 1, and returns immediately.
  9. The HighInt handler returns.
  10. The PendSV handler calls the scheduler, which consults the ready queues and readies HighTask. In doing so, the scheduler sets BASEPRI=0x60.
  11. The PendSV handler returns into HighTask, with PendSV-Pending=1.
  12. HighTask tries to sleep (or calls another blocking syscall), which calls set_pend_sv.
  13. set_pend_sv notices that the PendSV-Pending flag is 1, and returns immediately.
  14. The blocking syscall returns immediately to HighTask, because the PendSV handler has priority 0xB0 and cannot fire while BASEPRI is 0x60.
  15. HighTask carries on as though it had actually slept.
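
To make the failure mode easier to follow, here is a minimal C sketch of the original set_pend_sv behaviour as I understand it from the steps above. This is not the MQX source; the function name is made up, and the register addresses are the standard Cortex-M ones that also appear in the assembly listing further down (ICSR at 0xE000ED04, PendSV priority byte of SHPR3 at 0xE000ED22). The BASEPRI accessor uses GCC-style inline assembly.

    /* Illustration only -- not the MQX source. */
    #include <stdint.h>

    #define ICSR        (*(volatile uint32_t *)0xE000ED04u)  /* Interrupt Control and State Register */
    #define PENDSV_PRIO (*(volatile uint8_t  *)0xE000ED22u)  /* PendSV priority byte in SHPR3 */
    #define PENDSVSET   (1u << 28)                           /* "PendSV is pending" bit */

    static inline uint32_t get_basepri(void)
    {
        uint32_t basepri;
        __asm volatile ("mrs %0, basepri" : "=r" (basepri));
        return basepri;
    }

    /* Original behaviour, as described in steps 3, 8 and 13. */
    void set_pend_sv_original(void)
    {
        if (ICSR & PENDSVSET) {
            /* PendSV already pending: do nothing.  This is what leaves the
               stale, too-low PendSV priority in place in steps 8 and 13. */
            return;
        }
        PENDSV_PRIO = (uint8_t)(get_basepri() - 0x10u);  /* step 3: priority = BASEPRI - 0x10 */
        ICSR = PENDSVSET;                                /* request PendSV (write-one-to-set) */
    }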

The precise mechanism of (6) is still unclear to me, though I know of two hypotheses.

Miro Samek writes in http://embeddedgurus.com/state-space/2011/09/whats-the-state-of-your-cortex/ that he has observed cases in which the late-arrival mechanism of the NVIC caused PendSV to be pending within PendSV (he doesn't use MQX).

 

My hypothesis is that the PendSV handler is interrupted before it executes its first instruction, which is a disable-all-interrupts statement. I have performed experiments in which HighInt is a very busy timer interrupt that calls set_pend_sv, and I find PendSV pending within the PendSV handler once every couple of ten thousand HighInt calls.

 

I'm aware of two suggested fixes. In his blog posting, Samek suggests clearing the PendSV-pending flag within the PendSV handler (although as a fix to a different problem).
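
If I read that suggestion correctly, it amounts to something like the following at the top of the PendSV handler. This is only a sketch of the idea, not Samek's or Freescale's actual code; PENDSVCLR is bit 27 of ICSR and is write-one-to-clear.

    #include <stdint.h>

    #define ICSR      (*(volatile uint32_t *)0xE000ED04u)
    #define PENDSVCLR (1u << 27)

    /* Hypothetical prologue for the PendSV handler. */
    void pendsv_prologue(void)
    {
        ICSR = PENDSVCLR;   /* drop any PendSV request that became pending again */
        /* ... continue with the scheduler ... */
    }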

 

Another fix involves set_pend_sv. It could be changed to update the PendSV priority every time, instead of only when PendSV isn't pending. If, in the aptly named step (13), set_pend_sv were to change the PendSV priority to 0x50, HighTask would be properly interrupted.
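
In terms of the C sketch above (reusing its register macros and the get_basepri() helper; the function name is again made up), this second fix would look roughly like this:

    /* Fixed behaviour: even when PendSV is already pending, raise its priority
       if BASEPRI - 0x10 is numerically lower (lower number = higher priority). */
    void set_pend_sv_fixed(void)
    {
        uint8_t wanted = (uint8_t)(get_basepri() - 0x10u);

        if (ICSR & PENDSVSET) {
            if (wanted < PENDSV_PRIO)
                PENDSV_PRIO = wanted;   /* e.g. step (13): 0x50 instead of the stale 0xB0 */
            return;
        }
        PENDSV_PRIO = wanted;
        ICSR = PENDSVSET;
    }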

 

I hope this was understandable. I've applied the second fix to my example program, and I don't get sleepless HighTasks any more.

 

Greetings

KA

GottiLuca
Contributor IV

 

Dear konrada,

Many thanks for this post.

It would be really useful to the Kinetis community if you could post the file you have modified, or explain in more detail the modifications you made to solve the problem (i.e. the lines of source code you changed).

Obviously I hope that someone at Freescale pays attention to this post...

 

 

Regards

 

L.Gotti

konrada_anton
Contributor III

I changed the behaviour of set_pend_sv for the case where the PendSV-pending flag is already set: instead of returning immediately, I now update the PendSV priority to the numerical minimum of the current PendSV priority and (BASEPRI-0x10).

_set_pend_sv:
        ; If PendSV is pending with low priority on reentry into a high-priority
        ; task, that task cannot sleep because the original implementation
        ; will happily notice that PendSV is pending and leave it at that.
        ; The new implementation ensures that PendSV has high enough priority
        ; to interrupt the task that demanded a task switch.
        ; PendSV priority is in 0xE000ED22, third byte of SHPR3.
        push    {r0-r3, lr}

        ; r2 = BASEPRI - 0x10
        mrs     r2, BASEPRI
        sub     r2, r2, #0x10

        ; get PendSV flag
        ldr     r0, =0xE000ED00
        ldr     r1, [r0, #4]            ; 0xE000ED04
        tst     r1, #0x10000000
        ; if not pending, write PendSV prio without comparing it first.
        beq     _set_pend_sv_init_prio

        ; So the pending-flag is 1. Update PendSV priority if that increases the
        ; priority of PendSV.
        ; read PendSV priority
        ldrb    r3, [r0, #0x22]         ; 0xE000ED22
        ; compare BASEPRI-0x10 (r2) with the current PendSV priority (r3)
        cmp     r2, r3
        ; if (BASEPRI-0x10) >= PendSVprio, no need to change PendSVprio
        bhs     _set_pend_sv_end
#ifdef DEBUG_MQX_PENDSV_PROBLEM
        ; Every time this insn is reached, the old _set_pend_sv would have
        ; caused a sleepless timer task
        nop
#endif

_set_pend_sv_init_prio:
        ; so r2 is the right priority to write to SHPR3.
        strb    r2, [r0, #0x22]

        ; Set the PendSV-pending flag in ICSR.
        ldr     r2, =0x10000000
        orrs    r1, r2
        str     r1, [r0, #4]            ; 0xE000ED04

_set_pend_sv_end:
        pop     {r0-r3, pc}

GottiLuca
Contributor IV

 

Thanks again konrada,

 

Since it seems that no one at Freescale is paying attention to this topic, I'll open a service request for it.

 

Regards

 

L.Gotti

 

cyborgnegotiato
Senior Contributor II

Good point, thank you for your input. You can find a fix for this issue in MQX version 3.8.

kaskasi
Contributor III

Just wondering if this issue is truly fixed in MQX 3.8?

 

c0170
Senior Contributor III

Hi kaskasi,

 

According to the information I have, it was fixed in MQX 3.8.

 

Regards,

MartinK
