<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Time Slicing Rendered Ineffective - Topic in MQX Software Solutions</title>
    <link>https://community.nxp.com/t5/MQX-Software-Solutions/Time-Slicing-Rendered-Ineffective/m-p/330443#M10588</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;In some of our MQX-based solutions we are experiencing situations where the&lt;/P&gt;&lt;P&gt;MQX time slicing is rendered ineffective due to timing effects. As a result,&lt;/P&gt;&lt;P&gt;some of our tasks starve for unacceptably long periods of time, rendering the&lt;/P&gt;&lt;P&gt;system useless.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 14pt;"&gt;&lt;STRONG&gt;The Issue&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="tsgranularity.png"&gt;&lt;img src="https://community.nxp.com/t5/image/serverpage/image-id/46803i0FDF55D560501AD2/image-size/large?v=v2&amp;amp;px=999" role="button" title="tsgranularity.png" alt="tsgranularity.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Consider the situation depicted in the figure. We have tasks running at two&lt;/P&gt;&lt;P&gt;different priority levels, high and low. The high priority group contains&lt;/P&gt;&lt;P&gt;one task which waits on an event, does some work, and then waits on the&lt;/P&gt;&lt;P&gt;same event again. The low priority task group contains two tasks, dubbed LPT1&lt;/P&gt;&lt;P&gt;and LPT2. LPT1 is a long-running task, possibly always ready. LPT2 is&lt;/P&gt;&lt;P&gt;another low prio task which is ready. Since it is never running in the figure,&lt;/P&gt;&lt;P&gt;it is not depicted graphically. We are running the MQX scheduler with time&lt;/P&gt;&lt;P&gt;slicing enabled, and both low priority tasks have time slicing enabled.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;While the situation depicted in the figure persists, there is no fairness in&lt;/P&gt;&lt;P&gt;the low priority task group. The time slice of a task is only ever incremented&lt;/P&gt;&lt;P&gt;in _time_notify_kernel(), which runs during the timer interrupt service routine. &lt;/P&gt;&lt;P&gt;This point in time is marked (b). 
Shortly before the timer ISR is run, a high&lt;/P&gt;&lt;P&gt;priority task is made ready and consequently the dispatcher runs the&lt;/P&gt;&lt;P&gt;high priority task. This point in time is marked (a). Shortly after the timer&lt;/P&gt;&lt;P&gt;ISR, the high priority task becomes blocked again. Consequently, at (c), the&lt;/P&gt;&lt;P&gt;dispatcher runs the task at the head of the low priority ready queue, LPT1. &lt;/P&gt;&lt;P&gt;When _time_notify_kernel() runs at (b), it increments the time slice of HPT, if&lt;/P&gt;&lt;P&gt;applicable. The time slice of LPT1 is left untouched. As per the figure, if&lt;/P&gt;&lt;P&gt;the HPT is always running when the timer IRQ is asserted, the time slice of&lt;/P&gt;&lt;P&gt;LPT1 will never be incremented. Thus, LPT2 will be starved.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 14pt;"&gt;&lt;STRONG&gt;Triggering the Issue&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 14pt;"&gt;&lt;STRONG&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;We have a number of setups that exhibit the described problem. We have the&lt;/P&gt;&lt;P&gt;scheduler interval set at 1 ms. The scheduler is called by the timer ISR, which&lt;/P&gt;&lt;P&gt;also runs every 1 ms. We have a bus system requiring service by a high&lt;/P&gt;&lt;P&gt;priority task every 1 ms, nominally. The timer runs on a local oscillator,&lt;/P&gt;&lt;P&gt;whereas the bus service interrupt is timed through the bus. Effectively this&lt;/P&gt;&lt;P&gt;means that the bus interrupt runs off some other clock source. We have low&lt;/P&gt;&lt;P&gt;priority tasks which are ready for long periods of time, corresponding to LPT1,&lt;/P&gt;&lt;P&gt;and we have other low priority tasks such as LPT2. 
The phase between HPT&lt;/P&gt;&lt;P&gt;becoming ready and the timer IRQ being asserted slowly drifts, according to&lt;/P&gt;&lt;P&gt;the beat frequency between the local oscillator and the oscillator&lt;/P&gt;&lt;P&gt;controlling the bus services. When the HPT runs between two timer ISRs, we&lt;/P&gt;&lt;P&gt;see normal system behavior. When the HPT is running at the moment the timer IRQ is&lt;/P&gt;&lt;P&gt;asserted, we see large latency for LPT2. In practice we see the system running&lt;/P&gt;&lt;P&gt;as expected for, say, 50 seconds, then being unresponsive for the next 15 seconds.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Distilling from the previous paragraphs, all three of the following&lt;/P&gt;&lt;P&gt;conditions need to be met to produce long latencies:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;1. Coarse time slicing granularity.&lt;/P&gt;&lt;P&gt;2. A low prio task group with multiple ready tasks, at least one task being continually ready.&lt;/P&gt;&lt;P&gt;3. Timer IRQ asserted while a high prio task is running.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 14pt;"&gt;&lt;STRONG&gt;Solutions&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Since all three conditions need to be fulfilled to trigger the misbehavior,&lt;/P&gt;&lt;P&gt;removing any one of them will fix the issue.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Changing 1.) requires changes to the scheduler and dispatcher. The code&lt;/P&gt;&lt;P&gt;running at points (a) and (b) is part of the scheduler and written in C. The&lt;/P&gt;&lt;P&gt;code running at (c) is part of the dispatcher and written in ColdFire&lt;/P&gt;&lt;P&gt;assembler. The dispatcher has no knowledge of the time slice&lt;/P&gt;&lt;P&gt;length and its offset within the task descriptor structure. The offset and time&lt;/P&gt;&lt;P&gt;slice length can vary according to compile-time configuration. 
If I had to&lt;/P&gt;&lt;P&gt;guess, the MQX scheduler was never designed to account for sub-tick&lt;/P&gt;&lt;P&gt;time slices, even though the time slicing uses tick structs which allow for&lt;/P&gt;&lt;P&gt;sub-tick resolution.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Changing 2.) is difficult since we have large bodies of opaque code running on&lt;/P&gt;&lt;P&gt;our platform. And after all, it is the scheduler's responsibility to provide&lt;/P&gt;&lt;P&gt;fairness within a task group. For us, changing 2.) feels like a&lt;/P&gt;&lt;P&gt;hack, not a solution.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Changing 3.) is a realistic possibility in our setups with the two&lt;/P&gt;&lt;P&gt;plesiochronous 1 ms IRQs. The scheduler interval can be set to a value&lt;/P&gt;&lt;P&gt;different from 1 ms. This is not a general solution since other IRQs might&lt;/P&gt;&lt;P&gt;still accidentally be asserted at critical points in time, triggering the&lt;/P&gt;&lt;P&gt;abovementioned issue. If these "misplaced" IRQs happen only sporadically,&lt;/P&gt;&lt;P&gt;latencies are still very acceptable.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 14pt;"&gt;&lt;STRONG&gt;Our favourite solution&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We are thinking about changing 1.) Our changes would look as follows: during&lt;/P&gt;&lt;P&gt;all (a)-like transfers of control from a low prio task to a high prio task, we&lt;/P&gt;&lt;P&gt;increment the time slice of the low prio task by one. At (b), in&lt;/P&gt;&lt;P&gt;_time_notify_kernel(), we check not only the time slice of the task at the&lt;/P&gt;&lt;P&gt;head of the active ready queue, but also the time slices of the tasks&lt;/P&gt;&lt;P&gt;at the heads of all the lower priority ready queues. 
This pretty much&lt;/P&gt;&lt;P&gt;ensures non-starvation in the low prio task group, albeit at the cost of&lt;/P&gt;&lt;P&gt;tasks potentially receiving very short time slices.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG style="font-size: 14pt;"&gt;Other Ideas?&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Are there other options to solve the issue? Have other people experienced this&lt;/P&gt;&lt;P&gt;phenomenon as well? Is there a well-known name for this sort of problem?&lt;/P&gt;&lt;P&gt;Is there a well-known solution?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Thu, 16 Oct 2014 14:57:45 GMT</pubDate>
    <dc:creator>michaelmeier</dc:creator>
    <dc:date>2014-10-16T14:57:45Z</dc:date>
    <item>
      <title>Time Slicing Rendered Ineffective</title>
      <link>https://community.nxp.com/t5/MQX-Software-Solutions/Time-Slicing-Rendered-Ineffective/m-p/330443#M10588</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;In some of our MQX-based solutions we are experiencing situations where the&lt;/P&gt;&lt;P&gt;MQX time slicing is rendered ineffective due to timing effects. As a result,&lt;/P&gt;&lt;P&gt;some of our tasks starve for unacceptably long periods of time, rendering the&lt;/P&gt;&lt;P&gt;system useless.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 14pt;"&gt;&lt;STRONG&gt;The Issue&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="tsgranularity.png"&gt;&lt;img src="https://community.nxp.com/t5/image/serverpage/image-id/46803i0FDF55D560501AD2/image-size/large?v=v2&amp;amp;px=999" role="button" title="tsgranularity.png" alt="tsgranularity.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Consider the situation depicted in the figure. We have tasks running at two&lt;/P&gt;&lt;P&gt;different priority levels, high and low. The high priority group contains&lt;/P&gt;&lt;P&gt;one task which waits on an event, does some work, and then waits on the&lt;/P&gt;&lt;P&gt;same event again. The low priority task group contains two tasks, dubbed LPT1&lt;/P&gt;&lt;P&gt;and LPT2. LPT1 is a long-running task, possibly always ready. LPT2 is&lt;/P&gt;&lt;P&gt;another low prio task which is ready. Since it is never running in the figure,&lt;/P&gt;&lt;P&gt;it is not depicted graphically. We are running the MQX scheduler with time&lt;/P&gt;&lt;P&gt;slicing enabled, and both low priority tasks have time slicing enabled.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;While the situation depicted in the figure persists, there is no fairness in&lt;/P&gt;&lt;P&gt;the low priority task group. The time slice of a task is only ever incremented&lt;/P&gt;&lt;P&gt;in _time_notify_kernel(), which runs during the timer interrupt service routine. &lt;/P&gt;&lt;P&gt;This point in time is marked (b). 
Shortly before the timer ISR is run, a high&lt;/P&gt;&lt;P&gt;priority task is made ready and consequently the dispatcher runs the&lt;/P&gt;&lt;P&gt;high priority task. This point in time is marked (a). Shortly after the timer&lt;/P&gt;&lt;P&gt;ISR, the high priority task becomes blocked again. Consequently, at (c), the&lt;/P&gt;&lt;P&gt;dispatcher runs the task at the head of the low priority ready queue, LPT1. &lt;/P&gt;&lt;P&gt;When _time_notify_kernel() runs at (b), it increments the time slice of HPT, if&lt;/P&gt;&lt;P&gt;applicable. The time slice of LPT1 is left untouched. As per the figure, if&lt;/P&gt;&lt;P&gt;the HPT is always running when the timer IRQ is asserted, the time slice of&lt;/P&gt;&lt;P&gt;LPT1 will never be incremented. Thus, LPT2 will be starved.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 14pt;"&gt;&lt;STRONG&gt;Triggering the Issue&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 14pt;"&gt;&lt;STRONG&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;We have a number of setups that exhibit the described problem. We have the&lt;/P&gt;&lt;P&gt;scheduler interval set at 1 ms. The scheduler is called by the timer ISR, which&lt;/P&gt;&lt;P&gt;also runs every 1 ms. We have a bus system requiring service by a high&lt;/P&gt;&lt;P&gt;priority task every 1 ms, nominally. The timer runs on a local oscillator,&lt;/P&gt;&lt;P&gt;whereas the bus service interrupt is timed through the bus. Effectively this&lt;/P&gt;&lt;P&gt;means that the bus interrupt runs off some other clock source. We have low&lt;/P&gt;&lt;P&gt;priority tasks which are ready for long periods of time, corresponding to LPT1,&lt;/P&gt;&lt;P&gt;and we have other low priority tasks such as LPT2. 
The phase between HPT&lt;/P&gt;&lt;P&gt;becoming ready and the timer IRQ being asserted slowly drifts, according to&lt;/P&gt;&lt;P&gt;the beat frequency between the local oscillator and the oscillator&lt;/P&gt;&lt;P&gt;controlling the bus services. When the HPT runs between two timer ISRs, we&lt;/P&gt;&lt;P&gt;see normal system behavior. When the HPT is running at the moment the timer IRQ is&lt;/P&gt;&lt;P&gt;asserted, we see large latency for LPT2. In practice we see the system running&lt;/P&gt;&lt;P&gt;as expected for, say, 50 seconds, then being unresponsive for the next 15 seconds.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Distilling from the previous paragraphs, all three of the following&lt;/P&gt;&lt;P&gt;conditions need to be met to produce long latencies:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;1. Coarse time slicing granularity.&lt;/P&gt;&lt;P&gt;2. A low prio task group with multiple ready tasks, at least one task being continually ready.&lt;/P&gt;&lt;P&gt;3. Timer IRQ asserted while a high prio task is running.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 14pt;"&gt;&lt;STRONG&gt;Solutions&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Since all three conditions need to be fulfilled to trigger the misbehavior,&lt;/P&gt;&lt;P&gt;removing any one of them will fix the issue.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Changing 1.) requires changes to the scheduler and dispatcher. The code&lt;/P&gt;&lt;P&gt;running at points (a) and (b) is part of the scheduler and written in C. The&lt;/P&gt;&lt;P&gt;code running at (c) is part of the dispatcher and written in ColdFire&lt;/P&gt;&lt;P&gt;assembler. The dispatcher has no knowledge of the time slice&lt;/P&gt;&lt;P&gt;length and its offset within the task descriptor structure. The offset and time&lt;/P&gt;&lt;P&gt;slice length can vary according to compile-time configuration. 
If I had to&lt;/P&gt;&lt;P&gt;guess, the MQX scheduler was never designed to account for sub-tick&lt;/P&gt;&lt;P&gt;time slices, even though the time slicing uses tick structs which allow for&lt;/P&gt;&lt;P&gt;sub-tick resolution.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Changing 2.) is difficult since we have large bodies of opaque code running on&lt;/P&gt;&lt;P&gt;our platform. And after all, it is the scheduler's responsibility to provide&lt;/P&gt;&lt;P&gt;fairness within a task group. For us, changing 2.) feels like a&lt;/P&gt;&lt;P&gt;hack, not a solution.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Changing 3.) is a realistic possibility in our setups with the two&lt;/P&gt;&lt;P&gt;plesiochronous 1 ms IRQs. The scheduler interval can be set to a value&lt;/P&gt;&lt;P&gt;different from 1 ms. This is not a general solution since other IRQs might&lt;/P&gt;&lt;P&gt;still accidentally be asserted at critical points in time, triggering the&lt;/P&gt;&lt;P&gt;abovementioned issue. If these "misplaced" IRQs happen only sporadically,&lt;/P&gt;&lt;P&gt;latencies are still very acceptable.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 14pt;"&gt;&lt;STRONG&gt;Our favourite solution&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We are thinking about changing 1.) Our changes would look as follows: during&lt;/P&gt;&lt;P&gt;all (a)-like transfers of control from a low prio task to a high prio task, we&lt;/P&gt;&lt;P&gt;increment the time slice of the low prio task by one. At (b), in&lt;/P&gt;&lt;P&gt;_time_notify_kernel(), we check not only the time slice of the task at the&lt;/P&gt;&lt;P&gt;head of the active ready queue, but also the time slices of the tasks&lt;/P&gt;&lt;P&gt;at the heads of all the lower priority ready queues. 
This pretty much&lt;/P&gt;&lt;P&gt;ensures non-starvation in the low prio task group, albeit at the cost of&lt;/P&gt;&lt;P&gt;tasks potentially receiving very short time slices.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG style="font-size: 14pt;"&gt;Other Ideas?&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Are there other options to solve the issue? Have other people experienced this&lt;/P&gt;&lt;P&gt;phenomenon as well? Is there a well-known name for this sort of problem?&lt;/P&gt;&lt;P&gt;Is there a well-known solution?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 16 Oct 2014 14:57:45 GMT</pubDate>
      <guid>https://community.nxp.com/t5/MQX-Software-Solutions/Time-Slicing-Rendered-Ineffective/m-p/330443#M10588</guid>
      <dc:creator>michaelmeier</dc:creator>
      <dc:date>2014-10-16T14:57:45Z</dc:date>
    </item>
    <item>
      <title>Re: Time Slicing Rendered Ineffective</title>
      <link>https://community.nxp.com/t5/MQX-Software-Solutions/Time-Slicing-Rendered-Ineffective/m-p/330444#M10589</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;If the scheduler works as you described, it looks like a bug.&lt;/P&gt;&lt;P&gt;When time slicing is enabled for both low priority tasks, these tasks should alternate in running.&lt;/P&gt;&lt;P&gt;I reported it in our internal bug database and the designers will analyze it.&lt;/P&gt;&lt;P&gt;In the meantime, could you please specify your version of MQX and, just to be sure, also your MCU and toolchain (CW/IAR/Keil,…)?&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Have a great day,&lt;BR /&gt;RadekS&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;-----------------------------------------------------------------------------------------------------------------------&lt;BR /&gt;Note: If this post answers your question, please click the Correct Answer button. Thank you!&lt;BR /&gt;-----------------------------------------------------------------------------------------------------------------------&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 21 Oct 2014 12:51:24 GMT</pubDate>
      <guid>https://community.nxp.com/t5/MQX-Software-Solutions/Time-Slicing-Rendered-Ineffective/m-p/330444#M10589</guid>
      <dc:creator>RadekS</dc:creator>
      <dc:date>2014-10-21T12:51:24Z</dc:date>
    </item>
    <item>
      <title>Re: Time Slicing Rendered Ineffective</title>
      <link>https://community.nxp.com/t5/MQX-Software-Solutions/Time-Slicing-Rendered-Ineffective/m-p/330445#M10590</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thank you for reporting the bug.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We are running MQX 3.7 on an MCF54418. We are using the CW Toolchain.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We have checked the release notes of MQX 3.7 up to 4.1 and have not found any mention of changes related to time slicing or fairness.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 21 Oct 2014 12:57:46 GMT</pubDate>
      <guid>https://community.nxp.com/t5/MQX-Software-Solutions/Time-Slicing-Rendered-Ineffective/m-p/330445#M10590</guid>
      <dc:creator>michaelmeier</dc:creator>
      <dc:date>2014-10-21T12:57:46Z</dc:date>
    </item>
  </channel>
</rss>

