Best way to share system state data between tasks?

robertyork
Contributor II

I've got a project with several tasks running and an overall system state I want to track (things like standby mode, or whether a toggle is in one state or another). Normally, I would have a structure somewhere and use get and set methods to read and write the state. With multiple tasks, it's not so simple. It seems like in MQX there are a few ways to tackle this.

The one that immediately comes to mind is that I could send messages to get and set the state, but that becomes either a lot of messages or passing a large state structure as a message. This is especially cumbersome if I have one task managing the state: it seems like I would have to send a message to make a change, then when another task wants to read the state, it has to send a message requesting it and wait for the state to get shipped back. That seems like a lot of overhead. Is there something a little simpler? Perhaps sending a message when the state changes, having some task update the state somewhere, and then making the state visible to all other tasks through shared memory or a simple get_state() function call. Any thoughts or ideas here would be appreciated.

The overall concept is a task that is in charge of managing the state (so only one task can write) but anyone can read it. Seems this should be a fairly common problem, and I'm sure there's some sort of design pattern that fits this with little overhead in MQX.

14 Replies

dave408
Senior Contributor II

Wouldn't using a lightweight semaphore do the trick for you as well?  You can use OSA_SemaWait to get access to the shared memory, then call OSA_SemaPost afterward so the other tasks can access it.
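For instance, a minimal sketch of that pattern, assuming the KSDK OSA layer from fsl_os_abstraction.h (the system_state_t struct and the g_state_sem / state_set_standby names are invented for illustration):

#include "fsl_os_abstraction.h"
#include <stdbool.h>

typedef struct { bool standby; bool toggle; } system_state_t;  /* invented example state */

static system_state_t g_system_state;  /* the shared state, at file scope */
static semaphore_t    g_state_sem;     /* guards g_system_state; created once with
                                          OSA_SemaCreate(&g_state_sem, 1) */

void state_set_standby(bool standby)
{
    OSA_SemaWait(&g_state_sem, OSA_WAIT_FOREVER);  /* block until we own the state */
    g_system_state.standby = standby;
    OSA_SemaPost(&g_state_sem);                    /* let other tasks back in */
}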

robertyork
Contributor II

I'm still learning MQX, so I'm not yet familiar with how to do shared memory in it. I'm familiar with the concepts, of course, and I could certainly use a semaphore to lock down a memory address somewhere. My question is: do I need to lock the memory if only one task can write to it? I suppose locking would make sure the data read is valid and not only half-valid. I'll need to find an example of something implementing shared memory and sharing data between tasks that way. I'm not sure how MQX handles creating a shared memory space, other than I know there's some sort of MQX malloc().

dave408
Senior Contributor II

I'm in the same boat.  I would think you only need to lock shared memory if the write operation takes more than one instruction.  In your case, maybe you can get away without locking.  To be safe, the way I would do it is brute force: write code that reads and writes the struct, then generate the assembly and look at how many instructions it actually takes.

As far as shared memory goes, if you are okay with it, MQX will let you use a global variable, or you could pass the shared memory structure to your task as an argument.
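For the second option, here's a rough, unverified sketch of handing the pointer to a task through the template's CREATION_PARAMETER field (names invented; this assumes the usual auto-start template list in main.c):

#include <mqx.h>
#include <stdbool.h>

typedef struct { bool standby; bool toggle; } system_state_t;  /* invented example state */

static system_state_t g_system_state;

void control_task(uint32_t initial_data);

const TASK_TEMPLATE_STRUCT MQX_template_list[] =
{
    /* index, entry, stack, priority, name, attributes, creation parameter, time slice */
    { 1, control_task, 1500, 8, "control", MQX_AUTO_START_TASK,
      (uint32_t)&g_system_state, 0 },
    { 0 }
};

void control_task(uint32_t initial_data)
{
    system_state_t *state = (system_state_t *)initial_data;  /* the pointer we passed in */
    /* ... read/write *state, guarded by whatever lock you settle on ... */
    _task_block();
}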

robertyork
Contributor II

I don't suppose you have a good example of code creating a shared memory space, defining a structure in it, then sharing it with another task, and that task accessing the structure? I want to make sure I don't screw something up when it comes to creating the memory block and passing around pointers to it.

dave408
Senior Contributor II

I don't at the moment, but creating the "shared memory" is literally defining your variable for the structure in your os_tasks.c file, and then all of your tasks can directly modify it because it's file scope.  You then create the semaphore with OSA_SemaCreate(), lock it with OSA_SemaWait(), and release it with OSA_SemaPost().
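In other words, something like this sketch (names invented; the OSA calls come from the KSDK abstraction layer, so double-check them against your version; system_state_t as in the earlier sketch):

#include "fsl_os_abstraction.h"
#include <stdbool.h>

typedef struct { bool standby; bool toggle; } system_state_t;  /* invented example state */

static system_state_t g_system_state;  /* the "shared memory" is just this file-scope variable */
static semaphore_t    g_state_sem;

void main_task(uint32_t initial_data)
{
    OSA_SemaCreate(&g_state_sem, 1);   /* initial count 1 => behaves like a lock */
    /* ... create the other tasks ... */
}

/* Readers can take a consistent snapshot instead of holding the lock for long. */
system_state_t state_snapshot(void)
{
    system_state_t copy;
    OSA_SemaWait(&g_state_sem, OSA_WAIT_FOREVER);
    copy = g_system_state;             /* struct copy under the lock */
    OSA_SemaPost(&g_state_sem);
    return copy;
}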

robertyork
Contributor II

While I'm thinking about it, why do you suggest a semaphore over a mutex? Perhaps a bit off topic, but I would normally think a mutex would be more appropriate. However, it seems MQX implements them differently, from what I've read. Just trying to understand this better.

PetrL
NXP Employee

Hi Robert,

One thing to consider while selecting between an LWSEM, a semaphore, or a mutex is code size and speed, as mentioned by David.

For your case (accessing a structure with state data from multiple tasks) I would suggest using an LWSEM to protect the consistency of the data. The functionality it provides is sufficient for your use case, and it gives you the fastest response and the lowest memory footprint.
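As a sketch, the LWSEM calls for that look roughly like this (invented names; see the _lwsem_* functions in the MQX reference):

#include <mqx.h>

static LWSEM_STRUCT g_state_lwsem;

void state_lock_init(void)
{
    _lwsem_create(&g_state_lwsem, 1);  /* initial count 1 => binary lock */
}

void state_lock(void)
{
    _lwsem_wait(&g_state_lwsem);       /* take the semaphore, block if held */
}

void state_unlock(void)
{
    _lwsem_post(&g_state_lwsem);       /* release it */
}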

Petr

DavidS
NXP Employee

Hi Robert,

For MQX, the mutex has added features that the semaphore does not, such as protection against priority inversion.

A mutex is basically a semaphore with one key; or, put another way, a semaphore with one key is like a mutex but without the option for priority-inversion protection.

Look at MQX_User_Guide.pdf in the MQX_4.1.1/docs/mqx folder.

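If you do go the mutex route, a hedged sketch of turning on priority inheritance through the attributes (names invented; verify the constants against your MQX version):

#include <mqx.h>
#include <mutex.h>

static MUTEX_STRUCT g_state_mutex;

void state_mutex_init(void)
{
    MUTEX_ATTR_STRUCT attr;

    _mutatr_init(&attr);
    _mutatr_set_sched_protocol(&attr, MUTEX_PRIO_INHERIT);  /* guard against priority inversion */
    _mutex_init(&g_state_mutex, &attr);
}

/* usage: _mutex_lock(&g_state_mutex); ... _mutex_unlock(&g_state_mutex); */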

Regards,

David

robertyork
Contributor II

Okay, now that I'm thinking about this a little more, I was headed toward exactly that.

On scope: I have a main.c which only has a TASK_TEMPLATE_STRUCT. Is that .c file (and the variables declared within it) in scope for all the tasks defined there? Sorry if this seems like a silly question. Part of me wants to think it is, being in main.c, but the other part isn't so sure, with all of the context switches that occur with multitasking. I've been bitten by scope problems and multitasking before.

For some reason I was thinking I needed dynamic memory allocation, but I really shouldn't. This seems a lot easier.

DavidS
NXP Employee

Hi Robert,

The variables declared in your main.c would be global variables, not something using the dynamic heap. The other tasks can see the global variables.

If a task is in a separate file, then an extern declaration is needed.
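That is, roughly (illustrative names):

/* main.c -- the one definition */
system_state_t g_system_state;

/* some_task.c -- any other file that needs it just declares it */
extern system_state_t g_system_state;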

Regards,

David

robertyork
Contributor II

I would like to avoid dynamic memory allocation. So a static global that anyone can read, locked down with a mutex/semaphore, would, I believe, be the best way to do it. I assume I could then also make some get/set methods around it and treat it like I have in the past.

I'll give this a try and see how things work. Doesn't sound too tough.
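For what it's worth, those get/set wrappers might look something like this sketch (LWSEM variant, invented names):

#include <mqx.h>
#include <stdbool.h>

typedef struct { bool standby; bool toggle; } system_state_t;  /* invented example state */

static system_state_t g_system_state;
static LWSEM_STRUCT   g_state_lwsem;   /* created once with _lwsem_create(&g_state_lwsem, 1) */

void set_standby(bool standby)         /* only the owning task calls the setter */
{
    _lwsem_wait(&g_state_lwsem);
    g_system_state.standby = standby;
    _lwsem_post(&g_state_lwsem);
}

bool get_standby(void)                 /* any task can read */
{
    bool value;
    _lwsem_wait(&g_state_lwsem);
    value = g_system_state.standby;
    _lwsem_post(&g_state_lwsem);
    return value;
}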

DavidS
NXP Employee

Hi Robert,

I would suggest using lwevents.

It's probably not the intended use case for it, but you have the ability to set/clear any of the bits in the 32-bit (4-byte) field, so you can enumerate states for the tasks and update the lwevent to indicate whatever you need.

I played around with the lwevent.c example in the C:\Freescale\Freescale_MQX_4_1_1\mqx\examples\lwevent folder and attached it here for reference.

I'm only using one bit of the lwevent and it gets set in the ISR and cleared in the task waiting for the flag to set.  I added a task to monitor the lwevent flag too.
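The basic shape, as a sketch along the lines of that example (bit assignments and names invented):

#include <mqx.h>
#include <lwevent.h>

#define STATE_STANDBY  (1u << 0)       /* one bit per state flag */
#define STATE_TOGGLE   (1u << 1)

static LWEVENT_STRUCT g_state_event;

void state_event_init(void)
{
    _lwevent_create(&g_state_event, 0);  /* flags 0 => bits are manual-clear */
}

/* from the owning task (or an ISR): */
/*   _lwevent_set(&g_state_event, STATE_STANDBY);   */
/*   _lwevent_clear(&g_state_event, STATE_STANDBY); */

void monitor_task(uint32_t initial_data)
{
    while (1) {
        /* FALSE => wake on any of the bits; 0 ticks => wait forever */
        _lwevent_wait_ticks(&g_state_event, STATE_STANDBY | STATE_TOGGLE, FALSE, 0);
        /* ... react to the new state ... */
    }
}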

Regards,

David

robertyork
Contributor II

I hadn't thought of using an event flag. I currently use one to synchronize that all my tasks have initialized their message queues before they start going off and posting messages to each other. I have probably 20-30 (depending on how I structure things) different variables I want to watch. Originally I'd have used a structure for them, but most of them are really Boolean states, so an event flag might work, with each bit representing one of them.

I may want to store timestamps for when each of those flags changed, in which case the message system sounded more appealing. I also don't want anything waiting on this 'event'; I just need to check what state it's in. I'll have to think about the idea of using this. It would certainly be a simple approach, and I like simple.

DavidS
NXP Employee

Hi Robert,

WRT not wanting to wait on the event: you don't need to call _lwevent_wait_XXX if you are just looking to see what the VALUE is set to.

_lwevent_set/_lwevent_clear are non-blocking, but they do run the scheduler, so if a higher-priority task becomes ready it will switch to it. But usually that is a good thing :-).
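So checking it is just a peek at the struct's field, along these lines (a single 32-bit read, so no lock needed, but verify the field name against your MQX version's lwevent.h; bit names from the earlier sketch):

_mqx_uint flags = g_state_event.VALUE;  /* current state of all the bits */

if (flags & STATE_STANDBY) {
    /* system is in standby */
}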

Regards,

David
