dynamic allocation question on S08

Stephman
Contributor I

Hi there,

 

I'm writing high-level layer routines (HIL and services) which are supposed to run on various platforms (S08, ColdFire, Kinetis, ... whatever, in fact).

In a queue management routine I need dynamic allocation, and my code uses the malloc and free functions.

 

On CodeWarrior 10.2 for S08, the default heap size is 2000 bytes, which is more than the RAM available on the CPU where I'm testing my code. I don't want to rebuild the library, as that is not really portable. My code should ultimately compile with any compiler (that is the point of high-level layers, isn't it?).

 

I know that dynamic allocation is not well accepted in embedded development, and is clearly forbidden by some standards like AUTOSAR in the automotive field.

 

My questions are :

 

  1. Why are dynamic allocation and its possible memory 'leaks' different from a 'usual' stack overflow caused by recursive functions pushing onto the stack? I mean, a stack overflow is a design issue and the designer must ensure it does not happen in their code. So why would dynamic allocation be worse? I've seen older posts on the forum saying that using dynamic allocation, especially on 8-bit microcontrollers, is 'poor design'. Why? Again, to me this is just like stack management, isn't it? Why is it considered different?
  2. If malloc is "forbidden or not recommended", then so is any proprietary dynamic allocation routine. So it more or less means "don't use dynamic allocation, this is bad (and you may go to hell!)". OK, but then what? How can I manage my memory allocations? Shall I make all my variables global and then run out of memory after three functions? I find that dynamic allocation is an elegant way to save processor resources, so how can we do without it?
  3. Just in case I feel brave and decide not to follow the recommendation against implementing dynamic allocation in my embedded applications, where could I find an example of a small proprietary routine to replace malloc that would work on any processor, S08 included, without having to rebuild any library?

 

Any clue appreciated. Thanks

 

Stephane

13 Replies

kef
Specialist I

If you don't have free() calls, then it is obvious that you don't need malloc() at all. If you do have free() calls, then what do you know about memory fragmentation? Say you have a 1k heap. You malloc() 0.3k, then another 0.3k, then free the first 0.3k. Now when you try to allocate 0.5k it fails, even though you have 0.7k of free space left. So you need to have a few times more free space than the biggest piece of RAM you may need to allocate! That doesn't sound good.
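(To make the arithmetic concrete, here is a minimal sketch of that sequence; the comments track a hypothetical 1 KB heap. A desktop C library's heap is much larger, so the failure will not actually reproduce there, but the state of a small heap would evolve as shown.)

#include <stdlib.h>

int main(void)
{
    /* Hypothetical 1 KB heap, tracked in the comments only.             */
    void *a = malloc(300);   /* [a:300][free:700]                        */
    void *b = malloc(300);   /* [a:300][b:300][free:400]                 */
    free(a);                 /* [hole:300][b:300][free:400]              */

    /* 700 bytes are free in total, but the largest contiguous region
       is 400 bytes, so a 500-byte request would fail on that heap.      */
    void *c = malloc(500);

    free(b);
    free(c);
    return 0;
}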

 

Malloc() makes sense on desktop OSes. A smart OS may have mechanisms to share dynamic memory among apps. But even when using an OS, not-very-smart developers tend to allocate memory only once. So again, why malloc?

Say you have two objects which are never in use at the same time. What about:

union {
    char   data1[50];
    double data2[70];
} dynamic_memory1;

Just don't access the union member that is not currently 'allocated'.

Too simple? Say you have (data1 or data2) and (data3 or data4) in use at the same time:

struct {
    union {
        char   data1[50];
        double data2[70];
    } dm1;
    union {
        char   data3[50];
        double data4[70];
    } dm2;
} dynamic_memory;

Using defines you may simplify field accesses like:

 

#define data1 dynamic_memory.dm1.data1
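(A complete, hypothetical illustration of the overlay idea, reusing the struct above; the data2 define is an addition for symmetry, not code from the post.)

#include <stdio.h>

static struct {
    union { char data1[50]; double data2[70]; } dm1;
    union { char data3[50]; double data4[70]; } dm2;
} dynamic_memory;

#define data1 dynamic_memory.dm1.data1
#define data2 dynamic_memory.dm1.data2

int main(void)
{
    /* Phase 1: only data1 is "live" in dm1. */
    data1[0] = 'x';
    printf("%c\n", data1[0]);

    /* Phase 2: data1 is no longer needed, so the same RAM is reused as data2. */
    data2[0] = 3.14;
    printf("%f\n", data2[0]);
    return 0;
}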

You say you need malloc for custom queue sizes. You don't. You need to match the queue size to the app. The app won't work properly with a queue that is too small, but it is OK to have a queue that is too big. If you have free space left, then you can increase the size of the queue so that it uses all available memory. BTW, on S08 a queue with a constant size should operate much faster (if coded properly).

 

1. Dynamic memory leaks happen when the developer forgets to free() unused stuff, or there is a bug that causes some free() to be skipped. But there is also memory fragmentation, and I agree: using malloc() in embedded is poor design, unless your embedded system is running something like Linux or WinCE with more than one app running simultaneously.

2. In embedded, where you are the boss of all available CPU resources, malloc is not elegant.

3. You can DIY your own malloc. It is not hard at all, but it doesn't make a lot of sense for S08 and other small MCUs.

Using older CodeWarrior, you could change LIBDEF_HEAPSIZE in libdef.h, add alloc.c and heap.c to your project, and change the link order so that these files have higher priority than the *.lib file. The linker should then ignore the copy from the library and use what you added to the project. Instead of editing libdef.h in the CW files, you can make a copy of libdef.h and edit Access Paths so that the compiler searches your project folder first.

Stephman
Contributor I

Hi Edward,

Thanks for your long, nice and valuable answer.

Many embedded OSes, like FreeRTOS and very likely MQX, use malloc and free in their queue management. This is why I find it surprising that malloc and free are not recommended in embedded designs, when they are used by default in such OSes, which are dedicated to microcontrollers (by the way, does it mean we can't use the queue routines of these OSes if they have to be implemented in automotive or any other safety-requirement system?).

I don't understand the statement that on a desktop OS we can use dynamic allocation but not in embedded. Leaking memory is bad whatever the platform is, right? A desktop PC just has a more powerful processor with larger memory attached, but is that a reason to authorize memory leaks just because there is enough memory to keep the system stable? I admit that I may be completely wrong in my approach, but this is not clear to me.

I understand the memory leak process but can't find cases in my applications where it could happen, as I may be one of these 'not very smart developers' who uses dynamic allocation only once, during MCU initialization, to create SPI or SCI buffers for instance. I do agree that it could be replaced by hard-coded arrays, but I always thought that was less elegant, especially at HIL level.

I have also used dynamic allocation to create large buffers of different sizes that are not used at the same time. The available RAM did not allow declaring these buffers as global variables, so I used dynamic allocation there. Of course the blocks were released with free() before allocating the others.

The second main benefit of dynamic allocation, for me, was the ability to have a single function taking care of creating queues of customized length. I don't know how to do this other than what I was doing previously, i.e. hard coding all the memory blocks where the function needs them.

Your suggestion of using unions to allocate the same memory to different objects is nice and would probably have saved me in this last example, but I can't find a way to code it properly while still letting the user at application level define the size of the buffer he needs without hardcoding it.

Any advice ?

Many thanks

Stephane


kef
Specialist I

Of course malloc works in embedded, but using malloc() makes no sense, at least for me. In fact, dynamic memory (at least on small MCUs with no memory mapping and access controllers) is just some big static array (the heap) plus some access routines, which return pointers into that array on request and track declared memory usage for each malloc() request. So what are the benefits of using malloc? I see none for any of my embedded apps; malloc is just overhead and a risk of memory fragmentation.

FreeRTOS? MQX? Well, most embedded apps don't need any RTOS. Each app has to follow some structure, very similar to a cooperative RTOS with a main loop for all the tasks. A preemptive RTOS allows more, even wait-forever loops for an event which may never happen. But since a preemptive RTOS has a RAM overhead, I never use one. When it is really required, it is more effective to allow interrupt nesting or do simple two-task switching in a periodic interrupt. Each commercial RTOS has a lot of routines which save the developer work at the cost of more CPU cycles, RAM and code size overhead. The same goes for malloc: you can use it if you see a benefit.

  • Leaking memory is bad whatever the platform is, right ?

Leaking memory is caused by bugs in the code. Memory fragmentation has no vaccine, except always allocating the same size or never freeing anything.

Do you need user control of the queue size? Just allocate an array big enough for the maximum queue size setting. Make the in/out pointers/indexes wrap when they cross the queue size border. What else are you going to use the free heap for? What happens when you use the maximum queue size setting? Do the other things still fit in the remaining space? If so, then what is the purpose of malloc()? Just neat and nice code with a great malloc() call? Is it OK to have a few kB of code space occupied by this nice malloc()? In case you need to allow the user to choose a bigger buffer here and a smaller buffer there, or vice versa, then what's the problem with having a single array for both buffers and moving the top and bottom of each queue up and down?
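(A minimal sketch of the "array sized for the maximum setting, with wrapping indexes" idea described above; QUEUE_MAX and the Queue_* names are hypothetical, not code from the thread.)

#include <stdint.h>

#define QUEUE_MAX 64                     /* sized for the largest setting the app allows */

static uint8_t  queue[QUEUE_MAX];
static uint16_t q_size = QUEUE_MAX;      /* effective size, <= QUEUE_MAX, set at init */
static uint16_t q_in, q_out, q_count;

void Queue_Init(uint16_t size)           /* user-chosen logical size, capped at QUEUE_MAX */
{
    q_size = (size <= QUEUE_MAX) ? size : QUEUE_MAX;
    q_in = q_out = q_count = 0;
}

int Queue_Push(uint8_t byte)
{
    if (q_count >= q_size)
        return 0;                        /* full */
    queue[q_in] = byte;
    if (++q_in >= q_size) q_in = 0;      /* wrap at the logical border */
    q_count++;
    return 1;
}

int Queue_Pull(uint8_t *byte)
{
    if (q_count == 0)
        return 0;                        /* empty */
    *byte = queue[q_out];
    if (++q_out >= q_size) q_out = 0;
    q_count--;
    return 1;
}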

It’s up to you to use malloc or not.

Stephman
Contributor I

I definitely agree on the fragmentation problem, which I am hearing about for the first time.

So yes, I'd like to replace malloc with static-array queue management, but I don't know how to implement static-array queues with different sizes.

Before trying to implement queues with malloc, I had coded a circular buffer routine which works fine. In fact, it is this circular buffer routine that I upgraded to add malloc, because I didn't know how to create several circular buffers of different sizes using this single circular buffer routine. I don't want to duplicate the circular buffer routine each time I need a buffer. As long as the buffers had the same size there was no problem, I could create many buffers. But how do I then manage different sizes?

Here is how I implemented my circular buffer routine before adding malloc:

#define CIRCULAR_BUFFER_SIZE 100

/* buffer structure */
typedef struct {
    U8  u8tab[CIRCULAR_BUFFER_SIZE];
    U8  *u8p_in, *u8p_out;
    U32 u32Count;
} stBUFFER;

/* Circular buffer prototypes */
void CircularBuffer_Reset(stBUFFER *stBuffer);
char CircularBuffer_Push(stBUFFER *stBuffer, U8 u8byte);
U8   CircularBuffer_Pull(stBUFFER *stBuffer);

The circular buffer routines just play with the pointers. The limitation is that the buffer size is the same for every buffer... So what would be the method to make it flexible?

Then, to manage several buffer sizes, I added the following function, which allocates the memory.

The typedef became:

typedef struct {
    U8  *u8p_in, *u8p_out;
    volatile U32 u32Count;
    U32 u32Size;
    U8  *u8tab;
} stBUFFER;

and I added the allocation routine:


/*--------------------------------------------------------------------------------
Description : Initialize a circular buffer
Call        : CircularBuffer_Init(stBUFFER *stBuffer, U32 u32TableSize)
Input(s)    : *stBuffer    = Pointer to the buffer to initialize
              u32TableSize = size to allocate
Output(s)   : none
Return      : none
--------------------------------------------------------------------------------*/
void CircularBuffer_Init(stBUFFER *stBuffer, U32 u32TableSize)
{
    stBuffer->u8tab   = malloc(u32TableSize * sizeof(char));
    stBuffer->u32Size = u32TableSize;
}

The full code is attached (buffer routines renamed to queue routines).

Any advice on the method I could use?

Thanks

Stephane


kef
Specialist I

In case free() is not required, a small malloc() can be implemented this way:

#define MYHEAP_SIZE 1000

char myheap[MYHEAP_SIZE];
char *myfreeheap = myheap;

#define malloc(size) ( (myfreeheap + (size)) < (myheap + MYHEAP_SIZE) ) ? myfreeheap : NULL; myfreeheap += (size)

To free all resources just reset myfreeheap to &myheap[0].
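(The same bump-allocator idea written as a plain function may be easier to read; my_alloc and my_heap_reset are hypothetical names, not part of the post above.)

#include <stddef.h>

#define MYHEAP_SIZE 1000

static char   myheap[MYHEAP_SIZE];
static size_t myused;                 /* bytes handed out so far */

/* Bump allocator: hands out consecutive slices of myheap and never frees
   individual blocks. Returns NULL when the request does not fit. */
void *my_alloc(size_t size)
{
    char *p;
    if (size > MYHEAP_SIZE - myused)
        return NULL;
    p = &myheap[myused];
    myused += size;
    return p;
}

/* "Frees" everything at once by rewinding to the start of the heap. */
void my_heap_reset(void)
{
    myused = 0;
}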

Stephman
Contributor I

I'm not familiar with C++, so I may not understand the #define line,

but it still calls malloc? What's the benefit of that?


kef
Specialist I


Well, it is not C++, just C. You may convert this define to a function, but if stdlib.h is included, the define will silently suppress the malloc declaration. Never mind, you decide what to do with it.

No benefit over malloc, but since you want buffer allocation at runtime, it is still malloc-like stuff. Also, the snippet above shows how one may code one's own malloc. Still, IMO buffers should be allocated statically at design-and-compile time; #defines and enums are fine for specifying buffer sizes.
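(One way to get per-buffer sizes without malloc, sketched against the stBUFFER layout from the earlier post: the caller supplies the storage, so each buffer can use a differently sized static array while sharing one set of routines. CircularBuffer_InitStatic and the U8/U32 typedefs are assumptions, not code from the thread.)

typedef unsigned char U8;
typedef unsigned long U32;

typedef struct {
    U8  *u8p_in, *u8p_out;
    volatile U32 u32Count;
    U32 u32Size;
    U8  *u8tab;
} stBUFFER;

/* The caller owns the storage; no heap is involved. */
void CircularBuffer_InitStatic(stBUFFER *stBuffer, U8 *u8Storage, U32 u32TableSize)
{
    stBuffer->u8tab    = u8Storage;
    stBuffer->u32Size  = u32TableSize;
    stBuffer->u8p_in   = u8Storage;
    stBuffer->u8p_out  = u8Storage;
    stBuffer->u32Count = 0;
}

/* Usage: two buffers of different, compile-time sizes, one shared routine. */
static U8 spiStorage[32];
static U8 sciStorage[128];
static stBUFFER spiBuffer, sciBuffer;

void Buffers_Init(void)
{
    CircularBuffer_InitStatic(&spiBuffer, spiStorage, sizeof spiStorage);
    CircularBuffer_InitStatic(&sciBuffer, sciStorage, sizeof sciStorage);
}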

Stephman
Contributor I

Wow. I do not understand this syntax at all. But I'm going to investigate.

By the way, after spending hours on the web I've found a way to define multiple queues in static arrays. The method seems to be linked lists. This apparently works, but it doesn't sound so easy to make it robust.

I'm not sure about that, but in the end I understand that the only benefit is that memory blocks are contiguous, which avoids fragmentation; still, I feel there remains the problem of memory leaks if you don't free the queues anyway...

This could then be the answer for very small RAM sizes, but isn't this method considered dynamic allocation anyway??

Stephane

kef
Specialist I

A queue either works, or it doesn't work and you have some data missing or part of the queue memory blocked and never used. Just fix it and you won't see any leaks.

Stephman
Contributor I

Well, I guess things are starting to become clear in my mind, and I got the syntax you have written. I had never used the C ternary operator before, so I was confused.

So, to summarize (it may help some other people):

-> Dynamic allocation using the malloc/free functions has the disadvantages of:

  • fragmenting memory, which requires more RAM to work with,
  • possibly leaking memory because of blocks not properly released,
  • being forbidden by some standards for safety-critical embedded systems.

-> If memory is only allocated once (at MCU initialization for instance) and never released with free(), then one way is to define a static array and carve memory blocks out of this array, never to be released. This allows blocks of customized size, no fragmentation, no memory leaks... One implementation is the last code you showed, though it may need to be improved to make it more robust.

-> If memory is allocated on the fly by the application, then one can implement a linked-list system within the static array. The main disadvantage is that on low-RAM systems fragmentation may happen very quickly, and the system may crash because it becomes impossible to allocate a large enough memory block.

This can be avoided by preallocating memory blocks of fixed sizes, for instance 10, 50, 100, 200, 500 and 1000 bytes.

The user could then call a dedicated function like sendqueue10(...), sendqueue500 (...), and so on.

The functions just need to check that there are still free blocks available to assign (see the sketch after this summary). The main disadvantage is that RAM usage is not optimized at all, as block sizes are not flexible and some RAM may be wasted, but it seems to avoid all the other disadvantages of dynamic allocation.

Another solution could be to have a defragmentation routine, but that would take time to run.
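(A minimal sketch of the "preallocated fixed-size blocks" idea for a single block size; pool_alloc, pool_free and the sizes are hypothetical. A real design would simply repeat the pool once per block size.)

#include <stddef.h>

#define BLOCK_SIZE   50
#define BLOCK_COUNT  8

/* Each block doubles as a free-list node while it is not allocated. */
typedef union block {
    union block  *next;                 /* valid while the block is free      */
    unsigned char data[BLOCK_SIZE];     /* valid while the block is allocated */
} block_t;

static block_t  pool[BLOCK_COUNT];      /* all blocks live in one static array */
static block_t *free_list;
static int      pool_ready;

static void pool_init(void)
{
    int i;
    for (i = 0; i < BLOCK_COUNT - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[BLOCK_COUNT - 1].next = NULL;
    free_list  = pool;
    pool_ready = 1;
}

void *pool_alloc(void)                  /* returns NULL when no block is left */
{
    block_t *b;
    if (!pool_ready)
        pool_init();
    b = free_list;
    if (b != NULL)
        free_list = b->next;            /* pop from the free list */
    return b;
}

void pool_free(void *p)
{
    block_t *b = (block_t *)p;
    b->next   = free_list;              /* push back onto the free list */
    free_list = b;
}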

I'm going to implement these solutions in my OS and see how it runs and if it fits my requirements.

Many thanks for all you very valuable help Edward. You gave me enough information to let me undertstand more about memory allocations.

Regards

Stephane

One way to go could also be to preallocate static queues of different sizes. The user could then use the queue of a given length (for instance 50, 100, 200, 500 or 1000 bytes). This is not optimal for RAM usage but would work.

I'm going to code this static array with a linked list to manage my queues.


kef
Specialist I

Another solution could be to have a defragmentation routine, but that would take time to run.

IMO it is malloc-implementation dependent whether you can or can't defragment dynamic memory. It is quite straightforward for implementations in which malloc always allocates a block at the lowest (or highest) possible address in the heap. Then you always keep free space for the largest allocated block: you start from the block with the lowest (or highest) address, allocate a new block of the same size, copy the block to the new location, free and reallocate the old block (expecting to remove the free-block island), copy the old block back, free the new copy, and so on.

But how would you defragment dynamic memory if new blocks are allocated in a circular or even random direction? You never know how it is done until you look at the source code of malloc.

Stephman
Contributor I

Could one way be to allocate a local array in function 1 and then pass the pointer and the length of the array to function 2?

The data would then be on the stack. We would come back to stack management issues, which I feel are worse than malloc management, as there is no stack overflow control, whereas malloc at least has an 'is there any free space' check.
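(What that would look like, as a minimal sketch; process_data and the sizes are hypothetical. The buffer lives in function1's stack frame, so it is only valid while function1 is still running.)

#include <stddef.h>
#include <stdio.h>

/* Function 2: works on whatever buffer it is given, wherever it lives. */
static void process_data(unsigned char *buf, size_t len)
{
    size_t i;
    for (i = 0; i < len; i++)
        buf[i] = (unsigned char)i;
    printf("filled %u bytes\n", (unsigned)len);
}

/* Function 1: owns the storage on its own stack frame. */
static void function1(void)
{
    unsigned char local_buf[64];            /* stack-allocated buffer */
    process_data(local_buf, sizeof local_buf);
    /* local_buf disappears when function1 returns: do not keep pointers to it. */
}

int main(void)
{
    function1();
    return 0;
}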

Stephane

kef
Specialist I

Could one way be to allocate a local array in function 1 and then pass the pointer and the length of the array to function 2?

What for? You still have the same amount of RAM for everything. Does your MCU have 2k of RAM? OK, then you have 2k for static variables + stack + heap (if using malloc). Bigger stack usage means less space left for static variables and the heap. Bigger heap usage means less space for the stack and static variables. So what's the point?
