MFS Crash with Heavy Weight Memory Allocator

pmt
Contributor V

I have an issue where MFS appears to be crashing during init.  The only change I made to the "out-of-box configuration" is to not use the Light Weight memory allocator.  There seems to be enough spare memory available, so I don't think that is the issue.  Configuration:

MQX:        4.0.0
BSP:        TWRK60F120M
Project:    rtcs_shell
Config:     debug
Tools:      Keil UV4

Modification:

Added the following line to user_config.h:

#define MQX_USE_LWMEM_ALLOCATOR             0

The initialization crashes; the call stack looks like this:

     Ram_disk_start()
     mqx_status = _io_mfs_install(dev_handle1, "a:", (_file_size)0);
     MFS_mem_alloc_system_zero()
     _mem_alloc_system_zero_uncached()
     _mem_alloc_internal()

The crash is right after the _int_enable():

             /* allow pending interrupts */
             _int_enable();
             _int_disable();

Any ideas?

PMT

pmt
Contributor V

I did verify that the same problem exists with MQX 4.0.1.  Can someone try the rtcs_shell example on their target without the LW Mem allocator?  My suspicion is that you'll get the same crash during init.

Thanks,

PMT

pmt
Contributor V

Additional information:  It looks like MFS init is failing in _mem_alloc_system_zero_uncached() due to kernel_data->UNCACHED_POOL being NULL.   What is the UNCACHED POOL all about, and what do I need to do to get one?

Thanks,

PMT

Martin_
NXP Employee

The idea behind uncached memory is, for targets where the kernel data can sit in cacheable memory (for example TWR-K70F120M), to provide a memory region configured as non-cacheable in the LMEM (local memory controller). MQX then uses this non-cacheable region for some internal data, for example MACNET buffers and buffer descriptors and/or USB buffers/buffer descriptors. Another use case, since the MQX kernel accesses kernel data frequently, is to locate the kernel data in memory with efficient random access.

For twrk60f120m with MEM allocator, I can use the following user configuration:

#define MQX_USE_UNCACHED_MEM     0
#define MQX_USE_LWMEM_ALLOCATOR  0

Otherwise, MQX_USE_UNCACHED_MEM defaults to 1 (per mqx_cnfg.h). With this user configuration, /rtcs/examples/shell works.

pmt
Contributor V

Martin,

I tried modifying user_config.h to disable UNCACHED memory and the LW allocator, and indeed this did seem to make a difference. 

However, now RTCS is acting up in different ways depending on startup timing (sometimes ENET does not initialize, and sometimes it does but does not get a DHCP address).  I do think there are some basic config incompatibilities that are not being flagged.

Is defining UNCACHED memory TRUE and LW Allocator FALSE an invalid configuration?  It seems like the UNCACHED memory is a good thing.  Is there any reason this is incompatible with MFS?

Additionally, when I trace down into MFS_mem_alloc_system_zero(), it is still executing the _mem_alloc_system_zero_uncached() code path.  Note the comment about needing uncached memory:

{
    if ( _MFS_pool_id )
    {
        return _mem_alloc_system_zero_from(_MFS_pool_id, size);
    }
    else
    {
        /* memory must be uncached, otherwise reading sector may fail on 54418 */
        return _mem_alloc_system_zero_uncached(size);
    }
}

pmt
Contributor V

It looks like when using the LW allocator, the UNCACHED memory components are never used (per mqx_cnfg.h):

#if MQX_USE_LWMEM_ALLOCATOR && MQX_USE_UNCACHED_MEM
#error Set MQX_USE_UNCACHED_MEM to 0 when using MQX_USE_LWMEM_ALLOCATOR
#endif

This leads me to believe that MFS hasn't been tested extensively with the heavy weight allocator.  At the same time, the MFS memory allocation seems to need uncached memory (judging from the comments).

Can you clarify?

Thanks,

PMT

Martin_
NXP Employee

Hi PMT,

We have used UNCACHED memory TRUE and LW Allocator FALSE in the twrk70f120m BSP. It is a valid and tested configuration, but you need to provide a special section in the linker file. I would recommend comparing the MQX twrk60f120m linker command file with the twrk70f120m one to see what has to be provided for system startup.

pmt
Contributor V

Martin,

Yes, you are right, the twrk60f120m BSP does not contain a linker section for UNCACHED memory (which just seems synonymous with "fast" memory, if I understand this correctly).  So if I want to use UNCACHED memory, I will need to add this linker section.

Now my next question: is UNCACHED memory a requirement for running MFS or any other middleware or MQX system component?  See this comment in the MFS init:

       /* memory must be uncached, otherwise reading sector may fail on 54418 */

The issue is that the default twrk60f120m only has internal SRAM, so in essence all RAM is UNCACHED by default.  I am creating a new BSP based on this K60 BSP with added external memory.  So is it a requirement that I create an UNCACHED region in the linker script to have this available?

Thanks,

PMT

Martin_
NXP Employee

A real need for uncached memory arises when the memory is used by multiple crossbar switch master ports. A typical example is DMA: there is valid data in the CPU cache, and DMA moves different data into physical memory at that same address (which is valid and sits in the cache). If we don't specifically invalidate the cache line after DMA has updated the data, the CPU doesn't know about the new data.

The source code comment in MFS is referring to this bug (see the MQX release notes):

"TWR-MCF5418 demo web_hvac cannot browse webpages from USB - Problem with memory cache. Changed MFS_mem_alloc_system_zero function - replaced _mem_alloc_system_zero with _mem_alloc_system_zero_uncached."

and this is connected with another comment from the release notes:

"user application buffers used by USB EHCI class drivers have to reside in un-cached memory space."

This is exactly the case when MFS mounts on a USB host MSD class device on MCF54418: the physical data is moved between the USB bus and memory by the USB crossbar master. The same would apply to the KHCI USB controller (USB-OTG master), but there only the BDT memory is shared between the CPU and USB-OTG (see the BDT section in the linker command files), and the BDT is placed into internal SRAM by the MQX linker command files.

Thus, the requirement for uncached memory is driven by the application scenario. If the only master accessing your external memory is the CPU (code bus or system bus for all load/store accesses), MFS won't need uncached memory, as all data will be in sync as seen by the CPU through the data cache. But if you plan to put another master between the CPU and the physical data (for example, using DMA or USB to move data from source memory to destination memory), MFS would need uncached memory, or, alternatively, you can design your software with cache invalidate/flush sequences in the appropriate places: flush the data cache after a CPU store, and invalidate the data cache after a non-CPU master store.
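To make that pairing concrete, here is a minimal sketch, assuming the MQX PSP data-cache macros _DCACHE_FLUSH_MBYTES / _DCACHE_INVALIDATE_MBYTES are available for your core; the helper functions and the buffer are hypothetical placeholders, not MQX APIs:

    #include <mqx.h>
    #include <bsp.h>

    /* Hypothetical helpers around a buffer shared between the CPU and a DMA master. */
    void dma_tx_prepare(unsigned char *buf, _mem_size size)
    {
        /* CPU stored data into buf: flush it to physical memory before DMA reads it */
        _DCACHE_FLUSH_MBYTES((pointer)buf, size);
        /* ... start the DMA transfer here ... */
    }

    void dma_rx_complete(unsigned char *buf, _mem_size size)
    {
        /* DMA stored data into buf: invalidate stale cache lines before the CPU reads it */
        _DCACHE_INVALIDATE_MBYTES((pointer)buf, size);
        /* ... the CPU can now read buf safely ... */
    }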

By the way, this applies to the Ethernet master on the crossbar switch too; that is, Ethernet module buffers and buffer descriptors need to be in uncached memory. "Uncached" memory is not the same as "fast" memory. They really mean two different things: non-cacheable, because of multiple masters, and fast, to efficiently execute CPU random accesses to system memory. For Ethernet, the memory for buffers and buffer descriptors has to be both fast and uncached, so it needs to be in the internal SRAM.

If you plan to use _mem_extend() to increase the memory available in your system (I guess this is what the memory allocator question is all about), then please consider that _mem_extend() puts the extended memory pool at the beginning of the free memory pools. All subsequent mallocs (after _mem_extend()) would allocate memory from the extended memory until it is consumed. Thus, my recommendation would be: during MQX system startup, allocate as much as possible into internal SRAM (stacks for tasks, the interrupt stack, Ethernet buffers and buffer descriptors, etc.) and call _mem_extend() only after these MQX components are allocated in internal SRAM.
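A rough sketch of that ordering follows; the external-RAM base address and size are hypothetical placeholders, not values from any BSP:

    #include <mqx.h>
    #include <bsp.h>

    #define EXT_RAM_BASE  ((pointer)0x70000000)    /* placeholder external SRAM address */
    #define EXT_RAM_SIZE  ((_mem_size)0x00100000)  /* placeholder size: 1 MB */

    void add_external_ram_to_default_pool(void)
    {
        _mqx_uint result;

        /* ... ENET/RTCS/USB are already initialized at this point, so their
           buffers and buffer descriptors were allocated from internal SRAM ... */

        result = _mem_extend(EXT_RAM_BASE, EXT_RAM_SIZE);
        if (result != MQX_OK) {
            /* extension failed; handle the error */
        }
    }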

pmt
Contributor V

Martin,

Thanks for the detailed response.  Extremely helpful.  I think I understand all the issues now.  So, given that I want to produce a BSP derived from the “twrk60f120m” but with added external SRAM that I want to add to the common pool with _mem_extend(), would both of these options work:

Option 1:  Call _mem_extend() but only after initializing drivers and devices such as USB, RTCS, Enet that allocate buffers written to by DMA.  This would force these buffers into UNCACHED internal SRAM.

Option 2:  Enable MQX_USE_UNCACHED_MEM, but add the “twrk70f120m” example linker sections for UNCACHED memory and the corresponding UNCACHED memory initialization in kernel/mem.c.  This would give me the flexibility to move _mem_extend() to any point in the startup as needed.

Also, I plan to use my own DMA drivers for the serial port and DSPI.  I’m thinking I can just reserve a named memory block in internal SRAM in the linker file and use appropriate #pragmas to force the serial port and DSPI static DMA buffers into this section.  This would save me from having to use the internal MQX memory allocation functions to get uncached memory.
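As a sketch of that placement with the ARM compiler used by Keil uV4 (the section name ".dma_buffers" and the buffer sizes are placeholders, and a matching region still has to be reserved in the scatter/linker file):

    /* Static DMA buffers pinned to a named internal-SRAM section (placeholder name). */
    static unsigned char dspi_dma_buffer[512] __attribute__((section(".dma_buffers")));
    static unsigned char uart_dma_buffer[256] __attribute__((section(".dma_buffers")));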

What would really be nice in the standard MQX memory allocator is a way to attach both an attribute and a priority to a block of memory.  This would give the ability to prioritize faster memory over slower memory, or to allocate UNCACHED memory when calling the MQX alloc functions.  You can sort of get this with explicit memory pools, but that is somewhat more cumbersome.
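For reference, a minimal sketch of the explicit-pool workaround (the sram_area buffer is a placeholder; in practice it would be placed in internal SRAM by the linker):

    #include <mqx.h>

    static unsigned char sram_area[16 * 1024];   /* placeholder region, ideally in SRAM */

    void example_alloc_from_explicit_pool(void)
    {
        _mem_pool_id pool;
        pointer      buf;

        pool = _mem_create_pool(sram_area, sizeof(sram_area));
        buf  = _mem_alloc_from(pool, 1024);  /* "uncached"/"fast" by placement, not by attribute */
        /* ... use buf ... */
    }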

Thanks,

PMT   

Martin_
NXP Employee

Hi PMT,

Regarding option 1) I think so.

Regarding option 2) Yes, but you also need to initialize _BSP_sram_pool with the pool_id of the pool that sits in the SRAM. See for example the twrk70f120m BSP init_bsp.c (search for MQX_USE_UNCACHED_MEM).
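A minimal sketch of what that init_bsp.c fragment could look like, modeled on the description above; __SRAM_POOL_START and __SRAM_POOL_SIZE are placeholder linker symbols (use whatever your linker command file actually exports for the internal SRAM region):

    #if MQX_USE_UNCACHED_MEM
        /* create a pool over the internal SRAM region and publish it to the BSP */
        _BSP_sram_pool = _mem_create_pool((pointer)__SRAM_POOL_START,
                                          (_mem_size)__SRAM_POOL_SIZE);
    #endif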
