Slow execution causing lag in GPIO signals (trying to enable cache to speed up)

paulholmquist
Contributor I

I'm experiencing very slow execution times running code that controls GPIO pin signals on the Vybrid chip.  The code asserts one GPIO and then asserts another, with only a few lines of code in between to set up the LWPTIMER.  It's taking about 12 usec between these GPIO assertions, which is about 10 times longer than it should be, based on how long a similar set of operations took on another ARM processor (at about the same CPU clock speed).

I suspect the slowness might be caused by reusing the cache-disabled configuration that came with the example MQX programs, but I'm having difficulty finding documentation on how to add separate memory regions where some have cache enabled and others disabled.  I'm running everything using the intram.icf linker file, so all code/data is mapped to internal memory.

The example MQX user_config.h has cache disabled (#define MQX_USE_UNCACHED_MEM 1).  However, when I tried to look up this compile-time option in the MQX User's Guide, it doesn't even mention this symbol (3.14.1 MQX Compile-time Configuration Options).  So I started looking at the PSP component code to reverse engineer the MMU calls, but that led to a dead end because of the MQX Reference Manual's statement on all of the MMU APIs ("see the PSP Release Note").  I can't find the "PSP Release Note" document.

I believe that turning on the cache would fix the slowness here, but what are the consequences of doing that (why don't the MQX example programs come with data cache enabled instead of disabled)?

I will try just changing MQX_USE_UNCACHED_MEM to a value of 0 instead, but that appears to enable cache for all memory.  It can at least tell me whether the execution speeds up (assuming it doesn't break anything else, given the lack of documentation for it)...  However, I will eventually still need to have separate cached and uncached regions.
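For reference, the first thing I'll try is just this one define in user_config.h (a minimal sketch; the example configuration ships with the value 1):

    /* user_config.h -- sketch of the first change being tried.
       The example configuration ships with this set to 1. */
    #define MQX_USE_UNCACHED_MEM   0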

Message was edited by: Paul Holmquist
Updating MQX_USE_UNCACHED_MEM had no effect, which is what I suspected after reviewing the logic in init_bsp.c:_bsp_enable_card().  The following _mmu_add_vregion call doesn't even appear to be consistent with intram.icf, since it uses the PSP_PAGE_TYPE_CACHE_NON attribute on the internal RAM.  Besides that, the RAM size doesn't even seem to match the size in the intram.icf file contents... why is this OK...?

    /* add region in sram area */
    _mmu_add_vregion((pointer)__INTERNAL_SRAM_BASE,
                     (pointer)__INTERNAL_SRAM_BASE,
                     (_mem_size) 0x00100000,
                     PSP_PAGE_TABLE_SECTION_SIZE(PSP_PAGE_TABLE_SECTION_SIZE_1MB) |
                     PSP_PAGE_TYPE(PSP_PAGE_TYPE_CACHE_NON) |
                     PSP_PAGE_DESCR(PSP_PAGE_DESCR_ACCESS_RW_ALL));

7 Replies
paulholmquist
Contributor I

Independent of the slowness of the GPIO signals, I need to enable cache on different regions of internal RAM.  The documentation is lacking in both the Vybrid data sheet and MQX with respect to MMU/cache details.  The closest thing I found is the MMU description for a Cortex-A8 on the Freescale web site (I didn't find anything for the A5).

Per function _bsp_enable_card() in init_bsp.c, the code has the following statements for configuring MMU/CACHE:

    /* None cacheable is comon with strongly ordered. MMU doesnt work with another init configuration */
    _mmu_vinit(PSP_PAGE_TABLE_SECTION_SIZE(PSP_PAGE_TABLE_SECTION_SIZE_1MB) |
               PSP_PAGE_DESCR(PSP_PAGE_DESCR_ACCESS_RW_ALL) |
               PSP_PAGE_TYPE(PSP_PAGE_TYPE_STRONG_ORDER),
               (pointer)L1PageTable);

    /* add region in sram area */
    _mmu_add_vregion((pointer)__INTERNAL_SRAM_BASE,
                     (pointer)__INTERNAL_SRAM_BASE,
                     (_mem_size) 0x00100000,
                     PSP_PAGE_TABLE_SECTION_SIZE(PSP_PAGE_TABLE_SECTION_SIZE_1MB) |
                     PSP_PAGE_TYPE(PSP_PAGE_TYPE_CACHE_NON) |
                     PSP_PAGE_DESCR(PSP_PAGE_DESCR_ACCESS_RW_ALL));

This seems to disable cache for all internal RAM.  Could I just replace PSP_PAGE_TYPE_CACHE_NON above with PSP_PAGE_TYPE_CACHE_WBNWA instead?  Would this jeopardize all the logic that works with memory-mapped registers or cause other issues with MQX that I'm not aware of?  The comment above the _mmu_vinit call seems to indicate that forcing non-cached, strongly-ordered memory is required.  If that's the case, what combination is acceptable here?
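To be concrete, the change I have in mind is just the one attribute in that call.  This is only a sketch, since I haven't confirmed this flag combination is valid for the A5:

    /* Sketch of the proposed edit in _bsp_enable_card(): keep the
       strongly-ordered default from _mmu_vinit and only change the
       internal SRAM region from non-cached to write-back cached. */
    _mmu_add_vregion((pointer)__INTERNAL_SRAM_BASE,
                     (pointer)__INTERNAL_SRAM_BASE,
                     (_mem_size) 0x00100000,
                     PSP_PAGE_TABLE_SECTION_SIZE(PSP_PAGE_TABLE_SECTION_SIZE_1MB) |
                     PSP_PAGE_TYPE(PSP_PAGE_TYPE_CACHE_WBNWA) |   /* was PSP_PAGE_TYPE_CACHE_NON */
                     PSP_PAGE_DESCR(PSP_PAGE_DESCR_ACCESS_RW_ALL));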

And what is the comment "MMU doesn't work with another init configuration" above supposed to mean?  Does it mean the MMU on Vybrid is broken?

ioseph_martinez
NXP Employee

I'm not sure about those comments or why the cache comes disabled by default; I will ask internally.

The OCRAM address range should not interfere with any of the memory-mapped registers, since they are mapped to different locations.

What I would check is how this is done when everything is placed in external RAM (DDR), and copy that, just changing the addresses and range to OCRAM.

paulholmquist
Contributor I

I just modified _bsp_enable_card() as Ioseph suggested, replacing *CACHE_NON with *CACHE_WBNWA for internal RAM, and recompiled the BSP library.

Then I remeasured the GPIO assertion times, which resulted in a much faster execution interval of 2.3 usec between the pins (instead of the 12 usec before).  This confirms that having cache disabled was the issue here.  The only question now is what the consequences of this are with regard to the MQX kernel and the MQX example applications... I'm still looking for documentation on both.

ioseph_martinez
NXP Employee

Paul, I posted this https://community.freescale.com/thread/308619 on the MQX forum. I don't think there should be any problems with the change you made, but I want to confirm with the MQX team.

ioseph_martinez
NXP Employee (accepted solution)

Paul,

Thread https://community.freescale.com/thread/308619 has been answered by the MQX team. The first call, _mmu_vinit, initializes the MMU with default values. As you add more regions with _mmu_add_vregion, those regions get new configuration values from the flags you pass as arguments.

So there should not be any issues with enabling cache on the OCRAM.
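To illustrate the pattern, here is only a sketch -- the __UNCACHED_DATA_START symbol and the second region's base and size are hypothetical placeholders, not something from the BSP:

    /* Sketch: _mmu_vinit sets the strongly-ordered default, then each
       _mmu_add_vregion call applies its own attributes to its region. */
    _mmu_vinit(PSP_PAGE_TABLE_SECTION_SIZE(PSP_PAGE_TABLE_SECTION_SIZE_1MB) |
               PSP_PAGE_DESCR(PSP_PAGE_DESCR_ACCESS_RW_ALL) |
               PSP_PAGE_TYPE(PSP_PAGE_TYPE_STRONG_ORDER),
               (pointer)L1PageTable);

    /* OCRAM mapped write-back cached */
    _mmu_add_vregion((pointer)__INTERNAL_SRAM_BASE,
                     (pointer)__INTERNAL_SRAM_BASE,
                     (_mem_size) 0x00100000,
                     PSP_PAGE_TABLE_SECTION_SIZE(PSP_PAGE_TABLE_SECTION_SIZE_1MB) |
                     PSP_PAGE_TYPE(PSP_PAGE_TYPE_CACHE_WBNWA) |
                     PSP_PAGE_DESCR(PSP_PAGE_DESCR_ACCESS_RW_ALL));

    /* A separate region (for example, buffers shared with DMA) can stay
       uncached; __UNCACHED_DATA_START and the 1 MB size are hypothetical
       and the region would need to be aligned to the 1 MB section size. */
    _mmu_add_vregion((pointer)__UNCACHED_DATA_START,
                     (pointer)__UNCACHED_DATA_START,
                     (_mem_size) 0x00100000,
                     PSP_PAGE_TABLE_SECTION_SIZE(PSP_PAGE_TABLE_SECTION_SIZE_1MB) |
                     PSP_PAGE_TYPE(PSP_PAGE_TYPE_CACHE_NON) |
                     PSP_PAGE_DESCR(PSP_PAGE_DESCR_ACCESS_RW_ALL));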

ChrisNielsen
Contributor III

It's not designed for MQX, but it's possible that this working MMU/cache example for a bare-metal system might help with the debugging:

Vybrid Bare-Metal MMU (Direct Mapping)

https://community.freescale.com/message/326432#326432

Chris

ChrisNielsen
Contributor III

1.2us between pulses is on the quick side even for a 500 MHz A5 (Vybrid).  Can you describe more about the pulse requirements in terms of the min/max time between them?  It might be worth considering one of the many Vybrid peripherals to drive these 2 pulses, especially if the timing needs to be well controlled -- software is notorious for unreliable "fast" GPIO timing :) (even my software!).  Just a thought, Chris