MBDT with Vehicle Network Toolbox build error - Dual CAN RX


1,253 Views
andyknitt
Contributor II

I am building on the FlexCAN Traffic example in the MBDT to create a model that receives multiple CAN messages. I am using a method very similar to the one Razvan happened to post here.

However, I am also trying to use the Simulink Vehicle Network Toolbox to unpack the data in the CAN messages into usable signals using a .dbc file. 

[Attached image: Example.png]

In some scenarios I get a build error indicating that the stack and heap are overlapping:

  • Single CAN message RX with multiple signals being unpacked - builds ok
  • Two CAN message RX with a single signal being unpacked from each message - builds ok
  • Two CAN message RX with one signal being unpacked from one message and multiple signals being unpacked from the other message - build fails with error

If I don't use the Vehicle Network Toolbox and instead manually unpack the data from multiple CAN messages, I don't seem to have a problem, although I haven't tested that extensively yet.

Examples of each of the above scenarios are attached, along with a build log showing the error I get in the third scenario.  Can someone offer some insight as to what is causing the stack/heap issue and how to avoid it when using the vehicle network toolbox for CAN message unpacking?

Thanks,

Andy K.

3 Replies

942 Views
constantinrazva
NXP Employee

Hello andyknitt‌,

From what I can see, the problem is that you are running out of memory. Let me go into some detail about our linker file.

At the top you can find the heap and stack sizes defined:

HEAP_SIZE = DEFINED(__heap_size__) ? __heap_size__ : 0x00000400;
STACK_SIZE = DEFINED(__stack_size__) ? __stack_size__ : 0x00000400;

After those, the memory areas are defined (m_data_2 is the region relevant to your issue):

/* Specify the memory areas */
MEMORY
{
/* Flash */
m_interrupts (RX) : ORIGIN = 0x00000000, LENGTH = 0x00000400
m_flash_config (RX) : ORIGIN = 0x00000400, LENGTH = 0x00000010
m_text (RX) : ORIGIN = 0x00000410, LENGTH = 0x0003FBF0

/* SRAM_L */
m_data (RW) : ORIGIN = 0x1FFFE000, LENGTH = 0x00002000

/* SRAM_U */
m_data_2 (RW) : ORIGIN = 0x20000000, LENGTH = 0x00001000
}


Going further, we can see that m_data_2 comprises the following sections:

  • customSectionBlock
  • bss
  • heap
  • stack

The first three sections are placed one after the other in memory, while the stack is placed starting from the end of m_data_2 and grows downward toward the heap.

At the end we have the following assert:

ASSERT(__StackLimit >= __HeapLimit, "region m_data_2 overflowed with stack and heap")

which checks that the stack and heap do not overlap. In your case the .bss gets so big that it pushes the heap into the stack. To work around this you can change HEAP_SIZE or STACK_SIZE, depending on what you need. When you are not using that toolbox, the .bss is smaller, so the heap and stack do not overlap.
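To make the check concrete, here is a small shell sketch of the same arithmetic. The .bss size of 0x810 is a hypothetical example value; HEAP and STACK are the defaults from the linker file, and 0x1000 is the m_data_2 LENGTH shown above:

```shell
# Illustrative overlap check for m_data_2 (LENGTH = 0x1000).
# BSS is a hypothetical .bss size; HEAP/STACK are the linker-file defaults.
SRAM_U_LEN=$((0x1000))
BSS=$((0x810))
HEAP=$((0x400))
STACK=$((0x400))
USED=$((BSS + HEAP + STACK))
if [ "$USED" -gt "$SRAM_U_LEN" ]; then
  echo "region m_data_2 overflowed with stack and heap ($USED > $SRAM_U_LEN)"
else
  echo "fits ($USED <= $SRAM_U_LEN)"
fi
```

With these example numbers, 0x810 + 0x400 + 0x400 = 4112 bytes, which exceeds the 4096 bytes available, so the linker's ASSERT fires.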

If you want to change the heap/stack sizes, the file you'll want to edit is found here:

{ROOT_DIR}\mbdtbx_s32k14x\src\linker\gcc\S32K142_16_flash.ld       

(or _32_flash.ld at the end, depending on what SRAM you have on your S32K - you can select the option from the main configuration block - MCU tab - SRAM).

I did some tests and found that your application Dual_CAN_Test_Build_Fails works with the following adjustments:

HEAP_SIZE = DEFINED(__heap_size__) ? __heap_size__ : 0x00000400;
STACK_SIZE = DEFINED(__stack_size__) ? __stack_size__ : 0x000003F0;

So with just a slightly lower value for STACK_SIZE, you can make it work. But when you build a more complex model that puts more data in .bss, you will probably hit the same error again. I can't really say how it is best for you to manage the heap/stack/bss sizes.

P.S.: in the <model_name>_rtw folder generated when building a model, you can find the linker file under the name of S32K14x.ld, regardless of the selected MCU.

Hope you find this helpful,

Razvan.


942 Views
andyknitt
Contributor II

Thank you Razvan.  Pardon my ignorance, but is there an output of the build process that would allow me to determine how much bss data my model is using so that I can compare different models to try to find the root cause of the excessive usage?

Regards,

Andy


942 Views
constantinrazva
NXP Employee

Hello andyknitt‌,

From the generated .map file (<model_name>.map) you can see the addresses of the following symbols:

  • __BSS_START
  • __BSS_END
  • __heap_start__
  • __heap_end__
  • __stack_start__
  • __stack_end__

From there, simply subtract *START from *END to get the size of each of these regions.
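For example, with hypothetical addresses read out of a .map file (yours will differ), the subtraction looks like this in a shell:

```shell
# Hypothetical symbol addresses taken from <model_name>.map; yours will differ.
BSS_START=$((0x20000000))
BSS_END=$((0x20000810))
HEAP_START=$((0x20000810))
HEAP_END=$((0x20000c10))
echo "bss:  $((BSS_END - BSS_START)) bytes"    # prints "bss:  2064 bytes"
echo "heap: $((HEAP_END - HEAP_START)) bytes"  # prints "heap: 1024 bytes"
```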

It might also make sense for your application to move sections from SRAM_U (m_data_2) to SRAM_L (m_data). From what I can see, you have ~80% of SRAM_L free. Putting the whole .bss section into SRAM_L (m_data) would leave SRAM_U (m_data_2) entirely for the heap and stack. Again, this depends on what your application ends up looking like.
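As a rough illustration only, moving .bss amounts to changing its output region in the linker file's SECTIONS part from m_data_2 to m_data. This is a generic GNU ld sketch; the actual section layout and symbol names in the MBDT linker file may differ, so check the real .bss output section before editing:

```
/* Sketch only: place .bss in SRAM_L (m_data) instead of SRAM_U (m_data_2).
   The real MBDT linker file's .bss output section may look different. */
.bss :
{
  . = ALIGN(4);
  __BSS_START = .;
  *(.bss)
  *(.bss*)
  *(COMMON)
  . = ALIGN(4);
  __BSS_END = .;
} > m_data   /* was: > m_data_2 */
```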

You can also use readelf if you have something like Cygwin installed, or are on a UNIX-like OS:

readelf --section-details Dual_CAN_Test_Build_Fails.elf

If you run the command above, you'll get all sections' information - address, offset, size, etc.

If you want to get only sections from SRAM you can use grep to match 1fff or 2000, depending on which SRAM you want to view:

readelf --section-details Dual_CAN_Test_Build_Fails.elf | grep -B1 2000

This will give you something like the following:

$ readelf --section-details Dual_CAN_Test_Build_Fails.elf | grep -B1 2000
[10] .bss
NOBITS 20000000 018000 000810 00 0 0 8
[11] .heap
NOBITS 20000810 018000 000400 00 0 0 1
[12] .stack
NOBITS 20000c10 018000 0003f0 00 0 0 1

The header of the information is the following:

Type            Addr     Off    Size   ES   Lk Inf Al
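As a sanity check, the three Size values in the listing above (hex: .bss 0x810, .heap 0x400, .stack 0x3f0) sum to exactly the 0x1000 bytes of SRAM_U, which is why the reduced STACK_SIZE of 0x3F0 just barely fits:

```shell
# Sum the section sizes from the readelf listing above.
echo $((0x810 + 0x400 + 0x3f0))  # prints 4096
echo $((0x1000))                 # SRAM_U length from the linker file, also 4096
```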

Hope this helps,

Razvan.
