Code is slower executing from tightly-coupled (ITC) memory


expertsleepers
Contributor III

As an experiment, I tagged a fairly costly function with

__RAMFUNC(SRAM_ITC)

To my surprise, it ran about 10% slower. Could anyone share any insight on why that might be?

This is on an iMXRT1062, running from flash (XIP) if not running from ITC.
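
For context, a minimal sketch of the setup described above, assuming an MCUXpresso project whose managed linker script provides a memory bank named SRAM_ITC together with the cr_section_macros.h helpers; the function name and body are illustrative only:

#include "cr_section_macros.h"

/* Placed into ITCM by the managed linker script; the startup code copies it
   from flash into ITCM before main() runs. */
__RAMFUNC(SRAM_ITC) float costly_function(const float *in, int n)
{
    float acc = 0.0f;
    for (int i = 0; i < n; ++i) {
        acc += in[i] * in[i];   /* stand-in for the real work */
    }
    return acc;
}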

 

10 Replies
Masmiseim
Senior Contributor I

Hello @expertsleepers,

Does the code you execute from ITCM call functions that are located in flash or SDRAM/OCRAM? Such calls become more expensive, because the linker inserts a veneer function.
With standard library functions it is also not always obvious where they are located.

Regards

expertsleepers
Contributor III

This was indeed it.

The full situation was:

  • The code uses a lot of double precision maths.
  • The project was set to use a single precision floating point ABI, and so was full of function calls instead of .f64 operations.
  • When the code was put in ITCM every one of those function calls went through a veneer.

After selecting a double precision ABI (see the sketch after this post), the function is now twice as fast and doesn't slow down when in ITCM. It doesn't get any faster either, but I'm sure there are more mundane reasons for that.

I find it odd that the project was created with the wrong ABI - it was created from an iMXRT1062 template.

This thread was useful regarding the FP ABI:

https://community.nxp.com/t5/i-MX-RT/FPU-Type-options-for-MCUXpresso-for-double-precision-floating/m...
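
To illustrate the effect described above, a sketch (not code from the original project; the flag spellings are the usual arm-none-eabi-gcc ones and may be labelled differently in the MCUXpresso project settings):

/* With -mcpu=cortex-m7 -mfloat-abi=hard:
 *
 *   -mfpu=fpv5-sp-d16 (single-precision FPU): the multiply below compiles
 *       to a call to the libgcc helper __aeabi_dmul, and when this code
 *       sits in ITCM that call may additionally go through a long-branch
 *       veneer to reach the library code in flash.
 *
 *   -mfpu=fpv5-d16 (double-precision FPU): the multiply compiles to a
 *       single VMUL.F64 instruction; no call, no veneer.
 */
double scale(double x, double k)
{
    return x * k;
}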

 

Masmiseim
Senior Contributor I

Hello @expertsleepers,

The FPU setting for all M7 cores of the iMXRT family should be FPv5-D16. The only exception is the iMXRT1011, where it must be FPv5-SP-D16.
For the M4 and M33 cores it is also FPv5-SP-D16.

The reason it is not faster in ITCM than when executing from flash is probably that the function is small enough to fit completely into the cache.
However, if the code base grows, cache thrashing occurs, and the speed at which the function executes is no longer deterministic.

Regards
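
One way to quantify this is the DWT cycle counter. A minimal sketch, assuming a CMSIS device header is available (fsl_device_registers.h is the usual umbrella header in MCUXpresso SDK projects); the function under test is hypothetical, and on some Cortex-M7 setups the DWT may additionally need unlocking via its lock access register:

#include <stdint.h>
#include "fsl_device_registers.h"   /* pulls in the CMSIS core definitions */

extern float costly_function(const float *in, int n);   /* hypothetical function under test */

uint32_t measure_cycles(const float *in, int n)
{
    /* Enable trace and the cycle counter. */
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;
    DWT->CYCCNT = 0u;
    DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;

    (void)costly_function(in, n);

    return DWT->CYCCNT;   /* cycles spent in the call, plus a small overhead */
}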

expertsleepers
Contributor III
Ah yes, it could be that.
Just to be clear, are function calls from ITC to other ITC functions still fast?
Masmiseim
Senior Contributor I

Hello @expertsleepers,

If the function to be called is at most four megabytes away in the address space, a direct branch (BL) can be used without a veneer.

Regards
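
Put differently, if caller and callee are both placed in ITCM they are well within that range of each other. A sketch, reusing the hypothetical __RAMFUNC(SRAM_ITC) placement from above:

#include "cr_section_macros.h"

__RAMFUNC(SRAM_ITC) float square(float x)
{
    return x * x;
}

__RAMFUNC(SRAM_ITC) float sum_of_squares(const float *in, int n)
{
    float acc = 0.0f;
    for (int i = 0; i < n; ++i) {
        acc += square(in[i]);   /* ITCM-to-ITCM: a plain BL, no veneer (unless inlined away) */
    }
    return acc;
}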

Omar_Anguiano
NXP TechSupport

It is possible that the function accesses data located in another memory, which introduces wait states into the execution.

Best regards,
Omar
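
Relatedly, frequently used working data can be moved next to the code so an ITCM routine is not stalled waiting on OCRAM or SDRAM. A sketch, assuming the same MCUXpresso cr_section_macros.h helpers and a DTCM bank named SRAM_DTC (bank names vary between projects; check the managed linker script), with an illustrative buffer:

#include <stdint.h>
#include "cr_section_macros.h"

/* Zero-initialised working buffer placed in DTCM rather than OCRAM/SDRAM;
   the name and size are illustrative. */
__BSS(SRAM_DTC) static int32_t work_buffer[1024];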

expertsleepers
Contributor III

But would that not also be true if the function was executing from flash?

The function accesses OC SRAM and external DRAM. Both will cause waits, I'm sure, but I can't see how that would make ITC slower than XIP.

 

Juozas
Contributor III
Interesting. Would you be able to provide a test case? I'd like to reproduce this behaviour.
expertsleepers
Contributor III

Unfortunately the code in question is part of a very large project. I'd have to try to isolate it into a fresh project - which of course may not exhibit the same behaviour. I'll post if I manage to create a small test case.

 

Juozas
Contributor III
I understand. Following this topic.