RAM banks

Content originally posted in LPCWare by charchar on Sat Jan 28 09:09:52 MST 2012
Figure 5 in the LPC4300 User Manual shows the AHB matrix connections. I have some questions: is there anything special about "AHB SRAM" vs. "local SRAM"? For example, do they have the same access speed? I see that they are attached to the M4 buses differently; namely, the AHB SRAM is attached to the system bus but not to the I and D buses.

But more specifically, I'm wondering whether the following two scenarios are the same from a performance perspective:

1) Data in the 72 KB local SRAM bank
   Code in the 128 KB local SRAM bank

2) Data in the 16 KB AHB SRAM bank
   Code in the 128 KB local SRAM bank

I'm assuming that in the second scenario the data in the 16 KB AHB SRAM bank can be accessed over the system bus at the same rate as in the first scenario, where the data is in the 72 KB local SRAM bank connected to the D bus. (i.e., I'm a little unclear about the system bus vs. the D bus.)
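
For concreteness, the placement I have in mind would look roughly like this with GCC section attributes. The section names below are just placeholders I made up; they would have to match regions defined in the linker script for the 72 KB local, 16 KB AHB, and 128 KB local banks.

#include <stdint.h>

/* Data buffer: map this placeholder section to the 72 KB local SRAM bank
   (scenario 1) or to the 16 KB AHB SRAM bank (scenario 2) in the linker script. */
__attribute__((section(".data_bank_under_test")))
static uint32_t sample_buf[1024];

/* Code: map this placeholder section to the 128 KB local SRAM bank in both
   scenarios (the startup code / linker script is responsible for getting the
   code into RAM before it runs). */
__attribute__((section(".code_in_ramloc128")))
void process_samples(void)
{
    for (uint32_t i = 0; i < 1024; i++) {
        sample_buf[i] *= 2;
    }
}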

Thanks!

2 Replies

Content originally posted in LPCWare by charchar on Mon Jan 30 13:06:37 MST 2012
Thanks!

Another question is about simultaneous access--- in section 3.5 of the user manual:

"When two or more bus masters try to access the same slave, a round robin arbitration scheme is used; each
master takes turns accessing the slave in circular order."

This sounds good: if the M4 and M0 want to access the same RAM bank and one request arrives first, that master gets access first. But since the RAM is zero wait-state and the M4 and M0 run from the same clock, a truly simultaneous access is the only case that actually needs arbitration. Is there a fixed priority that gets applied in that case?



Content originally posted in LPCWare by atomicdog on Sun Jan 29 13:34:10 MST 2012
From reading the book "The Definitive Guide to the ARM Cortex-M3", I believe that in both scenarios the data and code accesses happen simultaneously, since code fetches and data accesses use separate buses on the core.

Data accesses can be given higher priority than code accesses for better performance when both target the same memory region. I assume NXP provided separate local SRAMs (memory regions) for increased performance.

I don't know anything about the bus access speeds, though, so one scenario may still be faster even though the data/code accesses are simultaneous in both.
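
If the actual difference matters, it could also just be measured with the M4's DWT cycle counter, something like the sketch below. The device header name and the section placeholder are assumptions on my part (they depend on the CMSIS package and linker script in use); the DWT and CoreDebug registers come from the standard core_cm4.h definitions.

#include "LPC43xx.h"   /* assumed CMSIS device header for the LPC4300 parts */
#include <stdint.h>

/* Map this placeholder section to the bank under test (72 KB local SRAM in
   one build, 16 KB AHB SRAM in the other) in the linker script. */
__attribute__((section(".ram_under_test")))
static volatile uint32_t test_buf[256];

static uint32_t time_fill(void)
{
    uint32_t start = DWT->CYCCNT;
    for (uint32_t i = 0; i < 256; i++) {
        test_buf[i] = i;
    }
    return DWT->CYCCNT - start;   /* elapsed M4 core cycles */
}

int main(void)
{
    /* Enable the Cortex-M4 DWT cycle counter. */
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;
    DWT->CYCCNT = 0;
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;

    volatile uint32_t cycles = time_fill();  /* compare the two builds */
    (void)cycles;                            /* inspect in the debugger */
    for (;;) { }
}

Running the same loop with code always in the 128 KB bank and the buffer moved between the two banks would show directly whether the system-bus path costs extra cycles.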