I am having problems figuring out some of the details from the latest LS1021A data sheet, so I have some questions:
1) If I want to do a synchronous interface with the highest possible bandwidth, it seems like I need to use the GPMC mode.
2) Everything is based on the "Module Clock", which appears to be the "Platform Clock" in the reference manual. I have not been able to find the specification for how this clock must be configured: I have seen references to it being a 300 MHz clock and others to it being a 400 MHz clock. How do I determine what the clock speed will be? I'm guessing 300 MHz from the docs in the SDK, but I am not sure I understand the constraints.
3) The data sheet seems to indicate that the max IFC clock is 100 MHz. Is this correct? The reference manual seems to indicate that the IFC clock can be the module clock divided by 2, which would be 150 MHz. So is this a case where the actual chip implementation has more stringent requirements than the IP (maybe because of the speed of the pin drivers or something)?
4) I am trying to determine the maximum bandwidth that can be achieved through the IFC port.
So far I have come up with the following:
If CLKDIV = 1 (divide by 2), then the IFC_CLK would be 150 MHz.
So I think you could set:
TEADC = 2
TEHAC = 2
TACSE = 1
TCS = 1
TCH = 1
TWP = 2
TACO = 1
TRAD = 1
TAAD = 2
TRHZ = 0 (20 clocks)
AM = 0xffff (64kB bank)
BURST_LEN = 7 (128 beats)
If I understand the waveforms correctly, this would mean that for a full write burst, it should take:
TEADC + TACSE + TCS + 128 × TWP + TCH = 2 + 1 + 1 + 256 + 1 = 261 module clocks
to transfer 128 × 2 = 256 bytes. This would give an efficiency of 256/261 ≈ 98%, and if the module clock is 300 MHz, this should give a write performance of about
294 MB/s ≈ 2.35 Gb/s
If the IFC_CLK is really limited to 100 MHz, then the throughput should drop to about 2/3 of that:
196 MB/s ≈ 1.569 Gb/s
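To sanity-check the write-burst numbers, here is a small sketch that just reproduces the arithmetic above. The timing parameters and clock rates are the ones I assumed (TWP = 2, 128-beat bursts on a 16-bit port, 300 MHz module clock); nothing here comes from the data sheet itself.

```python
# Assumed IFC GPMC write timing parameters (in module clocks) from my analysis.
TEADC, TACSE, TCS, TCH, TWP = 2, 1, 1, 1, 2
BEATS = 128          # BURST_LEN = 7 -> 128 beats
BYTES_PER_BEAT = 2   # 16-bit port

# Total clocks for one full write burst, per the waveform reading above.
write_clocks = TEADC + TACSE + TCS + BEATS * TWP + TCH
bytes_moved = BEATS * BYTES_PER_BEAT

MODULE_CLK = 300e6   # assumed 300 MHz module clock

# Throughput if the timing runs at the module clock rate (IFC_CLK = 150 MHz case),
# and the ~2/3 derating if IFC_CLK is really capped at 100 MHz.
throughput_150 = bytes_moved / write_clocks * MODULE_CLK
throughput_100 = throughput_150 * (100 / 150)

print(write_clocks)                  # 261 module clocks
print(round(throughput_150 / 1e6))   # ~294 MB/s
print(round(throughput_100 / 1e6))   # ~196 MB/s
```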
On the read side, I think it should take:
TEADC + TACSE + TACO + TRAD + 128 × TAAD = 2 + 1 + 1 + 1 + 256 = 261 module clocks
which works out to the same as the write case.
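The read-side count can be checked the same way, again using my assumed parameter values rather than anything confirmed by the documentation:

```python
# Assumed IFC GPMC read timing parameters (in module clocks) from my analysis.
TEADC, TACSE, TACO, TRAD, TAAD = 2, 1, 1, 1, 2
BEATS = 128

read_clocks = TEADC + TACSE + TACO + TRAD + BEATS * TAAD
print(read_clocks)  # 261 -- same as the write burst
```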
4a) Is this analysis correct (or even close)? Unfortunately, the documentation is pretty abstract.
4b) Does the TRHZ apply when using the GPMC machine?
4c) If AM = 0xffff (i.e. a 64 kB bank), does the IP forgo the MSW ALE address cycle on the port in GPMC mode?
4d) What is required to actually get a full burst transfer?
4e) Are there any hidden transaction times that I am missing?
(4d) is especially important: since the overhead is 15 or 16 cycles per burst, if I can't actually get full bursts, the efficiency will go way down. Does the IFC/GPMC buffer transactions and do a burst based on the buffered values? Do I need to use a DMA burst to the memory area to ensure that the GPMC does a full burst?
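To show why full bursts matter so much, here is a quick sensitivity sketch. It assumes a fixed per-burst overhead of 16 clocks (the upper end of my estimate above) and 2 data clocks per beat (TWP = 2); both numbers are my assumptions, not vendor figures.

```python
# Assumed cost model: each burst pays a fixed overhead plus 2 clocks per beat.
def efficiency(beats, overhead=16, clocks_per_beat=2):
    """Fraction of bus clocks spent moving data for a burst of `beats` beats."""
    data_clocks = beats * clocks_per_beat
    return data_clocks / (overhead + data_clocks)

for beats in (128, 32, 8, 1):
    print(beats, round(100 * efficiency(beats)))
# 128 beats -> 94%, 32 -> 80%, 8 -> 50%, single beats -> 11%
```

So if the controller fragments my 128-beat bursts into single-beat accesses, the bus efficiency collapses from roughly 94% to roughly 11% under this model, which is why I want to know how to guarantee full bursts.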