Hi,
The timing calculation we shared above is already based on JEDEC. Since you keep pointing us to the LPDDR4 JEDEC spec, we have checked it again.
To repeat, our issue is that we cannot match the timing we calculate from the spec with the observed values.
Could you be more specific about what you think we have missed in the JEDEC spec?
Below I share some more information about the calculation and the observation.
Can you please go through it and help us find the reason for the difference?
Looking at the DDR Performance Monitor Unit data, we see the following.
For every 8-byte uncached read, there is one read command, one activate command, and one precharge command issued to the DDR.
A refresh command is issued roughly once per 50 read commands.
Based on this, the major contributors to the time to read 8 bytes are:
tRCD (activate to read), tRL (read latency), tRP (precharge to activate), and the burst read time.
We read the DDR timing registers in the DDRC to get these values:
tRCD = 15, tRL = 14, tRP = 15, all in clocks,
plus 16/2 = 8 clocks for the read burst (BL16 on a double-data-rate bus).
With a 1.6 GHz clock (625 ps period) the total is 15 + 14 + 15 + 8 = 52 clocks:
52 x 625 ps = approx. 32.5 ns (neglecting refresh time).
The observed value is approximately four times this, around 140 ns.
We are not able to find the reason for this difference.
Looking at the DDR performance monitor unit data, there is not much other DDR traffic during the measurement window.
We can reproduce this with a memcpy on buffers allocated using imxdmabuffers; I have attached a small program that does this.
Can you check the above data and tell us the possible reason for the mismatch between the calculated and observed values, or what we may have missed in the calculation?