SDRAM Controller in MCF532x/537x - some questions


w_wegner
Contributor III
Hi,

As my CPU is finally running, I am trying to initialize the board and set up CodeWarrior and other tools to communicate with everything. Here I have problems initializing the SDRAM controller.

I can do this for the evaluation board (Cobra5329, SDRAM: K4M283233-HN75 mobile SDRAM) with a modified .cfg file containing the settings I found in the vendor-supplied dBug source code - but there is already a small difference that I do not understand:

Register  SenTec setting  My setting
SDCFG1    0x51211400      0x54211400
SDCFG2    0x54730000      (same)
SDCR      0xC0180002      (same)
SDMR      0x008B0000      0x00890000

Why is SWT2RWP so short, and, much more dubious, why is bit 17 of SDMR set? However, SDRAM access works with both settings!

With my own board (MCF5373 with MT46H16M16LF-75 mobile DDR) I do not get a valid SDRAM initialization: as soon as I make the first access to SDRAM region after initialization, the debug communication gets stuck and I have to restart the TBLCF BDM pod (in fact, the first access gives a result, but after this the communication is not possible any more).

My configuration:

writemem.l 0xFC0B8110 0x40000018 ; SDCS0
writemem.l 0xFC0B8008 0x33611530 ; SDCFG1
writemem.l 0xFC0B800C 0x56570000 ; SDCFG2
; Issue PALL
writemem.l 0xFC0B8004 0xE1082002 ; SDCR
; Issue LEMR
writemem.l 0xFC0B8000 0x80010000 ; SDMR
; Write mode register
writemem.l 0xFC0B8000 0x008D0000 ; SDMR
; Wait a bit
delay 1000
; Issue PALL
writemem.l 0xFC0B8004 0xE1082002 ; SDCR
; Perform two refresh cycles
writemem.l 0xFC0B8004 0xE1082004 ; SDCR
writemem.l 0xFC0B8004 0xE1082004 ; SDCR
writemem.l 0xFC0B8000 0x008D0000 ; SDMR
; enable automatic refresh
writemem.l 0xFC0B8004 0x71082C00 ; SDCR
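For reference, the same init sequence can be expressed as a replayable list of (register, address, value) writes - a minimal Python sketch using the addresses and values from the writemem.l script above (it only formats the commands; it does not talk to hardware):

```python
# Mobile-DDR init sequence from the script above, as plain data.
SDRAM_INIT_SEQUENCE = [
    ("SDCS0",  0xFC0B8110, 0x40000018),  # chip select: base address + size
    ("SDCFG1", 0xFC0B8008, 0x33611530),  # timing configuration 1
    ("SDCFG2", 0xFC0B800C, 0x56570000),  # timing configuration 2
    ("SDCR",   0xFC0B8004, 0xE1082002),  # issue PALL (precharge all)
    ("SDMR",   0xFC0B8000, 0x80010000),  # issue LEMR (extended mode register)
    ("SDMR",   0xFC0B8000, 0x008D0000),  # write mode register
    ("SDCR",   0xFC0B8004, 0xE1082002),  # issue PALL again
    ("SDCR",   0xFC0B8004, 0xE1082004),  # refresh cycle 1
    ("SDCR",   0xFC0B8004, 0xE1082004),  # refresh cycle 2
    ("SDMR",   0xFC0B8000, 0x008D0000),  # rewrite mode register
    ("SDCR",   0xFC0B8004, 0x71082C00),  # enable automatic refresh
]

def as_writemem_script(seq):
    """Render the sequence back into debugger writemem.l commands."""
    return ["writemem.l 0x%08X 0x%08X ; %s" % (addr, val, name)
            for name, addr, val in seq]
```

(The delay between the mode register write and the second PALL is omitted here; it would have to be replayed by whatever tool consumes the list.)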

Another thing I do not understand: what happens with A12 during mode register accesses? It seems to always be "0", but I did not find anything about it in the MCF5373 data sheet.

Thank you for reading and looking forward to any comments,
Wolfgang

PS: my hardware connections are as on the MCF5329-10 CARD ENGINE, which is the MCF5329 evaluation board, AFAIK
18 Replies

SimonMarsden_de
Contributor II

WolfgangW wrote: With my own board (MCF5373 with MT46H16M16LF-75 mobile DDR) I do not get a valid SDRAM initialization: as soon as I make the first access to SDRAM region after initialization, the debug communication gets stuck and I have to restart the TBLCF BDM pod



Possibly a silly suggestion, but are you trying to initialise the SDRAM controller twice - once via the BDM connection using the CFG file, and a second time in your application's initialisation code?

That could lead to the effect you describe, especially if you're running out of SDRAM. The dBUG code checks whether the SDRAM controller is already initialised before doing so again.

Just a thought.

w_wegner
Contributor III
Hi Simon,

sorry if this was not clear:
I only initialize the SDRAM controller once, from the debugger. I am not yet running any code on the coldfire itself.

Thank you for the answer anyways - I think it must be some very silly thing I am overlooking.

Regards,
Wolfgang

w_wegner
Contributor III
Hi list,

I hope somebody reads this although the thread is rather old...

I finally have some more insight on my new hardware with the MCF5373L and Micron MT46H16M16LF. There is no software yet; I am only using the hardware diagnostics of CodeWarrior up to now. As soon as I try to access the SDRAM, I get an access error. Initialisation should be OK according to the CPU and RAM data sheets (I already tried altering the sequence a bit, because the CPU and RAM data sheets differ), but now, after finally getting the logic analyzer connected, I see this sequence for the RAM access:

[initialisation]
1. Row and Bank Active
2. NOP
3. Read
4. Burst Terminate

Depending on my configuration, the number of NOPs between active and Read changes, but there is always a burst terminate immediately following the read in the next clock cycle (before any DQS from the RAM).

I tried with my own settings as well as with the MCF5329 EVB settings (which cannot work, as regular DDR RAM is used there), and I can see some of the expected timing changes, but this behaviour (burst terminate) is always the same.

Can this be normal when accessing the external SDRAM from BDM? As far as I understand the MCF5373L data sheet, the transfer size should always be 16 bytes, resulting in 8 beats on my bus. What might be going wrong here?
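For what it's worth, the beat count follows directly from the line size and the bus width - a quick sanity check (the constants are the ones quoted above):

```python
LINE_SIZE_BYTES = 16   # ColdFire line (burst) transfer size per the data sheet
BUS_WIDTH_BYTES = 2    # 16-bit SDRAM data bus

beats = LINE_SIZE_BYTES // BUS_WIDTH_BYTES   # 16-bit beats per burst -> 8
clocks = beats // 2                          # DDR: two beats (edges) per clock -> 4
```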

Any hints are appreciated!

Regards,
Wolfgang

JWW
Contributor V
Wolfgang,

Do you have an image of the LA waveforms that you can share?
This might be helpful.

The controller will issue a burst terminate on transfers smaller than the line size. This allows the SDRAM controller to save some bus bandwidth by letting another read cycle start earlier, instead of waiting for the burst to complete.

On older DDR controllers, we waited the complete burst length before trying another read.
But on the new generations we can issue burst terminates.

The ColdFire line (burst) length on this family of parts is 16 bytes; you are correct. But only DMA, CPU cache fills, and MOVEM instructions can cause burst cycles on this family. The BDM cannot issue a burst read... if memory serves me correctly...

I would load a MOVEM ASM instruction with four (32-bit) data registers into internal SRAM, then point the PC to that address and single-step. You should then see a burst read on the DDR bus.

Last comment... If the DQS signal from the DDR memories is not properly latched by the 5373, it could cause a hung bus internally and may cause something like a loss of BDM communications... I haven't tried it, but it is possible.

Let me know how it goes...

-JWW

w_wegner
Contributor III
JWW,

thank you for the answer! Now the behaviour of the SDRAM controller, especially with respect to the burst cycles, is clear.

Being impatient, I also sent a message to Freescale support and got a similar answer.

It seems we either have broken boards or a soldering problem. I can measure the signals at the termination resistors, the logic analyzer screenshots from a read and write cycle are attached. "CMD" is composed of the signals CS,RAS,CAS,WE (in this order), so "3" is bank active, "4" is write, "5" is read, and "6" is burst terminate.

We have only had very few prototypes made, and only two of them are running at all so far. Both of them show the same behaviour, and I even had one of them reworked because of the possible soldering problems with the very small BGA components (like the SDRAM). Still, the behaviour is the same.
(RAM ballout is - of course - double-checked. :smileywink: )

We will now have the boards X-rayed and/or have another company populate another prototype, to be sure it is not only a soldering process problem.

Thank you and best regards,
Wolfgang

PS: .png would be nice as a valid attachment type :smileywink:

JWW
Contributor V
Wolfgang,

Nothing funny in your attachments... other than no read DQS signals from the SDRAM. :smileysad:
I agree... you probably have some assembly issue.

But not getting those read DQS signals will definitely cause the 5373 some issues.

Let us know if you get it figured out. I'm sure the forum community would like to know how you debugged the problem.

Good Luck. :smileyhappy:

-JWW

w_wegner
Contributor III
JWW,

thank you for the comment!

I will definitely let you all know about the progress; however, this may take some time because we want to get new boards made for the next assembly (with other problems fixed).

Maybe we will also have one of the old boards reworked one more time to improve our (re-)soldering process, but it depends on the workload of the people in this department.

Alban, thank you for looking at png - I was so used to it that I only stumbled over the forum not accepting my first attachments, and then saw the gifs are about twice in size.

Best regards,
Wolfgang

nspon
Contributor II
I am encountering a very similar problem with an MCF5208 and the same Micron Mobile DDR part. It may not be quite the same; what kills my BDM debugger is two consecutive SDRAM reads. If I alternate reads and writes I can get results. Unfortunately they indicate some kind of burst issue. If I write $12345678, I read back $5678FFFF. I am looking into the SDRAM configuration, but so far haven't found much. Does anyone have the MT46H8M16LF working with the 5208?

I was also wondering if there is any further information on how the GPIO_MSCR registers that control the 1.8V drive strength should be set up. I'm not sure whether half or full strength is appropriate, although I doubt that's causing this problem...

Thanks,
Nigel

JWW
Contributor V
Nigel,

Ok... Some of this seems familiar to some debug I did on a different chip/board combo several years ago... But I could use some more information to narrow down where we need to search.

Could you outline how you have the DDR controller hooked up? Even down to what you think might be the simplest details... We need to make sure you have everything.

At first glance, it looks like you are missing the first DQS edge coming back from the DDR memories during the read cycle. If the 5208 catches the second edge and thinks it is the first edge, you might see the shift in the data pattern. I don't think you are debugging a burst problem yet, as a 32-bit read should always complete - you can't terminate a half cycle, meaning you will get 16 bits on the rising edge and 16 bits on the falling edge. It just looks as if you are getting the later 16 bits when the controller thinks it is getting the first beat of data, and the second DQS edge is latching a floating bus.

Unless a burst terminate is issued to the DDR memories, they will always continue driving DQS edges until the end of the burst cycle. But the ColdFire device stops looking for them after it gets the right count. Meaning: if you need 2 edges (2x16 bits to get your long word) and you miss the first but get the second and third edges, you might get what you are seeing.
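To illustrate (a toy model, not the real controller logic): if the controller starts latching one DQS edge too late, the long word is assembled from the second beat plus a floating, pulled-up bus - which matches the $5678FFFF read-back described above:

```python
FLOATING_BUS = 0xFFFF  # an undriven, pulled-up 16-bit data bus reads as all ones

def read_long(beats, missed_edges=0):
    """Assemble a 32-bit read from a stream of 16-bit DDR beats.

    The controller latches exactly two edges; if it starts `missed_edges`
    too late, it latches later beats and finally the floating bus.
    """
    stream = beats[missed_edges:] + [FLOATING_BUS] * missed_edges
    return (stream[0] << 16) | stream[1]

written = [0x1234, 0x5678]        # 0x12345678 as high beat, low beat
good = read_long(written)          # aligned: 0x12345678
shifted = read_long(written, 1)    # missed first DQS edge: 0x5678FFFF
```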

One thing you can play with is a register setting called read latency. I don't remember the actual "bit field" name offhand, but normally the spec says to set it to 6 or 7. Try moving this value and see if your behaviour changes. This bit field allows the SDRAM controller to mask out "noisy" DQS lines while the bus is tri-stated between bus cycles, which keeps the controller from seeing false DQS edges. By making this number larger or smaller you move the point at which the 5208 starts looking for edges. It is counting in 2x clock cycles: at 83 MHz, it is counting 166 MHz cycles, or 6 ns increments (moving from a setting of 6 to 7 adds 6 ns of delay before the 5208 looks for a DQS edge during a read cycle).
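As a rough sketch of that arithmetic (assuming the 83 MHz bus clock used above):

```python
BUS_CLOCK_HZ = 83_000_000           # example bus clock from the post
TWO_X_CLOCK_HZ = 2 * BUS_CLOCK_HZ   # read-latency counts 2x-clock (166 MHz) cycles

ns_per_count = 1e9 / TWO_X_CLOCK_HZ  # ~6 ns per read-latency increment

def rd_lat_delay_ns(rd_lat, bus_clock_hz=BUS_CLOCK_HZ):
    """Delay after the read command before the controller accepts DQS edges."""
    return rd_lat * 1e9 / (2 * bus_clock_hz)
```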


Sorry, this is a lot of information.

Lastly... Drive strength is pretty simple. Use the smallest that gets the job done. :smileywink:
I vary my methodology from board to board. If I simulate a low-cost PTP (point-to-point) design, then I typically try to use a low drive if one is available on the chip that I'm using. If I'm doing multiple banks (chip selects) of SDRAM, I tend to use a high drive if it is available. Remember, low drive typically means a slower edge rate and less EMI.

Hope this was of some help.

-JWW

nspon
Contributor II
OK.

Our hardware connection is exactly as per the AN2982 application note, page 6. If there is an error in that, we have copied it. We have 22R series resistors on all the control lines and a 100R between SD_CLK and SD_CLKN. SD_DQS2 is hooked to UDQS, SD_DQS3 to LDQS (what happens if those are the wrong way round?).

I'm setting up the SDRAM as per the Micron data sheet: doing a PALL, setting the mode register and the mobile extended mode register (the CW header files don't have a constant for that, incidentally), then doing two IREFs. I have dropped the clock speed back to 120 MHz CPU / 60 MHz bus without any effect. I am setting up the SDRAM and the controller for CAS-2 operation; if I change the RD_LAT field from 6 to 7 (i.e. CAS 2.5), I hang on any read. With RD_LAT at 6 I can read, but I can't do two successive reads (even with a large delay between them). I can, however, alternate reads and writes. If I do this, using 16-bit reads and writes, I consistently get back the values I wrote at addr+1 mod 8: so if I write 0..15 to addresses 0..15, I read back 1,2,3,4,5,6,7,0,9,10,11,12,13,14,15,8.
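The pattern can be described compactly - a hypothetical model of the fault, in which reads within each 8-word burst return the value written one address later, wrapping inside the burst:

```python
def observed_readback(addr):
    # Hypothetical model of the faulty pattern: within each 8-word burst,
    # a read of `addr` returns the value written one address later,
    # wrapping inside the burst (i.e. a one-beat shift of the 8-beat burst).
    return (addr & ~7) | ((addr + 1) & 7)

mem = {a: a for a in range(16)}   # write 0..15 to addresses 0..15
reads = [mem[observed_readback(a)] for a in range(16)]
# reads reproduces the sequence quoted above
```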

Thanks for your comments, this is looking a bit tricky to debug...

JWW
Contributor V
Nigel,

This has that feeling like I've seen this before somewhere... But I'm still banging my head trying to remember where I've seen this.

But I did come up with a couple of comments.

1. Umm... probably should have done the DQS routing the other way - meaning, swap your DQS signals. But at this time I don't think that is your problem, because the SDRAM controller drives all DQS signals during a write cycle and does byte writes using the DM signals to mask the extra byte lane. During reads, the SDRAM provides 16 bits of data on each edge... regardless... and it is 16-bit word addressed. The SDRAM controller just throws away the extra data during the read. I believe the only real reason you should flip those lines on a future design is that the byte lanes are timed to the DQS path: the SDRAM launches data on the upper-order byte lane relative to UDQS and on the lower byte lane relative to LDQS. ColdFire times the read of its upper byte lane to DQS3 and DATA[15:8]. So if you tied the data lines correctly, DQ[15:0] => DATA[15:0], and the DQS lines were flipped, the SDRAM would launch UDQS and the SDRAM controller would latch DATA[7:0] relative to UDQS instead of LDQS. This could affect your timing. But honestly, in your system and at these speeds, that shouldn't be an issue. Your data valid window should be huge at 60 MHz :smileyhappy:

But in case you ever do a high speed design, you would want to watch the DQS and byte lane alignment as the SDRAMs can have some skew between each lane.

One quick suggestion... Ignore the datasheet... :smileywink: Try a rd_latency of 5, and let me know if that changes the behaviour. This really feels like a missing-DQS problem: if the DQS is too short or missing, the controller gets the wrong count and doesn't terminate the internal bus cycle. On some architectures, a write cycle can free this problem.

One other quick suggestion.. In your config.. What burst length are you setting the SDRAM controller to? And what burst length are you setting the SDRAM (Mode reg) to?



-JWW

nspon
Contributor II
Well,

It looks as though I have found a solution; it just doesn't make any sense. I was setting the SDRAM mode register to a CAS latency of 2. This seemed sensible because I had RD_LAT set to 6, meaning CL=2. Out of curiosity I tried setting the SDRAM mode register to CAS=3 - and suddenly everything is working. Is it possible that Freescale and Micron disagree on the meaning of CAS latency? If so, it's a rather fundamental thing to have so ill-defined. If that isn't what has happened, why does this work?

And yes, we do have DQS and DQM round the wrong way; we'll patch the board to fix that. But it doesn't sound as though that is going to make CL=2 work.

Burst lengths are 8 on both the SDRAM mode register and the controller. It's all very mysterious.

Thanks,

Nigel.

JWW
Contributor V
Nigel,
 
The SDRAM controller really has no idea about CAS latency.  This is a SDRAM memory concept.  I know this sounds weird...but give me a chance to explain.
 
The SDRAM controller only cares about DQS edges. But because the DDR standard allows for a parallel-terminated bus, and the DQS lines are bi-directional, the SDRAM controller has to be told roughly where in "time" the SDRAM memory will start driving the DQS signal back to the controller. The point we care about is the "preamble"; there is a range of acceptable values for the size of the preamble, and the results actually vary a little from vendor to vendor.

The RD_LATENCY setting in the SDRAM controller is really a timer that tells the SDRAM controller to wait, after issuing a read command, for an incoming DQS. This effectively masks off incoming DQS edges, so they cannot cause false data latches, until the timer expires. The value of "6" is a value that on most systems allows the SDRAM controller's timer to expire such that the read preamble has started and the DQS bus is driven to a "low" state. The SDRAM controller then waits for the DQS edges in order to latch data.

So you see, the SDRAM controller really doesn't care about CAS latency, other than it needs to know where in time the SDRAM memory will start driving the DQS lines. Ok... now you say: isn't that the same thing? Ah... this is the catch. (By the way, the previous example is true as well for a setting of "7" for CAS 2.5.)
 
The SDRAM controller's timer is counting in 2x clocks. That is why bumping the counter to "7" is the typical setting for CAS 2.5. A CAS 2.5 setting causes the SDRAM to wait a half cycle longer before driving data, so a setting of "7" in the SDRAM controller makes it wait one extra 2x clock (0.5 of a 1x clock), which gives that little extra delay you need for CAS 2.5.
 
The values of "6" and "7" work fine for CAS 2 and CAS 2.5 most of the time. The whole system depends on a few delays (the catch): the output delay of the command telling the SDRAM you want to read, the turnaround time in the SDRAM plus the delay caused by the programmed CAS latency, and then the propagation delay back to the SDRAM controller.
 
So why aren't RD_LATENCY and CAS latency the same thing? Because... if you routed a board and placed the processor 10 inches from the SDRAM, you would incur something on the order of 1.8 ns of trace delay due to propagation (assume 180 ps per inch). On fast DDR systems (yours doesn't really fit this mold, but the controller is designed for faster systems on other ColdFire parts), an additional 1.8 ns might cause the round-trip time from launching a read command to latching the data to take longer than "6" counts of the timer. In this case you can add an extra count. This allows the controller to adapt to a variety of trace lengths. The reason "6" and "7" work most of the time is that most people have neither extremely long nor extremely short trace lengths; at 180 ps per inch it takes quite a bit of trace length to add up to anything.
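That arithmetic, as a small sketch (180 ps/inch is the figure assumed above):

```python
PROPAGATION_PS_PER_INCH = 180  # typical PCB trace delay used in the post

def trace_delay_ns(length_inches):
    """One-way propagation delay for a PCB trace of the given length."""
    return length_inches * PROPAGATION_PS_PER_INCH / 1000.0

long_route = trace_delay_ns(10)   # ~1.8 ns for a 10-inch route
```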
 
Now for your system... Did you try rd_latency "5" before trying CAS 3?
What is your average trace length? My guess is that you have the opposite problem: your trace length is probably very short, since I think you increased CAS latency without changing RD_LAT. Am I correct? I suspect that the SDRAM is driving the DQS before the timer expires, and you are missing the first edge. The SDRAM controller gets confused when it doesn't get enough DQS edges to match the burst. The cycle completes internally, but the SDRAM controller is still waiting for more edges to clear its state machine. When you do a write cycle, the DQS output, driven by the processor during writes, will cause the DQS read state machine to effectively reset itself. Then you can perform another read.
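Putting the rule of thumb in one place (a sketch of the mapping described above; `suggested_rd_lat` is just an illustrative name, and short traces may want a smaller value, as this thread shows):

```python
def suggested_rd_lat(cas_latency):
    """Typical datasheet RD_LAT for a given CAS latency: RD_LAT counts
    2x clocks, so each extra half bus clock of CAS latency adds one count
    (baseline of 6 at CAS 2, per the values quoted in this thread)."""
    return 6 + int((cas_latency - 2) * 2)
```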
 
I hope this helps you and others reading this post. 
 
-JWW
 
 

nspon
Contributor II
That is most interesting, and you are quite right about the latency. Setting the RAM to CAS-2 and the rd_lat field to 5 also works, with a slight performance boost. Our trace lengths are indeed very short, on the order of 25 mm. I hope some of this information can be added to the manual in the future, as it is too easy to take what is there as gospel when you don't actually know what is going on.

Many thanks for your assistance,

Nigel

w_wegner
Contributor III
Hi everybody,

finally, I got my SDRAM working!

The main problem definitely was a manufacturing problem on all of our first 5 prototypes. We have now made two new prototypes with slightly modified SDRAM BGA pads, and the manufacturing department changed the temperature profile; this already gave me some reaction from the SDRAM (DQS after the first read access).

With my "theoretical" configuration settings, the accesses still fail, but with a mixture of the original Freescale settings (for a completely different RAM chip) and my settings, I can finally access the Micron RAM in read and write mode.

I have attached the CodeWarrior Hardware Diagnostics .cfg file that works for my MT46H16M16LF. Please be aware that the timing settings are most probably completely wrong!
Now I only have to sort out the timing settings, and I should be set to start the "real" work. :smileywink:

Thank you for the help here in the forum!

Best regards,
Wolfgang

w_wegner
Contributor III
Sorry I forgot to mention:

Above in the thread, JWW asked me to tell the forum how I debugged this problem.

Actually, with the first prototypes, I debugged it by connecting the logic analyzer to the SDRAM connections, getting the waveforms seen in the two attachments of post 7. The only thing I could see there was that DQS was missing during the read access to the SDRAM - without any experience with SDRAM, I could not really be sure whether this could also be caused by wrong register settings, or if some connection to the SDRAM was definitely missing.
Fortunately, JWW told me that this was caused by missing signals and thus assembly problems, so I could argue for getting new prototypes made.

The main problem is that it is simply impossible (at least for me) to measure directly at the SDRAM balls, and even connecting the analyzer to the series termination resistor packs took me far more than half a working day with several failed attempts.

With the second batch of prototypes, I only tried different software settings, and got the SDRAM working in less than a day, so no real debugging here. After all, I could avoid the pain of making these test connections again, and I have to admit I am really glad about that!

I do not know if this is helpful for anybody, but wanted to leave it here for reference after finally being successful.

Regards,
Wolfgang

w_wegner
Contributor III
Just to complete this:

After being busy with some other things, I finally found out which setting causes the problems.

When setting RD_LAT (bits 23..20) of SDCFG1 to a value of 6 (as stated in the data sheet for a CAS latency of 2 in DDR mode), the first read access seems to work (with a wrong result), and subsequent accesses fail. When setting it to 7, not even the first access works.

Settings of 2 through 5 give me a working SDRAM.

I now have all the register contents set to my calculated values for this DDR SDRAM except RD_LAT, and everything seems fine with these settings:
writemem.l 0xFC0B8008 0x33211530 ; SDCFG1
writemem.l 0xFC0B800C 0x56570000 ; SDCFG2

Is this the same problem as Nigel had? The trace length of our board is quite short - I did not measure it, but as CPU and RAM are only some millimeters apart, I would guess around an inch.

What is confusing is that nothing is stated about the meaning of RD_LAT in the MCF537x data sheet; there are only fixed values for each CL setting... Maybe one could add a small note that the setting depends on other parameters, too? Or am I still doing something wrong?

Best regards,
Wolfgang

Alban
Senior Contributor II
Hello Wolfgang,
 
I will look into the .png to see if/when I can add it on the Forum globally.
When it happens, I'll post an "allowed extension update" in the General Board.
 
Point taken!
 
Cheers,
Alban.