I have written an example application that is initializing and using the SDRAM on the RT1052-EVK.
The program should be executing directly from flash using the xip driver.
The problem is that when I perform a write/read test of the full 32 MB SDRAM, a few addresses consistently come back changed outside of my execution logic. The same addresses are incorrect across multiple debug sessions, so I assume they are being changed programmatically.
I initialize the SDRAM in my code, but I see some online references indicating that the debugger connection still initializes SDRAM itself (even when using xip?). I don't see any connect script specified for the debugger (in this case the OpenSDA LinkServer).
If SDRAM is initialized by the debugger, I was trying to determine what the SDRAM memory section was being used for outside of my main execution, and if this could possibly be what is changing SDRAM.
Attached is my memory section config, my linker settings for heap/stack and an example of some addresses that fail the test.
Any tips are appreciated!
I've just been going through the SDRAM init as well, and I couldn't quite tell if you're aware, but when you include the xip C files (using the "XIP_EXTERNAL_FLASH" symbol), a bunch of boot configuration is added to the start of the flash image. This does exactly the same thing as the 1050RT_SDRAM_Init.scp script, so if you are booting from flash there is no need to use that script. There is arguably no need to init the SDRAM in your code at all; it is already initialised. This applies even when using the debugger.
The data is located in fsl_flexspi_nor_boot.c and is hard-coded in the const uint8_t dcd_sdram array. If you look closely at the values in 1050RT_SDRAM_Init.scp you can find the same numbers in the array. I also confirmed this by checking the semc register values straight after boot from xip and comparing them with the 1050RT_SDRAM_Init.scp script values. As an aside, it would be nice if this hard-coded array were a bit more user-friendly and easier to change.
The odd thing, however, is that the semc example actually uses different values! For example, the burst length is set to 1 in the semc example and 8 in the script. ACT2PRE, CKEOFF etc. are all slightly different as well. Any idea which one is correct/optimised for this dev board?
Thanks for this, I am (slowly) starting to understand how the pieces fit together, I think.
It explains why adding that XIP_EXTERNAL_FLASH symbol "fixed" my example code.
Having these different methods of initializing SDRAM adds to my confusion.
So to clarify, in the hopes it helps someone else:
1. When targeting a project to run from RAM (not xip), it could run from DTC RAM (on-chip), but you can change this to use SDRAM.
In this case, the SDRAM must be initialized so that the debugger can load and execute the project from SDRAM.
This is done by using the 1050RT_SDRAM_Init.scp as the connect script for the debugger. It uses a "basic-like" scripting language to configure the semc.
2. When using xip to execute a project from flash, SDRAM is not needed, but *is* initialized (for some reason?). This is done by configuring the project to use xip: adding the xip sdk components, adjusting the linker, adding a flash memory definition and setting the XIP_EXTERNAL_FLASH symbol. The SDRAM configuration is performed at boot using an array defined in fsl_flexspi_nor_boot.c.
3. When "generically" using SDRAM (maybe not using xip or executing the project from SDRAM through a debugger), the fsl_semc driver can be used to configure the semc in the project, as shown in the semc driver example. This uses the MCUXpresso API functions (as documented in the *pdf* supplied with the SDK, not the web API docs!)
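For case 3, the fsl_semc route looks roughly like the sketch below, modelled on the SDK's semc example. This is from memory, so treat everything here as an assumption: verify the field names, enum values and the clock lookup against fsl_semc.h and fsl_clock.h in your SDK version, and note the timing fields are omitted because they must come from the SDRAM datasheet, not from me.

```c
#include "fsl_semc.h"

/* Sketch of SDRAM setup via the fsl_semc driver, modelled on the SDK's
 * semc example. Field and enum names should be checked against the
 * installed SDK; no tuned timing values are given here. */
void BOARD_InitSEMC(void)
{
    semc_config_t config;
    SEMC_GetDefaultConfig(&config);
    SEMC_Init(SEMC, &config);

    semc_sdram_config_t sdram;
    sdram.csxPinMux        = kSEMC_MUXCSX0;          /* CS0 for SDRAM      */
    sdram.address          = 0x80000000U;            /* SEMC SDRAM window  */
    sdram.memsize_kbytes   = 32 * 1024;              /* 32 MB part         */
    sdram.portSize         = kSEMC_PortSize16Bit;
    sdram.burstLen         = kSEMC_Sdram_BurstLen8;  /* the script uses 8  */
    sdram.columnAddrBitNum = kSEMC_SdramColunm_9bit;
    sdram.casLatency       = kSEMC_LatencyThree;
    /* ...plus the tPrecharge2Act_Ns / tAct2ReadWrite_Ns / refresh timing
     * fields, which must come from the SDRAM chip's datasheet... */
    SEMC_ConfigureSDRAM(SEMC, kSEMC_SDRAM_CS0, &sdram,
                        CLOCK_GetFreq(kCLOCK_SemcClk));
}
```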
Also, following up: the API docs actually packaged with the SDK include a lot more than the web-based API docs, which apparently are not kept up to date. I had only been using the web-based docs. The semc driver is documented in the pdf included with the sdk itself, along with a lot of other things!
The script language is BASIC-like, not assembler. Most of the script commands “poke” values into various registers of the SDRAM controller. The challenge, then, is mapping the register addresses back to the controller.
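To make that mapping concrete, each poke in the script is just a 32-bit store to a memory-mapped SEMC register, so in C it reduces to something like the fragment below. The SEMC base address 0x402F0000 and the MCR offset are taken from my reading of the RT1050 reference manual; verify both against the SEMC chapter before relying on them.

```c
#include <stdint.h>

/* SEMC peripheral base on the RT1050; verify offsets against the
 * reference manual's SEMC register map. */
#define SEMC_BASE        0x402F0000u
#define SEMC_REG(offset) (*(volatile uint32_t *)(SEMC_BASE + (offset)))

/* A script line that pokes address 0x402F0000 is equivalent to: */
static void poke_semc_mcr(uint32_t value)
{
    SEMC_REG(0x00u) = value;  /* MCR: SEMC module configuration register */
}
```

Working backwards, then, each poked address minus 0x402F0000 gives the register offset to look up in the manual.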
Thanks and regards,
OK, I went through the SDRAM init script and made some minor changes to the semc_config in my example, as well as some changes to the pin configs, but with similar results: memory reads would fail at a few specific locations.
These failure locations seemed to be related to the type of PRINTF debug being output.
Then I completely removed the SDRAM memory section (RAM3) from my project, and the LinkServer debug output started coming out garbled. My test array uses a pointer to the correct SDRAM address, so it should not need the memory section defined.
I switched to using a J-Link and it worked as expected: the SDRAM test ran without failure (?!)
So it seems that the DAPLink-LinkServer debugger is using the SDRAM memory in some way that is impacting the ability to use the full 32MB for data. Ideas?
To close this out, I found the issue (as far as I can tell). I wrongly assumed that creating a new project with xip would add the "XIP_EXTERNAL_FLASH" symbol definition as outlined in the RT1050 overview doc (above).
After adding this definition to my project, the SDRAM test works when debugging from both J-Link and LinkServer.
As a side note, I did notice that debug is *much* slower in the new 10.1.1 version of MCUXpresso for some reason, using both probes (and removing the cache setting for LinkServer).
I can reproduce your results using your SDRAM setup. If I comment out your setup and initialize SDRAM using the 1050RT_SDRAM_Init.scp script (see link above), I cannot reproduce your results. As a first step, compare your SDRAM setup against the script.
Thanks and regards,
Thanks for confirming... I looked at the SDRAM init script and it will take a while to map between the assembler in the script and the semc config calls, and between the IOMUX assembler in that script and the pin_mux.c generated by Config Tools (Pin).
I also can't find any actual documentation for the fsl_semc sdk to decipher some of the assembler.
Is there someplace the sdk is documented besides this link? http://mcuxpresso.nxp.com/api_doc/dev/116/modules.html
Firstly, I can confirm that the default LinkServer/CMSIS-DAP debug connection does not set up the SDRAM.
LinkServer launch configurations can however specify connect and reset scripts. A connect script could be used to initialise the SDRAM - a script is available from the following RT1050 post:
None of the supplied examples make use of such a debug script.
If you have not already done so, I would recommend reading the overview doc supplied with the above link.
I would not expect there to be any code running beyond your tested application that could impact the values within the SDRAM. Similarly, I do not believe any external hardware would be making these accesses.
It would be interesting to watch these locations, either using watchpoints or perhaps live variables and see what changes you can observe.
Please can I suggest you do the following. Ensure you have downloaded the latest SDK for this board. Import the Hello World XIP example. This will run from Hyperflash and initialize the SDRAM.
Now, ensure you have arranged your memory regions so that the SDRAM is not used by your application. Then insert your memory testing code into this application and see if you can duplicate your findings. In my tests, the SDRAM values read back correctly.
MCUXpresso IDE Support
Thanks for the reply. I did try the hello_world_xip example with my SDRAM test code in place and it *does* work.
My test was using a C++ project, but the hello_world_xip is a C project.
If I create a new C++ project and put the same test code in place, it fails again...
I have tried creating a C++ project with the New Project Wizard, ensuring the board component is selected; this links to Board Flash and uses the OC RAM for data. This again showed no problem WRT SDRAM testing.
An obvious point, but be sure to check you are not using the SDRAM for data, heap or stack.
I would also recommend using (migrating to) MCUXpresso IDE version 10.1.1 since this addressed an issue relating to memory ordering that can occur with SDK(s) that contain both memory definitions for the MCU and also for the board - as is the case for the 1050RT where SDRAM and Hyperflash are 'off chip'.
If you still see issues, please can you attach an example project that demonstrates the problem?
MCUXpresso IDE Support
Thanks, I did try with the latest MCUXpresso release 10.1.1, seeing the same results.
I see data and bss sections being allocated to SDRAM in the map file, so I suspect something beyond my own data is being placed there. I placed HEAP and STACK in a different region in the managed linker settings, but it didn't seem to change the behaviour; I admit I am not very familiar with MCUXpresso IDE at this point.
I am attaching a test project here: