As a first step towards the goal in the title, I would like to completely understand the process of configuring, loading, running and debugging an application in external SDRAM, which, as far as I can see, is not completely documented nor completely discussed (in a single thread) here on the NXP forum... so I would very much appreciate it if someone expert in the topic, like @Miguel04 or @lpcxpresso_supp, could confirm my understanding.
Let's get started!
I picked the evkbimxrt1050_iled_blinky demo application as a starting point:

Because this project is configured by default to run using eXecute In Place (XIP), while importing I immediately checked "Link application to RAM" and moved BOARD_SDRAM to the top of the RAM areas listed in the Memory Configuration. These two operations should be sufficient to instruct the MCU Linker to place all the code in external SDRAM, starting from address 0x80000000.
I also moved NCACHE_REGION right after BOARD_SDRAM, to keep the whole external SDRAM contiguous in the memory configuration, thinking (and hoping) this would not affect the MCU Linker in any way.
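In GNU linker script terms, the resulting memory map should look roughly like this (a sketch: the region sizes below are the defaults I recall for the EVKB project, and the BOARD_SDRAM/NCACHE_REGION split depends on your own Memory Configuration):

```
MEMORY
{
  /* External SDRAM listed first, so the managed linker script targets it */
  BOARD_SDRAM   (rwx) : ORIGIN = 0x80000000, LENGTH = 0x1E00000  /* 30 MB */
  NCACHE_REGION (rwx) : ORIGIN = 0x81E00000, LENGTH = 0x200000   /* 2 MB, contiguous */
  SRAM_DTC      (rwx) : ORIGIN = 0x20000000, LENGTH = 0x20000
  SRAM_ITC      (rwx) : ORIGIN = 0x00000000, LENGTH = 0x20000
  SRAM_OC       (rwx) : ORIGIN = 0x20200000, LENGTH = 0x40000
}
```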

After SDK sample project import completes, if I build the project this is what I see in build console output and MAP file:


What I see in the above listings makes me think I am right, but please... again... correct me if not.
But now, when I tried to debug the project, this is what happened:

and I said to myself: "Ok, he's right. How could he be able to load the code to SDRAM if nobody initialized it before?"
Then, thanks to this article on the NXP community blog, I figured out that LinkServer needed a "customized" Connect Script to initialize the Smart External Memory Controller (SEMC) to enable the SDRAM, and consequently to program it and jump to 0x80000000 successfully. The connect script for RT1050 SDRAM initialization is provided at the above link, and once added to the Debug Configuration something starts to work.


Loading to SDRAM now works, but if I press Resume (F8) then a crash immediately occurs:



The Hard Fault seems to be triggered by BOARD_InitBootClocks(), when the SEMC clock configuration is performed. Then I said to myself: "Ok, right. If LinkServer already configured the SEMC to support external SDRAM and instructions are fetched from there, it's natural that changing the clock configuration while fetching instructions may cause a malfunction. Let's define the SKIP_SYSCLK_INIT symbol."

Now, if I start debugging and just press Resume (F8) the program runs and LED starts to blink.
HURRAY! No... wait.... I cannot set breakpoints..... or rather, I can set breakpoints, but they'll never be hit. As you can see in the screenshot below, the program is running and the breakpoint is set, but it never triggers.

The only way to have a breakpoint triggered is to set it up using right mouse button --> Add breakpoint... --> Type: Hardware


HURRAY! Now the breakpoint is working! No... wait.... if I try to step....... it resumes execution, then stops when it hits the same breakpoint again...........
So, debugging was impossible... until I found this thread on the NXP forum in which @kerryzhou hinted at the --cachelib libm7_cache.so LinkServer option, which I later found also explained in the RT1050_BriefOverview_v201.pdf document provided at the same link as the connect script. The document says (quoted here instead of a screenshot, just to make it less likely to become outdated):
Debug performance and the Data Cache
When debugging images that make use of SDRAM or OC_RAM for storage of variable data (globals, stack, heap etc.) then the following option should be set within the LinkServer debug launch configuration as shown below:

This module ensures that debug cache coherence is maintained, and correct debug operations may fail if this module is not specified. However there will be a debug performance penalty when this module is used.
Note: this module is not required if the SDRAM (or OC_RAM) only contains constant or uncached data.

HURRAY! Load works! Execute works! Breakpoints are triggering! Stepping works!
The only drawback is that stepping through the code is reaaaally, really slow.
As a very last step, I removed xip_device and xip_board from the SDK components, which are no longer necessary because the project no longer executes code in place from FLASH memory, and I also set the XIP_BOOT_HEADER_ENABLE symbol to 0 in both Debug and Release configurations because, for the same reason, I no longer need the XIP boot header to be included in the generated image.




QUESTIONS:
- Is the above procedure correct? Am I missing something, or doing something wrong?
- Could moving NCACHE_REGION right after BOARD_SDRAM in the Memory Configuration cause any problems?
- I read somewhere here on the forum about the need to set Reset Handling to SOFT in the Debug Configuration. Is it really necessary, and what are the benefits? The debugger seems to work even though I did not explicitly set this option.
- How can I write the output binary to FLASH memory from the MCUXpresso IDE starting at a prescribed address, so as to (1) avoid overwriting my Bootloader, which lives at address 0x60000000 and runs XIP, and (2) allow my Bootloader to pick up this binary, copy it to SDRAM and jump there?
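For reference on the last point, the copy-and-jump step I have in mind for the Bootloader would look roughly like this on a Cortex-M7 (a sketch under my assumptions: APP_FLASH_ADDR and APP_SIZE are hypothetical placeholders, the application's vector table is assumed to sit at the start of its image, and the SEMC/SDRAM is assumed to be already initialized by the Bootloader):

```c
#include <stdint.h>
#include <string.h>
#include "fsl_device_registers.h"  /* SDK header providing SCB and __set_MSP() */

#define APP_FLASH_ADDR  0x60100000u  /* hypothetical: app image stored after the Bootloader */
#define APP_SDRAM_ADDR  0x80000000u  /* link address of the SDRAM application */
#define APP_SIZE        0x00040000u  /* hypothetical image size */

static void boot_app_from_sdram(void)
{
    /* Copy the application image from FLASH into the already-initialized SDRAM */
    memcpy((void *)APP_SDRAM_ADDR, (const void *)APP_FLASH_ADDR, APP_SIZE);

    /* Vector table at the image start: word 0 = initial SP, word 1 = reset handler */
    const uint32_t *vectors = (const uint32_t *)APP_SDRAM_ADDR;

    __disable_irq();                       /* no interrupts while switching context */
    SCB->VTOR = APP_SDRAM_ADDR;            /* point the core at the new vector table */
    __set_MSP(vectors[0]);                 /* load the application's stack pointer */
    ((void (*)(void))vectors[1])();        /* jump to the application's reset handler */
}
```

If the data cache is enabled in the Bootloader, the copied image would presumably also need to be cleaned/invalidated before the jump, so the instructions fetched from SDRAM match what was just written.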