
MCUX on RT1050 - MMC stack won't initialize

Question asked by David Rodgers on Sep 3, 2019
Latest reply on Sep 24, 2019 by Jack King

TL;DR -- When starting the MMC stack by calling mmc_disk_initialize(), which in turn calls MMC_Init(), a possibly incorrect switch() statement causes a subroutine to hit an assert().  (EDIT 9/16: The assert() occurs when the extended CSD data comes back all zeroes, which happens when a cacheable memory area, i.e. anything other than SRAM_DTC, is used to store global data.  See discussion below.)
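(For anyone hitting the same wall: here's a minimal sketch of the cache-related workaround, assuming the MCUX SDK's AT_NONCACHEABLE_SECTION_ALIGN macro from fsl_common.h.  The buffer name g_extCsdBuffer is hypothetical, and the fallback #define only exists so the fragment compiles standalone.)

```c
#include <assert.h>
#include <stdint.h>

/* AT_NONCACHEABLE_SECTION_ALIGN comes from fsl_common.h in the MCUX SDK.
 * This fallback only exists so the sketch compiles outside the SDK. */
#ifndef AT_NONCACHEABLE_SECTION_ALIGN
#define AT_NONCACHEABLE_SECTION_ALIGN(var, alignbytes) \
    __attribute__((aligned(alignbytes))) var
#endif

/* Hypothetical DMA buffer: anything the USDHC ADMA engine writes (e.g. the
 * 512-byte extended CSD read) must live in non-cacheable RAM, otherwise the
 * CPU reads stale cached zeroes after the transfer completes. */
AT_NONCACHEABLE_SECTION_ALIGN(uint8_t g_extCsdBuffer[512], 32U);
```

On the RT1050 the real macro places the variable in the NonCacheable section that the SDK linker scripts define; keeping all global driver data in SRAM_DTC (which is never cached) achieves the same effect.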

 

So I'm trying to get an eMMC device (ISSI IS21ES04G) to work on an RT1050-based board.  We have the device attached to GPIO_SD_B1_00-06 (USDHC2) in 4-bit mode.  Here's the schematic:

[Image: Schematic of eMMC connection to RT1050]

Since there's no SDK 2.6.1 example for an eMMC device, I instead took some files from the "fatfs_usdcard" example, cut out the SD-specific files, and imported MMC-specific source files directly from the SDK.  Here's what I have:

[Image: Files in fatfs and sdmmc in my project]

Of note, I deleted fsl_sd_disk.* and replaced them with fsl_mmc_disk.*.  I also customized my SDHC-specific board.h information:

/* SD/MMC configuration. */
#define BOARD_USDHC1_BASEADDR USDHC1
#define BOARD_USDHC2_BASEADDR USDHC2
/* EVKB uses PLL2_PFD2 (166 MHz) to run USDHC, so its divider is 1. On the
* MCU, we run PLL2_PFD0 at 396 MHz, so its divider is 2, yielding 198 MHz. */

#define BOARD_USDHC1_CLK_FREQ (CLOCK_GetSysPfdFreq(kCLOCK_Pfd0) / (CLOCK_GetDiv(kCLOCK_Usdhc1Div) + 1U))
#define BOARD_USDHC2_CLK_FREQ (CLOCK_GetSysPfdFreq(kCLOCK_Pfd0) / (CLOCK_GetDiv(kCLOCK_Usdhc2Div) + 1U))
#define BOARD_USDHC1_IRQ USDHC1_IRQn
#define BOARD_USDHC2_IRQ USDHC2_IRQn

/* eMMC is always present. */
#define BOARD_USDHC_CARD_INSERT_CD_LEVEL (1U)
#define BOARD_USDHC_CD_STATUS() (BOARD_USDHC_CARD_INSERT_CD_LEVEL)
#define BOARD_USDHC_CD_GPIO_INIT() do { } while (0)

/* eMMC is always powered. */
#define BOARD_USDHC_SDCARD_POWER_CONTROL_INIT() do { } while (0)
#define BOARD_USDHC_SDCARD_POWER_CONTROL(state) do { } while (0)
#define BOARD_USDHC_MMCCARD_POWER_CONTROL_INIT() do { } while (0)
#define BOARD_USDHC_MMCCARD_POWER_CONTROL(state) do { } while (0)

/* Our device is on USDHC2 for Rev. A/B, and on USDHC1 for Rev. C. */
#define BOARD_MMC_HOST_BASEADDR BOARD_USDHC2_BASEADDR
#define BOARD_MMC_HOST_CLK_FREQ BOARD_USDHC2_CLK_FREQ
#define BOARD_MMC_HOST_IRQ BOARD_USDHC2_IRQ

#define BOARD_SD_HOST_BASEADDR BOARD_USDHC2_BASEADDR
#define BOARD_SD_HOST_CLK_FREQ BOARD_USDHC2_CLK_FREQ
#define BOARD_SD_HOST_IRQ BOARD_USDHC2_IRQ

#define BOARD_MMC_VCCQ_SUPPLY kMMC_VoltageWindow170to195
#define BOARD_MMC_VCC_SUPPLY kMMC_VoltageWindows270to360

/* Define these to indicate we don't support 1.8V or 8-bit data bus. */
#define BOARD_SD_SUPPORT_180V SDMMCHOST_NOT_SUPPORT
#define BOARD_MMC_SUPPORT_8BIT_BUS SDMMCHOST_NOT_SUPPORT
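As a quick sanity check, the divider arithmetic described in the comment near the top of this block can be verified on the host (the helper function is mine, not from the SDK; the USDHC divider register holds the divide value minus 1):

```c
#include <assert.h>
#include <stdint.h>

/* The USDHC divider register holds (divide - 1), so a register value of 1
 * divides by 2: PLL2_PFD0 at 396 MHz yields the 198 MHz mentioned above. */
static uint32_t usdhc_clk_hz(uint32_t pfd_hz, uint32_t div_reg)
{
    return pfd_hz / (div_reg + 1U);
}
```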

 

In my test function, I initialize the MMC stack:

SDMMCHOST_SET_IRQ_PRIORITY(BOARD_MMC_HOST_IRQ, 11);
DSTATUS dstatus = mmc_disk_initialize(MMCDISK);
if (RES_OK != dstatus) {
    printf("mmc_disk_initialize() failed\n");
    return -1;
}

 

When I trace into mmc_disk_initialize(), everything seems OK at first.  It correctly sets the host base address (USDHC2) and source clock rate (198 MHz... I use PLL2_PFD0 @ 396 MHz divided by 2 to drive the USDHC modules), then calls MMC_Init().  MMC_HostInit() passes fine, then MMC_PowerOnCard(), then MMC_PowerOffCard().  Then it calls MMC_CardInit(), and that goes a fair distance.  It:

  • sets the 400 kHz identification clock OK,
  • gets the host capabilities,
  • tells the card to go idle,
  • gets the card CID,
  • sets the relative address,
  • sets the maximum frequency,
  • puts the card into the transfer state,
  • reads the extended CSD register content, and
  • sets the block size.

Then it calls this:

/* switch to host support speed mode, then switch MMC data bus width and select power class */
if (kStatus_Success != MMC_SelectBusTiming(card))
{
    return kStatus_SDMMC_SwitchBusTimingFailed;
}

The guts of the function:

static status_t MMC_SelectBusTiming(mmc_card_t *card)
{
    assert(card);

    mmc_high_speed_timing_t targetTiming = card->busTiming;
    switch (targetTiming)
    {
        case kMMC_HighSpeedTimingNone:
        case kMMC_HighSpeed400Timing:
            if ((card->flags & (kMMC_SupportHS400DDR200MHZ180VFlag | kMMC_SupportHS400DDR200MHZ120VFlag)) &&
                ((kSDMMCHOST_SupportHS400 != SDMMCHOST_NOT_SUPPORT)))
            {
                /* switch to HS200 perform tuning */
                if (kStatus_Success != MMC_SwitchToHS200(card, SDMMCHOST_SUPPORT_HS400_FREQ / 2U))
                {
                    return kStatus_SDMMC_SwitchBusTimingFailed;
                }
                /* switch to HS400 */
                if (kStatus_Success != MMC_SwitchToHS400(card))
                {
                    return kStatus_SDMMC_SwitchBusTimingFailed;
                }
                break;
            }
        case kMMC_HighSpeed200Timing:
            if ((card->flags & (kMMC_SupportHS200200MHZ180VFlag | kMMC_SupportHS200200MHZ120VFlag)) &&
                ((kSDMMCHOST_SupportHS200 != SDMMCHOST_NOT_SUPPORT)))
            {
                if (kStatus_Success != MMC_SwitchToHS200(card, SDMMCHOST_SUPPORT_HS200_FREQ))
                {
                    return kStatus_SDMMC_SwitchBusTimingFailed;
                }
                break;
            }
        case kMMC_HighSpeedTiming:
            if (kStatus_Success != MMC_SwitchToHighSpeed(card))
            {
                return kStatus_SDMMC_SwitchBusTimingFailed;
            }
            break;

        default:
            card->busTiming = kMMC_HighSpeedTimingNone;
    }

    return kStatus_Success;
}

 

When I step into this function, card->busTiming is 0 (kMMC_HighSpeedTimingNone) and card->flags is 0x100 (kMMC_SupportHighCapacityFlag).  Because our timing is None, we enter at the top of the switch() and fall through its cases.  In this application, kSDMMCHOST_SupportHS400 is set to SDMMCHOST_NOT_SUPPORT, so the first branch (HS400) is compiled out.  The second branch (HS200) is evaluated, but because neither of the HS200 flags is set, we skip it.  We then fall through into the kMMC_HighSpeedTiming case and call MMC_SwitchToHighSpeed().  This is where things go wrong.
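To make that fall-through concrete, here's a stripped-down, host-compilable model of the switch().  The enum and flag values are stand-ins, not the SDK's real encodings (only 0x100 matches what I observed for kMMC_SupportHighCapacityFlag):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-in timing states and capability flags, modeled loosely on
 * fsl_mmc.h; the numeric values here are illustrative only. */
enum { TimingNone, HighSpeedTiming, HS200Timing, HS400Timing };
enum { SupportHighCapacityFlag = 0x100,
       SupportHS200Flags = 0x600,
       SupportHS400Flags = 0x1800 };

/* Mirrors the control flow of MMC_SelectBusTiming(): with busTiming == None,
 * the HS400 and HS200 cases fall through when their flags are clear, so
 * execution lands in the plain high-speed case unconditionally. */
static int selected_timing(int busTiming, uint32_t flags,
                           bool hostHS400, bool hostHS200)
{
    switch (busTiming)
    {
        case TimingNone:
        case HS400Timing:
            if ((flags & SupportHS400Flags) && hostHS400)
                return HS400Timing;
            /* no break: falls through, exactly like the SDK code */
        case HS200Timing:
            if ((flags & SupportHS200Flags) && hostHS200)
                return HS200Timing;
            /* no break: falls through */
        case HighSpeedTiming:
            return HighSpeedTiming; /* MMC_SwitchToHighSpeed() is called here */
        default:
            return TimingNone;
    }
}
```

With busTiming = None, flags = 0x100, and HS400 host support off (my exact situation), this returns HighSpeedTiming, confirming that MMC_SwitchToHighSpeed() is reached no matter what the card reported.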

 

Here is the relevant code in MMC_SwitchToHighSpeed():

static status_t MMC_SwitchToHighSpeed(mmc_card_t *card)
{
    assert(card);

    uint32_t freq = 0U;

    /* check VCCQ voltage supply */
    [...]

    if (kStatus_Success != MMC_SwitchHSTiming(card, kMMC_HighSpeedTiming, kMMC_DriverStrength0))
    {
        return kStatus_SDMMC_SwitchBusTimingFailed;
    }

    if ((card->busWidth == kMMC_DataBusWidth4bitDDR) || (card->busWidth == kMMC_DataBusWidth8bitDDR))
    {
        freq = MMC_CLOCK_DDR52;
        SDMMCHOST_ENABLE_DDR_MODE(card->host.base, true, 0U);
    }
    else if (card->flags & kMMC_SupportHighSpeed52MHZFlag)
    {
        freq = MMC_CLOCK_52MHZ;
    }
    else if (card->flags & kMMC_SupportHighSpeed26MHZFlag)
    {
        freq = MMC_CLOCK_26MHZ;
    }

    card->busClock_Hz = SDMMCHOST_SET_CARD_CLOCK(card->host.base, card->host.sourceClock_Hz, freq);
    [...]

    card->busTiming = kMMC_HighSpeedTiming;
    return kStatus_Success;
}

 

freq is initialized to 0 at the top of the function.  We tell the MMC controller to switch to HS timing with 50-ohm drive strength, then we set freq based on one of three conditions:

  • If the bus width is 4-bit DDR or 8-bit DDR, set freq to 52000000 (and enable DDR mode).
  • If the HS 52 MHz flag is set, set freq to 52000000.
  • If the HS 26 MHz flag is set, set freq to 26000000.

However, if none of those three conditions is met, freq remains 0.  The subsequent call to SDMMCHOST_SET_CARD_CLOCK() then triggers an assert(), because USDHC_SetSdClock() asserts that the target frequency cannot be 0.  And that's it, game over.
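The failure is easy to reproduce in isolation.  This host-compilable sketch mirrors the freq selection logic (the flag values are stand-ins, not the SDK's real encodings): with no DDR bus width and only the high-capacity flag (0x100) set, every branch is skipped and the function returns the 0 that later trips the assert() in USDHC_SetSdClock():

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-in capability flags; illustrative values, not the SDK's. */
enum { SupportHS52MHzFlag = 0x8, SupportHS26MHzFlag = 0x4 };

/* Mirrors the freq selection in MMC_SwitchToHighSpeed(): returns 0 when the
 * card reports neither a DDR bus width nor the HS26/HS52 flags. */
static uint32_t hs_target_freq(bool ddrBusWidth, uint32_t flags)
{
    uint32_t freq = 0U;
    if (ddrBusWidth)
        freq = 52000000U;      /* MMC_CLOCK_DDR52 */
    else if (flags & SupportHS52MHzFlag)
        freq = 52000000U;      /* MMC_CLOCK_52MHZ */
    else if (flags & SupportHS26MHzFlag)
        freq = 26000000U;      /* MMC_CLOCK_26MHZ */
    return freq;
}
```

A defensive fallback like `if (freq == 0U) { freq = 26000000U; }` before calling SDMMCHOST_SET_CARD_CLOCK() would avoid the assert, though it would only paper over whatever left the card flags empty in the first place.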

 

I have not yet put a scope on the data/clk/cmd lines to verify there is activity; that'll probably be my next step.  My question is... what's going wrong here, exactly?  By the time execution reaches MMC_SelectBusTiming(), card->busTiming is set to 0 (kMMC_HighSpeedTimingNone) and the only flag set in card->flags is 0x100 (kMMC_SupportHighCapacityFlag).  So why does MMC_SwitchToHighSpeed() assume that either the bus width has already been selected to be 4-bit or 8-bit, or that HS26 or HS52 are already indicated in the card flags?  Has this code actually been tested on an MMC card?  What should I be doing differently?  Thanks.

 

David R.
