Hi Kerry, I gathered the info.
The clock is enabled in Chip_SDIF_Init():
Chip_Clock_EnableOpts(CLK_MX_SDIO, true, true, 1);
The clk_rate argument is read with this function, which returns 204 MHz:
Chip_Clock_GetBaseClocktHz(CLK_BASE_SDIO)
Used in this context:
Chip_SDIF_SetClock(pSDMMC, Chip_Clock_GetBaseClocktHz(CLK_BASE_SDIO), g_card_info->card_info.speed);
I verified that the speed parameter makes sense. Console output:
speed = 400000
speed = 25000000
speed = 50000000
With the original computation, where I observe the wrong frequencies, the divisors come out as:
- 400 kHz: div = 256
- 20 MHz: div = 6
- 25 MHz: div = 5
- 50 MHz: div = 3
With my change each divisor is one less (see the sketch just after this list):
- 400 kHz: div = 255
- 20 MHz: div = 5
- 25 MHz: div = 4
- 50 MHz: div = 2
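
For reference, the divider math lives inside Chip_SDIF_SetClock(). A computation consistent with the divisors listed above looks like this (a sketch of just the relevant lines, not a verbatim quote of the driver; the register writes are omitted):

    /* clk_rate is the base clock (204 MHz here), speed is the requested
       card clock, and the resulting output is clk_rate / (2 * div) */
    uint32_t div;

    /* original: effectively always rounds the divider up */
    div = ((clk_rate / speed) + 2) >> 1;

    /* revised: + 1 rounds to the nearest divider instead */
    div = ((clk_rate / speed) + 1) >> 1;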
I also confirmed on the scope that these are the frequencies actually being output.
Plugging the divisors into the formula from the reference manual, f_out = clk_rate / (2 * div), it looks like my version gives the better answer:
Original:
204,000,000 / (2 * 256) = 398,438 (can't actually be set: 256 doesn't fit in the divider register, so the real output was presumably the undivided 204 MHz)
204,000,000 / (2 * 6) = 17,000,000 (15% error)
204,000,000 / (2 * 5) = 20,400,000 (18% error)
204,000,000 / (2 * 3) = 34,000,000 (32% error)
Revised:
204,000,000 / (2 * 255) = 400,000 (0% error)
204,000,000 / (2 * 5) = 20,400,000 (2% error)
204,000,000 / (2 * 4) = 25,500,000 (2% error)
204,000,000 / (2 * 2) = 51,000,000 (2% error)
Adding + 1 before the halving picks the nearest divisor, giving the best match; adding + 2 always pushes the divisor past the ideal value, so the output clock undershoots the target even on exact multiples (204 MHz / 400 kHz divides exactly, yet it still came out as div = 256 instead of 255).
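
To double-check the tables, this small standalone program (plain C, no LPCOpen dependencies; the speed list just mirrors the values above) reproduces both columns:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint32_t clk_rate = 204000000u; /* 204 MHz base clock */
        const uint32_t speeds[] = { 400000u, 20000000u, 25000000u, 50000000u };

        for (unsigned i = 0; i < sizeof speeds / sizeof speeds[0]; i++) {
            uint32_t speed = speeds[i];
            uint32_t div_orig = ((clk_rate / speed) + 2) >> 1; /* original */
            uint32_t div_rev = ((clk_rate / speed) + 1) >> 1;  /* revised */
            /* note: div_orig = 256 for 400 kHz won't fit the divider register
               on the real hardware, which is why the scope showed 204 MHz
               rather than the 398 kHz computed here */
            double f_orig = (double) clk_rate / (2.0 * div_orig);
            double f_rev = (double) clk_rate / (2.0 * div_rev);

            printf("%8lu Hz: orig div %3lu -> %9.0f Hz, rev div %3lu -> %9.0f Hz\n",
                   (unsigned long) speed,
                   (unsigned long) div_orig, f_orig,
                   (unsigned long) div_rev, f_rev);
        }
        return 0;
    }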
Here's a plot in 100 kHz increments, with blue as the target and red as the actual output:

Swept over the same range, the average error in the revised version is half that of the original.
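
The sweep itself is just a loop over target speeds. A minimal sketch, assuming the plotted range runs from 400 kHz to 50 MHz (the speeds discussed above):

    #include <stdio.h>
    #include <stdint.h>
    #include <math.h>

    int main(void)
    {
        const uint32_t clk_rate = 204000000u;
        double sum_orig = 0.0, sum_rev = 0.0;
        unsigned n = 0;

        /* 100 kHz increments over the assumed 400 kHz .. 50 MHz range */
        for (uint32_t speed = 400000u; speed <= 50000000u; speed += 100000u) {
            uint32_t div_orig = ((clk_rate / speed) + 2) >> 1;
            uint32_t div_rev = ((clk_rate / speed) + 1) >> 1;
            double f_orig = (double) clk_rate / (2.0 * div_orig);
            double f_rev = (double) clk_rate / (2.0 * div_rev);

            sum_orig += fabs(f_orig - speed) / speed;
            sum_rev += fabs(f_rev - speed) / speed;
            n++;
        }
        printf("average error: original %.1f%%, revised %.1f%%\n",
               100.0 * sum_orig / n, 100.0 * sum_rev / n);
        return 0;
    }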