I am using an MK64FX512. I use the 50MHz Ethernet clock to produce a core clock rate of 120MHz (divide by 15, multiply by 36). In SIM_CLKDIV1 I set the core clock divider to divide-by-1 (120MHz) and the bus clock divider to divide-by-2 (60MHz).
I set the baud rates for UART0 and UART1 to 57600 and 117187.5 respectively (yes, 117187.5 is a legitimate baud rate as long as the processor on the mating side uses that same frequency).
When I send a character such as 0xAA and measure it on the scope, the timing of each bit in the byte is very precise for both UART0 and UART1.
On the other hand, for UART4, which uses the bus clock, what I see on the oscilloscope is that while the overall timing for the byte is correct, the low-going bits are substantially narrower than the high-going bits. I've observed this with the baud rate set to both 57600 and 117187.5. Not only are the relative widths of the one and zero bits off, but the widths also shift around substantially. The variation is much greater than the 1/16th bus clock cycle I would expect to see.
This is puzzling to me since both the core and bus clocks are derived from the same MCGOUTCLK clock. Nothing I have seen in the reference manual explains why the bus clock would cause the UART output to jump around more than the core clock.
Anyone have an idea?