Hi,
We are debugging an issue with the RGMII TXC clock generated by the ENET MAC on a board with an i.MX8QM processor: TXC is off frequency by a small but significant amount. We feed the SoC (XTAL) from a 24 MHz crystal whose error is within 30 ppm. The ENET1 MAC is connected to a PHY over RGMII. Per the PHY manufacturer's requirement, TXC must be between 124.99375 MHz and 125.00625 MHz (125 MHz +/- 50 ppm). What we observe is 124.98620 MHz (roughly -110 ppm). According to the PHY manufacturer this can cause errors, especially at high operating temperatures, which is exactly what we are seeing.
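For reference, here is the arithmetic behind those numbers as a small self-contained sketch (the constants are just the values quoted in this post, nothing else is assumed):

```c
#include <stdio.h>

/* Quick ppm arithmetic for the numbers quoted above. */
int main(void)
{
    const double nominal_hz  = 125.0e6;      /* ideal RGMII TXC       */
    const double measured_hz = 124.98620e6;  /* observed on the board */

    double ppm = (measured_hz - nominal_hz) / nominal_hz * 1.0e6;
    printf("TXC error: %.1f ppm (limit +/-50 ppm, i.e. %.5f..%.5f MHz)\n",
           ppm, 125.0 * (1.0 - 50e-6), 125.0 * (1.0 + 50e-6));
    return 0;
}
```

This prints about -110.4 ppm, i.e. more than twice the PHY's +/-50 ppm budget.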
Can you guide me on how the 24 MHz input clock to the SoC (XTAL) gets translated into the RGMII TXC (125 MHz for RGMII operation) coming out of the MAC?
I saw this in the IMX8QMRM (i.MX 8QuadMax Reference Manual):

I am trying to understand the path the clock takes to become ENET1_TXC (125 MHz). Specifically:
- Which source (the 24 MHz XTAL, AV_PLL16, or DIG_PLL0) is used to generate TXC?
- Which clock root feeds TXC for ENET1, e.g. CONN_SLSLICE1 or CONN_SLSLICE3_CLK_ROOT?
- How do I read the values of DIV and SLSlice<>_sel mentioned above? I believe all of this is configured by the SCFW (System Controller Firmware); if so, can you help me pinpoint where it is done? (A short sketch of what I mean follows this list.)
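To make the last question concrete, this is roughly how I imagine reading back what SCFW programmed, using the SCFW client API from the imx-scfw-porting-kit. The resource/clock pair SC_R_ENET_0 / SC_PM_CLK_PER is my guess for ENET1's clock here, and the include paths follow the porting kit layout; please correct me if that is not the right pair for the RGMII TX path:

```c
#include <stdio.h>
#include "main/ipc.h"        /* SCFW client IPC (imx-scfw-porting-kit) */
#include "svc/pm/pm_api.h"   /* sc_pm_get_clock_rate() and friends     */

/* Read back the clock rate SCFW actually programmed for ENET1.
 * SC_R_ENET_0 / SC_PM_CLK_PER are assumptions on my side. */
void dump_enet1_clock(sc_ipc_t ipc)
{
    sc_pm_clock_rate_t rate = 0;
    sc_err_t err = sc_pm_get_clock_rate(ipc, SC_R_ENET_0,
                                        SC_PM_CLK_PER, &rate);

    if (err == SC_ERR_NONE)
        printf("ENET1 PER clock: %u Hz\n", (unsigned)rate);
    else
        printf("sc_pm_get_clock_rate failed: %d\n", (int)err);
}
```

(From a running Linux image, I assume the same information should also be visible in /sys/kernel/debug/clk/clk_summary; is that a reliable way to check the slice DIV settings?)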
Interestingly, I also came across this table:

Does this mean that neither DIG_PLL0 nor AV_PLL16 can be changed at all? That would imply everything depends solely on the 24 MHz clock (its quality and precision), and that the DIV values are hard-coded and cannot be changed.
We believe the 24 MHz input clock we feed is error/jitter free. Is there any chance the SoC itself introduces error? If so, is there a workaround, say, adjusting a divider/multiplier along the clock tree so that the TXC error falls within the acceptable level (50 ppm)?
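My worry about the divider route is granularity, sketched below. With an integer post-divider you can only hit f_pll / N, and adjacent divider steps are tens of thousands of ppm apart, so a ~110 ppm trim would seem out of reach unless the PLL itself can be retuned. The 1000 MHz PLL value here is a hypothetical placeholder, not the documented i.MX8QM setting:

```c
#include <stdio.h>

/* Illustrate divider granularity: rates reachable as f_pll / N. */
int main(void)
{
    const double f_pll  = 1000.0e6;  /* ASSUMED PLL output, placeholder */
    const double target = 125.0e6;   /* RGMII TXC                       */

    for (int n = 6; n <= 10; n++) {
        double f   = f_pll / n;
        double ppm = (f - target) / target * 1.0e6;
        printf("DIV=%2d -> %12.3f Hz (%+.1f ppm)\n", n, f, ppm);
    }
    return 0;
}
```

With this assumed PLL, DIV=8 lands exactly on 125 MHz and the neighboring dividers are off by >100000 ppm, which is why I suspect the fix (if any) has to be in the PLL configuration rather than the slice dividers. Is that reasoning correct for this SoC?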
Can you help us find answers to the above?
Thanks
Regards
Mahesh