Hi
We're building a product with an i.MX8ULP processor, and one of our users noticed that the time drift is pretty bad. This is unit dependent: some units are "only" a few seconds fast after about an hour, but some devices easily run fast by one second per minute.
I've reproduced this on our i.MX8ULP EVK, running the latest BSP (LF_v6.12.20-2.0.0), using chrony to measure the drift (requires internet access):
imx8ulpevk login: root
# stop ntpd/timesyncd to avoid interfering... (why are both running?!)
root@imx8ulpevk:~# systemctl disable --now ntpd
root@imx8ulpevk:~# systemctl disable --now systemd-timesyncd
# run chronyd as a container for simplicity.
# tmpfs ensures drift state is not preserved across restarts
root@imx8ulpevk:~# docker run --net=host --name ntp -d simonrupf/chronyd
Unable to find image 'simonrupf/chronyd:latest' locally
latest: Pulling from simonrupf/chronyd
d69d4d41cfe2: Pull complete
5f93ef52bdbb: Pull complete
3a84ecea1ef1: Pull complete
Digest: sha256:6c2a693438b6c663f151516ccce21b409f0891df673a34cadf133c74b35b7b6b
Status: Downloaded newer image for simonrupf/chronyd:latest
15a0cdd35648868fe8592e98f7d653a6e9fd5abec5ff8c335b6edffc69617fe8
# (here we need to wait a bit for the frequency error to be computed
# based on measurement stability. If it doesn't converge, it helps
# to run something like `chronyc burst 4/10`)
root@imx8ulpevk:~# docker exec ntp chronyc tracking
Reference ID : 2D4D1467 (v4.ntp.admtan.jp)
Stratum : 3
Ref time (UTC) : Tue Jul 22 07:26:57 2025
System time : 0.343799949 seconds fast of NTP time
Last offset : +0.068965137 seconds
RMS offset : 0.129153565 seconds
Frequency : 12426.411 ppm fast
Residual freq : +0.000 ppm
Skew : 100.904 ppm
Root delay : 0.018474324 seconds
Root dispersion : 0.001048955 seconds
Update interval : 2.0 seconds
Leap status : Normal
So the interesting line here is Frequency, described in the chrony documentation as: "The ‘frequency’ is the rate by which the system’s clock would be wrong if chronyd was not correcting it. It is expressed in ppm (parts per million). For example, a value of 1 ppm would mean that when the system’s clock thinks it has advanced 1 second, it has actually advanced by 1.000001 seconds relative to true time."
In this case, that means 12426.411 * 10^-6 * 3600 = ~45 seconds gained per hour, but the value appears to change on reboot.
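For reference, a tiny Python sketch of that conversion (the ppm value is taken from the chronyc tracking output above):

```python
def drift_seconds(freq_ppm: float, interval_s: float) -> float:
    """Seconds gained (positive) or lost (negative) over `interval_s`
    of true time, given a clock frequency error in parts per million."""
    return freq_ppm * 1e-6 * interval_s

# 12426.411 ppm fast over one hour:
print(round(drift_seconds(12426.411, 3600), 1))  # -> 44.7
```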
So, a few questions:
- Where does the arch_sys_counter get its clock from? Is this a problem with one of the crystals on the board being low precision? (... if so, we copied that flaw from the EVK...)
- I've noticed there are two clocksources available, and switching to imx-tpm is much more stable (either `echo imx-tpm > /sys/devices/system/clocksource/clocksource0/current_clocksource` or setting `clocksource=imx-tpm` in the boot args). Is there any reason it's not the default? I'm thinking of sending a patch to increase the default rating from 200 to 500 in the 32-bit counter case (it needs to be higher than 400 to take priority over arch_sys_counter), but I'd first like to understand whether there was a reason for choosing the lower value.
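For anyone automating the switch, here is a small Python sketch of the same sysfs operations as the `echo` command above (the sysfs path is the standard one; the `root` parameter is only there so the helper can be exercised against a fake tree, and writing the real file still requires root):

```python
import os

CLOCKSOURCE_DIR = "/sys/devices/system/clocksource/clocksource0"

def available_clocksources(root: str = CLOCKSOURCE_DIR) -> list[str]:
    """Return the clocksource names the kernel offers."""
    with open(os.path.join(root, "available_clocksource")) as f:
        return f.read().split()

def set_clocksource(name: str, root: str = CLOCKSOURCE_DIR) -> None:
    """Equivalent to: echo <name> > .../current_clocksource"""
    avail = available_clocksources(root)
    if name not in avail:
        raise ValueError(f"{name!r} not in available clocksources {avail}")
    with open(os.path.join(root, "current_clocksource"), "w") as f:
        f.write(name)
```

Note this only changes the current boot; `clocksource=imx-tpm` in the boot args is the persistent variant.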
Thank you
Hello,
Our internal team was able to run the same tests and confirmed that the arch_sys_counter delivers relatively poor precision, whereas the imx-tpm timer is markedly more accurate. As mentioned before, the ARM Generic Timer includes a system counter and a set of per-core timers.
If your application demands tighter timing accuracy, you can switch to the TPM as your clock source. Enabling TPM-based timing requires instantiating TPM timers on both CPU cores to preserve per-core event handling.
Here's the patch enabling dual TPM timers. To test it, you need to rebuild your ATF to generate a new bl31.bin:
make -j8 PLAT=imx8ulp CROSS_COMPILE=aarch64-poky-linux- bl31 IMX8ULP_TPM_TIMERS=1
You also need to use the imx8ulp-evk-tpm.dtb device tree file.
If you have any questions, let me know.
Best regards.
Thank you for the reply!
> It is driven by an internal 1 MHz LPO, whose frequency stability is inherently limited—hence the observed precision issues with arch_sys_counter.
Ah! That makes sense, thank you
> Here's the patch about dual TPM timers enablements. To test it you need to rebuild your ATF to generate new bl31.bin.
I'm on vacation this week, so I'll check next week (in particular, the ATF code for IMX8ULP_TPM_TIMERS was added in lf-6.12.20-2.0.0 and we're on an older branch, so I'll need to rebase and validate first).
We've been running with a single TPM instance by just selecting the imx-tpm clocksource, and I had not noticed this problem:
> Enabling TPM-based timing requires instantiating TPM timers on both CPU cores to preserve per-core event handling
Could you clarify what's wrong with enabling only a single TPM timer? Is it "just" more load on CPU0 to keep the system clock up to date, or are there other downsides?
Thank you again,
Dominique
Hello,
Thank you for testing this on the EVK; this issue has not been reported before.
Regarding your questions:
The Generic Timer provides a standardized timer framework for Arm cores. The Generic Timer includes a System Counter and sets of per-core timers.
The System Counter is an always-on device which provides a fixed-frequency incrementing system count. The count value is broadcast to all the cores in the system, giving the cores a common view of the passage of time. On the i.MX8ULP this is implemented by the SCTR module.
The System Counter takes two counter clock sources as input, and outputs a gray-coded counter value and interrupt signals (one per compare frame) to the platform's interrupt controller.
After reset, the System Counter is disabled, with the count value reset to zero and the base frequency selected.
Once the counter is enabled, it increments by the appropriate value on each rising edge of the selected clock.
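To illustrate how such a fixed-frequency count is turned into a time value, here is a minimal Python sketch. The 1 MHz figure is only an example (it matches the LPO rate mentioned earlier); the real rate comes from the SCTR configuration and is reported to the cores via CNTFRQ.

```python
def ticks_to_ns(ticks: int, freq_hz: int) -> int:
    """Convert a raw counter value to nanoseconds, for a counter
    incrementing at a fixed frequency. Integer math avoids float drift."""
    return ticks * 1_000_000_000 // freq_hz

# Hypothetical 1 MHz counter: each tick is 1000 ns,
# so one million ticks correspond to one second.
print(ticks_to_ns(1_000_000, 1_000_000))  # -> 1000000000
```

This also shows why the counter's input clock matters so much: any error in `freq_hz`'s real-world rate translates linearly into time drift.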
Best regards.