When reading the BBNSM RTC cyclically
```hwclock -r -f /dev/rtc0```
and putting the CPU under load (using stress-ng and stressapptest in parallel), intermittent errors occur. The call to ```hwclock -r -f /dev/rtc0``` then produces no output and runs into a timeout.
In one test run, 324 out of 43322 calls failed. When reading the BBNSM RTC with the same settings but without CPU load, no errors were seen within 269305 calls.
Kernel: 6.1 from NXP BSP
@danny_john
Is there any progress or news regarding this topic? We can see the same behaviour on our board, an i.MX93 EVK Revision A0.
We are planning to build a PLC with this CPU and want to use the internal RTC.
No, there is no progress. Most of our customers use an external RTC backed by a coin cell, since the BBNSM RTC needs to be powered 24/7 and a longer power outage requires more than a coin cell can supply.
We will retest this against kernel 6.6, but we see no changes in the SW implementation. We suspect the read function is to blame - there seems to be no guard against scheduling delays or similar in real-time / heavy-load scenarios.
To reproduce, run the stress tools in the background with decreased priority; the actual test setup was:
```
# nice level 5 (nice applied explicitly so the command matches the stated level)
nice -n 5 stressapptest -W -s 31536000 -M 128 -m 1 -C 0 -i 1 &
# nice level 9
nice -n 9 stress-ng --cpu-load 10 --cpu 2 --timeout 31536000 &
```
Run in a loop with 2 seconds between invocations (rtc0: I2C RTC / rtc1: BBNSM RTC):
```
hwclock -r -f /dev/rtc0
hwclock -r -f /dev/rtc1
```
Expectation: when reading every 2 seconds, we see a monotonically increasing time with every read.
Error case: reading the BBNSM RTC fails randomly; hwclock takes 10 seconds to return in this case.
```
8<------------------------------------------
[2024-06-04_23:09:20] SUCCESS /dev/rtc1 1970-01-01 13:59:50.978334+00:00 1970-01-01 13:59:50.978334+00:00
[2024-06-04_23:09:23] SUCCESS /dev/rtc0 2024-06-04 23:09:23.175275+00:00 2024-06-04 23:09:23.175275+00:00
[2024-06-04_23:09:23] SUCCESS /dev/rtc1 1970-01-01 13:59:53.940790+00:00 1970-01-01 13:59:53.940790+00:00
[2024-06-04_23:09:26] SUCCESS /dev/rtc0 2024-06-04 23:09:26.222111+00:00 2024-06-04 23:09:26.222111+00:00
[2024-06-04_23:09:26] SUCCESS /dev/rtc1 1970-01-01 13:59:56.969521+00:00 1970-01-01 13:59:56.969521+00:00
[2024-06-04_23:09:29] SUCCESS /dev/rtc0 2024-06-04 23:09:29.175745+00:00 2024-06-04 23:09:29.175745+00:00
[2024-06-04_23:09:39] ERROR /dev/rtc1
[2024-06-04_23:09:42] SUCCESS /dev/rtc0 2024-06-04 23:09:42.239289+00:00 2024-06-04 23:09:42.239289+00:00
[2024-06-04_23:09:42] SUCCESS /dev/rtc1 1970-01-01 14:00:12.953436+00:00 1970-01-01 14:00:12.953436+00:00
[2024-06-04_23:09:45] SUCCESS /dev/rtc0 2024-06-04 23:09:45.209569+00:00 2024-06-04 23:09:45.209569+00:00
[2024-06-04_23:09:45] SUCCESS /dev/rtc1 1970-01-01 14:00:15.945005+00:00 1970-01-01 14:00:15.945005+00:00
8<------------------------------------------
```
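The original test script was not posted, so here is a minimal sketch of a read loop that produces log lines in the format shown above (the `HWCLOCK` override variable is an illustrative addition for testing, not part of the original setup):

```
#!/bin/bash
# Read one RTC device with a hard timeout and log a SUCCESS/ERROR line
# in the same format as the excerpt above.
# HWCLOCK defaults to hwclock(8); it can be overridden for dry-run testing.
read_rtc() {
    local dev="$1" out
    if out=$(timeout 10 "${HWCLOCK:-hwclock}" -r -f "$dev" 2>/dev/null); then
        echo "[$(date +%Y-%m-%d_%H:%M:%S)] SUCCESS $dev $out"
    else
        echo "[$(date +%Y-%m-%d_%H:%M:%S)] ERROR $dev"
    fi
}

# Cyclic read of both RTCs, 2 seconds between invocations:
# while true; do
#     read_rtc /dev/rtc0
#     read_rtc /dev/rtc1
#     sleep 2
# done
```

On the target, uncommenting the loop (or calling `read_rtc /dev/rtc0` and `read_rtc /dev/rtc1` every 2 seconds) reproduces logging of the kind shown in the excerpt, with the 10-second `timeout` catching the hanging reads.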
Hello,
Thank you for sharing. I'm investigating this issue with the internal team and testing on my side as well.
While doing that, could you share the full part number of the i.MX93 that you are using?
Best regards/Saludos,
Aldo.
Hello,
So that I can continue with the investigation, please share the full part number.
Best regards/Saludos,
Aldo.
Hello,
The full part number is PIMX9352CVVxMAB.
Best regards,
Danny
Hello,
Could you share the logs seen in the failed test?
How easy is it to reproduce this error?
Best regards/Saludos,
Aldo.