The problem I've found so far is with 64-bit arithmetic. I defined 64-bit local variables inside a function and did all the arithmetic on them, including division. When I changed the variables to 32 bits, the program worked. This ties into another problem I found earlier: if I comment out the 64-bit arithmetic inside the function, the accuracy of the algorithm drops, but the program runs. What I still don't understand is why floating point has any effect on whether this 64-bit operation works.

I targeted this 64-bit operation because of what I mentioned with the previous diagram: the program doesn't run; it stops in DefaultISR with WDOG_EWM_IRQHandler displayed, and on the next line of the call stack is __aeabi_uldivmod, a routine (presumably added by the compiler) that performs division and modulo on unsigned 64-bit integers. Since there is only one place in the program where 64-bit arithmetic is used, I focused my search there. In practice, changing this code does affect the program's behavior, but I couldn't find a good explanation for why floating point would influence 64-bit operations.