I'm sorry, but this time I disagree.
It's true that the timestamps taken "as they are" are not numerical multiples of 80 ns because, as you mentioned, the remainder is not 0.
But for the period to be a multiple of 80 ns, it is sufficient to divide the timestamps by 80 ns and obtain the same remainder for all of them.
A remainder different from 0 simply means that a small constant offset is added to all timestamps.
This offset is just the initial starting point, dictated by the first packet sent/received.
If you subtract this same offset from all timestamps, then all remainders become 0.
So, as I stated earlier, ALL sent/received packets have a constant period that is a multiple of 80 ns.
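To make the arithmetic concrete, here is a small sketch of the check I described, using made-up timestamp values in nanoseconds (the real values would come from the captured packets): all timestamps share the same remainder modulo 80, and subtracting that common offset makes every remainder 0.

```python
PERIOD_NS = 80

# Hypothetical timestamps in ns, for illustration only.
timestamps = [1013, 1093, 1253, 1573, 2133]

# All timestamps leave the same remainder when divided by 80 ns...
remainders = {t % PERIOD_NS for t in timestamps}
assert len(remainders) == 1

# ...so that remainder is a constant offset; removing it makes
# every timestamp an exact multiple of 80 ns.
offset = timestamps[0] % PERIOD_NS
assert all((t - offset) % PERIOD_NS == 0 for t in timestamps)
print(offset)  # 53
```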
But why do we observe this behavior? Why are all timestamps latched at multiples of 80 ns?
Regards,
Marco