I wonder whether anyone has experienced the following?
I am sending TCP data from an HTTP server, and the following effect occurs every time for a certain image:
1) The transmission takes place by sending TCP frames of maximum segment size for the Ethernet link (about 1400 bytes each).
2) TCP windowing is in operation, so Ethernet frames are sent in quick succession [that is, one frame is placed into a buffer descriptor and the TDAR is written to start polling, then another frame is written and the TDAR is written again, etc.].
3) The final frame in the test image is shorter (about 700 bytes), since it carries the last chunk of the data.
4) The final activity is thus that two TCP frames are sent in quick succession (1400 bytes followed by 700 bytes), using the same technique as for the rest of the image, which otherwise works as expected.
5) The effect (presumably related to the last frame's length, or a timing issue, since the shorter final frame can be prepared slightly faster) is that the first, 1400-byte TCP frame is sent but the second, 700-byte frame is not.
This results in a TCP retransmission, after which (with a slight delay at the end) the image is successfully transmitted and displayed.
On closer observation, however, the second frame is not lost: it simply doesn't get sent until the retransmission takes place. When the retransmission is made, the 'lost' frame is sent first, followed by the repeated frame that has just been generated by software. This means that the second frame was waiting in the output buffer but didn't get sent; it was only 'released' by the subsequent activity.
To prove this, I set a breakpoint in the TCP code that 'wanted' to retransmit the 'lost' frame and simply wrote something to the TDAR register using the debugger; the "waiting" frame was indeed sent immediately!
6) I am using the EMAC in its compatibility mode (compatible with the one in the ColdFires), where the same code doesn't have this problem.
7) If I disable the IP accelerator at the transmitter (checksum offloading disabled), the problem is no longer seen. This suggests it is probably a timing issue, since the software then takes more time to prepare the following frame and possibly avoids a race condition. Alternatively, it could be due to the store-and-forward operation, which needs to be enabled to use the accelerator; I have the FIFO watermark set to 64 bytes.
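For reference, the transmit technique described in points 1) and 2) is roughly the following. This is only a sketch: the register, flag, and structure names are my assumptions, modelled loosely on the legacy FEC-style buffer descriptor format, and the TDAR is replaced by a plain variable so the sketch runs hosted; on real hardware it would be a volatile memory-mapped register.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical descriptor flags, modelled on the legacy FEC-style layout. */
#define BD_TX_READY  0x8000u            /* descriptor is owned by the MAC   */
#define BD_TX_LAST   0x0800u            /* last buffer of the frame         */
#define BD_TX_CRC    0x0400u            /* MAC appends the CRC              */

typedef struct {
    uint16_t flags;                     /* ownership and control flags      */
    uint16_t length;                    /* frame length in bytes            */
    uint8_t *buffer;                    /* pointer to the frame data        */
} tx_bd_t;

#define NUM_TX_BDS 4
static tx_bd_t tx_ring[NUM_TX_BDS];     /* ring initialisation omitted      */
static int tx_head;

/* Stand-in for the memory-mapped TDAR register so this compiles hosted. */
static volatile uint32_t fake_tdar;
#define TDAR fake_tdar

/* Queue one frame: fill the next descriptor, hand ownership to the MAC,
 * then write TDAR to (re)start descriptor polling - one TDAR write per
 * queued frame, as described in point 2). */
static int enet_send(uint8_t *frame, uint16_t len)
{
    tx_bd_t *bd = &tx_ring[tx_head];

    if (bd->flags & BD_TX_READY)
        return -1;                      /* ring full, MAC still owns it     */

    bd->buffer = frame;
    bd->length = len;
    bd->flags |= BD_TX_LAST | BD_TX_CRC;
    bd->flags |= BD_TX_READY;           /* hand the descriptor to the MAC   */

    TDAR = 1;                           /* tell the MAC to poll the ring    */

    tx_head = (tx_head + 1) % NUM_TX_BDS;
    return 0;
}
```

Queuing the 1400-byte and then the 700-byte frame back-to-back therefore produces two READY descriptors and two TDAR writes in quick succession, which is exactly the pattern under which the second frame gets stuck.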
My first conclusion is that it is possible for the Ethernet transmitter to stop polling its buffer descriptors even though the TDAR register has been written with a frame waiting to be sent. This only happens when a shorter frame follows a full-size frame; full-size frames following each other don't have the problem. Another test image that is slightly larger terminates with two TCP frames of 1400 and 900 bytes, and it is served perfectly every time!
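Since writing TDAR from the debugger released the stuck frame, a possible software workaround might be a periodic "kick" that detects the stopped-polling state and repeats the TDAR write. Again only a sketch under the same assumptions (hypothetical names, TDAR modelled as a plain variable that the MAC would clear once polling completes):

```c
#include <assert.h>
#include <stdint.h>

#define BD_TX_READY 0x8000u             /* descriptor still owned by the MAC */

typedef struct { uint16_t flags; uint16_t length; } tx_bd_t;

#define NUM_TX_BDS 4
static tx_bd_t tx_ring[NUM_TX_BDS];

/* Stand-in for the memory-mapped TDAR; assumed to read back non-zero while
 * the MAC is actively polling and zero once it has stopped. */
static volatile uint32_t fake_tdar;
#define TDAR fake_tdar

/* If the MAC has stopped polling (TDAR reads zero) while a descriptor is
 * still marked READY, re-write TDAR to restart descriptor polling - the
 * same action that released the stuck frame via the debugger. Could be
 * called from a periodic tick or a TX-timeout handler. */
static void enet_tx_kick(void)
{
    if (TDAR != 0)
        return;                         /* MAC is still polling, nothing to do */

    for (int i = 0; i < NUM_TX_BDS; i++) {
        if (tx_ring[i].flags & BD_TX_READY) {
            TDAR = 1;                   /* restart descriptor polling */
            break;
        }
    }
}
```

Whether TDAR can safely be re-written at any time, and whether it reads back as zero in the stuck state, would need to be confirmed against the reference manual; this just captures the debugger experiment in code.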
Has anyone experienced the same, or similar? Was a workaround found?
Note that I am using the 0M33Z mask (first version) but didn't see any erratum about such a thing.