
Behavior of RTCS "OPT_SEND_TIMEOUT"

Question asked by FRED WEDEMEIER on May 13, 2016
Latest reply on May 23, 2016 by soledad

I'm setting OPT_SEND_TIMEOUT on a blocking socket. If the receiving application or the network goes down while data is being sent, I'd expect send() to either return RTCS_ERROR or return a count of the bytes accepted so far. What I actually see in both of those cases is that send() never returns. If I close the receiving application cleanly, so that it shuts down its socket before exiting, RTCS reports an error as expected.
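Roughly what I'm doing, simplified (sock, buffer, and buffer_len are placeholders, and the option level and return checks below are from memory rather than verbatim from my project):

    uint32_t timeout = 5000;   /* send timeout, in milliseconds */

    /* Set the option before sending. OPT_SEND_TIMEOUT is a TCP-level
       option, so I pass SOL_TCP here -- check your RTCS headers. */
    if (setsockopt(sock, SOL_TCP, OPT_SEND_TIMEOUT,
                   &timeout, sizeof(timeout)) != RTCS_OK) {
        /* handle the option error */
    }

    /* Blocking send. With the peer or network gone, I expect this to
       come back within ~5 s with RTCS_ERROR or a short byte count...
       but instead the call never returns. */
    int32_t count = send(sock, buffer, buffer_len, 0);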

Is this a bug, awkward/incorrect English in the manual description, or non-standard behavior by RTCS? The OPT_SEND_TIMEOUT description in the RTCS user manual states:

• Zero (RTCS waits indefinitely for outgoing data during a call to send()).

• Non-zero (RTCS waits for this number of milliseconds for incoming data during a call to send()).

"Waits indefinitely for outgoing data" could mean "wait until outgoing data is buffered," but waiting for "incoming data" does not seem to make sense.

The workaround is to set the MSG_DONTWAIT flag and make the sending application deal with partial send() results (see the sketch below), but OPT_SEND_TIMEOUT should be doing that work on behalf of the application code...
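For reference, the workaround loop looks roughly like this (variable names and the 10 ms retry delay are illustrative; _time_delay() is the MQX sleep call):

    uint32_t remaining = buffer_len;
    uint8_t *p = buffer;

    while (remaining > 0) {
        int32_t n = send(sock, p, remaining, MSG_DONTWAIT);
        if (n == RTCS_ERROR) {
            break;             /* a real error from the stack */
        }
        p += (uint32_t)n;      /* advance past whatever was accepted */
        remaining -= (uint32_t)n;
        if (remaining > 0) {
            /* Nothing (or only part) was accepted; back off, and apply
               the application's own deadline policy here. */
            _time_delay(10);
        }
    }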

Thanks!
