RTCS - TCP disconnect reporting


1,133 Views
v_k
Contributor I

I'm using MQX 3.7 and RTCS to implement TCP client and server applications. I use the RTCS_select function to determine whether data is available to be read on a socket. This works fine except when the remote end has abruptly closed the connection: in that case the select call never reports the socket as readable (it should, and a subsequent recv should then return an error due to the disconnect). As a result I never learn that the connection is no longer alive (I waited more than 5 minutes after the disconnect). However, I've found that if I do a send after the connection has been lost, I do receive an error, and therefore find out that the connection no longer exists. I suspect this is a problem in RTCS: it does not report TCP disconnects when no activity is taking place on the socket. Any ideas on how to work around this issue?


I tried using the TCP keep-alive option, but for my case, it is too late. The minimum I can specify is 1 minute.

0 Kudos
3 Replies

507 Views
Ed_EmbeddedAcce
Contributor II

The default connection timeout is 8 minutes, as recommended in the RFCs. You can reduce it: the value is specified in milliseconds in rtcs.h and can be overridden in your user_config.h file. Be careful, though: the IETF specified 8 minutes for good reasons, even if they may not apply to the network your product is used in. It is better to use some type of keep-alive.
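Concretely, the override described above would live in user_config.h. A minimal sketch only: the macro name below is an assumption that must be verified against the rtcs.h shipped with your RTCS version, since the exact symbol differs between releases:

```c
/* user_config.h: shorten the TCP connection (retransmission give-up)
 * timeout. NOTE: the macro name is an assumption; look up the real
 * symbol and its 8-minute (480000 ms) default in your rtcs.h. */
#define TCP_CONNECTION_TIMEOUT   (60 * 1000)   /* ms: 1 minute instead of 8 */
```

RTCS also exposes a per-socket OPT_CONNECT_TIMEOUT option through setsockopt(), which may serve as a runtime alternative to the compile-time override; again, check the documentation for your version.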

0 Kudos

507 Views
v_k
Contributor I

I tried the TCP_KEEPALIVE option, but with RTCS it causes spurious disconnects: even when everything is normal, the connection is sometimes reported as lost. Disabling the keep-alive removes the problem. I haven't yet looked into RTCS for the possible cause. Has anybody had this issue before?

0 Kudos

507 Views
PLacerenza
Contributor II

I have a similar problem.

My project talks on multiple sockets, and most of them expect both sending and receiving, so for those sockets the receive task sees the error/disconnect.

One socket, however, only ever sends, so I have no practical need for a receive task for it. What I find is this:

  1. First connect works exactly as expected.
  2. Remote endpoint disconnects (intentionally).  Because there is no receive, there's no task that detects that the disconnect occurred, and I keep my send task alive.
  3. The remote endpoint reconnects.  My code never left the 1st connection, and thus until I attempt a send, acts as though a disconnect/reconnect never occurred.  The remote endpoint can disconnect/reconnect all day, and until I send, nothing happens recognizing a disconnect or error in this socket task's coding.
  4. I attempt sending to the endpoint. I assume that because the connection IS a different connection, send(...) returns RTCS_ERROR, which in turn closes my end of the connection, something the remote endpoint doesn't seem to recognize.
  5. My code waits for a proper connection to occur again.  If the remote endpoint disconnects and reconnects once more, we go back to step 1, and the cycle continues.

If I had a receive task, I'd likely have no issues, as my other sockets are working fairly well. The problem may be a superficial one: this socket should never expect to receive on my end, and if I did receive, all I would do is throw the data out and waste resources.

Is there a better way to detect a disconnect/error than to effectively have a task to do recv(...) for this socket?

If not, do I have to put the data from recv somewhere, thus allocating a "worthless" message struct, or could I simply do something like recv([socket], NULL, 0, 0)?

0 Kudos