I'm using MQX 3.7 and RTCS to implement TCP client and server applications. I use the RTCS_select function to determine whether data is available to be read on a socket. This works fine except when the remote end has abruptly closed the connection. In that case, the select call never reports that any data is available (it should, and a subsequent recv should then return an error due to the disconnect). As a result, I never find out that the connection is no longer alive (I waited for more than 5 minutes after the disconnect). However, I've found that if I do a send after the connection has been lost, I do get an error, and so learn that the connection no longer exists. I suspect this is a problem in RTCS: it does not report TCP disconnects when no activity is taking place on the socket. Any ideas on how to work around this issue?
I tried using the TCP keep-alive option, but in my case it kicks in too late: the minimum interval I can specify is 1 minute.
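Since the send path is the one that reliably reports the dead connection, one workaround is an application-level heartbeat: periodically send a byte the peer has agreed to ignore and treat a send failure as a disconnect. A minimal sketch, written against POSIX sockets rather than RTCS (the `probe_connection` helper and the heartbeat byte are my own invention; the RTCS send call should behave analogously, per the behavior described above):

```c
#include <errno.h>
#include <sys/socket.h>

/* Probe a TCP connection by sending a 1-byte application-level heartbeat
 * that the peer is expected to discard.
 * Returns 0 if the send was accepted, -1 if the connection is dead.
 * MSG_NOSIGNAL (Linux) suppresses SIGPIPE; elsewhere, ignore SIGPIPE. */
static int probe_connection(int sock)
{
    const char hb = 0;   /* heartbeat byte, part of your own protocol */
    if (send(sock, &hb, 1, MSG_NOSIGNAL) < 0)
        return -1;       /* errno is EPIPE/ECONNRESET when the peer is gone */
    return 0;
}
```

Note that with real TCP the first send after the peer dies may still succeed (the data just sits in the send buffer), so a heartbeat may need one full send/fail cycle before the error surfaces.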
The default connection timeout is 8 minutes, as recommended in the RFCs. You can reduce this time; it is specified in milliseconds in rtcs.h and can be overridden in your user_config.h file. Be careful, though: the IETF specified 8 minutes for a good reason, even if it may not apply to the network your product is used in. It is better to use some type of keep-alive.
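For reference, here is what aggressive keep-alive tuning looks like on a POSIX/Linux stack; I'm using the Linux option names (TCP_KEEPIDLE etc.) because I don't have the RTCS macro names to hand, so check your rtcs.h for the corresponding per-socket options and units:

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Enable TCP keep-alive with short timings: first probe after 10 s of
 * idle, then every 5 s, declaring the peer dead after 3 missed probes.
 * Option names are Linux-specific; RTCS exposes its own equivalents. */
static int enable_keepalive(int sock)
{
    int on = 1, idle = 10, intvl = 5, cnt = 3;
    if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
        return -1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle) < 0)
        return -1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof intvl) < 0)
        return -1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof cnt) < 0)
        return -1;
    return 0;
}
```

With timings like these a dead peer is detected in roughly idle + cnt * intvl seconds, well under the 1-minute floor mentioned above, if your stack lets you set them this low.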
I tried the TCP_KEEPALIVE option, but with RTCS it causes spurious disconnects: even when everything is normal, the connection is sometimes reported as lost. Disabling the keep-alive removes the problem. I haven't yet looked into RTCS for the possible cause. Has anybody had this issue before?
I have a problem where I am looking for something very similar.
My project talks over multiple sockets, and most of them both send and receive, so I can detect an error/disconnect in the receive task for those sockets.
I have one socket that only sends, so I have no practical need for a receive task for it. What I find is this:
If I had a receive task, I'd likely have no issues, since my other sockets work fairly well. The problem may be a cosmetic one, but this socket should never expect to receive anything on my end, and if I did receive something, all I would do is throw it away and waste resources.
Is there a better way to detect a disconnect/error than to effectively have a task to do recv(...) for this socket?
If not, do I have to put the data from recv somewhere, thus allocating a "worthless" message struct, or could I simply do something like recv([socket], NULL, 0, 0)?
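On the zero-length recv idea: on POSIX stacks a recv with a zero-length buffer typically just returns 0 whether or not the peer has closed, so it can't distinguish the two cases. A lighter alternative to a dedicated receive task is to peek one byte non-blockingly from your send loop; the byte (if any) stays queued, so nothing is consumed or buffered by you. A sketch using Linux-style flags (`connection_alive` is my own helper; whether RTCS's recv supports MSG_PEEK-style flags would need checking against your rtcs.h):

```c
#include <errno.h>
#include <sys/socket.h>

/* Check socket liveness without a dedicated receive task.
 * Returns  1 = connection alive (data pending, or simply nothing yet),
 *          0 = peer performed an orderly close,
 *         -1 = hard error (e.g. ECONNRESET). */
static int connection_alive(int sock)
{
    char b;
    ssize_t n = recv(sock, &b, 1, MSG_PEEK | MSG_DONTWAIT);
    if (n > 0)
        return 1;   /* unread data is waiting; MSG_PEEK leaves it queued */
    if (n == 0)
        return 0;   /* zero-byte read on a real buffer => peer closed */
    return (errno == EAGAIN || errno == EWOULDBLOCK) ? 1 : -1;
}
```

Calling this periodically from the existing send task avoids both the extra task and the "worthless" receive buffer, at the cost of one syscall per check.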