Hi All,
I have a question about setting the value of OPT_RETRANSMISSION_TIMEOUT. It does not seem to take effect in actual testing.
Here is a snippet of what I am trying to do:
uint32_t opt_value;
uint32_t opt_length = sizeof(opt_value);
uint32_t error;

/* Set the initial retransmission timeout to 500 ms */
opt_value = 500;
error = setsockopt(gSock, SOL_TCP, OPT_RETRANSMISSION_TIMEOUT, &opt_value, sizeof(opt_value));
if (error != RTCS_OK)
    os_task_block();

/* Read the option back to confirm it was stored */
opt_value = 0;
error = getsockopt(gSock, SOL_TCP, OPT_RETRANSMISSION_TIMEOUT, &opt_value, &opt_length);
if (error != RTCS_OK)
    os_task_block();
When I read the value back with getsockopt(), it did show 500 ms. However, during actual testing the packets are still not retransmitted at the desired interval.
In the test, retransmission happens whenever the client does not ACK the data; I am withholding the ACK on purpose to see how long the retransmission interval is. The RTCS manual states:
After a connection is established, RTCS determines the retransmission timeout, starting from this initial value.
I wonder if the RTCS stack automatically re-adjusts this value before sending?
Thanks.
This socket option is only the initial value for the RTO, which RTCS then adjusts according to the measured RTT.
There is a global stack variable, _TCP_rto_min, that defines the minimum RTO. By default it is 15 ms, and it can be changed by the application.
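For example, something like this (a minimal sketch only; the uint32_t type and the helper name app_set_min_rto are assumptions, and in a real project the declaration comes from the RTCS headers):

/* Sketch: raise the stack-wide minimum RTO so RTCS cannot shrink the
   retransmission timeout below 500 ms. Declared extern here only for
   illustration; the real declaration is in the RTCS headers and the
   uint32_t type is an assumption. */
extern uint32_t _TCP_rto_min;

void app_set_min_rto(void)
{
    _TCP_rto_min = 500;    /* minimum RTO in milliseconds (default is 15) */
}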
By the way, I forgot to mention: I set the option before binding, as stated in the manual.
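For completeness, here is a rough sketch of that ordering, assuming the standard RTCS BSD-style calls (socket, setsockopt, bind); the address and port values are only illustrative and error checking is omitted:

/* Create the socket first */
gSock = socket(PF_INET, SOCK_STREAM, 0);

/* Apply the option before bind(), as the manual requires */
uint32_t opt_value = 500;                 /* initial RTO in ms */
setsockopt(gSock, SOL_TCP, OPT_RETRANSMISSION_TIMEOUT,
           &opt_value, sizeof(opt_value));

/* Only then bind the socket */
sockaddr_in addr = {0};
addr.sin_family      = AF_INET;
addr.sin_port        = 5000;              /* illustrative port */
addr.sin_addr.s_addr = INADDR_ANY;
bind(gSock, (sockaddr *)&addr, sizeof(addr));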