Hi!
I'm not sure if this is really a problem, but it cost me a whole afternoon, so I'm sharing it in case it saves somebody else's time.
The WS_send function (and likewise WS_close) has the following code right before returning:
setsockopt(ws_context->sock, SOL_SOCKET, SO_EXCEPTION, &n_exept, sizeof(n_exept));
/* Block calling task. It will be unblocked as soon as message is processed. */
if ((_task_id) message.data != ws_context->tid)
{
    _task_block();
}
Both of them take advantage of the clever sync feature that RTCS introduced in its latest version, I believe - signalling the server through the socket's "exception" condition.
However, I found that there may be an issue with this approach in the current implementation: what if the HTTP server task has a higher priority than the task calling WS_send? Here is what happens in my test program:
- My HTTP server priority is 7 (the default).
- A user task with priority 10 calls WS_send.
- Right after the setsockopt call (inside WS_send), the function ws_process_api_calls (in httpsrv_ws.c) takes over the processor, because the HTTP server has the higher priority!
- It executes normally and then tries to unblock the caller - but the caller hasn't blocked yet! This puts the task into an invalid state (0x011)!
- After that, WS_send resumes execution and calls _task_block() - which now blocks the task forever, because nothing will ever unblock it again!
I could work around this by giving the HTTP server a larger priority value (i.e. a lower priority) than the calling tasks, but I don't think that is the most 'elegant' solution.
Is there anything I'm missing, or is this a bug?
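A blocking primitive that tolerates being signalled before the wait (for example an MQX lightweight semaphore) would avoid the race entirely, because a post that arrives "too early" is simply remembered. Just to illustrate what I mean, here is a minimal, untested sketch - plain MQX tasks standing in for the HTTP server and the WS_send caller, MQX 4.x style names assumed, and none of this is the actual RTCS code:

/* Two demo tasks: "server" (priority 7) plays the HTTP server,
 * "caller" (priority 10) plays the task calling WS_send.
 * Names, priorities and stack sizes are made up for the demo. */
#include <mqx.h>
#include <bsp.h>

#define SERVER_TASK 5
#define CALLER_TASK 6

static LWSEM_STRUCT api_sem;

static void server_task(uint32_t caller_tid)
{
    (void) caller_tid;
    /* Runs immediately after being created, because of its higher priority:
     * the "unblock" happens before the caller had any chance to wait. */
    _lwsem_post(&api_sem);                               /* safe: the post is remembered */
    /* _task_ready(_task_get_td((_task_id) caller_tid));    <- the racy equivalent       */
}

static void caller_task(uint32_t initial_data)
{
    (void) initial_data;
    _lwsem_create(&api_sem, 0);

    /* Creating the higher-priority task preempts us right here, just like
     * setsockopt(..., SO_EXCEPTION, ...) wakes up the HTTP server task. */
    _task_create(0, SERVER_TASK, (uint32_t) _task_get_id());

    _lwsem_wait(&api_sem);                               /* returns at once, no deadlock */
    /* _task_block();                                       <- would never return here   */

    printf("caller was not left blocked\n");
    _task_block();                                       /* just park the demo task      */
}

const TASK_TEMPLATE_STRUCT MQX_template_list[] =
{
    /* index,      entry,       stack, prio, name,     attributes,          param, slice */
    { CALLER_TASK, caller_task, 2000,  10,   "caller", MQX_AUTO_START_TASK, 0,     0 },
    { SERVER_TASK, server_task, 2000,  7,    "server", 0,                   0,     0 },
    { 0 }
};

Something along these lines inside WS_send/ws_process_api_calls (a per-context semaphore or event instead of the _task_block()/_task_ready() pair) would make the handshake independent of the relative task priorities.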
Regards