I'm intending to use WebSockets to pass rather long JSON strings to JavaScript in a browser app. I have things cobbled together and functioning, but it's not a production-ready solution:
- The browser app opens a WebSocket and sends a message indicating the type of response it is requesting.
- The server message-receive callback builds the response JSON string and attempts to return it via WS_send().
- If the code builds and returns the response from a single large buffer, things work as expected (a trimmed-down sketch follows this list).
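For concreteness, here's roughly what the working single-buffer version looks like. I've trimmed it down, and the context field names (data.data_ptr, data.length, fin_flag, WS_DATA_TEXT) follow the RTCS WebSocket echo example as I remember it, so treat them as approximate; build_full_response() is just a stand-in for my JSON serializer.

```c
#include <rtcs.h>
#include <httpsrv.h>   /* WebSocket plugin types; they may live in httpsrv_ws.h in your RTCS tree */

/* The variant that works: one large static buffer holding the whole response.
 * This is exactly the RAM cost I'm trying to avoid. */
static char response_buf[8192];

/* Hypothetical stand-in for the application code that serializes the requested
 * JSON into buf and returns its length. */
extern uint32_t build_full_response(WS_USER_CONTEXT_STRUCT *request, char *buf, uint32_t buf_size);

/* Message-receive callback registered in the WS_PLUGIN_STRUCT. */
uint32_t app_ws_message(void *param, WS_USER_CONTEXT_STRUCT context)
{
    uint32_t length = build_full_response(&context, response_buf, sizeof(response_buf));

    /* Point the context's data descriptor at the buffer and send one complete frame. */
    context.data.type     = WS_DATA_TEXT;
    context.data.data_ptr = (uint8_t *) response_buf;
    context.data.length   = length;
    context.fin_flag      = TRUE;    /* single, complete message */

    WS_send(&context);
    return 0;
}
```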
Given the limited amount of RAM in the K parts, the production code can't afford to statically allocate a large message buffer or put one on a stack frame, and it certainly should not malloc/free large buffers on a system that needs to remain up and running for unlimited durations. The attempted solution was to allocate a small buffer, build the response in chunks, and use the FIN flag to indicate end of message as specified by the RTCS user manual (a sketch of this attempt follows the list below). This completely bollixes the works:
- WS_send() operations are not sent out the wire until the message-receive callback returns.
- It appears all the chunks are queued and are sent out the wire back to back when the callback returns.
- Re-using the small buffer for each WS_send() causes every chunk that goes out on the wire to carry the content of the last chunk written.
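For reference, the failing chunked variant looks roughly like this (same assumed field names as the sketch above; build_next_chunk() is a hypothetical stand-in for an incremental serializer):

```c
/* The RAM-friendly variant that misbehaves: a small reusable buffer plus
 * fragmented frames, with FIN set only on the last one. */
static char chunk_buf[512];

/* Hypothetical incremental serializer: writes the next piece of the response
 * into buf, returns its length, and sets *done once the message is complete. */
extern uint32_t build_next_chunk(char *buf, uint32_t buf_size, bool *done);

uint32_t app_ws_message_chunked(void *param, WS_USER_CONTEXT_STRUCT context)
{
    bool done = FALSE;

    context.data.type     = WS_DATA_TEXT;
    context.data.data_ptr = (uint8_t *) chunk_buf;

    while (!done)
    {
        context.data.length = build_next_chunk(chunk_buf, sizeof(chunk_buf), &done);
        context.fin_flag    = done;   /* FIN only on the final fragment, per the manual */

        /* Observed behavior: nothing goes out on the wire here.  The fragments
         * appear to be queued and transmitted back-to-back after this callback
         * returns, and because chunk_buf has been overwritten on every pass,
         * each fragment carries the contents of the last chunk. */
        WS_send(&context);
    }
    return 0;
}
```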
So my first suggestion is that the behavior of the WebSocket code needs much better documentation, since there's no hint of this in the RTCS 4.2 manual.
Questions:
1. Is there some means to correctly issue multiple WS_send() operations in the context of a message-receive callback?
2. Assuming "No," can multiple WS_send() operations with a small buffer be made to work outside the context of a message-receive callback? The potential issue I see is that WS_send() is non-blocking inside the receive callback; if it is also non-blocking outside the callback, re-writing the buffer will most likely corrupt the data queued by earlier calls.
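To make question 2 concrete, this is roughly the pattern I have in mind: the receive callback only records the request and signals a dedicated task, and that task performs the chunked WS_send() calls outside the callback context. The MQX lightweight-event calls are real, but whether a copied WS_USER_CONTEXT_STRUCT can legally be handed to WS_send() from another task, and whether WS_send() copies or blocks there, is exactly what I'm asking.

```c
#include <mqx.h>
#include <lwevent.h>

#define WS_REQUEST_PENDING   (1u << 0)

/* Shared between the receive callback and the sender task (hypothetical names). */
static LWEVENT_STRUCT         ws_event;
static WS_USER_CONTEXT_STRUCT pending_ctx;   /* copy of the context captured in the callback */
static char                   chunk_buf[512];

extern uint32_t build_next_chunk(char *buf, uint32_t buf_size, bool *done);

/* Message-receive callback: capture the request and wake the sender task. */
uint32_t app_ws_message_deferred(void *param, WS_USER_CONTEXT_STRUCT context)
{
    pending_ctx = context;   /* is this copy still valid once the callback has returned? */
    _lwevent_set(&ws_event, WS_REQUEST_PENDING);
    return 0;
}

/* Dedicated sender task: issues the chunked WS_send() calls outside the callback.
 * Must be started (and ws_event created) before the HTTP server begins serving. */
void ws_sender_task(uint32_t initial_data)
{
    bool done;

    _lwevent_create(&ws_event, 0);

    while (TRUE)
    {
        _lwevent_wait_ticks(&ws_event, WS_REQUEST_PENDING, TRUE, 0);   /* 0 = wait forever */
        _lwevent_clear(&ws_event, WS_REQUEST_PENDING);

        done = FALSE;
        pending_ctx.data.type     = WS_DATA_TEXT;
        pending_ctx.data.data_ptr = (uint8_t *) chunk_buf;

        while (!done)
        {
            pending_ctx.data.length = build_next_chunk(chunk_buf, sizeof(chunk_buf), &done);
            pending_ctx.fin_flag    = done;

            /* The worry: if WS_send() is still non-blocking here and doesn't copy
             * the payload, overwriting chunk_buf on the next pass will corrupt
             * frames that are still queued. */
            WS_send(&pending_ctx);
        }
    }
}
```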