MQX httpsrv - large cgi stream - retransmissions


m_bach
Contributor III

Hi there,

 

I'm quite new to MQX and just starting to port some projects to it. I need to create an application that generates 'big' files as dynamically generated CGI data, so I started to figure out what the MQX HTTP server is capable of.

 

First question: is it allowed to have a resource like this, which 'just' makes multiple subsequent calls to HTTPSRV_cgi_write()?

My first impression: it works. (spot on).

My second impression: it's slow. After a few packets, I see many TCP re-transmissions on a very regular basis.

 

Attached you will find a Wireshark trace of the complete TCP stream.

It's not always exactly the same packet number, but the re-transmissions kick in at approximately the same position in the stream, give or take one or two packets.

 

Here is the code I added to cgi.c, starting from the MQX demo HTTP server project:

 

/* Forward declaration so the link table can reference the callback defined below. */
static _mqx_int cgi_big_data(HTTPSRV_CGI_REQ_STRUCT* param);

const HTTPSRV_CGI_LINK_STRUCT cgi_lnk_tbl[] = {
    { "bigdata",        cgi_big_data,  1500},
    { 0, 0 }    // DO NOT REMOVE - last item - end of table
};

static _mqx_int cgi_big_data(HTTPSRV_CGI_REQ_STRUCT* param)
{
    HTTPSRV_CGI_RES_STRUCT response;
    int  i;
    char str[32];

    if (param->request_method != HTTPSRV_REQ_GET)
    {
        return (0);
    }

    response.ses_handle = param->ses_handle;
    response.content_type = HTTPSRV_CONTENT_TYPE_PLAIN;
    response.status_code = 200;
    response.content_length = 0; // @TODO this prevents keep-alive

    /* Stream 60000 short lines, one HTTPSRV_cgi_write() call per line. */
    response.data = str;
    for (i = 0; i < 60000; i++)
    {
        response.data_length = snprintf(str, sizeof(str), "%d\n", i); // "%d" to match the int counter
        HTTPSRV_cgi_write(&response);
    }
    return (1); // ?
}

 

 

Can anyone tell me whether it's OK to generate large CGI streams that way?

Does anyone know what might be the reason for the many re-transmissions?

 

Here is my example wget call to get the file:

 

  $ wget "http://192.168.3.43/bigdata.cgi"  -O /tmp/bla

  --2014-08-25 13:41:36--  http://192.168.3.43/bigdata.cgi

  Connecting to 192.168.3.43:80... connected.

  HTTP request sent, awaiting response... 200 OK

  Length: unspecified [text/plain]

  Saving to: ‘/tmp/bla’

      [                                       <=>        ] 348.890     26,9KB/s   in 13s   

  2014-08-25 13:41:49 (26,3 KB/s) - ‘/tmp/bla’ saved [348890]

 

 

Best regards, and many thanks in advance,

Martin

Original Attachment has been moved to: wiresharp_cgi_bigdata.pcapng.zip

6 Replies

m_bach
Contributor III

OK, I have some news, and even more questions...

I'm using the 52259 tower evaluation board and the httpsrv demo app to check whether I can produce a large stream of snprintf-generated data in a CGI callback task.

As captured by Wireshark above, it's awfully slow. It turned out I can speed things up a great deal when I skip

  // session->time = RTCS_time_get();

in HTTPSRV_cgi_write, and in addition to that skip the

  // response.data_length = snprintf(str, 32, "%ld\n", i);

in my for-loop in cgi_big_data()

That boosts the performance a lot: the complete 300 kB stream gets transmitted in less than a second, at 417 kB/s, without any retransmissions.

Once I include the snprintf again, the performance drops to 110 kB/s; then again, that is 60k snprintf calls. Well, OK...

With the snprintf back in, the re-transmissions kick in again, and now I see the relationship: the retransmissions begin after about 1 s of the overall TCP streaming...

So my guess is that something is stuck. While the ongoing cgi_big_data() task is producing data, the TCP retransmission timer (1 s, I think) somehow elapses and is not reset by the client's ACKs, so TCP thinks it has to retransmit all the time...

So again, I'm not sure whether I'm allowed to have a CGI callback that sends 'a lot' of data over 'a long' time span, or whether this is just a bug in the TCP <-> HTTP server synchronization...

Any comments are very welcome.

cheers, Martin


karelm_
Contributor IV

Hi Martin,

I would strongly recommend against using the snprintf function in such a way. It is a very complicated function, so it takes a long time to complete. A better solution is to create, say, a 2 KiB buffer, store the data in it in binary form (even better: compressed), and send it to the client in 2 KiB chunks. If you need the data in human-readable form, do the conversion to string on the client side.

Regarding the commented-out RTCS_time_get(): this has a side effect. The session will always time out after 20 seconds and the server will close the connection. Also, not setting content_length to a valid value will probably cause the client to hang, as it will not know where the data ends. In a future version of HTTPSRV we plan to implement chunked transfer encoding to eliminate this limitation (content-length is then no longer needed).
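
To illustrate the idea, here is a rough sketch of writing an already prepared buffer in fixed-size chunks. cgi_send_buffer, big_buffer and big_length are made-up names and not part of the demo project; the response fields follow the code you posted above.

#define CGI_CHUNK_SIZE 2048

/* Sketch only: send a fully prepared buffer to the client in 2 KiB chunks. */
static _mqx_int cgi_send_buffer(HTTPSRV_CGI_REQ_STRUCT* param, char* big_buffer, uint32_t big_length)
{
    HTTPSRV_CGI_RES_STRUCT response;
    uint32_t offset = 0;

    response.ses_handle     = param->ses_handle;
    response.content_type   = HTTPSRV_CONTENT_TYPE_PLAIN;
    response.status_code    = 200;
    response.content_length = big_length;   /* total size is known up front, so it can be set here */

    while (offset < big_length)
    {
        uint32_t chunk = big_length - offset;
        if (chunk > CGI_CHUNK_SIZE)
        {
            chunk = CGI_CHUNK_SIZE;
        }
        response.data        = big_buffer + offset;
        response.data_length = chunk;
        HTTPSRV_cgi_write(&response);
        offset += chunk;
    }
    return (response.content_length);
}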

Best regards,

Karel


m_bach
Contributor III

Hi Karel,

thanks again for your answer.

So in short: everything is OK as long as the CGI callback produces the data fast enough? Is generating the data in less than 1 s required to stay free of retransmissions?

Edit:

In fact my options are somewhat limited, because I need to implement an existing CGI/JSON API with MQX. The client sends a GET request containing several CGI params, and the server answers with more or less complex JSON. I'm currently porting existing firmware to MQX, so sticking to the same HTTP API interface is mandatory, as there are mobile and PC apps out there relying on that CGI/JSON interface.

In fact, the actual maximum size of the JSON reply is somewhere between 5 and 30 kB, depending on the product's capabilities and the client's CGI GET params.

Since generating JSON-encoded replies server-side is mandatory, I'm currently interested in 'how', and to what extent, I can make use of the MQX httpsrv.
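
To make that concrete, here is roughly what I have in mind for one of those handlers. This is a sketch only: cgi_get_status and the JSON content are made up, and I don't know yet whether there is a dedicated JSON content type, so I use plain text for now.

/* Sketch: build the complete JSON reply in RAM first, so content_length can be set correctly. */
static _mqx_int cgi_get_status(HTTPSRV_CGI_REQ_STRUCT* param)
{
    HTTPSRV_CGI_RES_STRUCT response;
    static char buffer[4096];   /* would need to be sized for the real 5-30 kB replies, or written in chunks as Karel suggests */
    uint32_t    length;

    if (param->request_method != HTTPSRV_REQ_GET)
    {
        return (0);
    }

    /* One snprintf for the whole reply, instead of one per line. */
    length = snprintf(buffer, sizeof(buffer),
                      "{\"version\":1,\"uptime_ms\":%lu}",
                      (unsigned long) RTCS_time_get());

    response.ses_handle     = param->ses_handle;
    response.content_type   = HTTPSRV_CONTENT_TYPE_PLAIN;  /* JSON sent as plain text for now */
    response.status_code    = 200;
    response.content_length = length;
    response.data           = buffer;
    response.data_length    = length;
    HTTPSRV_cgi_write(&response);

    return (response.content_length);
}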

best regards, Martin


karelm_
Contributor IV

Hi Martin,

Those retransmissions should not occur as long as the server runs at a lower priority than the TCP/IP task (by default this is the case). I understand your use case, and it is completely OK to generate these responses in a CGI. It is probably best to let the CGIs run in separate tasks so they are processed in parallel.

Best regards,

Karel

m_bach
Contributor III

Hi Karel,

I did not change the HTTP server priority; it is (8) as far as I can see. Besides, the 'bigdata' CGI is already running as a separate task. So I'll go with "it's a bug" :smileywink:

best regards, and thanks a lot for your time, Martin


karelm_
Contributor IV

Hi Martin,

please try to add the following code before RTCS initialization:

    _RTCSPCB_init = 4;
    _RTCSPCB_grow = 2;
    _RTCSPCB_max  = 20;

This will increase the number of packet control blocks and thus should reduce or eliminate the retransmissions.
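
For example, the placement looks roughly like this (just a sketch; rtcs_setup() is a made-up name, assuming an init sequence that calls RTCS_create() itself):

void rtcs_setup(void)
{
    uint32_t error;

    /* More packet control blocks, set before the stack is created. */
    _RTCSPCB_init = 4;
    _RTCSPCB_grow = 2;
    _RTCSPCB_max  = 20;

    error = RTCS_create();
    if (error != RTCS_OK)
    {
        /* handle the error */
    }
}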

Another tip:

You may try to increase the HTTP session buffer size by adding the following line to your user_config.h:

#define HTTPSRVCFG_SES_BUFFER_SIZE        (3000) //default is 1360

Best regards,

Karel
