Limit the max. connection count for TCP port

Solved

Fabi
Contributor III

Hi community,

Due to RAM limitations, we need to limit the maximum number of TCP connections. There is a hard-coded constant, RTCSCFG_TCP_MAX_CONNECTIONS, that configures the total number of TCP connections. However, if simultaneous requests (from external clients) to one particular port have exhausted this number, no other TCP service (a server on another port) can serve requests anymore.

The requirement is to provide TCP service on one port and to limit the service on another TCP port so that we do not run out of RAM. I was wondering whether there is any solution for this. I use MQX 4.0.2.2 running on an MK60DN512.

I've tested the suggestion from the thread Re: how to see the max connection number to a TCP/IP port.

However, it is not clear to me whether I should shut down

a) the socket handle returned by RTCS_selectall(), or

b) the socket returned by listen().

Idea a) does not work for me, because TCP_Clone_tcb() and RTCS_mem_alloc_zero() are called before RTCS_selectall(), and the MQX memory is not released until the external client shuts the connection down itself.

Idea b) does not work for me either, because once the listening socket is shut down, further client requests also go unserved, even if all other connections have been closed beforehand (which is understandable, indeed).

TY

1 Solution
RadekS
NXP Employee

Unfortunately, this is an unpleasant characteristic of the MQX “partitions” that are used in RTCS for the PCB, socket, and message data structures.

You can influence this behavior through the following parameters (a short initialization sketch follows the list):

_RTCSPCB_init/grow/max

_RTCS_msgpool_init/grow/max

_RTCS_socket_part_init/grow/max

If you set the max parameters to 10, RTCS will never allocate more than 10 PCBs/messages/sockets.
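
For example, a minimal initialization sketch, assuming the usual pattern of assigning these globals before RTCS_create(); the function name and the concrete values are only illustrative:

#include <mqx.h>
#include <bsp.h>
#include <rtcs.h>

/* Illustrative limits: start with 4 of each resource, grow in steps
 * of 2, and never allocate more than 10 PCBs/messages/sockets. */
void init_rtcs_with_limits(void)
{
    uint32_t error;

    _RTCSPCB_init          = 4;
    _RTCSPCB_grow          = 2;
    _RTCSPCB_max           = 10;

    _RTCS_msgpool_init     = 4;
    _RTCS_msgpool_grow     = 2;
    _RTCS_msgpool_max      = 10;

    _RTCS_socket_part_init = 4;
    _RTCS_socket_part_grow = 2;
    _RTCS_socket_part_max  = 10;

    /* The limits must be assigned before RTCS is created. */
    error = RTCS_create();
    if (error != RTCS_OK)
    {
        /* handle RTCS initialization failure */
    }
}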

Reworking this part of the RTCS code is on the TODO list… However, there is still no time plan for it, so I cannot estimate any schedule.

The same is valid for the implementation of a “backlog” parameter for the listen() function, which would limit the number of clients for a specific socket.


Have a great day,
RadekS

-----------------------------------------------------------------------------------------------------------------------
Note: If this post answers your question, please click the Correct Answer button. Thank you!
-----------------------------------------------------------------------------------------------------------------------

5 Replies
Fabi
Contributor III

Hi RadekS,

Thanks for speaking your mind. This topic is mission-critical for us and could block the product launch. Please, could you estimate the effort and/or ask 2nd or 3rd level support about implementing it? Thanks in advance.

RadekS
NXP Employee

You can use the limitation via the “grow”/“max” parameters right now. The disadvantage of this solution is that they are global parameters; you cannot use different limits for specific services/ports like HTTP, FTP, etc.

Unfortunately, I do not know the detailed plan for new RTCS features, but I suppose this will be reworked for the next MQX release (probably MQX 4.2, Q1/Q2 2015).

If you need such an update earlier, or if you need a solution tailored for you, you have to ask for Commercial Support or Professional Services:

http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=MQX_SUPPORT

Details about the workload, timing, priority, etc. will be specified during negotiation with our sales/marketing team.

Fabi
Contributor III

Ok, we will wait for MQX 4.2.

Fabi
Contributor III

I forgot one more idea:

c) shut down the socket returned by accept()

Considering idea c) in more detail, I created several simultaneous client connections and compared the RAM usage in MQX. Here is my result when one additional socket is opened (by an external client) and then closed by the MQX-based server:

[Image: RTCSfreeing.png]

The green blocks are released/freed by shutdown(). However, the orange block remains in RAM, so 2592 bytes of garbage are left behind for each TCP socket after shutdown(). Any idea how to free this memory as well?
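
For reference, here is roughly the accept-and-refuse pattern I am testing for idea c). The task name, the counter, and the 5-connection limit are only illustrative; the calls assume the standard RTCS socket API, and error handling is omitted:

#include <mqx.h>
#include <bsp.h>
#include <rtcs.h>

#define MAX_CLIENTS_ON_THIS_PORT  5   /* illustrative per-port limit */

static uint32_t active_clients = 0;

void limited_tcp_server_task(uint32_t port)
{
    sockaddr_in local, remote;
    uint16_t    remote_len = sizeof(remote);
    uint32_t    listen_sock, client_sock;

    listen_sock = socket(PF_INET, SOCK_STREAM, 0);

    _mem_zero(&local, sizeof(local));
    local.sin_family      = AF_INET;
    local.sin_port        = (uint16_t)port;
    local.sin_addr.s_addr = INADDR_ANY;

    bind(listen_sock, (sockaddr *)&local, sizeof(local));
    listen(listen_sock, 0);

    while (TRUE)
    {
        client_sock = accept(listen_sock, (sockaddr *)&remote, &remote_len);
        if (client_sock == RTCS_SOCKET_ERROR)
        {
            continue;
        }

        if (active_clients >= MAX_CLIENTS_ON_THIS_PORT)
        {
            /* Refuse the connection. This releases the green blocks from
             * the measurement above, but the ~2592-byte orange block per
             * socket is still retained by RTCS after shutdown(). */
            shutdown(client_sock, FLAG_ABORT_CONNECTION);
            continue;
        }

        active_clients++;
        /* ... hand client_sock to a worker task; the worker must call
         * shutdown() and decrement active_clients when it is done ... */
    }
}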
