Modbus TCP with InterNiche’s ColdFire TCP/IP Stack


rlcoder
Contributor II

Hi,

Trying to implement Modbus TCP using InterNiche's TCP/IP stack.

The implementation is a Modbus TCP server. The problem I am having is handling the case where multiple clients connect to the server, which requires the server to listen and communicate on multiple open sockets.

 

Does anyone know of an example with this stack for handling multiple sockets for Modbus TCP, or just handling multiple open sockets in general?

 

Or, can this stack be configured to accept only one connection? Right now, when a second client attempts to connect, the stack/communications freezes. I am not sure why yet.

 

Thanks.


rlcoder
Contributor II

Hi Tom,

Sorry the code I posted wasn’t complete enough to understand the implementation. I was thinking people were familiar with this stack and implementation because it is supplied as example code by NXP. The stack and socket interface are described in NXP application note AN3470:

www.nxp.com/files/microcontrollers/doc/app_note/AN3470.pdf

There are also NXP app notes AN3507, AN3455, and others.

The pseudo code I posted above, which uses this stack, is from NXP’s TCP/IP Serial Server example code that is distributed with (for example) the MCF52233 processor:

interniches-coldfire-tcp-ip-stack

I’m still testing, but it looks like I’ve been able to implement comms on multiple sockets with the single listen(): as clients connect, I get their socket information from the “msring” buffer and use m_recv() and m_send() to transfer data on each socket. That answers one of my questions, whether I needed multiple listen() calls; it looks like I don’t.
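
In outline, the per-socket handling I’m trying looks roughly like the sketch below. The m_recv()/m_send() signatures are my reading of the headers and may not be exact, and MAX_CLIENTS, client_socks[] and build_modbus_reply() are names I’ve made up for illustration:

#define MAX_CLIENTS 4

static M_SOCK client_socks[MAX_CLIENTS];   /* every entry set to INVALID_SOCKET at init (not shown) */

void modbus_tcp_poll(void)
{
    M_SOCK so;
    int    i;
    char   buf[260];                       /* a Modbus TCP ADU fits in 260 bytes */
    int    len;

    /* Pick up any newly connected sockets queued by the M_OPENOK callback */
    while (msring_del(&emg_tcp_msring, &so) == 0)
    {
        m_ioctl(so, SO_NONBLOCK, NULL);    /* make the new socket non-blocking */
        for (i = 0; i < MAX_CLIENTS; i++)
        {
            if (client_socks[i] == INVALID_SOCKET)
            {
                client_socks[i] = so;
                break;
            }
        }
        if (i == MAX_CLIENTS)
            m_close(so);                   /* no free slot: refuse the extra connection */
    }

    /* Poll every open client socket for a Modbus request */
    for (i = 0; i < MAX_CLIENTS; i++)
    {
        if (client_socks[i] == INVALID_SOCKET)
            continue;
        len = m_recv(client_socks[i], buf, sizeof(buf));     /* assumed signature */
        if (len > 0)
        {
            len = build_modbus_reply(buf, len);              /* placeholder request handler */
            m_send(client_socks[i], buf, len);               /* assumed signature */
        }
    }
}

(The M_CLOSED callback would clear the matching client_socks[] entry, which is the other half of the bookkeeping.)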

I’m still trying to figure out how to refuse a socket connection.   I would like to limit the number of clients, and therefore the number of open sockets.

 
When a client connects, its socket is put in the msring buffer; normally the socket is then removed from the msring buffer and the connection is accepted by making the socket non-blocking with m_ioctl(so, SO_NONBLOCK, NULL).

 

How can a socket connection be rejected, or the number of open sockets be limited?

Thanks.


TomE
Specialist II

> How can a socket connection be rejected, or the number of sockets be limited.

What sort of "reject" response can the clients best deal with?

You can close the listener when you're out of sockets. Then it will look like your device has disappeared off the network, as the clients SHOULD get an ICMP error message back from the TCP stack indicating "no such port". Whether your end sends this "correct" response depends on how complete your stack is and whether it fully adheres to the TCP/IP standards. A lot of embedded ones don't. What the clients do with that error message depends on their stack and how well the client code is written - on whether the stack can sensibly forward that error and whether the clients are written well enough to do something sane with it. This is also highly unlikely. What will they do then? Will they "hard fail", retry once per whenever, "attack retry" a hundred times per second, or something in between?

The other option is to allow the "final connection", but then CLOSE that connection immediately. The protocol might allow sending an "in-band error message" before slamming it closed. What will the clients do in that case?
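
With the callback-style interface this stack appears to use (going by the code posted elsewhere in this thread), capping the connection count might look roughly like the sketch below. MAX_CONNS, open_count and modbus_tcp_cmdcb() are made-up names, and you would have to check whether the stack permits m_close() from inside its own callback:

#define MAX_CONNS 4

static int open_count;

int modbus_tcp_cmdcb(int code, M_SOCK so, void * data)
{
    switch (code)
    {
    case M_OPENOK:
        if (open_count >= MAX_CONNS)
        {
            m_close(so);                        /* refuse: slam the extra connection shut */
        }
        else
        {
            open_count++;
            msring_add(&emg_tcp_msring, so);    /* hand it to the worker task */
        }
        break;

    case M_CLOSED:
        if (open_count > 0)
            open_count--;
        m_close(so);                            /* give the socket structure back to the stack */
        break;
    }
    return 0;
}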

There is a problem with being "the originator of the close". Have a look at "Figure 6 TCP Connection State Diagram" in the following, which is the "original TCP standard":

https://www.ietf.org/rfc/rfc793.txt

Here's a better drawn version:

The TCP/IP Guide - TCP Operational Overview and the TCP Finite State Machine (FSM)

You should also look at "Figure 13 Normal Close Sequence". When the connection comes up you're in "ESTAB". If the other end closes, your end receives a "FIN" and goes down the right hand side of the diagram directly into CLOSED and you get your socket back immediately, ready for the next connection. If you originate the close you end up in TIME WAIT, and don't get your socket back for 2MSL, which in the above document is "Arbitrarily defined to be 2 minutes". So your TCP stack may hang onto closed sockets for FOUR MINUTES. If you don't like that, then read all the justification in the RFC before complaining. Also read any specs (or code) for your TCP stack to see what it has the MSL set to, and if you can adjust it or violate it on a close. You should type "what is the value of the TCP MSL" into Google and read the resulting matches:

Maximum segment lifetime - Wikipedia, the free encyclopedia

The TCP/IP Guide - TCP Connection Termination
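
For what it's worth, on a full BSD-style sockets API the usual way to sidestep the 2MSL wait is SO_LINGER with a zero timeout, which turns close() into an abortive close (RST instead of FIN), with all the caveats that implies. Whether the InterNiche mini-sockets API exposes anything equivalent is something you'd have to dig out of its headers. A BSD sockets sketch only:

#include <sys/socket.h>
#include <unistd.h>

/* Force an abortive close so the socket does not sit in TIME-WAIT for
 * 2*MSL. The peer sees a RST, not a FIN, so data in flight is lost. */
static void abortive_close(int sock)
{
    struct linger lg;

    lg.l_onoff  = 1;        /* linger enabled...                 */
    lg.l_linger = 0;        /* ...with zero timeout, i.e. a RST  */
    setsockopt(sock, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
    close(sock);
}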

It is better if you can send an "in-band error message" to the client which the client code then interprets as a "request to close". Then it can close from its end, you get your socket back in the server immediately and the client gets to wear the 4 minute socket wait. That only works if a "please close from your end" command is a documented part of the MODBUS protocol, or if you're writing all the client-end code as well and can add that function everywhere.

This document is referenced from Wikipedia's MODBUS page:

http://www.modbus.org/docs/Modbus_Messaging_Implementation_Guide_V1_0b.pdf

It says in part:

However the MODBUS client must be capable of accepting a close request from the server and closing the connection. The connection can be reopened when required.

"Required" may be "when the client wants to send some data", which might be very soon after. I also take "accepting a close request" to mean when the SERVER sends a TCP Close, which is what you're trying to avoid.

Another question would be "how do all the other manufacturers in the MODBUS world handle these problems"? I suspect the answer is that the Server has to be designed with sufficient memory to have more than enough sockets to handle all of the simultaneous connections as well as the case where some clients may be disconnecting and reconnecting. If your server doesn't have enough internal memory then you chose the wrong chip, or you have to add external RAM, or choose a chip you can add external RAM to.

The above document is excellent. I recommend you read it cover-to-cover. Section 4.3.2 details the 2MSL problem, the socket option you can set to bypass it (you'll have to check your stack to see if it has this option) and the requirement to have "TCP Keepalives" enabled on both ends of the connection to prevent "half open connections" staying open forever. Note though the timeout is 2 HOURS, and that's not a typo. Section 4.2.1.1 details having two "connection pools", which are "Priority" and "Non-priority". That section pretty much answers your original question.
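
Again as a BSD/Linux sockets sketch only (the InterNiche API may or may not expose the same options), enabling keepalives and shortening that 2 hour default looks something like the following. TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT are Linux-specific; plain SO_KEEPALIVE is standard:

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Turn on keepalives: probe after 60 s of idle, every 10 s, give up
 * after 5 missed probes, instead of the 2-hour default. */
static void enable_keepalive(int sock)
{
    int on = 1, idle = 60, intvl = 10, cnt = 5;

    setsockopt(sock, SOL_SOCKET,  SO_KEEPALIVE,  &on,    sizeof(on));
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle));
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof(cnt));
}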

The last time I had this problem, the TCP/IP stack I was using allocated about 50 kbytes PER SOCKET, even when closed.

Tom


TomE
Specialist II

Interniche has a Telnet server. Why try and write your own?

http://www.iniche.com/source-code/networking-stack/prodoptions.php

I don't know anything about the stack you're using. If you're lucky it is socket-based. From their web site, it certainly looks like it is. If not, then it has an interface where you have to manage the threads and processes yourself.

> or just handling multiple open sockets in general?

A multi-socket Telnet server is conceptually very simple, as long as you have sockets and threads. Here's some example code showing a single-connection server and a multi-connection one. You should compare the code flow here with what you're doing, and that should show what you're doing wrong.

http://www.binarytides.com/server-client-example-c-sockets-linux/

Do you have enough memory for multiple sockets? Depending on how a TCP Socket structure is implemented, it might chew up as much as 64 kbytes per connection. And if a socket closes "the wrong way", they are usually required to hang about for 5 minutes or more. If the stack (and your code) isn't robust when it runs out of memory (and can report or log the error), then that may be your real problem.

Tom


rlcoder
Contributor II

Thanks Tom. Memory will certainly be a concern. Even looking at the examples, I’m not sure how a server handles multiple clients. It looks like the Freescale TCP server example is set up to handle this, but I’m not sure what changes are needed.

For example, does listen() get called once or multiple times?

The Freescale TCP server example declares a server socket, a communication socket, and a message ring of 10 sockets:

struct sockaddr_in   emg_tcp_sin;
M_SOCK               emg_tcp_server_socket = INVALID_SOCKET;
static struct msring emg_tcp_msring;
static M_SOCK        emg_tcp_msring_buf[10];
M_SOCK               emg_tcp_communication_socket = INVALID_SOCKET;

During initialization m_listen() is called and its return value is assigned to the server socket:

emg_tcp_sin.sin_addr.s_addr = (INADDR_ANY);
emg_tcp_sin.sin_port        = (PORT_NUMBER);
emg_tcp_server_socket = m_listen(&emg_tcp_sin, freescale_tcp_cmdcb, &e);

The socket callback function “freescale_tcp_cmdcb()” handles connecting and disconnecting. When a connection is opened the socket is put in the msring buffer using msring_add():

int freescale_tcp_cmdcb(int code, M_SOCK so, void * data)
{
    int e = 0;

    switch(code)
    {
    case M_OPENOK:                          /* socket open complete */
        msring_add(&emg_tcp_msring, so);
        break;

    case M_CLOSED:                          /* socket has closed */
        while( semaphore ){};
        semaphore = 1;
        emg_tcp_communication_socket = INVALID_SOCKET;
        m_close(so);                        /* FSL close the socket */
        semaphore = 0;
        break;
    }
    return e;
}

A separate task handles changes to the msring buffer and processes data from a connection on the socket:

void freescale_tcp_check(void)
{
    M_SOCK so;

    if ( emg_tcp_server_socket == INVALID_SOCKET )
        return;

    while ( msring_del(&emg_tcp_msring, &so) == 0 )
    {
        while( semaphore ){};
        semaphore = 1;

        if ( emg_tcp_communication_socket == INVALID_SOCKET )
        {
            m_ioctl(so, SO_NONBLOCK, NULL);   /* socket non-blocking */
            emg_tcp_communication_socket = so;
            semaphore = 0;
        }

        if ( emg_tcp_communication_socket != INVALID_SOCKET )
            freescale_tcp_loop();

        semaphore = 0;
    } /* while */

    if ( emg_tcp_communication_socket != INVALID_SOCKET )
        freescale_tcp_loop();
}

The freescale_tcp_loop() handles receiving/transmitting data from “emg_tcp_communication_socket” using m_recv().

It would seem that the msring buffer is in place to handle multiple client connections on the socket. Is this correct?

If yes, then for each connection there would need to be an associated "emg_tcp_communication_socket". Something like “emg_tcp_communication_socket[10]" to handle up to 10 clients.

Conceptually, one problem I am having is how to handle connections being opened or closed via the socket callback function. Right now the callback function closes the single client connection and marks "emg_tcp_communication_socket" as invalid:

emg_tcp_communication_socket = INVALID_SOCKET;
m_close(so);                 //FSL close the socket

With multiple client connections open (in my example emg_tcp_communication_socket[10] ) how does the callback function know which connection to close?
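
My best guess so far, just as a sketch with made-up names, is to key everything off the callback's "so" argument and search the array for it:

#define MAX_MODBUS_CLIENTS 10

static M_SOCK emg_tcp_communication_socket[MAX_MODBUS_CLIENTS];   /* filled by the worker task as it pulls sockets off the msring */

int freescale_tcp_cmdcb(int code, M_SOCK so, void * data)
{
    int i;

    switch (code)
    {
    case M_OPENOK:
        msring_add(&emg_tcp_msring, so);    /* worker task picks it up later */
        break;

    case M_CLOSED:
        /* 'so' identifies which connection the event is for, so find
         * that entry rather than clearing a single global. */
        for (i = 0; i < MAX_MODBUS_CLIENTS; i++)
        {
            if (emg_tcp_communication_socket[i] == so)
            {
                emg_tcp_communication_socket[i] = INVALID_SOCKET;
                break;
            }
        }
        m_close(so);                        /* FSL close the socket */
        break;
    }
    return 0;
}

Is this the right idea?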

Maybe this is simple for someone familiar with socket communications.

Thanks for the help.


TomE
Specialist II

> the freescale TCP server example

You've provided scraps of code, but there's no way of knowing how they work as you haven't provided any links to documentation about that stack or its source.


So a lot of "magic" probably happens in the "freescale_tcp_loop()" function, but without being able so see what it is or does I can't help you at all.

As shown in the example code I linked to, a server has to:

1 - Open a "Listen" socket.

2 - Call "accept()" on that socket. This will return a NEW socket when a client connects,

3 - Create a new execution thread to receive and send on that new socket.

4 - Close each thread  when its session closes.

So if you have two current connections, there's the original thread of execution waiting on the "accept()" for any new ones, plus a separate thread for each of the connections, all independently running.
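
With BSD sockets and a threading library that whole pattern is only a screenful of code. A rough sketch (error handling omitted, and nothing here is specific to your stack):

#include <pthread.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* One thread per connection: each session() owns exactly one socket. */
static void *session(void *arg)
{
    int  sock = (int)(long)arg;
    char buf[260];
    int  len;

    while ((len = recv(sock, buf, sizeof(buf), 0)) > 0)
        send(sock, buf, len, 0);        /* echo; a real server parses Modbus here */

    close(sock);                        /* step 4: clean up when the session closes */
    return NULL;
}

int main(void)
{
    struct sockaddr_in sin;
    int listener, conn;
    pthread_t tid;

    listener = socket(AF_INET, SOCK_STREAM, 0);          /* step 1: listen socket */
    memset(&sin, 0, sizeof(sin));
    sin.sin_family      = AF_INET;
    sin.sin_addr.s_addr = INADDR_ANY;
    sin.sin_port        = htons(502);                    /* Modbus TCP port */
    bind(listener, (struct sockaddr *)&sin, sizeof(sin));
    listen(listener, 5);

    for (;;)
    {
        conn = accept(listener, NULL, NULL);             /* step 2: new socket per client */
        pthread_create(&tid, NULL, session, (void *)(long)conn);   /* step 3 */
        pthread_detach(tid);
    }
}

Because each connection has its own thread and its own socket variable, there's never any question of "which connection closed" - each thread only knows about its own.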

You don't seem to have a threaded operating system (or at least a stack written to run on one). You have a "socket callback" function, which implies it isn't threaded.

Any software written to run on threads can be rewritten to run in a polled or callback-driven system. But implementing "listen(); while (skt = accept()) { new thread(skt) }" as polled code can take hundreds of lines, and weeks to get the bugs out.

What's your time worth (and a delayed product worth) versus what it would cost to buy server code?

Tom
