socket: device driver is slow as molasses (MQX 4.0.1)


2,194 Views
pmt
Contributor V

Hello all!

I'm using the MQX socket: io driver, similarly to the MQX "telnet_to_serial" demo example.  However, I am driving output with printf() (STDOUT) and explicit fprintf() calls, not the serial port.

The issue is that the characters come over the socket connection slow as molasses: 3 to 4 characters per second, as if the socket driver is hitting a purposeful _time_delay() between characters.  Occasionally you might get a burst of a few characters.  I have no other issues with sockets in general or my setup.  Only when I write to the socket via the socket: driver is the data rate slow.  I have seen this in the past going all the way back to version 2 of MQX and am surprised to see the behavior is still here in version 4.  I have tried disabling Nagle, etc.
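
For reference, the setup is roughly like this (a sketch, not my exact code; the accept() plumbing is omitted and variable names are placeholders):

    /* Sketch: wrap an accepted TCP socket in an MQX file handle via the
       "socket:" driver and point stdio at it */
    uint_32      sock;        /* stream socket from RTCS accept(), plumbing omitted */
    MQX_FILE_PTR sockfd;

    sockfd = fopen("socket:", (char *) sock);   /* attach the socket to the socket: driver */
    if (sockfd != NULL) {
        _io_set_handle(IO_STDOUT, sockfd);      /* printf()/STDOUT now goes to the peer */
        printf("This text crawls out at a few characters per second...\r\n");
        fprintf(sockfd, "Explicit fprintf() to the handle is just as slow.\r\n");
    }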

I think there is probably a lurking bug in the socket.  Can any MQX folk shed some light on the issue?

Thanks,

PMT

15 Replies

1,339 Views
brlmzhong
Contributor I

The shell over telnet is very slow.  It takes 3 to 5 minutes to show something that takes only 1-2 seconds on RS232.

We may have to drop this feature.

And it never happens in the FNET bootloader.

Can Freescale fix this issue?

0 Kudos

1,339 Views
matthewkendall
Contributor V
0 Kudos

1,339 Views
pmt
Contributor V

OK, I reiterate, the "socket:" device driver is slow as molasses!

I ran the stock demo "rtcs_shell_twrk60f120m" on a K60 tower board.  When you telnet in and do a "help", the printed response for "Available commands:" comes in multiple packets.  From start to end it took 1.05 seconds to complete (captured with Wireshark)!

So what I am saying is that there is a bug in the "socket:" driver.  It is broken and inserting long delays somewhere.  In my personal experience it has been broken going back years, to version 2 of MQX.  It doesn't matter if you run it on a 20MHz ARM7 or a 120MHz Cortex-M4.  Can someone at Precise take a look at this or comment?

Thanks,

PMT

This takes over 1 second to print over a telnet session:

Available commands:

   exit

   gethbn <host>

   getrt <ipaddr> <netmask>

   help [<command>]

   ipconfig [<device>] [<command>]

   netstat

   pause [<minutes>]

   ping [-c count] [-h hoplimit] [-i wait] [-p pattern] [-s size] <host>

   telnet <host>

   tftp <host> <source> [<dest>] [<mode>]

   walkrt

   ?         

0 Kudos

1,339 Views
pmt
Contributor V

Doing a little more experimentation, setting IO_IOCTL_SET_BLOCK_MODE makes a big difference in performance (though I think there is still a lingering bug).  However, if you fclose() the socket: handle without an explicit fflush() prior to the close, data will be chopped off the end.  fclose() should also flush all buffers by definition; I think this is another bug.
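
For anyone else hitting this, the workaround looks roughly like this (a sketch; it follows what telnsrv.c does, and option = TRUE is my assumption for the block-mode value):

    /* Sketch of the workaround: enable block (buffered) mode on the socket: handle,
       then flush explicitly before closing so the tail of the data isn't lost */
    uint_32 option = TRUE;
    ioctl(sockfd, IO_IOCTL_SET_BLOCK_MODE, &option);   /* buffer writes instead of per-character sends */

    fprintf(sockfd, "lots of command output...\r\n");   /* normal stdio traffic */

    fflush(sockfd);                                     /* required: fclose() alone drops buffered data */
    fclose(sockfd);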

PMT

0 Kudos

1,339 Views
stevejanisch
Contributor IV

I think your issues have more to do with the nature of printf and how the C language deals with stdio than anything really associated with Ethernet.

Replace all the stdio calls with Ethernet send and recv and you should have much better response.

If you want to see how bad stdio is, write a small procedure with a short sleep of a millisecond or so that toggles a digital output.  Every second or so do a printf and see what happens to the toggle.  It will flat-line for long periods of time.
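
Something like this (just a sketch; toggle_test_pin() is a stand-in for whatever GPIO toggle routine your board has):

    /* Sketch: a ~1 ms toggle loop that stalls visibly whenever printf() runs.
       toggle_test_pin() is a hypothetical placeholder for a real GPIO toggle. */
    void toggle_task(uint_32 initial_data)
    {
        uint_32 loops = 0;

        while (1) {
            toggle_test_pin();      /* put a scope on this pin        */
            _time_delay(1);         /* roughly 1 ms between edges     */

            if (++loops % 1000 == 0) {
                printf("loops: %lu\r\n", (unsigned long) loops);   /* watch the toggle flat-line here */
            }
        }
    }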

0 Kudos

1,339 Views
pmt
Contributor V

Steve,

Since the entire purpose of the "socket:" driver is to connect send/recv with stdio so that I can use stdio, I'd rather not do that!  Yes, I agree that for raw performance (moving lots of data) you'd want to use send/recv directly.  But for implementing interactive applications, i.e. telnet command handlers, ftp command sessions, etc., the "socket:" driver is a great tool and simplifies vast amounts of string handling code when piping text in and out of sockets.  Without it you essentially end up writing your own "socket:" driver anyway.  Both the MQX telnet and ftp servers make use of the socket: driver for this same purpose.

But I don't think it's an issue with stdio.  I can push the same type of data through stdio to the RS232 console and it's fast!  No, there is a problem specifically with the "socket:" device driver, and I'm confident it has nothing to do with the stdio library, or RTCS per se.

I don't think the latest version of RTCS 4.1 is going to fix it, but I will do as Garabo suggests.  This problem is ancient.  I've been using MQX for 14 years now.  Precise tech support has acknowledged the issue in the past, but I could never really get anyone to investigate deeply.  Like I've said, I've noticed the problem for years going back to now-ancient versions of MQX/RTCS (2.4).

I think now that the issue is demonstrable on stock Kinetis tower hardware, running on very fast CPUs (120MHz+) with the stock rtcs_shell demo, the MQX team should give this another serious look.  When I have some time I will troubleshoot this myself, but I think in this case it will take quite a bit of setup to figure out what's really going on.

PMT

0 Kudos

1,339 Views
stevejanisch
Contributor IV

Good points!  Nice to hear that someone has 14 years in with MQX and is still alive and kicking.  I've run into so many issues that I figured I'd be dead in less than two (it's been less than one).

BTW... do you have pointers on getting Freescale to at least respond to posts?  I tend to post things and pretty much get ignored.  I have two teenagers and an ex-wife, so I'm used to it, of course... but it would be nice to have a reply once in a while.

0 Kudos

1,339 Views
karelm_
Contributor IV

Hi,

I understand your frustration.  We (the developers) do visit the community pages from time to time, and we certainly do not ignore what you post here.  However, we usually do not have as much time as we would like to solve issues found by the community.  As for the sockio driver issues: as PMT mentioned, it is ancient code, and I believe this feature is undocumented, so I strongly discourage you from using this code (undocumented features are neither maintained nor tested).  Also, from a quick look at the file sockio.c, the functionality is not correctly implemented.  I know this feature is used in the telnet application.  We do plan to improve telnet during its port to IPv6.

Best regards,

Karel

1,339 Views
stevejanisch
Contributor IV

I certainly did not intend derision.  I am grateful for the site.

0 Kudos

1,339 Views
pmt
Contributor V

Steve, yes, I'm still alive and kicking!  As far as MQX support goes, you certainly can't beat free, and I'm thrilled that Precise is here in any capacity!  They have been very helpful.  If you are looking for good commercial support, give the paid support options a try.  It has worked well for me over the years.

Karel, since the telnet shell is one of the core demos and relies on socket:, it would be much appreciated if you could spend even an hour or two with it.  It may be a complex fix, but then again it may be an easy fix, as you seem to have noticed some obvious issues with it just by code inspection.

The socket: driver is a really great feature and an asset to the MQX driver offerings.  Rather than design away from it, give some thought to officially supporting/documenting it.  In the meantime I'm working around the issues by using buffered mode and flushing the stream before closing.  So far this has worked OK.

Thanks,

PMT

0 Kudos

1,339 Views
Luis_Garabo
NXP TechSupport
NXP TechSupport

Hi Pmt,

MQX 4.0.2 fixes some TCP/IP issues, and MQX 4.1 includes some other improvements in TCP/IP performance.  I would recommend waiting a couple of weeks and testing when MQX 4.1 is released.

Regards,

Garabo

0 Kudos

1,339 Views
Luis_Garabo
NXP TechSupport
NXP TechSupport

Hi PMT,

Well, it looks like you could have synchronization issues.  Let me explain, for instance, with the TWR-K60N512.  If you want to use Ethernet, you have to configure the jumpers so that the clock for the MCU is also the clock for the PHY.  This could be the same problem you have.  Have you checked that?

Best Regards,

Garabo

0 Kudos

1,339 Views
pmt
Contributor V

Garabo,


I don't think it has anything to do with Ethernet.  I'm not having any other Ethernet performance issues using network socket calls.  Additionally I've seen this issue across multiple platforms going back 8 years!


PMT

0 Kudos

1,339 Views
pmt
Contributor V

I think I have this problem addressed.  When I set the socket: device file descriptor to block mode with ioctl(sockfd, IO_IOCTL_SET_BLOCK_MODE, &param); everything now works fast.  I overlooked this ioctl call in the original MQX telnet example.

My one complaint is that the socket: driver is very useful but isn't really documented anywhere except in the source code, let alone the IO_IOCTL_SET_BLOCK_MODE ioctl parameter.  I would like to have the side effects of block mode thoroughly explained.

Even so, when block mode is not set the driver still seems unusually slow.  I suppose it is pushing a packet out for every character written and blocking until each character actually gets sent over Ethernet.

Thanks,

PMT

0 Kudos

1,339 Views
pmt
Contributor V

Perhaps I spoke too soon.  When I use block mode it appears that I can lose data when writing too fast.  Data loss occurs when the block size is exceeded.  Also, only 'telnsrv.c' uses the socket: device in block mode; none of the other MQX examples set block mode.  I have to figure they would have the same serious throughput issues I've described.

Can anyone on the MQX team shed some light on the best way to use the socket: device driver and on these performance issues?

Thanks,

PMT

0 Kudos