TCP AND UDP COMMUNICATION PROBLEM BETWEEN TWO IMX6

mesahin
Contributor I

We have two i.MX6 processors in our system. One of the processors sends uncompressed video over UDP to the other, and there is also asynchronous data communication (10 to 120 bytes max) between them using TCP. We sometimes see dropped packets and sometimes transfer delays in this communication. We tried different approaches in our source code, and we also tried migrating the communication from TCP to UDP, but we still see the same communication problems.

We wrote some experimental code for send/receive tests between the processors. We think the sender side is able to send, but the receiver side cannot receive all of the packets sent.

We tried suspending the UDP video transfer while the data communication takes place, and the communication then worked flawlessly. But as soon as the video transfer begins, the communication problem reappears.

How can we debug the problem? Is there a possible solution we can try?

Qt version: 5.3.2

UDP transfer: QUdpSocket

TCP transfer: ZMQ

Yocto: Poky distro

Kernel: 3.14.28

3 Replies

b36401
NXP Employee

There is nothing special to i.MX here. You can use iperf to check the bandwidth and ping to check for lost packets.
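For example (a rough sketch: 192.168.1.2 stands in for the receiving board's address, the bit rate is only a guess at your video rate, and iperf may first need to be added to your image):

# On the receiving i.MX6: start a UDP server that reports loss and jitter
iperf -s -u
# On the sending i.MX6: push UDP at roughly the video bit rate for 30 seconds
iperf -c 192.168.1.2 -u -b 80M -t 30
# Basic loss check with ping; look at the "packet loss" line in the summary
ping -c 1000 192.168.1.2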


TomE
Specialist II

Not only that, there should be heaps of statistics being kept by all of the driver layers to help you work out where things are going wrong.

It is possible that the "normal Linux tools" haven't been built into your distribution and you may need to enable some more of them in the build (usually in the Busybox configuration).
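Since you are building with Yocto/Poky, one way to get the full versions of these tools onto the target (a sketch; exact recipe names depend on your layers and release) is to append them to the image, for example in conf/local.conf:

IMAGE_INSTALL_append = " iproute2 net-tools"

iproute2 gives you the full "ip" with working "-s" statistics, and net-tools gives you the classic "netstat", instead of the cut-down Busybox applets. iperf typically comes from meta-openembedded if you want that as well.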

"ip link list" should list the links, and "ip -s link" should show you the link statistics, like it does on my desktop machine:

% ip -s link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    RX: bytes  packets  errors  dropped overrun mcast   
    2484       27       0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    2484       27       0       0       0       0       
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP mode DEFAULT group default qlen 1000
    link/ether 34:97:f6:8b:29:b5 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    436160163  690629   0       0       0       271599  
    TX: bytes  packets  errors  dropped carrier collsns 
    18502209   111050   0       0       0       0       
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 34:97:f6:8b:29:b5 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    152022568  469297   0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    36038066   74499    0       0       0       0       

It is likely that the "ip" included with Busybox doesn't give statistics. It doesn't on my old i.MX53 Linux. You'll have to see whether yours does (or whether your distribution comes with a "real" one).

That means you have to dive into /proc/net and look at the snmp statistics:

# cat /proc/net/snmp
Ip: Forwarding DefaultTTL InReceives InHdrErrors InAddrErrors
 ForwDatagrams InUnknownProtos InDiscards InDelivers OutRequests
 OutDiscards OutNoRoutes ReasmTimeout ReasmReqds ReasmOKs
 ReasmFails FragOKs FragFails FragCreates
Ip: 2 64 987 0 638 0 0 0 349 106 0 0 0 0 0 0 0 0 0
Icmp: InMsgs InErrors InDestUnreachs InTimeExcds
 InParmProbs InSrcQuenchs InRedirects InEchos InEchoReps
 InTimestamps InTimestampReps InAddrMasks InAddrMaskReps
 OutMsgs OutErrors OutDestUnreachs OutTimeExcds OutParmProbs
 OutSrcQuenchs OutRedirects OutEchos OutEchoReps
 OutTimestamps OutTimestampReps OutAddrMasks OutAddrMaskReps
Icmp: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Tcp: RtoAlgorithm RtoMin RtoMax MaxConn ActiveOpens
 PassiveOpens AttemptFails EstabResets CurrEstab InSegs
 OutSegs RetransSegs InErrs OutRsts
Tcp: 1 200 120000 -1 0 1 0 0 1 204 106 0 0 1
Udp: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors
Udp: 0 0 0 0 0 0
UdpLite: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors
UdpLite: 0 0 0 0 0 0
# cat /proc/net/snmp6
Ip6InReceives                           17072
Ip6InHdrErrors                          0
Ip6InTooBigErrors                       0
Ip6InNoRoutes                           0
Ip6InAddrErrors                         0
Ip6InUnknownProtos                      0
Ip6InTruncatedPkts                      0
Ip6InDiscards                           0
Ip6InDelivers                           17072
Ip6OutForwDatagrams                     0
Ip6OutRequests                          36691
Ip6OutDiscards                          0
Ip6OutNoRoutes                          0
Ip6ReasmTimeout                         0
Ip6ReasmReqds                           0
Ip6ReasmOKs                             0
Ip6ReasmFails                           0
Ip6FragOKs                              0
Ip6FragFails                            0
Ip6FragCreates                          0
Ip6InMcastPkts                          4
Ip6OutMcastPkts                         12
Ip6InOctets                             6201232
Ip6OutOctets                            3475704
Ip6InMcastOctets                        448
Ip6OutMcastOctets                       992
Ip6InBcastOctets                        0
Ip6OutBcastOctets                       0
Icmp6InMsgs                             11
Icmp6InErrors                           0
Icmp6OutMsgs                            16
Icmp6OutErrors                          0
... Lots more ...

Then you should look at the Ethernet statistics, which should be here:

# cd /sys/class/net/eth0/
# ls
addr_len      carrier       dormant       flags         iflink
operstate     statistics    type          address       dev_id
duplex        ifalias       link_mode     power         subsystem
uevent        broadcast     device        features      ifindex
mtu           speed         tx_queue_len
# cat duplex
full
# cat speed
100
# cd statistics
# ls
collisions           rx_dropped           rx_missed_errors
tx_carrier_errors    tx_heartbeat_errors  multicast
rx_errors            rx_over_errors       tx_compressed
tx_packets           rx_bytes             rx_fifo_errors
rx_packets           tx_dropped           tx_window_errors
rx_compressed        rx_frame_errors      tx_aborted_errors
tx_errors            rx_crc_errors        rx_length_errors
tx_bytes             tx_fifo_errors
# cat *
0
0
9345077
0
0
0
0
0
0
0
0
0
26274
0
5570222
0
0
0
0
0
0
51206
0

The way to use these is to record all of the statistics after a clean power up, run your tests (the ones that drop packets or run slow) and then record all the statistics again after the test. Then see how many packets were sent from the sender, through TCP, out through Ethernet, then counted into the other board's Ethernet and up to TCP. You should be able to see the differences in packet counts, see where packets are being dropped, and possibly why.
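A minimal sketch of that procedure on each board (the file names are arbitrary, and your Busybox may or may not include diff; comparing the two files by eye works too):

# Before the test, snapshot the driver and protocol counters
grep . /sys/class/net/eth0/statistics/* > /tmp/net_before
cat /proc/net/snmp >> /tmp/net_before
# ... run the video + TCP traffic until the drops/delays show up ...
# After the test, snapshot again and compare
grep . /sys/class/net/eth0/statistics/* > /tmp/net_after
cat /proc/net/snmp >> /tmp/net_after
diff /tmp/net_before /tmp/net_after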

You should use TCP, and you should make sure the flow control works all the way through. Your sender shouldn't be able to push data faster than the receiver can read it. It is possible to get this wrong in your code, though, and have your own software be the problem.
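One quick way to check whether flow control is doing its job (a sketch; this assumes netstat is in your Busybox build, or the full "ss -tn" from iproute2 shows the same thing) is to watch the socket queues on both boards while the traffic is running:

# On the receiver: a Recv-Q that stays large on the TCP data socket means
# your application isn't reading fast enough
netstat -tn
# On the sender: a Send-Q that stays large means TCP flow control is
# (correctly) pushing back because the receiver can't keep up
netstat -tn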

Let us know what you find.

Tom


TomE
Specialist II

Posted in the wrong forum. This one is for ColdFire chips, not i.MX. Please use the "Actions" option to move it.

Tom
