
[SOLVED] Packet Loss in UDP implemented in FPGA


beginner_EDA

Hello everybody,
If an FPGA (UDP with static ARP) is connected to a PC with a LAN cable, is there still any possibility of packet loss or packet reordering during transmission/reception? If yes, what percentage of loss can we expect?

I would like to be sure, because UDP theory says there might be packet loss/reordering, especially when transmitting over the internet.

If the loss percentage is high even with a direct LAN cable connection, then I might have to switch to TCP. But as far as I know, TCP is very difficult to implement in an FPGA.

Your suggestions would be highly appreciated.

Regards
 

You can expect effectively no packet loss in LAN operation. If anything, strong electrical interference is the most likely cause of packet loss in a LAN.

UDP is perfectly suitable for all protocols that transmit data as packets or messages. TCP should be used for streams, or generally for data entities that exceed the capacity of an Ethernet frame.
 
Thanks FvM.
Does this mean that packet loss with UDP (FPGA) only applies when data is transmitted/received via an internet/wireless connection?
Regards
 

Packet loss can always happen, at least theoretically. You should determine the consequences of packet loss in your application and decide whether it needs the confirmed data transmission provided by TCP. Many protocols have a data link layer that detects and corrects packet loss on its own, or the protocol can tolerate a certain low loss rate.

Generally it's the right idea to use UDP for an Ethernet implementation in an FPGA, particularly for pure hardware implementations.
 
As a practical matter, if you run a short cable between the PC and FPGA board (so no hubs/switches in between), then packet loss will be minimal. And by that I mean well under 1%. And also well under 0.1%, and let's make that under 0.01% as well. Oh, and below that as well.

I've done data pumping between PC and FPGA over gigabit Ethernet with a short cable for several projects, always using UDP. Why? Because a full stack is just too much work. :p I found that for my case packet loss was not an issue. As in zero packet loss. But you DO want to keep a counter on both sides as a sanity check, to see whether the number of packets sent equals the number of packets successfully received. I didn't even bother with retransmits because it was not required. But if you really, really care about every single packet, then you should add retransmit functionality to your design.
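To make the counter idea concrete, here is a minimal PC-side sketch in Python. It assumes the FPGA (or a test sender) prepends a 32-bit big-endian sequence number to every UDP payload; the port number and payload layout are illustrative assumptions, not something specified in this thread.

```python
# Minimal receiver-side sanity check. Assumes the sender prepends a 32-bit
# big-endian sequence counter to every UDP payload; port 5005 and the payload
# layout are illustrative assumptions only.
import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 5005))

expected = None   # next sequence number we expect to see
received = 0

while True:
    data, addr = sock.recvfrom(2048)
    (seq,) = struct.unpack("!I", data[:4])      # first 4 bytes = counter
    received += 1
    if expected is not None and seq != expected:
        gap = (seq - expected) & 0xFFFFFFFF
        print(f"sequence jump: expected {expected}, got {seq} "
              f"({gap} packets missing, unless reordered)")
    expected = (seq + 1) & 0xFFFFFFFF
```

If the counter never jumps over a long run, the link is behaving; if it does, at least the loss is not silently ignored.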
 
For GigE, the specified Ethernet bit error rates are 1E-10 or 1E-12, depending on whether it's 1000Base-T or 1000Base-X. So yes, there is a theoretical possibility of packet loss, but in practice the rate is much lower, unless you happen to use poor-quality cables or cables that aren't rated at least CAT5e.

I've seen testing done on equipment that ran for more than a week with no packet loss, carrying UDP-encapsulated MPEG-2 transport streams at 900 Mbps.
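As a rough illustration of what those spec numbers mean per frame (worst-case arithmetic only; real links are usually much better than the spec limit):

```python
# Rough upper bound on frame loss from the specified worst-case BER.
ber = 1e-12                 # 1000Base-X spec worst case
frame_bits = 1518 * 8       # full-size Ethernet frame

p_frame_error = 1 - (1 - ber) ** frame_bits   # approximately frame_bits * ber
print(p_frame_error)        # about 1.2e-8, i.e. roughly 1 bad frame per 80 million
```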
 
UDP does not make any guarantees about packet loss, packet duplication, or packet order. If you want to guarantee that each packet in a stream is received exactly once and in the correct order, then you need TCP.

Under certain limited circumstances you may be lucky with UDP - but you cannot know whether or not you have been lucky. You can, of course, insert extra information in a packet to help detect and correct packet loss/duplication, but if you do that you will be inventing a transport protocol. The best known transport protocol is TCP, and if you try to reinvent that you will spend a lot of time doing it poorly.

If you rely on having ideal operation of UDP, then your system will subtly fail at some point in the future, e.g. when a bridge or router is added into the network without your knowledge.

So, whether or not UDP is sufficient can only be answered in the context of your system's requirements. It may be that you can design the application so that it is insensitive to missing/duplicated packets.
 

tggzzz,

You are talking about the internet, where a packet is never guaranteed to take the same path from source to destination.

UDP is frequently used in systems that are in a closed environment, where there is only one route between source and destination. UDP is used extensively in cable systems to send transport streams around between systems. Things like redundancy can potentially be impacted by the use of UDP and the possibility of the primary and redundant streams being "out of sync" during a failover.

The entire idea behind these and many other systems is that even if there is a glitch due to a dropped packet, repeated packet, packet error, etc., we don't care if a packet is bad/lost. The system will recover and continue to work regardless. E.g., if you've ever been watching TV and seen a garbled square box on the screen, you've just received a number of corrupted/dropped/extra packets and the video stream was disrupted. Does your TV crash? Is it the end of the world? Nope, you just ignore the momentary glitch and continue watching your show.

Basically, the criterion for choosing between UDP and TCP/IP is what kind of payload is in the packets.
UDP - time-sensitive data, which must be delivered within a given time, otherwise the "delay" becomes an issue. E.g. video, audio.
TCP/IP - data transmission, where the integrity of the data is of primary importance and the delay to deliver the data is inconsequential. E.g. web pages, banking data, tax returns, etc.
 
Of course, as implied by my statements. And yet...

I have repeatedly seen cases where the initial optimistic assumptions turned out to be invalid, e.g.:
  • the distinction between LAN and internet is grey, not black and white; internal bridges and routers can cause unwelcome surprises
  • the internal network isn't correctly understood
  • somebody else modifies the internal network without you realising, making your assumptions invalid
  • the scope becomes expanded so that the traffic has to cross the net

BTW, your statement that "the criterion for choosing between UDP and TCP/IP is what kind of payload is in the packets" is incorrect. It is the applications that use the bits, not the bits themselves, that define whether UDP or TCP is appropriate.

In addition, the OP hasn't stated his application, so it is premature for you to presume it is a TV/audio application. There are many other possible applications, and the messages may or may not be idempotent.
 

I have repeatedly seen cases where the initial optimistic assumptions turned out to be invalid, e.g.:
  • the distinction between LAN and internet is grey, not black and white; internal bridges and routers can cause unwelcome surprises
  • the internal network isn't correctly understood
  • somebody else modifies the internal network without you realising, making your assumptions invalid
  • the scope becomes expanded so that the traffic has to cross the net
*grin* That sounds awfully familiar. Luckily, all of these have to do with a large subset of Homo sapiens being a bunch of stupid mthrfkrs. If you manage to avoid being part of that subset and actually engage your brain, then all the above points can easily be avoided for your own projects. As soon as any other person comes near it, you may have to adjust your expectations.

My reply to the OP is partly based on his previous posts. Those suggest that he is doing some data acquisition on the FPGA side and would like to pump the acquired data to the PC, which is directly connected to the FPGA board via a single short cable. Furthermore, those posts suggest it is a personal project, not some design paid for by a customer. And I also suspect he hasn't done any projects with a softcore in them yet. Read: if TCP, then softcore, and thus a toolchain learning curve.

Now given all that, it seems reasonable to start with UDP over a single short cable. And as suggested in a previous post, you DO keep internal counters such that you can do a sanity check and verify your assumptions.

The purpose of the sanity check is purely to warn you of any packet loss. The expectation is that you will not encounter any such loss. But should it happen due to the cat gnawing on the cable overnight then at least the error will not be silently ignored. The assumption here is of course that you either don't care about a few dropped packets, can work around it, or just rerun the experiment.

A simple counter plus client-side checks are far less expensive than a TCP implementation. Granted, it is also far less robust, but for certain jobs the safety of TCP is nice to have rather than strictly required. So then it becomes an engineering decision of what to spend your time and resources on.

I find that the direct cable link + static ARP + UDP (with a counter in the payload for sanity checks) gives good results for a low amount of implementation effort.
And before I forget, another low-cost addition that is useful is ICMP echo + reply (aka ping), for two reasons:
- regular boring ping, should you want a quick remote check for signs of life from the FPGA
- ICMP with payload for SIMPLE commands

Or you can just do the same thing entirely with UDP on a port-number basis, whatever your preference. Incidentally, the use of ICMP in this way pretty much also implies that you intend to use it without any routers etc. in between, because as per RFC-forgot-number those routers are totally within their rights to mess up the above ICMP. (A rough PC-side sketch of the ICMP-with-payload idea is shown after this post.)

But then again, if you want to argue "but it has to connect over the internet", then assume that any implementation you come up with is going to be suboptimal, and thus set up a stunnel through which you pipe all your crappy insecure content.

All that to say: for simple data acquisition in a lab setting with a direct cable, UDP can be quite sufficient and easy to implement.
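For what it's worth, here is a hedged sketch of the "ICMP echo with payload as a simple command" idea from the PC side, using a raw socket in Python. The target address, the command string, and the assumption of a 20-byte IP header in the reply are all illustrative, not the poster's actual implementation, and raw sockets need root/admin privileges.

```python
# Illustrative ICMP echo request carrying a custom "command" payload.
# Requires raw-socket privileges; addresses and payload are example values.
import os
import socket
import struct

def icmp_checksum(data: bytes) -> int:
    # Ones' complement sum over big-endian 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

def build_echo_request(payload: bytes, ident: int, seq: int) -> bytes:
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # type 8 = echo request
    chksum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, chksum, ident, seq) + payload

target = "192.168.1.100"    # illustrative FPGA board address
sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
packet = build_echo_request(b"START_ACQ", os.getpid() & 0xFFFF, 1)
sock.sendto(packet, (target, 0))

reply, addr = sock.recvfrom(2048)
# Assumes a 20-byte IP header + 8-byte ICMP header in front of the echoed payload.
print("reply from", addr[0], "payload:", reply[28:])
```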
 

That's all pretty sane, mrflibble.

Personally I only trust networks where I can see both ends of all pieces of wire. Yes, that excludes my trusting WLANs :)
 

Well, my post was in regard to large-scale cable/telecom networks. Those responsible for running those networks in central offices know what's in their network; there aren't any monkeys running around randomly adding stuff to the network. And if you were such a monkey, you would be an out-of-work monkey very quickly :wink:

So in such a controlled network I've built FPGAs that took in UDP and split up the payload into something more usable to a downstream device. And as mentioned on a number of occasions, testing was done for weeks at full load with no dropped or errored packets (using 200-300 ft of CAT5e cable).
 

I'm skeptical that there are such ideal companies in the real world.

I have personal experience, from a couple of decades ago, of a large successful mobile operator whose network operations staff knew they didn't know everything that was in their network - because, for example, there would be a report of a problem with a particular base station or antenna, and the local staff would go and fix it. Whether or not changes were reported back to "the centre" in a timely fashion is an entirely separate issue.

I've also seen the high-level diagram of the system in a different telco. It was a remarkable hodge-podge of interconnected systems that had just grown like Topsy and couldn't be rationalised.

There was a local company, now borged, whose product attempted to produce an inventory of whatever is in a telco's network. I have no idea how successful it was.

There is a small ecosystem of telco supply companies whose sole reason for existence is to cobble together the marketeers' latest wet dream from whatever happens to already be in the system. Such companies do not simplify existing systems!

I've also been told of a telco that carries significant parts of its network traffic over the same network as all their office email and other comms. They were advised, repeatedly, that their network was not supported!

And when working in HP a while ago, it was clear that networks changed faster than any central "authority" could keep up - and that was true not only in HP but also in the companies that HP sold test equipment to.

So no, I don't believe that "central offices know what's in their network" :)
 

It was telling that when the Germans looked at the SS7 [1] traffic on a couple of the big telecoms internal call management systems they found ~5,000 queries per second that had no real business being there....

No large organisation that does not have some sort of automatic network mapping functionality in place EVER has an up-to-date map of what is connected (and even the question of 'where' can be surprisingly problematic).

To the OP: one thing to watch is that some network stacks can decide to drop UDP between the network interface and the application. I've seen Windows do this with a cheap GigE card when heavily loaded; replacing it with a good one fixed the issue, which I am assuming was driver-related. (See the receive-buffer sketch after this post.)

Regards, Dan.

[1] Signalling system 7, the protocol suite that started life in the 70's and has grown by accretion ever since to accommodate new things like cell phones.... It is obscure, baroque, massively insecure and has 'issues' (Want to locate almost any cell phone on the planet? There is an SS7 command for that (Yes really)).
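A cheap PC-side mitigation for this kind of host-side drop, regardless of NIC quality, is to enlarge the socket receive buffer so that short bursts don't overflow it while the application is busy. A minimal sketch, assuming Python on the PC side; the 4 MB figure is just an example value, and the OS may clamp it to a system-wide maximum.

```python
# Enlarge the UDP receive buffer to ride out short bursts while the
# application is busy; 4 MB is an example value only.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
sock.bind(("", 5005))

actual = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("receive buffer granted by the OS:", actual, "bytes")
```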
 

It was telling that when the Germans looked at the SS7 [1] traffic on a couple of the big telecoms internal call management systems they found ~5,000 queries per second that had no real business being there....

Why am I not in the slightest bit surprised!

[1] Signalling system 7, the protocol suite that started life in the 70's and has grown by accretion ever since to accommodate new things like cell phones.... It is obscure, baroque, massively insecure and has 'issues' (Want to locate almost any cell phone on the planet? There is an SS7 command for that (Yes really)).

Well, such a search-and-discovery command/mechanism is inherently required, isn't it!

Security through obscurity isn't exactly a new concept :-( Consider SCADA :-(

(And just think of the cost of the SS7 64 kb/s interface cards; manufacturers salivate when they are mentioned!)
 

Well, such a search-and-discovery command/mechanism is inherently required isn't it!
Security through obscurity isn't exactly a new concept :-( Consider SCADA :-(
Well, sort of, but the command I am thinking of went well beyond the "first get the IMSI, then find out the network region, then ping the phone" functionality one would expect for the required call-routing stuff.

And, yeah, SCADA vendors really need a collective kicking. Direct quote: "If you want us to support remote control you must disable the firewall!" Much puzzlement from their end as to why I started swearing.

Regards, Dan.
 

And, yeah, SCADA vendors really need a collective kicking. Direct quote: "If you want us to support remote control you must disable the firewall!" Much puzzlement from their end as to why I started swearing.

I've wondered what form such a kicking would take, without much success.

One possibility is increases in their (or their customers') insurance premiums. Another is a few of their customers going out of business. Or maybe something vaguely like UL certifications, but who in their right mind would do such a certification?
 

A few more incidents like that uncontrolled blast furnace shutdown might concentrate a few minds, but it is only when the cost of the losses exceeds the cost of fixing it that we will see any real action.

Tying the plant automation into the company ERP systems (and thus the internet) is just too attractive, and designing a whole separate level of failsafes is expensive, especially if you have to plan for malice rather than just faults or incompetence.

Regards, Dan.
 

A few more incidents like that uncontrolled blast furnace shutdown might concentrate a few minds, but it is only when the cost of the losses exceeds the cost of fixing it that we will see any real action.

"Mind" implies an individual. The trouble is that the cost of the losses don't proportionately affect the individuals responsible, especially in environments where the individuals change too frequently. Jail time is probably the only thing that would work, but individuals can point the finger at their predecessor/successor, and prosecutions for corporate manslaughter never succeed.

Tying the plant automation into the company ERP systems (and thus the internet) is just too attractive, and designing a whole separate level of failsafes is expensive, especially if you have to plan for malice rather than just faults or incompetence.

It gets worse. How do I know what a vendor has sold if the vendor themselves don't know what they put in their product :(

Just wait until your five-year-old IoT appliance gets rooted, starts spamming or DDoSing, and there's no way of updating the appliance.
 

Plenty of ISPs (who really should understand this stuff) are out there handing out 'free' home routers with their broadband services that have known REMOTE ROOT EXPLOITS, and not patching them. I don't see the IoT being any better in that regard.

Already we have had the remotely exploitable telly that had NO REMOTE FIRMWARE UPDATE CAPABILITY!

Firewalls, both ways, three, in series, from different vendors on hardware from different vendors.....

Regards, Dan.
 
