Actual networking
Aug 21, 2013

In a previous post I looked into client side prediction, how it works, and why we do it. Yet I realized that the environment I was working in didn't exhibit a single one of those problems. I was running on the loopback interface, with zero latency and zero packet loss, and absolutely no way of seeing whether the network-compensation code even worked. So I added random "packet loss" by simply not sending packets based on a random value, and my game promptly crashed.

Turns out the problem was not related to network compensation, but to the fact that the server code had an issue with running the game with no client connected. The random number was a pseudo-random number with a fixed seed, producing the same sequence every time, and in this case that meant dropping the first packet sent from the client, which was the connection packet. The resend did not trigger until a second later, giving the server ample time to run without a client. But it emphasized the need to actually run the game over something which behaves at least sort of like a network.

Packet loss

Random packet loss is easy: pull a random number, and if it's below your chosen packet loss rate, simply do not send (or skip reading) the packet. This is not really realistic network behavior, but it'll probably be good enough to verify and improve your network compensation code.
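A minimal sketch of that idea (the 5% loss rate and the function names are just illustrative, not from the original utility):

```python
import random

def should_drop(loss_rate):
    """Return True if this packet should be dropped, simulating random loss."""
    return random.random() < loss_rate

def maybe_send(sock, data, addr, loss_rate=0.05):
    """Send data over the socket unless the simulated loss eats the packet."""
    if should_drop(loss_rate):
        return False  # packet "lost" -- it never hits the wire
    sock.sendto(data, addr)
    return True
```

The same `should_drop` check works on the receive side: read the datagram, then discard it without processing.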

There is, however, one more packet loss pattern you should consider: burst packet loss, where large chunks of consecutive packets are dropped. Many packet loss compensation schemes tack the data from the previous packet (or the last couple of packets) onto each send in case it got lost, which obviously doesn't do you much good if you lose ten packets in a row.
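One common way to simulate burst loss (not from the post itself) is a two-state Markov model in the style of Gilbert-Elliott: a "good" state where nothing is dropped and a "bad" state where most packets are. All the probabilities below are illustrative defaults:

```python
import random

class BurstLoss:
    """Two-state loss model: rare transitions into a 'bad' state,
    inside which most consecutive packets are dropped."""

    def __init__(self, p_enter_bad=0.01, p_leave_bad=0.2, bad_loss=0.9):
        self.bad = False
        self.p_enter_bad = p_enter_bad  # chance per packet that a burst starts
        self.p_leave_bad = p_leave_bad  # chance per packet that the burst ends
        self.bad_loss = bad_loss        # loss rate while inside a burst

    def should_drop(self):
        # Possibly switch state, then decide this packet's fate.
        if self.bad:
            if random.random() < self.p_leave_bad:
                self.bad = False
        elif random.random() < self.p_enter_bad:
            self.bad = True
        return self.bad and random.random() < self.bad_loss
```

Tuning `p_leave_bad` down makes the bursts longer, which is exactly the case that defeats the "append the previous packet's data" scheme.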

Network latency

Network transmissions are physically limited by the speed of light, but with all the switches and routers along the way, the time it takes to reach the other end is far longer than the time light would need in a vacuum. Even at the speed of light, it would take roughly 135 milliseconds to reach the other side of the world and back, and for network pings across the Atlantic we're usually looking at 300-500 milliseconds, which is pretty darn close to the speed of light considering what has to happen along the way for the packet to reach its destination.

What we need to do is queue the packets and delay delivery to introduce artificial latency. On top of this we add random jitter so the packets don't arrive perfectly spaced. If the span of the random jitter is large enough, we get a separate network phenomenon that can also confuse your network code: packet reordering. A large delay on one packet and a short delay on the next (using a priority queue rather than a FIFO queue in our simulated network) will let the latter overtake the former. Packets arriving out of order in the real world can have several causes; the common assumption that the packets take different paths to the destination might not be as realistic as one would think, but it's easy to imagine parallel processing in a router causing out-of-order delivery.
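A sketch of that delay queue, using Python's `heapq` as the priority queue; the delay and jitter values are made-up defaults, and the monotonic-clock plumbing is my own assumption about how you'd drive it:

```python
import heapq
import random
import time

class DelayQueue:
    """Priority queue keyed on delivery time. With enough jitter, a later
    packet can be assigned an earlier delivery time and overtake an
    earlier one, producing reordering for free."""

    def __init__(self, base_delay=0.150, jitter=0.100):
        self.base_delay = base_delay
        self.jitter = jitter
        self.heap = []
        self.seq = 0  # tie-breaker so equal timestamps never compare packets

    def push(self, packet, now=None):
        now = time.monotonic() if now is None else now
        deliver_at = now + self.base_delay + random.uniform(0, self.jitter)
        heapq.heappush(self.heap, (deliver_at, self.seq, packet))
        self.seq += 1

    def pop_ready(self, now=None):
        """Return every packet whose delivery time has passed."""
        now = time.monotonic() if now is None else now
        out = []
        while self.heap and self.heap[0][0] <= now:
            out.append(heapq.heappop(self.heap)[2])
        return out
```

Each direction of the link would get its own `DelayQueue`, with the relay's main loop calling `pop_ready` every tick and forwarding whatever comes out.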

Bit errors

One network issue which is rarely considered, but still happens more often than you might think, is bit errors. Bit errors happen for various reasons, usually on the physical layer, where the electrical signal is so extraordinarily ambiguous it's amazing they ever get it right; the fact that it works pretty much all the time is close to magic. There are usually a couple of layers of checksums which will discard the packet in case an error sneaks through, but they are built to be computationally inexpensive, and under certain circumstances two wrongs can make a right, letting a corrupted packet pass.

Bit errors also come in bursts (perhaps changing multiple bytes), but I'd wager a guess that it is a very remote possibility for a burst error to make it through the lower-level checksums.
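Injecting a single-bit error into a simulated link is straightforward; a minimal sketch (the function name is mine, not from the original utility):

```python
import random

def flip_random_bit(data: bytes) -> bytes:
    """Return a copy of data with one randomly chosen bit inverted,
    simulating a bit error that slipped past the lower-level checksums."""
    if not data:
        return data
    buf = bytearray(data)
    bit = random.randrange(len(buf) * 8)
    buf[bit // 8] ^= 1 << (bit % 8)
    return bytes(buf)
```

Running a small fraction of packets through this before delivery is a good way to find out whether your own protocol validates its payloads or just trusts whatever arrives.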

Code freebie

To wrap it up, here's my little utility to simulate a network for UDP packets. It handles one link, queuing packets in both directions. The server address/port is fixed, but whenever the utility sees a new client address/port it updates its record, so you can leave it running: as long as the server address/port stays the same (which it usually does), it doesn't matter if you restart the server, the client, or both. If you need more clients, though, you need to start multiple instances.

This is not the most elegant piece of code I've ever written, but it does the trick. I haven't built burst packet loss or bit errors into it, but that shouldn't be too hard to add.
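For readers who can't grab the utility itself, the core of such a one-link UDP relay can be sketched roughly as below. This is my own reconstruction of the described behavior, not the original code, and it omits the loss and delay hooks, which would wrap the `sendto` calls:

```python
import socket

def route(addr, server_addr, client_addr):
    """Decide where a packet arriving from `addr` goes.
    Returns (destination_or_None, updated_client_addr)."""
    if addr == server_addr:
        return client_addr, client_addr  # server -> last-seen client (if any)
    return server_addr, addr             # client -> server; remember the client

def run_relay(listen_port, server_addr):
    """One-link UDP relay: binds listen_port and forwards between a fixed
    server address and whichever client spoke to us most recently, so
    restarting the client simply rebinds the link."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", listen_port))
    client_addr = None
    while True:
        data, addr = sock.recvfrom(65535)
        dest, client_addr = route(addr, server_addr, client_addr)
        if dest is not None:
            sock.sendto(data, dest)
```

Using a single socket for both directions means the server always sees the relay's address as the packet source, which is what makes the client swapping transparent.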

