What are good and bad jitter times for a LAN?

I've just run jperf (a frontend to iperf) on our network between two workstations, and it recorded jitter between 0.033 ms and 0.048 ms. Is this good or bad? Are there more variables I would need to consider to make that decision?

EDIT: TCP/IP Ethernet LAN, 43 PCs and 1 server, 100 Mbit/s main switch, various small 8-port switches; the test was done using UDP. It's a Windows domain.

I want to install a few VoIP softphones on the workstations and see how many I can run that work reliably. I'm testing a few different workstations around the network to find the best-quality network paths, and I will also change some equipment if I identify bad connections.
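
For reference, the test was roughly equivalent to running the following iperf commands (assuming iperf 2.x, which jperf wraps; the hostname is a placeholder):

    # on one workstation, start the UDP server side
    iperf -s -u -i 1

    # on the other, send a one-minute UDP stream; iperf reports jitter per interval
    iperf -c <server-hostname> -u -b 1M -t 60 -i 1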


A quick back-of-the-envelope calculation gives the following:

A 100 Mbit/s link rate is 100 000 000 bits/second (network rates use decimal units, not powers of 1024).

A full-length Ethernet frame is 1518 bytes, i.e. 12 144 bits.

Transferring a full frame takes 12 144 / 100 000 000 seconds, around 0.12 ms.

A minimum-length Ethernet frame is 64 bytes, i.e. 512 bits.

Transferring a minimal frame takes 512 / 100 000 000 seconds, around 0.005 ms.
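
As a sanity check, here is the same arithmetic as a small Python sketch (the constants mirror the numbers above):

    # Serialization delay of one frame on a 100 Mbit/s link
    LINK_BPS = 100_000_000  # decimal, per network convention

    def tx_time_ms(frame_bytes):
        """Milliseconds to put one frame on the wire."""
        return frame_bytes * 8 / LINK_BPS * 1000

    print(tx_time_ms(1518))  # full frame:    ~0.121 ms
    print(tx_time_ms(64))    # minimal frame: ~0.005 ms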

So the whole jitter you measured can be explained by a single packet queued ahead of yours at one of the switches. In practice you should be more interested in the distribution of jitter than in single outliers; to get that data you need to do many more measurements. If that is the maximum latency you observe, you are doing about as well as Ethernet can.
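
If you want to look at the distribution rather than at single numbers, something like this minimal Python sketch would do (samples_ms is hypothetical and would come from repeated iperf runs):

    import statistics

    def summarize(samples_ms):
        # Mean, 95th percentile and maximum of many jitter measurements
        s = sorted(samples_ms)
        return {
            "mean": statistics.mean(s),
            "p95": s[int(0.95 * (len(s) - 1))],
            "max": s[-1],
        }

    print(summarize([0.033, 0.041, 0.048, 0.037]))  # values like yours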

For VoIP, a maximum latency under 10 ms will easily place you at MOS 5. People start complaining around 50-100 ms, and above 100 ms quality is significantly degraded. For grading of voice quality, have a look at http://en.wikipedia.org/wiki/Mean_opinion_score
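
If you want to encode those bands in a quick check, here is a rough sketch (thresholds paraphrased from the paragraph above; this is not an official MOS mapping):

    def voice_quality(one_way_latency_ms):
        # Illustrative bands only; real MOS scoring is more involved
        if one_way_latency_ms < 10:
            return "excellent"
        if one_way_latency_ms < 50:
            return "good"
        if one_way_latency_ms <= 100:
            return "users start complaining"
        return "significantly degraded"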


Jitter of less than 5 ms is likely to be swamped by the scheduling subsystem of any general-purpose OS at either end of the connection.

In general, jitter of around 10% of the RTT is reasonable; long, contended, or unreliable links can obviously push it higher.
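
As a rule-of-thumb check (the 10% figure comes from this answer, nothing more rigorous):

    def jitter_reasonable(jitter_ms, rtt_ms):
        # Jitter up to ~10% of the round-trip time is generally acceptable
        return jitter_ms <= 0.10 * rtt_ms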


I really doubt you're going to notice a delay of one twenty-thousandth of a second in your network communications.