Measure total latency of SSH session

Is there a way to measure/report the overall latency in a tunneled SSH session?

My particular setup is:

  • Client (OS X + wifi router + ADSL modem)
  • Gateway SSH server exposed to Internet
  • Internal SSH target to which I'm tunneling

I'm interested in seeing the latency between the console on my local machine and the final machine on which I have the session open.
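
(For reference, a tunneled session like this is typically opened through the gateway, e.g. with ssh's ProxyJump option; the host names below are just placeholders:)

$ ssh -J user@gateway.example.com user@internal.example.com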


Solution 1:

See the sshping utility: https://github.com/spook/sshping

Example:

# sshping 172.16.47.143
--- Login: 1725 msec
--- Minimum Latency: 4046 nsec
---  Median Latency: 11026 nsec  +/- 0 std dev
--- Average Latency: 178105 nsec
--- Maximum Latency: 8584886 nsec
---      Echo count: 1000 Bytes
---  Transfer Speed: 11694919 Bytes/second

# sshping --help
Usage: sshping [options] [user@]addr[:port]

  SSH-based ping that measures interactive character echo latency
  and file transfer throughput.  Pronounced "shipping".

Options:
  -c  --count NCHARS   Number of characters to echo, default 1000
  -e  --echocmd CMD    Use CMD for echo command; default: cat > /dev/null
  -h  --help           Print usage and exit
  -i  --identity FILE  Identity file, ie ssh private keyfile
  -p  --password PWD   Use password PWD (can be seen, use with care)
  -r  --runtime SECS   Run for SECS seconds, instead of count limit
  -t  --tests e|s      Run tests e=echo s=speed; default es=both
  -v  --verbose        Show more output, use twice for more: -vv
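
For the multi-hop setup in the question, one sketch (host names and key path are placeholders) is to measure each leg separately using the documented options, running the first command from the client and the second from the gateway:

# sshping -i ~/.ssh/id_rsa user@gateway.example.com:22
# sshping -c 2000 -t e user@internal.example.com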

Solution 2:

Building on @nicht-verstehen's answer (Solution 3 below), I skipped some of the steps:

python -m timeit --setup 'import subprocess; p = subprocess.Popen(["ssh", "user@host", "cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, bufsize=0)' 'p.stdin.write(b"z"); assert p.stdout.read(1) == b"z"'

Where

python -m timeit executes the timeit Python module.

The -s/--setup option tells timeit which statement(s) to execute once before each timing run; the setup time is not included in the measurement.

subprocess.Popen(["ssh", "user@host", "cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, bufsize=0) launches ssh - running cat on the remote host - as a child process, with its I/O streams redirected to Python file-like objects. bufsize=0 ensures the I/O is unbuffered, so reads and writes are not delayed waiting for a buffer to fill.

Then, for each loop:
p.stdin.write(b"z") writes a single byte to the child (which ssh forwards to the remote cat).
p.stdout.read(1) reads a single byte back from the child. The assertion around it checks that it is the same byte that was written.

It boils down to the same thing, but skips creating the named pipes (mkfifo). I noticed that the more loops you run, the faster each loop gets; control the loop count with -n/--number: python -m timeit --number 50 ...
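
For example, the full command with an explicit loop count (otherwise identical to the one-liner above):

python -m timeit --number 50 --setup 'import subprocess; p = subprocess.Popen(["ssh", "user@host", "cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, bufsize=0)' 'p.stdin.write(b"z"); assert p.stdout.read(1) == b"z"'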

Solution 3:

I was trying to do this myself and came up with the following. There is probably a simpler way, but this is what worked for me.

First, prepare the named pipes that the benchmarking program will use to communicate over the SSH connection:

$ mkfifo /tmp/up /tmp/down

Then establish a connection in ControlMaster mode without executing any remote command. This lets us authenticate with the host interactively. Once the connection is established, ssh will simply "hang" in the foreground.

$ ssh $HOST -N -M -S /tmp/control

In a parallel terminal, run a remote cat in the background. It will be our echo server whose latency we measure; its input and output are connected to the FIFOs:

$ ssh $HOST -S /tmp/control cat </tmp/up >/tmp/down &

Then benchmark a small snippet (send a byte into the up FIFO, read a byte back from the down FIFO):

$ python -m timeit -s 'import os' \
    'os.write(3, b"z"); z = os.read(4, 1); assert z == b"z", "got %r" % z' \
    3>/tmp/up 4</tmp/down
10 loops, best of 3: 24.6 msec per loop

The measurement, of course, reflects round-trip latency. If you need to repeat the experiment, run the last two commands (ssh and python) again.

If something seems to go wrong, add ssh's -v flag to get more debugging output.
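
When you are done, tear everything down by asking the master connection to exit and removing the FIFOs (assuming the same paths as above):

$ ssh $HOST -S /tmp/control -O exit
$ rm /tmp/up /tmp/down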