Faster way to ping URL than curl [closed]

What would be the fastest way to check whether a given URL is working (i.e., responding with an OK HTTP status code)?

For now I'm using curl, but I have a bunch of URLs to test in a loop, so I'm looking for the fastest solution.

Are there any options worth checking out besides wget?


I suspect that any performance increase you see will come from improving whatever wrapper you're using to make your connections, rather than from eliminating the overhead of launching curl for each URL. Whether it's curl, netcat, or wget, you'll probably want to launch each one separately so you can process each result separately.
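For reference, the straightforward one-curl-per-URL loop the question describes might look something like this (urls.txt is just a placeholder for wherever your list lives):

while IFS= read -r url; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
  echo "$code $url"
done < urls.txt

That's the baseline the approaches below are competing against.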

But I'll answer this question in two ways, just for fun.

First off, you can actually make TCP connections in bash without having to launch something like curl/wget/netcat/fetch/etc. For example:

#!/usr/bin/env bash

hostlist=(
  www.xe.com
  www.google.com
)

for host in "${hostlist[@]}"; do
  exec 3<>"/dev/tcp/$host/80"           # open a socket connection on fd/3
  printf "HEAD / HTTP/1.0\r\n\r\n" >&3  # send a minimal request
  read -u 3 protocol code message       # read the result (first line only)
  exec 3<&-; exec 3>&-                  # close fd/3, in and out
  printf ">> %s -- %s %s (%s)\n" "$host" "$code" "${message%?}" "$protocol"
done

There's some nice documentation on this bash feature (the /dev/tcp pseudo-device) in the bash man page, under Redirections.

Note the handling of $message. Since this is HTTP, the status line ends with a \r; the ${message%?} expansion strips it for a more sensible display.

Note that looking for "OK" probably isn't what you want to do. In my example above, www.google.com returns a 302 redirect rather than a 200 OK, which is a perfectly valid response.
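If you do want to filter on the status code, a case statement inside that loop is one way to express which responses count as "working" (the accepted set below is just an example; adjust it to your needs):

  case "$code" in
    2??|301|302) echo ">> $host looks OK ($code)" ;;
    *)           echo ">> $host returned ${code:-no response}" ;;
  esac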

Note also that attempting to open a connection this way to a nonexistent host is an error. You'll want to think about the various error conditions you may encounter in this script, and how you want to handle them.
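As a hedged sketch of one way to handle that, you could run each check in a subshell so a failed /dev/tcp redirection can't abort the main script. The check_host function name is my own addition, it reuses the hostlist array from above, and you may still want an external timeout around hosts that are slow to refuse a connection:

check_host() {
  (
    exec 3<>"/dev/tcp/$1/80" || exit 1        # connection failure ends the subshell
    printf "HEAD / HTTP/1.0\r\n\r\n" >&3
    read -u 3 protocol code message || exit 1
    printf ">> %s -- %s %s (%s)\n" "$1" "$code" "${message%?}" "$protocol"
  ) 2>/dev/null                               # fd/3 closes when the subshell exits
}

for host in "${hostlist[@]}"; do
  check_host "$host" || echo ">> $host -- connection failed"
done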

The second option is to use a tool that accepts multiple URLs on one command line. As it happens, curl does this, and you can massage its output in beautiful and wondrous ways. For example:

curl -sL -w "%{http_code} %{url_effective}\n" \
  "http://www.xe.com/" -o /dev/null \
  "http://www.google.com" -o /dev/null

Note that this solution performs an HTTP GET rather than a HEAD, so you're transferring more data but getting a more "pure" result. If you want to save bandwidth by sending HEAD requests, use curl's -I option. (I've found that in some situations, particularly with Java™-based servers, the HEAD method is occasionally not implemented. Using GET increases the likelihood that the response will match the one a browser would get, at the cost of extra bandwidth.)
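For example, a HEAD-based variant of the command above might look like this (same example URLs as before):

curl -sIL -w "%{http_code} %{url_effective}\n" \
  "http://www.xe.com/" -o /dev/null \
  "http://www.google.com" -o /dev/null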

You can script the expansion of your URL list into a command line however you like (a sketch follows below). Of course, if you're dealing with thousands and thousands of URLs, you may want to handle them with the first solution after all.
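A hedged sketch of that expansion, assuming your URLs sit one per line in a file (urls.txt is just a placeholder name):

#!/usr/bin/env bash

args=()
while IFS= read -r url; do
  [ -n "$url" ] || continue          # skip blank lines
  args+=( "$url" -o /dev/null )      # pair each URL with its own -o
done < urls.txt

curl -sL -w "%{http_code} %{url_effective}\n" "${args[@]}"

Keep in mind that a single command line has a length limit, which is another reason the first solution scales better for huge lists.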


Maybe use Netcat?

( netcat "$domain" 80 | head -n 1 ) << EOF
HEAD / HTTP/1.0
Host: $domain


EOF

Output:

HTTP/1.1 200 OK
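
Since the question is about a whole list of URLs, a hedged sketch of looping that idea might look like the following (the host list is just an example, and the -w timeout flag varies between netcat implementations):

for domain in www.xe.com www.google.com; do
  status=$(printf 'HEAD / HTTP/1.0\r\nHost: %s\r\n\r\n' "$domain" \
             | netcat -w 5 "$domain" 80 | head -n 1)
  echo "$domain: ${status%$'\r'}"   # strip the trailing \r from the status line
done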