Force wget to time out
Solution 1:
You can run the wget command as a background process, poll for its process ID, and send it a SIGKILL to forcibly terminate it if it is still running after a certain amount of time.
wget ... &
wget_pid=$!
counter=0
timeout=60
# Poll once per second until wget exits or the timeout is reached
while [[ -n $(ps -e | grep "$wget_pid") && "$counter" -lt "$timeout" ]]
do
    sleep 1
    counter=$((counter + 1))
done
# Force-kill wget if it is still running after the timeout
if [[ -n $(ps -e | grep "$wget_pid") ]]; then
    kill -s SIGKILL "$wget_pid"
fi
Explanation:
- wget ... & - the & at the end runs the command in the background as opposed to the foreground.
- wget_pid=$! - $! is a special shell variable that contains the process ID of the most recently executed background command. Here we save it to a variable called wget_pid.
- while [[ -n $(ps -e | grep "$wget_pid") && "$counter" -lt "$timeout" ]] - look for the process every second; if it is still there, keep waiting until the timeout limit is reached.
- kill -s SIGKILL "$wget_pid" - we use kill to forcibly kill the wget process running in the background by sending it a SIGKILL signal.
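If you would rather not grep through the ps output, the same polling-and-kill logic can be written around kill -0, which only checks whether the process still exists without sending it a real signal. A minimal sketch of that variant, keeping the 60-second limit from above (the URL is just a placeholder):
wget "https://example.com/file" &
wget_pid=$!
timeout=60

# kill -0 succeeds while the process is still alive; it sends no actual signal
for ((i = 0; i < timeout; i++)); do
    kill -0 "$wget_pid" 2>/dev/null || break
    sleep 1
done

# Force-kill wget if it is still running after the loop
if kill -0 "$wget_pid" 2>/dev/null; then
    kill -s SIGKILL "$wget_pid"
fi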
Solution 2:
The easiest way is to use the timeout(1) command, part of GNU coreutils, so it is available pretty much anywhere bash is installed:
timeout 60 wget ..various wget args..
or, if you want to hard-kill wget when it's running too long:
timeout -s KILL 60 wget ..various wget args..
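If your script needs to know whether wget finished or was cut off, GNU timeout encodes that in its exit status: 124 when the command was stopped with the default TERM signal, and 128 plus the signal number (137 for KILL) when a hard kill was used. A small sketch of such a check (the URL is a placeholder):
timeout 60 wget "https://example.com/file"
status=$?

if [[ $status -eq 124 ]]; then
    echo "wget timed out after 60 seconds" >&2
elif [[ $status -ne 0 ]]; then
    echo "wget failed with exit status $status" >&2
fi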
Solution 3:
Recent wget versions (1.19+, at least) let you set a timeout directly:
-T, --timeout=SECONDS set all timeout values to SECONDS
--dns-timeout=SECS set the DNS lookup timeout to SECS
--connect-timeout=SECS set the connect timeout to SECS
--read-timeout=SECS set the read timeout to SECS
--waitretry=SECONDS wait 1..SECONDS between retries of a retrieval
-t, --tries=NUMBER set number of retries to NUMBER (0 unlimits)
Example
wget --timeout 4 --tries 1 "https://www.google.com/"
This example will try once to connect and fetch the URL https://www.google.com. After 4 seconds it will time out.
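If a single overall value is too coarse, the more specific options above can be combined, for example to fail fast on DNS or connection problems while still allowing a slow transfer (the values below are only illustrative):
# Fail fast on DNS/connect problems, allow up to 60 seconds of read inactivity
wget --dns-timeout=5 --connect-timeout=10 --read-timeout=60 --tries=2 "https://www.google.com/"
Note that --timeout is just shorthand that sets all of the more specific timeout values at once.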