Executing script remotely with "curl | bash" feedback
Solution 1:
As @Zoredache points out, ssh relays the status of the remote command as its own exit status, so error detection works transparently over SSH. However, two important points in your example require special consideration.
First, curl tends to be very lenient, treating many abnormal conditions as success. For example, curl http://serverfault.com/some-non-existent-url-that-returns-404 actually has an exit status of 0. I find this behavior counterintuitive. To treat those conditions as errors, I like to use the -fsS flags (a quick check follows the list below):
- The --fail flag suppresses the output when a failure occurs, so that bash won't get a chance to execute the web server's 404 error page as if it were code.
- The --silent --show-error flags, together, provide a reasonable amount of error reporting. --silent suppresses all commentary from curl; --show-error re-enables error messages, which are sent to STDERR.
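A quick way to see the difference is to request a URL that returns an HTTP error such as 404 and compare exit statuses (the URL below is only a placeholder, assumed to return 404):

curl -s http://example.com/missing-page > /dev/null ; echo $?
# prints 0: the error page was fetched "successfully"
curl -fsS http://example.com/missing-page > /dev/null ; echo $?
# prints 22: curl's documented exit code for an HTTP error of 400 or above when --fail is in effect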
Second, you have a pipe, which means that a failure could occur in either the first or the second command. From the section about Pipelines in bash(1):
The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled (see The Set Builtin). If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
Side note: The bash documentation is relevant not because you pipe to bash, but because (I assume) it is your remote user's login shell, and would therefore be the program that interprets the remote command line and handles the execution of the pipeline. If the user has a different login shell, then refer to that shell's documentation.
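If you are not sure which login shell the remote account uses, one way to check it over SSH (the hostname here is a placeholder) is:

ssh user@remote_host 'echo "$SHELL"'

The single quotes keep $SHELL from being expanded locally, so the value printed is the remote account's login shell.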
As a concrete example,
( echo whoami ; false ) | bash
echo $?
yields the output
login
0
demonstrating that the bash at the end of the pipeline will mask the error status returned by false. It will return 0 as long as it successfully executes whoami.
In contrast,
set -o pipefail
( echo whoami ; false ) | bash
echo $?
yields
login
1
so that the failure in the first half of the pipeline is reported.
Putting it all together, then, the solution should be
ssh user@remote_host 'set -o pipefail ; curl -fsS http://some_server/script.sh | bash'
That way, you will get a non-zero exit status if any of the following returns non-zero:
- ssh
- The remote login shell
- curl
- The bash at the end of the pipeline
Furthermore, if curl -fsS detects an abnormal HTTP status code, then it will:
- suppress its STDOUT, so that nothing will get piped to bash to be executed
- return a non-zero value, which is properly propagated all the way
- print a one-line diagnostic message to its STDERR, which is also propagated all the way
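If the calling side needs to act on that combined status, a minimal wrapper (hostname and URL are placeholders, as above) could look like this:

ssh user@remote_host 'set -o pipefail ; curl -fsS http://some_server/script.sh | bash'
status=$?
if [ "$status" -ne 0 ]; then
    # Any failure in ssh, the remote shell, curl, or the piped bash lands here.
    echo "remote script failed with exit status $status" >&2
    exit "$status"
fi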
Solution 2:
That's a horrible hack. If you want remote execution, use something that does remote execution properly, such as func or mcollective.