Exit when one process in pipe fails

Solution 1:

I think that you're looking for the pipefail option. From the bash man page:

pipefail

If set, the return value of a pipeline is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands in the pipeline exit successfully. This option is disabled by default.

So if you start your wrapper script with

#!/bin/bash

set -e
set -o pipefail

Then the wrapper will exit as soon as any command fails (set -e), and pipefail makes a failure anywhere in a pipeline count as a failure of the whole pipeline, which is the behavior you want.
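A quick sketch of the difference (a hypothetical demo script, not part of the OP's wrapper): without pipefail, a failure in the middle of a pipeline is masked by a successful last command; with pipefail it is not.

```shell
#!/bin/bash
# Without pipefail: the pipeline's status is that of the LAST command,
# so the failure of `false` is hidden by the success of `true`.
false | true
echo "without pipefail: $?"   # prints 0

# With pipefail: any non-zero status in the pipeline fails the pipeline.
set -o pipefail
false | true
echo "with pipefail: $?"      # prints 1
```

Note that under `set -e` the second pipeline would abort the script, which is exactly the combined effect described above.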

Solution 2:

The main issue at hand here is clearly the pipe. In bash, when executing a command of the form

command1 | command2

and command2 terminates, the pipe into which command1 writes its output becomes broken. The broken pipe, however, does not terminate command1 immediately: command1 is only killed the next time it tries to write to the broken pipe, at which point it receives SIGPIPE and exits. A simple demonstration of this can be seen in this question.
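This is easy to observe with `yes` and `head` (a standard illustration, not the OP's commands): `head` exits after one line, and `yes` is subsequently killed by SIGPIPE on its next write, which bash reports as status 141 (128 + signal 13).

```shell
#!/bin/bash
# `yes` writes "y" forever; `head` exits after reading one line and
# closes its end of the pipe. The next write by `yes` raises SIGPIPE.
yes | head -n 1 > /dev/null

# PIPESTATUS holds the exit status of each command in the last pipeline.
echo "exit status of yes: ${PIPESTATUS[0]}"   # prints 141 (128 + SIGPIPE)
```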

If you want to avoid this problem, you should make use of process substitution in combination with input redirection. This way, you avoid pipes. The above pipeline is then written as:

command2 < <(command1)

In the case of the OP, this would become:

./script.sh < <(tee /dev/stderr) | tee /dev/stderr

which can also be written as:

./script.sh < <(tee /dev/stderr) > >(tee /dev/stderr)
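To see what this buys you in terms of exit status (a simplified sketch using `yes`/`head` rather than the OP's script): with a plain pipe and pipefail, the producer's SIGPIPE death fails the whole pipeline, whereas with process substitution only the status of the main command is reported.

```shell
#!/bin/bash
set -o pipefail

# Plain pipe: `yes` dies of SIGPIPE, and pipefail surfaces its status.
yes | head -n 1 > /dev/null
echo "pipeline: $?"        # prints 141

# Process substitution: `head` is an ordinary command reading from a
# redirected input, so only its own status is reported.
head -n 1 < <(yes) > /dev/null
echo "substitution: $?"    # prints 0
```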