Checking Bash exit status of several commands efficiently

Is there something similar to pipefail for multiple commands, like a 'try' statement, but within bash? I would like to do something like this:

echo "trying stuff"
try {
    command1
    command2
    command3
}

And at any point, if any command fails, drop out and echo out the error of that command. I don't want to have to do something like:

command1
if [ $? -ne 0 ]; then
    echo "command1 borked it"
fi

command2
if [ $? -ne 0 ]; then
    echo "command2 borked it"
fi

And so on... or anything like:

set -o pipefail
command1 "arg1" "arg2" | command2 "arg1" "arg2" | command3

Because (correct me if I'm wrong) the arguments of each command will interfere with each other. These two methods seem horribly long-winded and nasty to me, so I'm here appealing for a more efficient method.


You can write a function that launches and tests the command for you. Pass the command and its arguments directly; the "$@" expansion preserves each argument as a separate word:

function mytest {
    "$@"
    local status=$?
    if (( status != 0 )); then
        echo "error with $1" >&2
    fi
    return $status
}

mytest "$command1"
mytest "$command2"

What do you mean by "drop out and echo the error"? If you mean you want the script to terminate as soon as any command fails, then just do

set -e    # DON'T do this.  See commentary below.

at the start of the script (but note warning below). Do not bother echoing the error message: let the failing command handle that. In other words, if you do:

#!/bin/sh

set -e    # Use with caution.  E.g., don't do this
command1
command2
command3

and command2 fails, while printing an error message to stderr, then it seems that you have achieved what you want. (Unless I misinterpret what you want!)

As a corollary, any command that you write must behave well: it must report errors to stderr instead of stdout (the sample code in the question prints errors to stdout) and it must exit with a non-zero status when it fails.
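
For instance, a well-behaved command in this sense might look like the following sketch (do_the_work is a hypothetical helper standing in for the real work):

command2() {
    if ! do_the_work; then                      # do_the_work is a placeholder
        echo "command2: the work failed" >&2    # report errors to stderr
        return 1                                # exit with a non-zero status
    fi
}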

However, I no longer consider this to be a good practice. set -e has changed its semantics across bash versions, and although it works fine for a simple script, there are so many edge cases that it is essentially unusable. (Consider: set -e; foo() { false; echo should not print; }; foo && echo ok. The semantics here are somewhat reasonable, but if you refactor code into a function that relied on the option setting to terminate early, you can easily get bitten.)
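
To see that trap concretely, here is a minimal demonstration (the behavior follows POSIX, which ignores -e for every command of an AND-OR list except the last):

#!/bin/bash
set -e

foo() { false; echo "should not print"; }

foo && echo ok    # -e is suspended inside foo: prints "should not print", then "ok"
foo               # -e applies here: the script exits at `false'; nothing more is printed

IMO it is better to write: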

#!/bin/sh

command1 || exit
command2 || exit
command3 || exit

or

#!/bin/sh

command1 && command2 && command3

I have a set of scripting functions that I use extensively on my Red Hat system. They use the system functions from /etc/init.d/functions to print green [ OK ] and red [FAILED] status indicators.

You can optionally set the $LOG_STEPS variable to a log file name if you want to log which commands fail.

Usage

step "Installing XFS filesystem tools:"
try rpm -i xfsprogs-*.rpm
next

step "Configuring udev:"
try cp *.rules /etc/udev/rules.d
try udevtrigger
next

step "Adding rc.postsysinit hook:"
try cp rc.postsysinit /etc/rc.d/
try ln -s rc.d/rc.postsysinit /etc/rc.postsysinit
try echo $'\nexec /etc/rc.postsysinit' >> /etc/rc.sysinit
next

Output

Installing XFS filesystem tools:        [  OK  ]
Configuring udev:                       [FAILED]
Adding rc.postsysinit hook:             [  OK  ]

Code

#!/bin/bash

. /etc/init.d/functions

# Use step(), try(), and next() to perform a series of commands and print
# [  OK  ] or [FAILED] at the end. The step as a whole fails if any individual
# command fails.
#
# Example:
#     step "Remounting / and /boot as read-write:"
#     try mount -o remount,rw /
#     try mount -o remount,rw /boot
#     next
step() {
    echo -n "$@"

    STEP_OK=0
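    # (Writing the flag to a temp file as well lets a try() that runs in a
    # subshell, e.g. in a pipeline, report failure back to next(); a plain
    # variable assignment would be lost when the subshell exits.)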
    [[ -w /tmp ]] && echo $STEP_OK > /tmp/step.$$
}

try() {
    # Check for `-b' argument to run command in the background.
    local BG=

    [[ $1 == -b ]] && { BG=1; shift; }
    [[ $1 == -- ]] && {       shift; }

    # Run the command.
    if [[ -z $BG ]]; then
        "$@"
    else
        "$@" &
    fi

    # Check if command failed and update $STEP_OK if so.
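    # (For a backgrounded command, $? is the status of launching the job, not
    # of the command itself; collecting the real code would require `wait'.)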
    local EXIT_CODE=$?

    if [[ $EXIT_CODE -ne 0 ]]; then
        STEP_OK=$EXIT_CODE
        [[ -w /tmp ]] && echo $STEP_OK > /tmp/step.$$

        if [[ -n $LOG_STEPS ]]; then
            local FILE=$(readlink -m "${BASH_SOURCE[1]}")
            local LINE=${BASH_LINENO[0]}

            echo "$FILE: line $LINE: Command \`$*' failed with exit code $EXIT_CODE." >> "$LOG_STEPS"
        fi
    fi

    return $EXIT_CODE
}

next() {
    [[ -f /tmp/step.$$ ]] && { STEP_OK=$(< /tmp/step.$$); rm -f /tmp/step.$$; }
    [[ $STEP_OK -eq 0 ]]  && echo_success || echo_failure
    echo

    return $STEP_OK
}
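
Note that echo_success and echo_failure come from Red Hat's /etc/init.d/functions. If you want to try this on another distribution, you could substitute minimal stand-ins (a sketch: \033[60G is the ANSI escape to jump to column 60, mirroring Red Hat's default RES_COL; the real functions also add color):

echo_success() { echo -en "\033[60G[  OK  ]"; }    # no trailing newline; next() adds it
echo_failure() { echo -en "\033[60G[FAILED]"; }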

For what it's worth, a shorter way to write code to check each command for success is:

command1 || echo "command1 borked it"
command2 || echo "command2 borked it"

It's still tedious but at least it's readable.
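
And if you want to both complain and drop out, as the question asks, you can attach a braced group (a sketch reusing the question's wording):

command1 || { echo "command1 borked it" >&2; exit 1; }
command2 || { echo "command2 borked it" >&2; exit 1; }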


An alternative is simply to join the commands together with && so that the first one to fail prevents the remainder from executing:

command1 &&
  command2 &&
  command3

This isn't the syntax you asked for in the question, but it's a common pattern for the use case you describe. In general the commands should be responsible for printing failures so that you don't have to do so manually (maybe with a -q flag to silence errors when you don't want them). If you have the ability to modify these commands, I'd edit them to yell on failure, rather than wrap them in something else that does so.
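
If you would still like a single catch-all message when any step fails, grouping the chain works (a sketch):

{ command1 && command2 && command3; } || echo "one of the steps failed" >&2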


Notice also that you don't need to do:

command1
if [ $? -ne 0 ]; then

You can simply say:

if ! command1; then

And when you do need to check return codes, use an arithmetic context instead of [ ... -ne ... ]:

ret=$?
# ... do something else ...
if (( ret != 0 )); then
    echo "the command failed with status $ret" >&2
fi