Catching error codes in a shell pipe

Tags: Shell, Error Handling, Pipe

Shell Problem Overview


I currently have a script that does something like

./a | ./b | ./c

I want to modify it so that if any of a, b or c exits with an error code, I print an error message and stop instead of piping bad output forward.

What would be the simplest/cleanest way to do so?

Shell Solutions


Solution 1 - Shell

In bash you can use set -e and set -o pipefail at the beginning of your file. A subsequent command ./a | ./b | ./c will then fail when any of the three scripts fails, and set -e stops the script at that point. The pipeline's return code is the return code of the rightmost script that failed.

Note that pipefail isn't available in standard sh.
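
For example, a minimal sketch of a bash wrapper (./a, ./b and ./c stand in for the real commands from the question):

#!/bin/bash
set -e          # abort the script as soon as any command fails
set -o pipefail # make a pipeline fail when any of its components fails

# With pipefail, $? below is the status of the rightmost failing command.
./a | ./b | ./c || { echo "pipeline failed with status $?" >&2; exit 1; }
echo "all three commands succeeded"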

Solution 2 - Shell

You can also check the bash-specific ${PIPESTATUS[@]} array after the full pipeline has run, e.g. if you run:

./a | ./b | ./c

Then PIPESTATUS will be an array of the exit codes of each command in the pipe, so if the middle command failed, echo ${PIPESTATUS[@]} would print something like:

0 1 0

and something like this run after the command:

test ${PIPESTATUS[0]} -eq 0 -a ${PIPESTATUS[1]} -eq 0 -a ${PIPESTATUS[2]} -eq 0

will allow you to check that all commands in the pipe succeeded.
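
If the pipe may grow, looping over a saved copy of the array can be easier to maintain; a rough sketch (bash only, and the copy must happen immediately because the next command overwrites PIPESTATUS):

./a | ./b | ./c
status=("${PIPESTATUS[@]}")   # copy right away; any later command resets PIPESTATUS

for i in "${!status[@]}"; do
    if [ "${status[$i]}" -ne 0 ]; then
        echo "command $((i + 1)) in the pipe failed with status ${status[$i]}" >&2
        exit 1
    fi
done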

Solution 3 - Shell

If you really don't want the second command to proceed until the first is known to be successful, then you probably need to use temporary files. The simple version of that is:

tmp=${TMPDIR:-/tmp}/mine.$$
if ./a > $tmp.1
then
    if ./b <$tmp.1 >$tmp.2
    then
        if ./c <$tmp.2
        then : OK
        else echo "./c failed" 1>&2
        fi
    else echo "./b failed" 1>&2
    fi
else echo "./a failed" 1>&2
fi
rm -f $tmp.[12]

The '1>&2' redirection can also be abbreviated '>&2'; however, an old version of the MKS shell mishandled the error redirection without the preceding '1', so I've used the unambiguous notation for reliability for ages.

This leaks files if you interrupt something. Bomb-proof (more or less) shell programming uses:

tmp=${TMPDIR:-/tmp}/mine.$$
trap 'rm -f $tmp.[12]; exit 1' 0 1 2 3 13 15
...if statement as before...
rm -f $tmp.[12]
trap 0 1 2 3 13 15

The first trap line says: run the commands 'rm -f $tmp.[12]; exit 1' when any of the signals 1 SIGHUP, 2 SIGINT, 3 SIGQUIT, 13 SIGPIPE or 15 SIGTERM occurs, or on 0 (when the shell exits for any reason). If you're writing a shell script, the final trap only needs to remove the trap on 0, the shell exit trap (you can leave the other signals in place since the process is about to terminate anyway).
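
Assembled into a single script, the bomb-proof version reads roughly like this (./a, ./b and ./c as in the question):

#!/bin/sh
tmp=${TMPDIR:-/tmp}/mine.$$
trap 'rm -f $tmp.[12]; exit 1' 0 1 2 3 13 15

if ./a > $tmp.1
then
    if ./b <$tmp.1 >$tmp.2
    then
        if ./c <$tmp.2
        then : OK
        else echo "./c failed" 1>&2
        fi
    else echo "./b failed" 1>&2
    fi
else echo "./a failed" 1>&2
fi

rm -f $tmp.[12]
trap 0 1 2 3 13 15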

In the original pipeline, it is feasible for 'c' to be reading data from 'b' before 'a' has finished - this is usually desirable (it gives multiple cores work to do, for example). If 'b' is a 'sort' phase, then this won't apply - 'b' has to see all its input before it can generate any of its output.

If you want to detect which command(s) fail, you can use:

(./a || echo "./a exited with $?" 1>&2) |
(./b || echo "./b exited with $?" 1>&2) |
(./c || echo "./c exited with $?" 1>&2)

This is simple and symmetric - it is trivial to extend to a 4-part or N-part pipeline.

Simple experimentation with 'set -e' didn't help.

Solution 4 - Shell

Unfortunately, the answer by Jonathan requires temporary files and the answers by Michel and Imron require bash (even though this question is tagged shell). As pointed out by others already, it is not possible to abort the pipe before later processes are started. All processes are started at once and will thus all run before any errors can be communicated. But the title of the question also asks about error codes. These can be retrieved and investigated after the pipe has finished to figure out whether any of the involved processes failed.

Here is a solution that catches all errors in the pipe and not only errors of the last component. So this is like bash's pipefail, just more powerful in the sense that you can retrieve all the error codes.

res=$( { (./a 2>&1 || echo "1st failed with $?" >&2) |
        (./b 2>&1 || echo "2nd failed with $?" >&2) |
        (./c 2>&1 || echo "3rd failed with $?" >&2); } 2>&1 > /dev/null)
if [ -n "$res" ]; then
	echo pipe failed
fi

To detect whether anything failed, an echo command prints a message on standard error whenever a command fails. The combined standard error output is then saved in $res and investigated later. This is also why the standard error of all processes is redirected to standard output. You can also send that output to /dev/null or leave it as yet another indicator that something went wrong. You can replace the last redirect to /dev/null with a file if you need to store the output of the last command somewhere.

To play more with this construct and to convince yourself that this really does what it should, I replaced ./a, ./b and ./c by subshells which execute echo, cat and exit. You can use this to check that this construct really forwards all the output from one process to another and that the error codes get recorded correctly.

res=$( { (sh -c "echo 1st out; exit 0" 2>&1 || echo "1st failed with $?" >&2) |
        (sh -c "cat; echo 2nd out; exit 0" 2>&1 || echo "2nd failed with $?" >&2) |
        (sh -c "echo start; cat; echo end; exit 0" 2>&1 || echo "3rd failed with $?" >&2); } 2>&1 > /dev/null)
if [ -n "$res" ]; then
	echo pipe failed
fi
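
If you need the individual exit codes and not only a pass/fail flag, one possibility is to fish them out of $res afterwards; a sketch for the second command (the sed pattern simply matches the message format used above):

# $res as produced by the construct above
code_b=$(printf '%s\n' "$res" | sed -n 's/^2nd failed with //p')
if [ -n "$code_b" ]; then
    echo "./b exited with status $code_b" >&2
fi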

Solution 5 - Shell

This answer is in the spirit of the accepted answer, but using shell variables instead of temporary files.

if TMP_A="$(./a)"
then
 if TMP_B="$(echo "$TMP_A" | ./b)"
 then
  if TMP_C="$(echo "$TMP_B" | ./c)"
  then
   echo "$TMP_C"
  else
   echo "./c failed"
  fi
 else
  echo "./b failed"
 fi
else
 echo "./a failed"
fi

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Question: hugomg
Solution 1 - Shell: Michel Samia
Solution 2 - Shell: Imron
Solution 3 - Shell: Jonathan Leffler
Solution 4 - Shell: josch
Solution 5 - Shell: Jasha