pipe stdout and stderr to two different processes in shell script?

Bash, Shell, Pipe, Io Redirection

Bash Problem Overview


I have a pipeline that does just

 command1 | command2

So, stdout of command1 goes to command2, while stderr of command1 goes to the terminal (or wherever stdout of the shell is pointing).

How can I pipe stderr of command1 to a third process (command3) while stdout is still going to command2 ?

Bash Solutions


Solution 1 - Bash

Use another file descriptor

{ command1 2>&3 | command2; } 3>&1 1>&2 | command3

You can use up to 7 other file descriptors: from 3 to 9.
If you want more explanation, please ask, I can explain ;-)

Test

{ { echo a; echo >&2 b; } 2>&3 | sed >&2 's/$/1/'; } 3>&1 1>&2 | sed 's/$/2/'

output:

b2
a1
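
To make the routing visible without the race on output ordering, here is the same pattern writing each stream to its own file (the file names stdout.log and stderr.log are just for illustration):

```shell
# The inner command prints one line on each stream. Inside the braces,
# its stderr is moved to fd 3; the outer 3>&1 maps fd 3 onto the
# group's original stdout, i.e. the pipe to the final command, so that
# pipe carries the original stderr.
{ { echo out; echo err >&2; } 2>&3 | cat > stdout.log; } 3>&1 1>&2 | cat > stderr.log
```

After the pipeline finishes, stdout.log contains `out` and stderr.log contains `err`.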

Example

Produce two log files:

  1. stderr only

  2. stderr and stdout

    { { { command 2>&1 1>&3; } | tee err-only.log; } 3>&1; } > err-and-stdout.log

If command is echo "stdout"; echo "stderr" >&2 then we can test it like this:

$ { { { echo out>&3;echo err>&1;}| tee err-only.log;} 3>&1;} > err-and-stdout.log
$ head err-only.log err-and-stdout.log
==> err-only.log <==
err

==> err-and-stdout.log <==
out
err

Solution 2 - Bash

The accepted answer results in reversing stdout and stderr. Here's a method that preserves them (since Googling for that purpose brings up this post):

{ command 2>&1 1>&3 3>&- | stderr_command; } 3>&1 1>&2 | stdout_command

Notice:

  • 3>&- is required to prevent fd 3 from being inherited by command, since an inherited fd 3 can lead to unexpected results depending on what command does with it.
Parts explained:
  1. Outer part first:

    1. 3>&1 -- fd 3 for { ... } is set to what fd 1 was (i.e. stdout)
    2. 1>&2 -- fd 1 for { ... } is set to what fd 2 was (i.e. stderr)
    3. | stdout_command -- fd 1 (was stdout) is piped through stdout_command
  2. Inner part inherits file descriptors from the outer part:

    1. 2>&1 -- fd 2 for command is set to what fd 1 was (i.e. stderr as per outer part)
    2. 1>&3 -- fd 1 for command is set to what fd 3 was (i.e. stdout as per outer part)
    3. 3>&- -- fd 3 for command is set to nothing (i.e. closed)
    4. | stderr_command -- fd 1 (was stderr) is piped through stderr_command

Example:

foo() {
    echo a
    echo b >&2
    echo c
    echo d >&2
}

{ foo 2>&1 1>&3 3>&- | sed -u 's/^/err: /'; } 3>&1 1>&2 | sed -u 's/^/out: /'
Output:
out: a
err: b
err: d
out: c

(Order of a -> c and b -> d will always be indeterminate because there's no form of synchronization between stderr_command and stdout_command.)
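
To check the routing without that ordering race, the same structure can write each stream to a file (foo as above; the log file names are illustrative):

```shell
foo() {
    echo a
    echo b >&2
    echo c
    echo d >&2
}

# foo's stdout lines (a, c) land in out.log via fd 3; its stderr
# lines (b, d) go down the inner pipe into err.log.
{ foo 2>&1 1>&3 3>&- | cat > err.log; } 3>&1 1>&2 | cat > out.log
```

Within each file the original line order is preserved, since each stream flows through a single pipe.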

Solution 3 - Bash

Using process substitution:

command1 > >(command2) 2> >(command3)

See http://tldp.org/LDP/abs/html/process-sub.html for more info.
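
For instance (Bash-only, since process substitution is not POSIX; the file names are illustrative), sending each stream of a test command to its own file:

```shell
#!/bin/bash
# Process substitution: stdout feeds one process, stderr another.
{ echo out; echo err >&2; } > >(cat > stdout.log) 2> >(cat > stderr.log)
# The substituted processes run asynchronously; in a script, give
# them a moment to finish before reading the files.
sleep 0.2
```
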

Solution 4 - Bash

Simply redirect stderr to stdout

{ command1 | command2; } 2>&1 | command3

Caution: command3 will also read command2's stdout (if any).
To avoid that, you can discard command2's stdout:

{ command1 | command2 >/dev/null; } 2>&1 | command3

However, if you want to keep command2's stdout (e.g. on the terminal),
see my other, more complex answer (Solution 1).

Test

{ { echo -e "a\nb\nc" >&2; echo "----"; } | sed 's/$/1/'; } 2>&1 | sed 's/$/2/'

output:

a2
b2
c2
----12

Solution 5 - Bash

Pipe stdout as usual, but use Bash process substitution for the stderr redirection:

some_command 2> >(command of stderr) | command of stdout

Note that process substitution requires Bash, so use a #!/bin/bash shebang.
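
A concrete sketch of that shape (the file names and filters are just for illustration): stderr is captured to a log via process substitution while stdout flows down the ordinary pipe:

```shell
#!/bin/bash
# stderr goes to the substituted process; stdout goes down the pipe.
{ echo out; echo err >&2; } 2> >(cat > err.log) | tr '[:lower:]' '[:upper:]' > out.log
sleep 0.2   # the substituted process finishes asynchronously
```
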

Solution 6 - Bash

Zsh Version

I like the answer posted by @antak, but it doesn't work correctly in zsh due to multios. Here is a small tweak to use it in zsh:

{ unsetopt multios; command 2>&1 1>&3 3>&- | stderr_command; } 3>&1 1>&2 | stdout_command

To use, replace command with the command you want to run, and replace stderr_command and stdout_command with your desired pipelines. For example, the command ls / /foo will produce both stdout output and stderr output, so we can use it as a test case. To save the stdout to a file called stdout and the stderr to a file called stderr, you can do this:

{ unsetopt multios; ls / /foo 2>&1 1>&3 3>&- | cat >stderr; } 3>&1 1>&2 | cat >stdout

See @antak's original answer for full explanation.

Solution 7 - Bash

The same effect can be accomplished fairly easily with a FIFO. I'm not aware of a direct piping syntax for it (though it would be nifty to see one); here is how you might do it with a FIFO.

First, something that prints to both stdout and stderr, outerr.sh:

#!/bin/bash

echo "This goes to stdout"
echo "This goes to stderr" >&2

Then we can do something like this:

$ mkfifo err
$ wc -c err &
[1] 2546
$ ./outerr.sh 2>err | wc -c
20
20 err
[1]+  Done                    wc -c err

That way you set up the listener for stderr output first and it blocks until it has a writer, which happens in the next command, using the syntax 2>err. You can see that each wc -c got 20 characters of input.

Don't forget to clean up the fifo after you're done if you don't want it to hang around (i.e. rm). If the other command wants input on stdin and not a file arg, you can use input redirection like wc -c < err too.
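
Putting it together in script form, with the FIFO created in a temporary directory and removed on exit (the mktemp-based path and log names are illustrative):

```shell
#!/bin/sh
dir=$(mktemp -d)
trap 'rm -rf "$dir"' EXIT          # clean up the fifo afterwards
mkfifo "$dir/err"

cat < "$dir/err" > stderr.log &    # start the stderr reader first; it blocks until a writer opens the fifo
{ echo out; echo err >&2; } 2> "$dir/err" | cat > stdout.log
wait                               # let the background reader finish
```
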

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
| --- | --- | --- |
| Question | user964970 | View Question on Stackoverflow |
| Solution 1 - Bash | oHo | View Answer on Stackoverflow |
| Solution 2 - Bash | antak | View Answer on Stackoverflow |
| Solution 3 - Bash | FuePi | View Answer on Stackoverflow |
| Solution 4 - Bash | oHo | View Answer on Stackoverflow |
| Solution 5 - Bash | iBug | View Answer on Stackoverflow |
| Solution 6 - Bash | jbyler | View Answer on Stackoverflow |
| Solution 7 - Bash | FatalError | View Answer on Stackoverflow |