Confused about stdin, stdout and stderr?


Linux Problem Overview


I am rather confused about the purpose of these three files. If my understanding is correct, stdin is the file into which a program writes its requests to run a task in the process, stdout is the file into which the kernel writes its output and from which the requesting process accesses the information, and stderr is the file into which all the exceptions are entered. On opening these files to check whether this actually happens, I found nothing that seems to suggest so!

What I would like to know is what exactly the purpose of these files is. An absolutely dumbed-down answer with very little tech jargon would be appreciated!

Linux Solutions


Solution 1 - Linux

Standard input - this is the file handle that your process reads to get information from you.

Standard output - your process writes conventional output to this file handle.

Standard error - your process writes diagnostic output to this file handle.

That's about as dumbed-down as I can make it :-)

Of course, that's mostly by convention. There's nothing stopping you from writing your diagnostic information to standard output if you wish. You can even close the three file handles totally and open your own files for I/O.

When your process starts, it should already have these handles open and it can just read from and/or write to them.
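A minimal sketch in C of what that looks like (just an illustration of the convention, assuming a typical hosted environment):

#include <stdio.h>

int main(void)
{
    char line[256];

    /* stdin, stdout and stderr are already open when main() begins;
       nothing needs to be opened explicitly. */
    if (fgets(line, sizeof line, stdin) != NULL)       /* read from handle 0 */
        fprintf(stdout, "you typed: %s", line);        /* write to handle 1 */
    else
        fprintf(stderr, "no input received\n");        /* write to handle 2 */

    return 0;
}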

By default, they're probably connected to your terminal device (e.g., /dev/tty) but shells will allow you to set up connections between these handles and specific files and/or devices (or even pipelines to other processes) before your process starts (some of the manipulations possible are rather clever).

An example being:

my_prog <inputfile 2>errorfile | grep XYZ

which will:

  • create a process for my_prog.
  • open inputfile as your standard input (file handle 0).
  • open errorfile as your standard error (file handle 2).
  • create another process for grep.
  • attach the standard output of my_prog to the standard input of grep.

Re your comment:

>When I open these files in /dev folder, how come I never get to see the output of a process running?

It's because they're not normal files. While UNIX presents everything as a file in a file system somewhere, that doesn't make it so at the lowest levels. Most files in the /dev hierarchy are either character or block devices, effectively device drivers. They don't have a size, but they do have major and minor device numbers.

When you open them, you're connected to the device driver rather than a physical file, and the device driver is smart enough to know that separate processes should be handled separately.
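Here is a rough sketch of that in C (assuming a Linux system with glibc, where the major() and minor() macros live in <sys/sysmacros.h>): /dev/tty shows up as a character device identified by device numbers rather than a regular file with a meaningful size.

#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>   /* major(), minor() on glibc */

int main(void)
{
    struct stat st;

    if (stat("/dev/tty", &st) != 0) {
        perror("stat");
        return 1;
    }
    if (S_ISCHR(st.st_mode))
        printf("/dev/tty is a character device, major %u minor %u\n",
               major(st.st_rdev), minor(st.st_rdev));
    return 0;
}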

The same is true for the Linux /proc filesystem. Those aren't real files, just tightly controlled gateways to kernel information.

Solution 2 - Linux

It would be more correct to say that stdin, stdout, and stderr are "I/O streams" rather than files. As you've noticed, these entities do not live in the filesystem. But the Unix philosophy, as far as I/O is concerned, is "everything is a file". In practice, that really means that you can use the same library functions and interfaces (printf, scanf, read, write, select, etc.) without worrying about whether the I/O stream is connected to a keyboard, a disk file, a socket, a pipe, or some other I/O abstraction.

Most programs need to read input, write output, and log errors, so stdin, stdout, and stderr are predefined for you, as a programming convenience. This is only a convention, and is not enforced by the operating system.
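As a small sketch of that idea, the same read loop works no matter what stdin happens to be connected to:

#include <stdio.h>

int main(void)
{
    char buf[512];

    /* This loop neither knows nor cares whether stdin is a terminal,
       a regular file, a pipe or a socket. */
    while (fgets(buf, sizeof buf, stdin) != NULL)
        fputs(buf, stdout);
    return 0;
}

Assuming the program is compiled to an executable called copycat (a made-up name), ./copycat, ./copycat < somefile and cat somefile | ./copycat all run exactly the same code.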

Solution 3 - Linux

As a complement to the answers above, here is a summary of redirections: Redirections cheatsheet

EDIT: This graphic is not entirely correct.

The first example does not use stdin at all; it passes "hello" as an argument to the echo command.

The graphic also says that 2>&1 has the same effect as &>; however,

ls Documents ABC > dirlist 2>&1
# does not give the same output as 
ls Documents ABC > dirlist &>

This is because &> requires a file to redirect to (the second command above is incomplete as written), whereas 2>&1 simply sends stderr to wherever stdout is currently pointing.

Solution 4 - Linux

I'm afraid your understanding is completely backwards. :)

Think of "standard in", "standard out", and "standard error" from the program's perspective, not from the kernel's perspective.

When a program needs to print output, it normally prints to "standard out". A program typically prints output to standard out with printf, which prints ONLY to standard out.

When a program needs to print error information (not necessarily exceptions, those are a programming-language construct, imposed at a much higher level), it normally prints to "standard error". It normally does so with fprintf, which accepts a file stream to use when printing. The file stream could be any file opened for writing: standard out, standard error, or any other file that has been opened with fopen or fdopen.

"standard in" is used when the program needs to read input, using fread, fgets, or getchar.

Any of these files can be easily redirected from the shell, like this:

cat /etc/passwd > /tmp/out     # redirect cat's standard out to /tmp/out
cat /nonexistent 2> /tmp/err   # redirect cat's standard error to /tmp/err
cat < /etc/passwd              # redirect cat's standard input to /etc/passwd

Or, the whole enchilada:

cat < /etc/passwd > /tmp/out 2> /tmp/err

There are two important caveats: First, "standard in", "standard out", and "standard error" are just a convention. They are a very strong convention, but it is all just an agreement that it is very nice to be able to run programs like this: grep echo /etc/services | awk '{print $2;}' | sort and have the standard output of each program hooked into the standard input of the next program in the pipeline.

Second, I've given the standard ISO C functions for working with file streams (FILE * objects). At the kernel level, it is all file descriptors (int references to the file table) and much lower-level operations like read and write, which do not do the happy buffering of the ISO C functions. I figured I would keep it simple and use the easier functions, but I thought you should know about the alternatives all the same. :)
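To make that distinction concrete, here is a rough sketch (illustrative only) that produces output both through the buffered FILE * layer and through the raw descriptors 1 and 2:

#include <stdio.h>
#include <string.h>
#include <unistd.h>   /* write(), STDOUT_FILENO, STDERR_FILENO */

int main(void)
{
    /* Buffered ISO C layer: FILE * objects. */
    printf("hello via printf on stdout\n");
    fprintf(stderr, "warning via fprintf on stderr\n");

    /* Unbuffered kernel layer: plain file descriptors. */
    const char *out = "hello via write() on fd 1\n";
    const char *err = "warning via write() on fd 2\n";
    write(STDOUT_FILENO, out, strlen(out));
    write(STDERR_FILENO, err, strlen(err));
    return 0;
}

Because the stdio layer buffers and the write() calls do not, the relative order of the lines can differ when stdout is redirected to a file.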

Solution 5 - Linux

I think saying that stderr should be used only for error messages is misleading.

It should also be used for informative messages that are meant for the user running the command and not for any potential downstream consumers of the data. That is, if you run a shell pipeline chaining several commands, you do not want informative messages like "getting item 30 of 42424" to appear on stdout, as they will confuse the consumer, but you might still want the user to see them.
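A rough sketch of that pattern in C (an illustration, not taken from any particular tool): the data goes to stdout so it can be piped onward, while the progress notes go to stderr so only the user sees them.

#include <stdio.h>

int main(void)
{
    int total = 42424;

    for (int i = 1; i <= total; i++) {
        /* Real payload: safe to pipe into the next command. */
        printf("item %d\n", i);

        /* Progress note: meant for the user, not the pipeline. */
        if (i % 10000 == 0)
            fprintf(stderr, "getting item %d of %d\n", i, total);
    }
    return 0;
}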

See this for historical rationale:

> All programs placed diagnostics on the standard output. This had always caused trouble when the output was redirected into a file, but became intolerable when the output was sent to an unsuspecting process. Nevertheless, unwilling to violate the simplicity of the standard-input-standard-output model, people tolerated this state of affairs through v6. Shortly thereafter Dennis Ritchie cut the Gordian knot by introducing the standard error file. That was not quite enough. With pipelines diagnostics could come from any of several programs running simultaneously. Diagnostics needed to identify themselves.

Solution 6 - Linux

stdin

Reads input through the console (e.g., keyboard input). Used in C with scanf

scanf(<formatstring>,<pointer to storage> ...);

stdout

Produces output to the console. Used in C with printf

printf(<string>, <values to print> ...);

stderr

Produces 'error' output to the console. Used in C with fprintf

fprintf(stderr, <string>, <values to print> ...);
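Put together, a minimal runnable example using all three streams might look something like this (an illustrative sketch only):

#include <stdio.h>

int main(void)
{
    int age;

    /* stdin: read a number typed on the console (or redirected in). */
    if (scanf("%d", &age) != 1) {
        /* stderr: report the problem. */
        fprintf(stderr, "error: expected a number\n");
        return 1;
    }

    /* stdout: normal output. */
    printf("you entered %d\n", age);
    return 0;
}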

Redirection

The source for stdin can be redirected. For example, instead of coming from keyboard input, it can come from a file (cat < file.txt) or from another program (ps | grep <userid>).

The destinations for stdout and stderr can also be redirected. For example, stdout can be redirected to a file: ls . > ls-output.txt, in which case the output is written to the file ls-output.txt. [Stderr can be redirected][1] with 2>.

[1]: http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-3.html "stderr redirect"

Solution 7 - Linux

Running ps aux reveals the current processes, all of which are listed in /proc/ as /proc/(pid)/. Calling cat /proc/(pid)/fd/1 reads from whatever that process has open as its standard output, I think. So perhaps,

/proc/(pid)/fd/0 - Standard Input File
/proc/(pid)/fd/1 - Standard Output File
/proc/(pid)/fd/2 - Standard Error File


But this only worked well for /bin/bash; other processes generally had nothing in 0, though many had errors written in 2.
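One way to see where those descriptors actually point (a sketch that assumes a Linux /proc filesystem) is to readlink() the /proc/self/fd entries of the current process:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char path[64], target[256];

    for (int fd = 0; fd <= 2; fd++) {
        snprintf(path, sizeof path, "/proc/self/fd/%d", fd);
        ssize_t n = readlink(path, target, sizeof target - 1);
        if (n < 0) {
            perror(path);
            continue;
        }
        target[n] = '\0';
        printf("fd %d -> %s\n", fd, target);
    }
    return 0;
}

Run from an interactive shell, all three typically point at the same pseudo-terminal, e.g. /dev/pts/0; redirect any of them and the corresponding link changes.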

Solution 8 - Linux

For authoritative information about these files, check out the man pages by running this command in your terminal:

$ man stdout 

But for a simple answer, each file is for:

stdout for stream output

stdin for stream input

stderr for printing errors or log messages

Each Unix program has all three of these streams.

Solution 9 - Linux

stderr does not use I/O cache buffering, so if our application needs to print critical messages (errors, exceptions) to the console or to a file, use stderr. Use stdout to print general log info, but because it uses I/O cache buffering, the application may close before the buffered messages are actually written to the file, which leaves debugging complex.
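A rough illustration of that difference (assuming the usual stdio defaults, where a redirected stdout is fully buffered and stderr is unbuffered):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("general log line on stdout\n");              /* buffered   */
    fprintf(stderr, "critical error line on stderr\n");  /* unbuffered */

    /* Abrupt exit: stdio buffers are NOT flushed. */
    _exit(1);
}

Assuming it is compiled to a program called bufdemo (a made-up name), running ./bufdemo > out.log typically leaves out.log empty, while the stderr line still reaches the terminal.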

Solution 10 - Linux

A file with associated buffering is called a stream and is declared to be a pointer to a defined type FILE. The fopen() function creates certain descriptive data for a stream and returns a pointer to designate the stream in all further transactions. Normally there are three open streams with constant pointers declared in the <stdio.h> header and associated with the standard open files. At program startup, three streams are predefined and need not be opened explicitly: standard input (for reading conventional input), standard output (for writing conventional output), and standard error (for writing diagnostic output). When opened, the standard error stream is not fully buffered; the standard input and standard output streams are fully buffered if and only if the stream can be determined not to refer to an interactive device.

https://www.mkssoftware.com/docs/man5/stdio.5.asp
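As a hedged sketch of how a program can inspect or override that default behaviour (standard C setvbuf() plus POSIX isatty()):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    if (isatty(fileno(stdout)))
        fprintf(stderr, "stdout refers to an interactive device (not fully buffered)\n");
    else
        fprintf(stderr, "stdout is redirected (fully buffered by default)\n");

    /* Force stdout to be unbuffered, like stderr. */
    setvbuf(stdout, NULL, _IONBF, 0);
    printf("this line is written out immediately, no flush needed\n");
    return 0;
}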

Solution 11 - Linux

There is a lengthy article on stdin, stdout and stderr; to summarize:

> Streams Are Handled Like Files
>
> Streams in Linux—like almost everything else—are treated as though they were files. You can read text from a file, and you can write text into a file. Both of these actions involve a stream of data. So the concept of handling a stream of data as a file isn't that much of a stretch.
>
> Each file associated with a process is allocated a unique number to identify it. This is known as the file descriptor. Whenever an action is required to be performed on a file, the file descriptor is used to identify the file.
>
> These values are always used for stdin, stdout, and stderr:
>
> 0: stdin
> 1: stdout
> 2: stderr
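Those fixed numbers are what shell redirection manipulates; as a rough sketch (illustrative only, with out.log as a made-up filename), dup2() does the same thing from inside a C program by making descriptor 1 refer to a file:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Open a log file and make file descriptor 1 (stdout) refer to it,
       roughly what the shell does for "prog > out.log". */
    int fd = open("out.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    dup2(fd, STDOUT_FILENO);
    close(fd);

    printf("this line ends up in out.log, not on the terminal\n");
    fprintf(stderr, "this line still goes to the terminal via fd 2\n");
    return 0;
}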

Ironically, I found this question on Stack Overflow and the article above because I was searching for information on abnormal / non-standard streams. So my search continues.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Question: Shouvik (View Question on Stackoverflow)
Solution 1 - Linux: paxdiablo (View Answer on Stackoverflow)
Solution 2 - Linux: Jim Lewis (View Answer on Stackoverflow)
Solution 3 - Linux: Leopold Gault (View Answer on Stackoverflow)
Solution 4 - Linux: sarnold (View Answer on Stackoverflow)
Solution 5 - Linux: dee (View Answer on Stackoverflow)
Solution 6 - Linux: mikek3332002 (View Answer on Stackoverflow)
Solution 7 - Linux: Sam (View Answer on Stackoverflow)
Solution 8 - Linux: Margach Chris (View Answer on Stackoverflow)
Solution 9 - Linux: geekanil (View Answer on Stackoverflow)
Solution 10 - Linux: Bahruz Balabayov (View Answer on Stackoverflow)
Solution 11 - Linux: WinEunuuchs2Unix (View Answer on Stackoverflow)