stderr

stderr, or the diagnostic channel, is one of the two stdio output channels that are automatically open and available in Tcl. Programs usually use it to display diagnostic and error messages.

See Also

exec
includes vital information about the standard channels
open
open pipelines and capture stdout and/or stderr
bgexec
stdout
stdin
stdio
magic names
Tcl syntax

Description

Tcl programs write to stderr in two ways:

  • [puts stderr $msg] gives fine-grained control;
  • [error $message] normally has the effect of reporting a diagnostic traceback to stderr when the error goes uncaught (an enclosing [catch] suppresses the report, and wish pops up an error info window instead).
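A minimal sketch of both routes (the message texts are invented for illustration):

```tcl
# Route 1: write a diagnostic line straight to the stderr channel.
puts stderr "warning: config file missing, using defaults"

# Route 2: [error] raises a Tcl error; left uncaught in tclsh, the
# message and traceback go to stderr.  An enclosing [catch] intercepts
# it, so nothing is printed and we decide what to do with the message.
if {[catch {error "something went wrong"} msg]} {
    puts stderr "caught: $msg"
}
```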

The [catch] Trick

When an [exec]ed program writes to stderr, [exec] will normally return an error, and the error value will include the contents of stderr. In this case, [catch] can be used to obtain that value. See [exec] and [open] for examples.
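A hedged sketch of the trick, assuming a Unix /bin/sh is available to play the part of the noisy child program:

```tcl
# The child writes to both channels.  Because stderr is not redirected,
# [exec] raises an error, and the error value carries the child's
# output, stderr text included; [catch] lets us keep it.
if {[catch {exec /bin/sh -c {echo "real output"; echo "diagnostic" >&2}} result]} {
    puts "captured: $result"
}
```

To discard the diagnostics instead, redirect them with 2>/dev/null (or merge them into stdout with 2>@1) inside the [exec] call.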

Misc

MSW:

One problem you often have with programs that write diagnostic messages to stderr while also writing data to stdout is that you lose the interleaving of stderr and stdout. What does this mean? Imagine a program with the following output scheme:

 1. stderr <diagnostic for item a>
 2. stdout information for item a
 3. stderr <diagnostic for item b>
 4. stdout information for item b
 ... and so on ...

Fine, you think, and you open that program in a pipe and read its stderr and stdout. Now the problem of I/O buffering comes into play: at least with typical programs on Unix, stderr is NOT buffered, while stdout IS buffered whenever its target is not a console. What does this mean? Suppose you read along happily and want to build a buffer of information for each item. As a control point, you use the stderr information: each time a line arrives on stderr, you know you are handling a new item, so you complete the old buffer, save it, and start a new one. The buffering problem shows up as soon as you read:

You will read all the stderr information first, because it is unbuffered even when going over a pipe. THEN you will read all the stdout information in one chunk, because stdout was buffered. Joy.

How to solve this? One way is to use two separate named pipes (fifos in the filesystem) for stderr and stdout, configure both the writer and reader sides for line buffering, and redirect the program's stderr and stdout through those fifos via Tcl file descriptors, reading from the other side. The enforced line buffering preserves the interleaving of stderr and stdout. Open each fifo for writing and use that file descriptor for the redirection; open it for reading and install a fileevent readable script to collect the information.

mkfifo stderr_pipe
mkfifo stdout_pipe

set w_stderr [open stderr_pipe {WRONLY NONBLOCK}]
set r_stderr [open stderr_pipe {RDONLY NONBLOCK}]
set w_stdout [open stdout_pipe {WRONLY NONBLOCK}]
set r_stdout [open stdout_pipe {RDONLY NONBLOCK}]

fconfigure $w_stderr -blocking 0 -buffering line
# same for r_stderr, w_stdout, r_stdout

# install the handlers on the reader ends:
fileevent $r_stderr readable ...
fileevent $r_stdout readable ...

# redirect stdout with >@ and stderr with 2>@:
exec ... >@$w_stdout 2>@$w_stderr

Would you ever stumble over this in real life? Probably not. But you can. As a nice example, write a script which does

cvs -d $CVSROOT checkout -p $module

i.e. pipe the whole module to stdout. Say you want to record the contents of each file in an array indexed by the file path. The output is one big block, though, interleaved with stderr messages telling you that a new file starts and even which file it is. So you listen to stderr and stdout, use the information from stderr to know which file you are handling and when a new one begins, and append the stdout data onto the appropriate buffer. Try it and fail, spend hours and hours debugging where things go wrong, until you finally remember: channel buffering! That is, IF you stumble over it, you'll hate it :) But now that you know such things can happen, you'll at least have an idea of where to start searching/debugging :)

Oh btw, for the party factor: a pipe's buffer is only so big, so when enough stderr information goes over the pipe faster than you read it, you will still lose information. Fine, use plain files instead of fifos then; the rest works as shown (just add {CREAT TRUNC} to the WRONLY [open] call).
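The plain-file variant might look like the following sketch (the names are invented; note that regular files always look "readable" to fileevent, so a timer poll with [after] replaces the fileevent handler, and [seek] clears the sticky EOF flag between polls):

```tcl
# Writer side: a regular file instead of a fifo, line-buffered.
set w_stderr [open stderr_log.txt {WRONLY CREAT TRUNC}]
fconfigure $w_stderr -buffering line

# Reader side: poll the file for newly appended lines.
set r_stderr [open stderr_log.txt RDONLY]
proc pollStderr {chan} {
    while {[gets $chan line] >= 0} {
        puts "stderr: $line"
    }
    seek $chan 0 current    ;# clear EOF so later appends become visible
    after 100 [list pollStderr $chan]
}
pollStderr $r_stderr
```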

Clif Flynt

One application for this sort of thing is a wrapper around mkisofs, which sends the ISO data to stdout and status information to stderr.

Here's an excerpt that creates a fifo, runs mkisofs, and collects the status output for display. It seems to work on Linux.

catch {exec rm /tmp/stderr_pipe}
exec mkfifo --mode=0666 /tmp/stderr_pipe

set r_stderr [open /tmp/stderr_pipe {RDONLY NONBLOCK}]

exec mkisofs -JR packages >$destFile 2>/tmp/stderr_pipe &

# fconfigure appears not to be needed:
# fconfigure $r_stderr -blocking 0 -buffering line
fileevent $r_stderr readable "getInput $r_stderr mkiso"
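The getInput proc isn't shown above; a plausible sketch (the name and arguments are taken from the fileevent line, the body is an assumption) might be:

```tcl
# Hypothetical reader: consume an available status line and display it,
# tagged with a label; once the writer finishes and EOF is reached,
# disable the handler and close the channel.
proc getInput {chan label} {
    if {[gets $chan line] >= 0} {
        puts "$label: $line"
    } elseif {[eof $chan]} {
        fileevent $chan readable {}
        close $chan
    }
}
```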