stderr is one of the [stdio] output files opened for every process; applications and functions tend to use it for the output of error messages. [[Explain common idioms for management of stderr from subprocesses: [bgexec], [exec] conventions.]]

----
Pure Tcl programs ''write'' to stderr in two ways:

   * "puts stderr $msg" gives fine-grained control;
   * "[error] $message" generally has the effect of reporting a diagnostic [traceback] to stderr (except if prevented by an outer [catch], or in [wish], which pops up an error-info window instead).

----
People often ask how to [open] a [pipeline] to a command and read both the command's [stdout] and its stderr. One example of how to do this:

   set fd1 [open "|somecmd |& cat" "r"]

(assuming your system has a command named cat in the default path).

[glennj]: Or, without having to open a cat process (see http://www.tcl.tk/man/tcl8.4/TclCmd/exec.htm):

   set fd1 [open "|somecmd 2>@ stdout" r]

Here's a quick test sequence that takes advantage of [close] returning the standard error output for ''blocking'' pipes:

   set somecmd {sh -c {echo "to stdout" ; echo >&2 "to stderr"}}

   set errorCode ""
   puts "no redirection:"
   set f [open "| $somecmd" r]
   set std_out [read -nonewline $f]
   catch {close $f} std_err
   puts " std_out is {$std_out}"
   puts " std_err is {$std_err}"
   puts " errorCode is {$errorCode}"

   set errorCode ""
   puts "redirected:"
   set f [open "| $somecmd 2>@ stdout" r]
   set std_out [read -nonewline $f]
   catch {close $f} std_err
   puts " std_out is {$std_out}"
   puts " std_err is {$std_err}"
   puts " errorCode is {$errorCode}"

The output should be:

   no redirection:
    std_out is {to stdout}
    std_err is {to stderr}
    errorCode is {NONE}
   redirected:
    std_out is {to stdout
   to stderr}
    std_err is {}
    errorCode is {}

----
'''Enrico''' 2003-03-25: I have tried this on Windows 2000 with Tcl 8.4.1, but it doesn't work properly. The error message I get is:

   channel "console1" wasn't opened for writing

The same thing happens if the console is forced open with the Tcl command ''[console] show''. Redirecting stderr to a file works:

   open "| $somecmd 2> errfile.txt" r

Does anyone have an idea, or should I use the [bgexec] command?

[Matthias Hoffmann]: No idea, but I see the same effects in a similar environment... Is there someone with a critcl implementation of bgexec (like the one for busy)?

----
Another option for dealing with stderr (or other file descriptors, for that matter), one I have seen mentioned and used, may be considered ''a cheat'':

   exec /bin/ksh -c "command 2>&1 > /dev/null"

This provides you with the stderr output on the ''stdout'' file descriptor. This specific example throws away the original stdout data; if that data is also going to be needed, replace "> /dev/null" with "> /some/file".

----
Unfortunately, the 'cat' solution means you lose the exit status of the process, whereas the non-cat solution doesn't let your script read the text; it only redirects it to the stdout channel of the tclsh process. It seems that [bgexec] provides the only complete solution.

[LV] March 26, 2003: I don't understand this comment. Here's a transcript of a tclsh session:

   $ cat test.ksh
   #! /bin/ksh
   echo "This is stdout"
   echo "This is stderr" >&2
   exit 0
   $ tclsh
   % set fd [open "|/home/lwv26/test.ksh 2>@ stdout" "r"]
   file3
   % gets $fd
   This is stdout
   % gets $fd
   This is stderr

In other words, 2>@ stdout allows Tcl to read stderr on the same [channel] as stdout. What it _doesn't_ allow is for you to tell which output is stderr and which is stdout. That's a nuisance.
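----
One way around that nuisance, building on the file redirection shown earlier, is to read stdout from the pipe while parking stderr in a scratch file, then read the file back after [close]. This keeps the two streams separate and still reports the exit status. A minimal sketch, assuming a Unix-like system; somecmd and the scratch-file name are placeholders:

   # Read stdout from the pipe; send stderr to a temporary file.
   set errfile /tmp/stderr.[pid]
   set fd [open "|somecmd 2> $errfile" r]
   set std_out [read -nonewline $fd]
   # On a blocking pipe, close raises an error for a nonzero exit status.
   catch {close $fd} close_err
   set ef [open $errfile r]
   set std_err [read -nonewline $ef]
   close $ef
   file delete $errfile

The price is that any interleaving between the two streams is lost; [MSW]'s note below takes that problem up.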
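----
It is also worth spelling out the [exec] conventions referred to at the top of this page: a plain [exec] already captures a child's stderr, folded into the error result. A short sketch, reusing the sh one-liner from the test sequence above:

   # exec raises an error whenever the pipeline writes to stderr (or
   # exits nonzero); stdout and the stderr text are folded into the
   # error message, in that order.
   if {[catch {exec sh -c {echo "to stdout" ; echo >&2 "to stderr"}} result]} {
       puts "exec reported an error; the combined output is: $result"
       puts "errorCode is {$errorCode}"
   } else {
       puts "clean run; stdout was: $result"
   }

Here the catch fires, result holds both lines, and errorCode is NONE because the exit status was zero. The streams are combined, though, so this has the same limitation as 2>@ stdout.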
----
[MSW]: One problem you often have with programs that write diagnostic messages to stderr while also writing data to stdout is that you lose the interleaving of stderr and stdout. What does this mean? Imagine a program with the following output scheme:

   1. stderr
   2. stdout information for item a
   3. stderr
   4. stdout information for item b
   ... and so on ...

Fine, you think, and you open that program in a pipe and read its stderr and stdout. Now the problem of I/O buffering comes into play: with typical programs on Unix, stderr is NOT buffered, while stdout IS buffered if its target is not a console. What does this mean? Imagine you read happily and want to build up a buffer of information for each item, using stderr as a control point: each time a line arrives on stderr, you know you are handling a new item, so you complete the old buffer, save it, and start a new one. The buffering problem shows up when you read: you will read '''all''' of the stderr information first, because it is unbuffered even when going over a pipe, and '''then''' you will read '''all''' of the stdout information ''in one chunk'', because stdout was buffered. Joy.

How to solve that problem? One way is to use two separate pipes (fifos in the filesystem, to be precise) for stderr and stdout, configure the writer and reader sides for line buffering, and redirect the program's stderr and stdout through those fifos via the Tcl file descriptors, reading from the other side. Through the enforced line buffering, the interleaving of stderr and stdout is preserved. Open each fifo for writing and use that file descriptor for the redirection; open it for reading and install [fileevent] readable scripts to read the information (a concrete reading handler is sketched at the end of this note):

   exec mkfifo stderr_pipe
   exec mkfifo stdout_pipe

   # Open the reading ends first; opening a fifo write-only with
   # NONBLOCK fails while it has no reader.
   set r_stderr [open stderr_pipe {RDONLY NONBLOCK}]
   set w_stderr [open stderr_pipe {WRONLY NONBLOCK}]
   set r_stdout [open stdout_pipe {RDONLY NONBLOCK}]
   set w_stdout [open stdout_pipe {WRONLY NONBLOCK}]

   fconfigure $w_stderr -blocking 0 -buffering line
   # same for r_stderr, w_stdout, r_stdout

   fileevent $r_stderr readable ...
   fileevent $r_stdout readable ...

   exec ... >@ $w_stdout 2>@ $w_stderr

Would you ever stumble over this in real life? Probably not. But you can. As a nice example, write a script that does

   cvs -d $CVSROOT checkout -p $module

i.e. pipes the whole module to stdout, and try to record the contents of each file in an array indexed by the file path. The output is one big bad block of stdout, interleaved with stderr messages telling you that a new file starts and even which file it is. You might want to listen to both stderr and stdout, use the information from stderr to know which file you are handling and when a new one starts, and append the stdout information to the appropriate buffer. Try it and fail; spend hours and hours debugging where things go wrong until you finally remember: ''fuck, channel buffering!!!''

That is, IF you stumble over it, you'll hate it :) But now that you know that such things can happen, you'll at least have an idea of where to start searching/debugging :)

Oh, by the way, for the party factor: not all systems buffer stderr deeply enough that you never lose information when going over a pipe. That is, when enough stderr information goes over the pipe, you '''will''' lose some of it. Fine, use plain files instead of pipes then; the rest still works as shown (only add {CREAT TRUNC} to the WRONLY open calls).
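----
To make the reading side of this scheme concrete, here is a minimal sketch of a line-oriented handler. It is only an illustration: the proc name getInput and its tag argument are hypothetical, chosen to match the shape of the handler the mkisofs excerpt below appears to assume.

   # Hypothetical handler for the fifo scheme above: called whenever a
   # reading end becomes readable; tag labels the stream so stderr and
   # stdout lines stay distinguishable.
   proc getInput {chan tag} {
       if {[gets $chan line] >= 0} {
           puts "$tag: $line"
       } elseif {[eof $chan]} {
           fileevent $chan readable {}
           close $chan
       }
   }

   fconfigure $r_stderr -blocking 0 -buffering line
   fconfigure $r_stdout -blocking 0 -buffering line
   fileevent $r_stderr readable [list getInput $r_stderr stderr]
   fileevent $r_stdout readable [list getInput $r_stdout stdout]
   vwait forever   ;# a plain tclsh needs a running event loop for fileevent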
----
[Clif Flynt]: One application for this sort of thing is a wrapper around mkisofs, which sends the ISO data to stdout and status information to stderr. Here's an excerpt that creates a fifo, runs mkisofs, and collects the status output for display. It seems to work on Linux.

   catch {exec rm /tmp/stderr_pipe}
   exec mkfifo --mode=+rwxrwxrwx /tmp/stderr_pipe
   set r_stderr [open /tmp/stderr_pipe {RDONLY NONBLOCK}]

   exec mkisofs -JR packages >$destFile 2>/tmp/stderr_pipe &

   # fconfigure appears not to be needed:
   # fconfigure $r_stderr -blocking 0 -buffering line

   fileevent $r_stderr readable "getInput $r_stderr mkiso"

----
See also [stdout], [stdin], [stdio], [magic names], and [Tcl syntax help].

----
[Category Glossary] - [Category File]