exec

exec, a built-in Tcl command, executes other programs.

See Also

exec ampersand problem
exec quotes problem
exec magic
exec path problems
exec and error information
for interpreting errors from external programs
exec test availability of executables
start
How to display the result of an external command in a widget
tk_exec
a drop-in replacement for exec that keeps the event loop rolling, e.g. to keep GUIs updated and responsive.
Is tclsh or wish suitable as a login shell
VFS
contains discussion of whether [exec] is usefully extended to deal with virtual file systems, also on VFS, exec and command pipelines.
execx
make a script withdraw itself to the background
sbron 2005-08-16: [exec] example

Synopsis

exec ?switches? arg ?arg ...?

Documentation

official reference

Description

exec executes a program, passing it the given args, and returns some combination of the stdout, stderr, and return status of the invoked program. exec does not use a shell to invoke the program, so it is not necessary to add any extra layers of quoting to the args.

People often use exec expecting the syntax to be the same as that of a Bourne or C shell, some other favorite invocation interface, or at the very least the same as typing commands at an interactive Tcl console prompt.

They quickly discover otherwise.

The first difference concerns wild cards (glob patterns). Tcl does not expand wild cards automatically, so whereas in a shell you might type

myapp a*

in Tcl, you would type

exec myapp {*}[glob a*]
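
A related caveat, not from the original example: [glob] raises an error when nothing matches, so a defensive version might pass -nocomplain and check the result first:

set files [glob -nocomplain a*]
if {[llength $files]} {
    exec myapp {*}$files
}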

Switches

Supported switches are:

-ignorestderr
Stops [exec] from treating output written by the pipeline to its standard error channel as an error.
-keepnewline
Retains a trailing newline in the pipeline's output. Normally a trailing newline will be deleted.
--
Marks the end of switches. The argument following this one will be treated as the first arg even if it starts with a -.
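
A brief illustration of the switches described above (buildtool is a hypothetical command, and -ignorestderr requires a reasonably recent Tcl):

set log [exec -ignorestderr -keepnewline -- buildtool --verbose]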

Examples summarizing various methods of exec

Simple command:

exec /bin/sort -u /tmp/data.in -o /tmp/data.out

process pipeline:

exec /bin/sort -u /tmp/data.in | /bin/wc -l

Command with arguments in variables:

exec /bin/sort -T [file dirname $sorted_name] -o $sorted_name $name

Command with file globbing:

exec ls -l {*}[glob *.tcl]

or, prior to 8.5:

eval [list exec ls -l] [glob *.tcl]

Interactive terminal program session on Windows (from 8.5 onwards):

exec {*}[auto_execok start] your_program <args> 
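
Command run in the background (not in the original list; & makes exec return immediately with the process id(s) of the children, shown here with the hypothetical program myprog):

set pid [exec myprog arg1 arg2 &]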

Child Status

KBK et al:

To obtain the exit status of the child process, wrap [exec] in [catch]:

set status [catch {exec $theCommand $arg1 $arg2 ...} result]

Now you can find out what happened from the combination of $status and $::errorCode.

if {$status == 0} {
    # The command succeeded, and wrote nothing to stderr.
    # $result contains what it wrote to stdout, unless you
    # redirected it
} elseif {[string equal $::errorCode NONE]} {
    # The command exited with a normal status, but wrote something
    # to stderr, which is included in $result.
} else {
    switch -exact -- [lindex $::errorCode 0] {
        CHILDKILLED {
            foreach {- pid sigName msg} $::errorCode break

            # A child process, whose process ID was $pid,
            # died on a signal named $sigName.  A human-
            # readable message appears in $msg.

        }
        CHILDSTATUS {
            foreach {- pid code} $::errorCode break

            # A child process, whose process ID was $pid,
            # exited with a non-zero exit status, $code.

        }

        CHILDSUSP {

            foreach {- pid sigName msg} $::errorCode break

            # A child process, whose process ID was $pid,
            # has been suspended because of a signal named
            # $sigName.  A human-readable description of the
            # signal appears in $msg.

        }

        POSIX {

            foreach {- errName msg} $::errorCode break

            # One of the kernel calls to launch the command
            # failed.  The error code is in $errName, and a
            # human-readable message is in $msg.

        }

    }
}

DKF: From 8.6 onwards, this is easier because we have [try]:

try {
    set result [exec $theCommand $arg1 $arg2 ...]
} trap NONE errOut {
    # $errOut now holds the message that was written to stderr
} trap CHILDKILLED {- opts} {
    lassign [dict get $opts -errorcode] -> pid sigName msg
    # process $pid was killed by signal $sigName; message is $msg
} trap CHILDSTATUS {- opts} {
    lassign [dict get $opts -errorcode] -> pid code
    # process $pid exited with non-zero exit code $code
} trap CHILDSUSP {- opts} {
    lassign [dict get $opts -errorcode] -> pid sigName msg
    # process $pid was suspended by signal $sigName; message is $msg
} trap POSIX {- opts} {
    lassign [dict get $opts -errorcode] -> errName msg
    # Some kind of kernel failure; details in $errName and $msg
}

<<

Cameron Laird wrote in comp.lang.tcl about the << "operator" of exec, which allows feeding stdin directly from Tcl without going through a file:

I emphasize this: not only is "<< ... another of Tcl's nice magics", but it's one poorly known even to many experienced Tcl-ers, AND its absence from exec's correspondents in Python and Perl is a considerable inconvenience.

In a later posting, Cameron says:

It's valuable to realize that [exec] difficulties can sometimes be circumvented by exploitation of [open]. In the context of avoiding exec's unescapable special characters, I prefer to use the far-too-little-understood << argument, in the manner of

set url http://ats.nist.gov/cgi-bin/cgi.tcl/echo.cgi?hello=1&hi=2
exec [file join $env(COMSPEC)] << "start \"\" \"$url\"
exit"

RJ: I just saw this used on another page. Is << interpreted in Tcl or the shell? Are there other "exec" redirectors?

MHo: It's interpreted by Tcl. The exec manual page lists the full set of redirection operators (<, <@, <<, >, 2>, >&, >>, 2>>, >>&, >@, 2>@, 2>@1, >&@).
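
A minimal illustration of <<, assuming a Unix-style sort is on the PATH; the quoted word becomes the child's standard input:

set sorted [exec sort << "pear\napple\nbanana"]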

Modifying the Execution Environment

Modifying the UNIX environment when calling exec:

I think it is the case that ALL Unix systems (and I hope OS X is included) have the program /usr/bin/env, the purpose of which is to modify the environment for exec'd processes (including Tcl exec calls).

exec /usr/bin/env PATH=/my/new/path:$::env(PATH) progname
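
Since Tcl exports the global env array to child processes, a pure-Tcl alternative (a sketch; progname is a placeholder) is to adjust ::env around the call:

set saved $::env(PATH)
set ::env(PATH) /my/new/path:$::env(PATH)
exec progname
set ::env(PATH) $saved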

Reaping Child Processes

2012-08-18

One easily forgotten feature of [exec]: it calls waitpid(2), thereby reaping dead child processes and zombies. That is usually what you want, except when you need a process monitor. As exec has no -dontreap option, I ended up writing my own exec as an extension.


In the thread Eggdrop tcl on comp.lang.tcl, 2001-10-09, David Wijnants wrote:

Tcl 'reaps child processes' when exec is called, but if you've got a long-running process, with a long time between execs, the zombies will hang around for a long time. One solution is to do a pro-forma exec every couple of seconds.

proc grimreaper {} {
    catch {exec {}}
    after 3000 grimreaper
}

Just call this procedure once during initialisation, and it'll do the job every three seconds. The effect on CPU seems to be non-existent.

RS 2007-11-05: [exec {}] throws an error after searching PATH etc. It may be more gentle to call an existing "dummy" like exec true, as noted today on c.l.t..

Example: Catching an Error

set cmd /path/to/some/command
set my_arg_1 first
set my_arg_2 second

# solution:
set runcmd [list exec $cmd $my_arg_1 $my_arg_2 2>@stderr]

if {[catch $runcmd res]} {
  error "Failed to run command $runcmd: $res"
}

puts "result is: $res"

2>@stderr redirects stderr to, uh... stderr. Useful to accomplish the same thing as -ignorestderr on older versions of Tcl.

Example: Tee

Falco Paul: You may one day want to execute a long-running process and have its output sent both to the screen and to some log file. On UNIX, you would do that with the 'tee' command. Here is a Tcl implementation that works for UNIX and WIN32 environments. It's done by starting the command in a pipe. One of the nice features of this approach is that the output is flushed to the screen right away, as the command runs.

proc execpipe {COMMAND} {

    if {[catch {open [list | {*}$COMMAND 2>@stdout]} FILEHANDLE]} {
        return "Can't open pipe for '$COMMAND'"
    }

    set PIPE $FILEHANDLE
    fconfigure $PIPE -buffering none

    set OUTPUT {}

    while {[gets $PIPE DATA] >= 0} {
        puts $DATA
        append OUTPUT $DATA \n
    }

    if {[catch {close $PIPE} ERRORMSG]} {
        if {$ERRORMSG eq "child process exited abnormally"} {
            # this error means there was nothing on stderr (which makes sense) and
            # there was a non-zero exit code - this is OK as we intentionally send
            # stderr to stdout, so we just do nothing here (and return the output)
        } else {
            return "Error '$ERRORMSG' on closing pipe for '$COMMAND'"
        }
    }

    regsub -all -- {\n$} $OUTPUT {} STRIPPED_STRING
    return $STRIPPED_STRING
}
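
A usage sketch (assuming a Unix-style ls):

puts [execpipe {ls -l /tmp}]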

MHo: For a similar construct, see Another BgExec.

MJL: Can you explain the following? I don't understand it.

if {[string compare $x $y] == [equal]} { ... }

looks like someone has written a proc called equal that returns a true value so then they compare the return value from string compare with the local proc. However, if all they are doing is comparing whether strings are equal, they should just use string equal. Note that this doesn't really have anything to do with exec, though.



Arjen Markus: The remark above by LV intrigued me for a while, but I now have a solution:

  • Adjust unknown to translate well-known commands like ls that will typically take such patterns as *.txt so that the correct shell invocation is used
  • For instance: "ls *.txt" translates into "exec sh -c {ls *.txt}" (see the sketch below)

LV: Of course, translating automatically has to worry about things like tcl vs shell variables, tcl constructs like [command], evals, etc. Also, probably should use /bin/sh (or $::env(SHELL)) instead of sh.
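
A minimal sketch of that [unknown]-based approach, taking LV's /bin/sh suggestion and ignoring the caveats about Tcl variables and [command] constructs (illustrative only):

rename unknown _original_unknown
proc unknown {args} {
    # Commands containing glob characters are handed to /bin/sh so the
    # shell expands the patterns; everything else falls through to the
    # original [unknown] handler.
    if {[regexp {[*?]} $args]} {
        return [uplevel 1 [list exec /bin/sh -c [join $args]]]
    }
    uplevel 1 [list _original_unknown {*}$args]
}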


SB 2002-11-10: I didn't grasp what KBK said until I had also read [eval] and tried to use [exec] myself. I came up with an example to illustrate how important eval is (in Tcl 8.4 and earlier):

set db(1,package) Gtk
set db(1,confprog) "gtk-config --version"               ;# Could be unsafe, but work here
set db(2,package) Glib
set db(2,confprog) "glib-config --version"              ;# Alert
set db(3,package) Guile
set db(3,confprog) [list guile-config --version]        ;# Should be safe; is proper list
set db(lastid) 3

for {set i 1} {$i <= $db(lastid)} {incr i} {
   catch {eval exec $db($i,confprog)} result        ;# This is ok
   catch {exec $db($i,confprog)} surprise_result    ;# Not what I wanted
   puts [format {%-20s: %s} $db($i,package) $result]
   puts [format {%-20s: %s} $db($i,package) $surprise_result]
}
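
From Tcl 8.5 onwards, argument expansion does the same job without eval:

catch {exec {*}$db($i,confprog)} result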

Someone recently wrote asking why

exec rm -i file1 file2

didn't seem to work. Turns out that one has to do something like this:

catch {exec rm -i /tmp/a /tmp/b /tmp/c 1>@stdout 2>@stderr 0<@stdin} results

[can someone suggest updates to the above to make error handling more robust?]

This is necessary because rm -i is interactive: by default exec captures the child's stdout and stderr into pipes (to return them as the result), so the prompts never reach the terminal and rm sits waiting for an answer. The redirections reconnect the child's standard channels to those of the Tcl application; see also the discussion of interactive programs further down this page.


[Wonder if we can get DGP to say a few words on how exec users should think about auto_execok ...?]


LV: Could someone provide an example of how one would execute a pipeline of commands, some of which require arguments that, in shell, would be inside quotes? For instance, how would this shell construct translate?

/bin/ps -ax | my_filter arg1 arg2 | awk '{print $1}'

Yes, I know we could do without the awk - the point is that sometimes, the awk code may be simpler, or just what the writer is more comfortable writing, or maybe not even awk but something else that similarly requires the quotes, etc.

RS: What single quotes are to /bin/sh and friends, braces are to Tcl. So (one layer of quoting for Tcl, one for awk):

exec /bin/ps -ax | my_filter arg1 arg2 | awk {{ print $1}}

TV: Don't forget Cygwin http://www.cygwin.org/ (I think it is) for doing Unix things under most Windows versions from 95 onward; exec can call those tools, too. Or use ls and grep and such from some lib (like bwise, which has them in limited but useful form), or program them in Tcl...

Some fundamental issues about exec are mentioned in the manual itself, and Cygwin, for instance under the common Windows OSes, attempts to solve some problems with multiprocessing. Wouldn't it be wonderful to have a decent Unix anywhere? But then with which X?

LV What do you mean by with which X?


AM 2004-03-17: I ran into a strange phenomenon while trying to tame an external program on Windows:

  • I used [open "|myprog.exe"] to start the program (as usual, in combination with fileevent)
  • Rather than a nice display of the output of this program in a window, I saw a mass of DOS boxes appear and disappear. Presumably the program I tried to control was calling out to batch files or other programs ...
  • I could solve this (with help from the Tkchat room) by executing the program via an ordinary batch file. This was the incantation:
set infile [open "|[auto_execok run.bat]"]
  • It can probably be stripped down, but this worked fantastically (the batch file, run.bat simply starts the original program and that is now quietly doing its job).

JB: If you need to store the output of an [exec] in a list, [split] is useful. This splits the text on newlines (\n) so you can easily manipulate the output line by line. Handy if you're trying to pull information out of repetitive output.

set result [split [exec nmap -sP 10.0.0.0/24] \n]
foreach l $result { puts stdout [lindex $l 1] }

male 2004-09-14:

Again: the exit code of a piped executable

Our problem is that an external company provides a C executable (on Windows) that uses old FORTRAN functionality wrapped in a DLL. This FORTRAN code returns failure codes as 4-byte integers (INTEGER*4), which are used as the exit code of the C executable.

Catching the close on the blocking command channel to the C executable lets the exit code be stored inside the global errorCode variable.

So far so good!

Now the problem!

On Windows, the exit function in C accepts a 4-byte integer (int or int32). But we get only the last byte of the exit code.

Example: the original failure and exit code is 655, but the exit code seen in Tcl via errorCode is 143 (655 & 255 = 143).

Does anybody have a tip on how to avoid this? Any hint or suggestion? Please bear in mind that the executable is not maintained by us; it is an external company's executable. Or is this a Tcl choice, made for platform independence, to support only a 1-byte exit code on command channels?

MHo: Perhaps you can try to use get_process_exit_code from twapi?


male 2004-09-14: my own answer: It's a pity that the Microsoft Windows platform exit code is strictly reduced from 4 bytes to 1 byte!

The comp.lang.tcl thread http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&threadm=1c8670d2.0405260607.5a4c08ce%40posting.google.com&rnum=1&prev=/groups%3Fhl%3Den%26lr%3D%26ie%3DUTF-8%26selm%3D1c8670d2.0405260607.5a4c08ce%2540posting.google.com describes this.


George Peter Staplin: I found that with my file manager there were some problems with exec and mplayer. If I started my file manager with & (in pdksh) then mplayer would block waiting for stdin, but if I started it as a foreground process it worked properly. This is how I fixed the problem (verified to work with another Tcl/Tk program and mplayer by Steve Redler IV):

proc nullexec {f} {
    # This is needed for applications like mplayer.
    if {![info exists ::nullexec_closed_stdin]} {
        catch {close stdin}
        open /dev/null r
        set ::nullexec_closed_stdin 1
    }
    if {[catch {eval exec -- $f >& /dev/null &} pid] && "NONE" ne $::errorCode} {
        tk_messageBox -title Error -message $pid -type ok
        return
    }
}

It works by replacing stdin with an fd pointing to /dev/null.


KBK gave this example for building an exec command dynamically, in the Tcl chatroom:

set command exec
lappend command gcc
lappend command -c -O3
lappend command -fpic -march=pentium4 -fomit-frame-pointer
lappend command foo.c bar.c grill

... followed by

eval $command
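
From Tcl 8.5 onwards, the expansion operator does the same job without eval's quoting hazards:

{*}$command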

LES 2005-05-07: What if the command may take a really long time to finish and return and I want to set a time limit for a proc to return whether the exec'ed command is done or not? How?

LV: What do you want to happen in the case of a time out? I mean, with exec, you can use & and just go on and do work, but then I don't know that Tcl has a way to determine whether the command completed or not. Or you could use [open "| command" "r"], which puts the command in the background, then set things up so that you check for output occasionally, check how much time has elapsed, and close the file handle when the time limit has passed. Or perhaps you could use after to set off an alarm when the time limit has passed.

LES: "Set up some sort of alarm"? Yes, I would like to say: " - Hey, Tcl, remember that exec or open command I just mentioned a couple of lines ago? Well, forget about it, I don't care what it returns anymore." But how? I think about it and imagine some sort of break command for exec situations, but there is no such thing.

Take a look at Another BgExec. The only problem there is that a timeout only closes the process pipelines but probably leaves the processes running, due to Tcl's lack of a built-in kill command...
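
A minimal sketch of the [open |]-plus-timer approach described above (longcmd stands in for the long-running program; the 10-second limit is arbitrary):

set chan [open "|longcmd" r]
fconfigure $chan -blocking 0
set ::state pending
set timer [after 10000 {set ::state timeout}]
fileevent $chan readable {
    if {[eof $chan]} {
        set ::state done
    } else {
        append ::output [read $chan]
    }
}
vwait ::state
after cancel $timer
catch {close $chan}    ;# on a timeout this closes the pipe, but the child may keep running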


Guillaume Plenier 2005-07-28: I am currently working on a starpack including executables for which I wrote a Tcl-Tk graphical interface. Everything works fine on my computer because the executable files are in my path, but if I take the starpack and run it on another computer, I get error messages like "-executable- command not found" or something close to that. I tried several things to tell exec where my executable files are, but apparently exec looks in the computer's path variable to find the unknown programs and not where indicated (I changed env(PATH), used $starkit::topdir etc...)

MG 2005-07-28: You could try something like this:

set paths [list ~/path/number/1 ~/path/number/2 ../path/number/3]
set success 0
foreach x $paths {
    if {[file exists [file join $x $fileToExec]]} {
        set success 1
        exec [file join $x $fileToExec]
        break
    }
}
if {!$success} {
    # we didn't find it in any of our paths - try just exec'ing and hope for the best
    catch {exec $fileToExec}
}

Peter Newman 2005-07-29: DOS/Windows can't run EXEs (or DLLs) that are physically in the StarKit/Pack. You'll have to extract them first. MHo: Concerning DLLs, you are wrong: Doing a load in Starkits/Starpacks automatically copies the DLLs to a temporary location before loading.... execx could be a partial solution for exec...

For more info. check out the freewrap docs. There's discussion there about embedded binary files - and some techniques for dealing with them.

Guillaume Plenier 2005-07-31: I don't like the idea of extracting my executables first and will probably try to find other solutions.


MHo: It seems to me that when starting Win32 console-mode applications, exec does not create a new console window (this is an option of the original Win32 CreateProcess API, see [L1 ]). By default, the exec'ed process inherits the console of the parent process, which is sometimes not what you want. While the Win32 console API is absolutely Win32-specific and this topic is not of general interest to Tcl programmers, it would be nice to sometimes have granular control over such exotic, platform-specific options - unfortunately, at its current stage of development twapi isn't always an alternative yet.


Sarnold: When I build a command as a list to be evaluated, I have found that, on Windows, the path of the executable behaves like a list when it contains spaces. For example,

c:\Program Files\Tcl\bin\wish.exe

is always converted to:

{c:\Program Files\Tcl\bin\wish.exe}

when you call

eval exec $cmd

The best solution I found was to call exec like this:

eval exec [lindex $cmd 0] [lrange $cmd 1 end]

This is a bug that Ased and CrowTDE suffer from, on Windows XP.

MG: Another solution is to force the shortname, without spaces:

set path {c:\Program Files\Tcl\bin\wish.exe}
catch {set path [file attributes $path -shortname]}

On Windows, $path will now be set to

C:/PROGRA~1/Tcl/bin/wish.exe

(you can add in a file nativename if you want to keep back slashes instead of forward slashes). In fact, it's probably a good idea to do that whenever you're passing a path to something outside your own program, if you can't guarantee it (and anything it may need to call) can handle paths with spaces. The only time you really need to use the real path is when you're displaying it to the user for something, at which point something clearer to read is preferable.

Sarnold 2007-07-20: I have found the solution to handle properly spaces in files names, in a cross-platform manner:

eval exec \"[auto_execok wish]\" \"$pwd/tkcon.tcl\"
exec \"[auto_execok wish]\" \"$pwd/tkcon.tcl\"

It works fine even with [eval]. But soon I realised I was wrong. Please read the following post:

Lars H: I find that claim very unlikely, Sarnold. [auto_execok] is documented to return a list (even if the length of that list is typically 1), so if there is a space in the path to the executable then the return value is typically brace-delimited, like so:

{c:\Program Files\Tcl\bin\wish.exe}

(Strangely, I could take that example from another post by Sarnold above. Are there two Sarnolds on the wiki?) Neither the eval exec nor the exec examples above get rid of these braces. In the exec example also the quotes are handed to exec, whereas in the eval exec example they are removed but OTOH the backslash path separators trigger backslash substitutions!

In 8.5, the canonical method should instead be

exec {*}[auto_execok wish] [file join $pwd tkcon.tcl]

which in 8.4, with proper list-quoting, can be coded as

eval [list exec] [auto_execok wish] [list [file join $pwd tkcon.tcl]]

In TIP 304, Alexandre Ferrieux writes: A popular workaround for script-only purists [who want to get at the stderr of a command] is to spawn an external "pump" like cat in an [open ... r+], and redirect the wanted stderr to the write side of the pump. Its output can then be monitored through the read side:

set pump [open "|cat" r+]
set f1 [open "|cmd args 2>@ $pump" r]
fileevent $f1 readable got_stdout
fileevent $pump readable got_stderr

Now this is all but elegant of course, difficult to deploy on Windows (where you need an extra cat.exe), and not especially efficient since the "pump" consumes context switches and memory bandwidth only to emulate a single OS pipe when Tcl is forced to create two of them via [open ... r+].

MSW: Note that piping stderr and stdout simultaneously won't work for all operating systems. I remember having problems on solaris with lost data through the pipes (one filled up while the other wasn't drained?). I had to redirect stuff to files and watch the files to follow the output (invoked program in question was cvs). Furthermore buffering differences hurt when you try to simultaneously catch err and out. I did not try the exact proposed solution above. Someone with Solaris 8 and 9 might want to test this (don't have access to one atm).


MHo 2008-07-03: I still see some mysterious behaviour: if I use exec from within a starpack based on wish, then sometimes exec won't be able to catch the output from some programs. For example, the result of

catch {exec -- query.exe termserver /continue 2>@1} r

is the empty string. I tried every possible variation, even open |...., but the only workaround I found was to create a temporary .BAT file which in turn calls query.exe with its own redirection, and then read the result file afterwards. Very inelegant. Sure, I'm speaking about Windows ;-) Perhaps it has something to do with the absence of tclpip84.dll? Oh, probably I should learn to read some day... meanwhile I found this in the manual page:

exec will not work well with TUI applications when a console is not present, as is done when launching applications under wish. It is desirable to have console applications hidden and detached. This is a designed-in limitation as exec wants to communicate over pipes. The Expect extension addresses this issue when communicating with a TUI application.


LV: One place where developers may be surprised is trying to execute an external program that they expect will interact with the user. For instance:

$ cat io.c
#include <stdio.h>

int main()
{
    char val1[1024], val2[1024];
    printf("please enter first number: ");
    gets(val1);
    printf("please enter second number: ");
    gets(val2);
    printf("thank you\n");
    exit(0);
}

If the developer compiles this program (which, granted, does nothing useful), they might be surprised to find that coding

exec gcc io.c -o io
exec ./io 

does not result in the display of the prompts. The reason for this is that Tcl's exec has created a pipe for stdout, since the general case of using exec is more likely a construct like

set results [exec ./io]

with the expectation that the program runs to completion, after which exec takes the output and assigns it to the variable.

So, one has to code around the default behavior if more interactive access is needed. On comp.lang.tcl, in the thread 'Problem in calling c programs and compiling them in tcl/tk' from mid-November 2008, Alexandre Ferrieux wrote that one needs to use something like:

exec ./io >@ stdout 2>@ stderr 

(adding the catch construct around it and possibly the -ignorestderr depending on the behavior of the program).
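
If the program also needs to read interactive input, its stdin can be connected the same way (as in the rm -i example earlier on this page):

exec ./io <@ stdin >@ stdout 2>@ stderr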


LV: Recently on comp.lang.tcl a poster asked why the Tcl program they were writing would hang. The code in question was

exec /path/to/xselection FREDY -- "A B C"

The command itself would run from a shell command prompt, but when run in tclsh with the above command line, it would not terminate. After discussion, Alexandre Ferrieux mentioned the program might have a requirement of running with its stdin and/or stdout connected to a terminal, and suggested trying

$ cat | /path/to/xselection FREDY -- "A B C" | cat

to see if the program worked as expected, since essentially this is how Tcl's exec works. When this was tried, the program indeed appeared to hang from the command line as well.


AMG: I wish I could use [exec] to redirect to/from reflected channels, but this doesn't work.

proc mychan {cmd chan args} {
    switch -- $cmd {
    initialize {
        return {initialize finalize watch write}
    } write {
        set data [lindex $args 0]
        puts "got [string length $data] bytes: >$data<"
        return [string length $data]
    }}
}
exec ls >@ [chan create write mychan]

This produces an error: channel "rc0" does not support OS handles. Like the error says, reflected channels don't have file descriptors, so they aren't recognized by the operating system. I'm surprised Tcl doesn't internally create anonymous pipe pairs to facilitate interfacing. Speaking of anonymous pipe pairs, if I could open anonymous pipe pairs in Tcl script, I wouldn't need reflected channels for this application. I could redirect to/from one end of the pipe and use chan event to monitor the other.

Oh wait, there's the [chan pipe] command. Forgot about that. It works just fine for what I'm doing. Also, using [chan pipe] lets us implement most of [open |] using [exec].

foreach chan {stdin stdout stderr} {
    lassign [chan pipe] rd$chan wr$chan
} 
set pids [exec {*}$pipeline <@ $rdstdin >@ $wrstdout 2>@ $wrstderr &]
puts $wrstdin "input text to send to pipeline stdin"
puts "received stdout [gets $rdstdout]"
puts "received stderr [gets $rdstderr]"

I've spent some time figuring out how to make all of this asynchronous, using the above code as a starting point.

This is similar to the popen3 or popen4 found in other languages, returning a list of the pid, stdin, stdout, and stderr of the [exec]uted process.

proc popen4 {args} {
  foreach chan {In Out Err} {
    lassign [chan pipe] read$chan write$chan
  } 

  set pid [exec {*}$args <@ $readIn >@ $writeOut 2>@ $writeErr &]
  # close our copies of the pipe ends that the child inherited
  chan close $readIn
  chan close $writeOut
  chan close $writeErr

  foreach chan [list stdout stderr $readOut $readErr $writeIn] {
    chan configure $chan -buffering line -blocking false
  }

  return [list $pid $writeIn $readOut $readErr]
}

# Example usage.

set done false

lassign [popen4 exercise.rb] pid stdin stdout stderr

chan event $stdout readable {
  puts -nonewline [chan read $stdout]
  if {[chan eof $stdout]} { set done true }
}

chan event $stderr readable {
  puts -nonewline [chan read $stderr]
  if {[chan eof $stderr]} { set done true }
}

chan event $stdin writable {
  puts "stdin writable"
  chan puts $stdin "foobar"
  chan close $stdin
}

vwait done

jesperj 2010-07-15:

When searching for a way to make the output of [exec] show up in real time in the terminal, rather than only after [exec] is done (in my case I wanted to show the output and progress bar from wget while it was downloading), I found that the following worked for me:

set url "http://www.tcl.tk/man/tcl8.5/TclCmd/exec.htm"
eval exec wget $url >>& /dev/tty

NEM (15th July 2010): The cross-platform way to do this is to redirect the output to Tcl's stdout (and input from stdin):

chan configure stdout -buffering none
exec >&@stdout <@stdin wget --progress=bar $url

AMG: What is the purpose of [eval] in this code example? To make it so $url can be a list of URLs and other options to wget? I recommend using {*} instead.

@AMG: Yes, you are right, the eval in this case is probably not needed. I tried to simplify how I did it in my script for the wiki example. The purpose of the eval is to be able to pass options to wget. However, being as tired as I am atm, I will just show how it looks in my script and hope it makes sense somewhat:

# $target is an URL that gets defined earlier.

# Below replacing whitespace with %20 and # with %23 in the URL in $target
set target [string map {{ } %20 {#} %23 } $target]

set command "wget -w $WAIT_TIME --no-check-certificate --no-cookies -np --user-agent \"$USER_AGENT\" \
--random-wait -S -c  --limit-rate=200k -R .html*"
lappend command $target

eval exec $command >>& /dev/tty

I hope it made more sense this time. It's 08:26 and I've been up all night so I'm not 100% sure of what I'm doing :-)


AMG: At present, "<<" doesn't work for data containing NUL bytes [L2 ]. (Update: it works correctly from Tcl 8.6b3 onwards.) Among other things, this makes it unsuitable for use with external compression programs. [open |] doesn't have this problem, as shown in the bug report. See also TIP 259 [L3 ] which proposes to fix this problem with [exec].

However, [open |] has a different problem: compression programs (and many other kinds too) don't finish writing to their stdout until they've reached EOF on their stdin. Prior to TIP 332 [L4 ], [exec <<] was the only way to close a program's stdin before reading its stdout! [close]'ing the bidirectional channel returned by [open |] would close it outright, preventing the script from reading anything from it. Another approach exists: redirecting the program's input and/or output from/to separate channels. But this requires both ends of each channel to be reflected into the script, which means using [chan pipe] (TIP 304 [L5 ]) or [chan create] (TIP 219 [L6 ]), neither of which are available in Tcl 8.4. That leaves two more options: actual named pipes in the filesystem (created with mknod p, requires Unix), or temporary files (ugly and dangerous).

For more on [chan pipe] redirection, see: a way to 'pipe' from an external process into a text widget.

AMG: Another issue exists with [exec] (and it's present in Tcl 8.6a3 as well as all older versions): it can apply encoding translations to input and output. Here's a simple demonstration:

% binary scan [exec xxd -r -p << 08090a0b0c0d0e0f] H* out; set out
08090a0b0c0a0e0f

If you look closely, you'll see that 0d was changed to 0a. This is thanks to CR/LF translation. It's not clear how to turn this feature off, at least not until TIP 259 is implemented. Using [open |] instead of [exec] avoids the issue by exposing the translation configuration options. However, [open |] is nowhere near as convenient as [exec], and in older versions of Tcl it doesn't work with many filters due to the lack of half-closing.