exec, a built-in Tcl command, executes other programs.
Supported switches are -ignorestderr (Tcl 8.5+), -keepnewline, and --, which marks the end of switches.
exec processes and removes any control values in args and then executes the program named by the first remaining arg, passing each additional remaining arg to it as a separate argument. exec does not use a shell to execute the program, so it is not necessary to add any extra layers of quoting to an arg value.
Control values are used to redirect stdout, stderr, and stdin, to form and execute a multi-program pipeline, and/or to cause the execution to occur in the background. By default, exec captures and returns the content the program produces on stdout. In background mode, exec returns the process identifiers of all the processes in the pipeline.
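For example, a sketch (the programs are only placeholders):

# default: capture what the pipeline writes to stdout
set listing [exec ls -l /tmp]

# background mode: exec returns immediately with the PIDs of the pipeline
set pids [exec du -s /usr | sort -n &]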
Each program in a pipeline can have its own redirections. When a particular redirection operator is given twice for a command, the last one wins. With <@, any translation or encoding on the provided channel is ignored, as the operating system file descriptor is used directly. ycl chan osin/osout/osboth can be used to work around this.
When arguments passed to the program may be confused with exec operators such as <, >, |, or &, use lexec instead.
exec doesn't use auto_execok to resolve the program name. On Unix systems, it uses execve(), which relies on $PATH. On Windows, it uses CreateProcess(), which consults $PATH along with other locations such as the application and system directories.
DKF: auto_execok is part of the machinery of code that is used to parse interactively-typed commands; while it does path searching, it really does more than that (like handling the bizarreness of shell commands on DOS/Windows, etc.) It returns a list of words to substitute (in a loose sense) in place of the command name when passed to exec, or an empty string if the word does not refer to anything executable.
The following illustrates execution of a pipeline of commands. No shell quoting is needed. Just quote arguments to each program the same way you would quote arguments to any Tcl routine:
exec /bin/ps -ax | my_filter arg1 arg2 | awk {{ print $1}}
This example is a bit contrived because it's probably easier to do the scripting in Tcl rather than use awk, but it does illustrate how to build program pipelines and properly quote arguments to the programs.
On Windows, Cygwin can provide many of the programs that one would expect to find in a Unix environment.
See Is tclsh or wish suitable as a login shell
Simple command:
exec /bin/sort -u /tmp/data.in -o /tmp/data.out
Process pipeline:
exec /bin/sort -u /tmp/data.in | /bin/wc -l
Command with arguments in variables:
exec /bin/sort -T [file dirname $sorted_name] -o $sorted_name $name
Command with file globbing:
exec ls -l {*}[glob *.tcl]
or, prior to 8.5:
eval [list exec ls -l] [glob *.tcl]
Interactive terminal program session on Windows (from 8.5 onwards):
exec start your_program <args>
VZ: I need to do in my Tcl script something like this:
exec awk /pattern/ {print $1} file
PYK: The awk script should be one argument:
exec awk {/pattern/ {print $1}} file
Redirect stderr to stdout, and stdout to /dev/null:
exec /bin/ksh -c "command 2>@1 > /dev/null"
KBK et al:
The exit status of the last process in the pipeline that had non-zero exit status is stored in -errorcode in the error options. To get a handle on those options, wrap exec in catch or try:
set status [catch {exec $theCommand $arg1 $arg2 ...} result]
Now you can find out what happened from the combination of $status and $errorCode:
if {$status == 0} {
    # The command succeeded, and wrote nothing to stderr.
    # $result contains what it wrote to stdout, unless you
    # redirected it
} elseif {[string equal $::errorCode NONE]} {
    # The command exited with a normal status, but wrote something
    # to stderr, which is included in $result.
} else {
    switch -exact -- [lindex $::errorCode 0] {
        CHILDKILLED {
            foreach {- pid sigName msg} $::errorCode break
            # A child process, whose process ID was $pid,
            # died on a signal named $sigName.  A human-
            # readable message appears in $msg.
        }
        CHILDSTATUS {
            foreach {- pid code} $::errorCode break
            # A child process, whose process ID was $pid,
            # exited with a non-zero exit status, $code.
        }
        CHILDSUSP {
            foreach {- pid sigName msg} $::errorCode break
            # A child process, whose process ID was $pid,
            # has been suspended because of a signal named
            # $sigName.  A human-readable description of the
            # signal appears in $msg.
        }
        POSIX {
            foreach {- errName msg} $::errorCode break
            # One of the kernel calls to launch the command
            # failed.  The error code is in $errName, and a
            # human-readable message is in $msg.
        }
    }
}
DKF: From 8.6 onwards, this is easier because we have try:
try {
    set result [exec $theCommand $arg1 $arg2 ...]
} trap NONE errOut {
    # $errOut now holds the message that was written to stderr
    # and everything written to stdout!
} trap CHILDKILLED {- opts} {
    lassign [dict get $opts -errorcode] -> pid sigName msg
    # process $pid was killed by signal $sigName; message is $msg
} trap CHILDSTATUS {- opts} {
    lassign [dict get $opts -errorcode] -> pid code
    # process $pid exited with non-zero exit code $code
} trap CHILDSUSP {- opts} {
    lassign [dict get $opts -errorcode] -> pid sigName msg
    # process $pid was suspended by signal $sigName; message is $msg
} trap POSIX {- opts} {
    lassign [dict get $opts -errorcode] -> errName msg
    # Some kind of kernel failure; details in $errName and $msg
}
PYK 2015-04-15: Could exec be enhanced to provide the exit status values of all the commands in the pipeline?
In the Tcl chatroom, KBK gave this example for building an exec command dynamically:
set command exec
lappend command gcc
lappend command -c -O3
lappend command -fpic -march=pentium4 -fomit-frame-pointer
lappend command foo.c bar.c grill
... followed by
{*}$command
Or, prior to Tcl 8.5:
eval $command
To display output instead of capturing it:
exec >@stdout 2>@stderr myprogram
A shorter way to redirect both stdout and stderr:
exec >&@stdout myprogram
It may be useful to first:
chan configure stdout -buffering none
To pass options that have been collected into a list:
exec myprogram {*}$options
or, prior to Tcl 8.5:
eval exec [list myprogram] $options
For example:
# $target is an URL that gets defined earlier.  Below, whitespace is
# replaced with %20 and "#" with %23 in the URL in $target.
set target [string map {{ } %20 {#} %23} $target]
set command "wget -w $WAIT_TIME --no-check-certificate --no-cookies -np --user-agent \"$USER_AGENT\" \
    --random-wait -S -c --limit-rate=200k -R .html*"
lappend command $target
exec {*}$command >>& /dev/tty

# or, using eval:
eval exec $command >>& /dev/tty
When there is whitespace in the path of the executable, e.g.,
set executable {c:\Program Files\Tcl\bin\wish.exe}
The following will work:
exec $executable
However, when exec is paired with eval, the executable would then become two separate words. Here's one way to keep it together:
eval exec [list $executable]
MG: Another solution is to force the shortname, without spaces:
set path {c:\Program Files\Tcl\bin\wish.exe}
catch {set path [file attributes $path -shortname]}
On Windows, $path will now be set to
C:/PROGRA~1/Tcl/bin/wish.exe
You can add in a file nativename if you want backslashes instead of forward slashes. In fact, it's probably a good idea to do that whenever you're passing a path to something outside your own program, unless you can guarantee that it, and anything it may need to call, can handle paths with spaces. The only time you really need to use the real path is when you're displaying it to the user, at which point something clearer to read is preferable.
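For example, continuing the sketch above (the path is hypothetical):

set path {c:\Program Files\Tcl\bin\wish.exe}
catch {set path [file attributes $path -shortname]}
set path [file nativename $path]   ;# backslashes again, for external consumers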
set cmd /path/to/some/command
set my_arg_1 first
set my_arg_2 second

# solution:
set runcmd [list exec $cmd $my_arg_1 $my_arg_2 2>@stderr]
if {[catch $runcmd res]} {
    error "Failed to run command $runcmd: $res"
}
puts "result is: $res"
2>@stderr redirects the child's stderr to the script's own stderr. Useful to accomplish the same thing as -ignorestderr on older versions of Tcl.
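For comparison, on Tcl 8.5 and later the switch itself does the same job (myprogram is a placeholder):

exec -ignorestderr myprogram arg1 arg2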
Cameron Laird wrote in comp.lang.tcl on the << operator of exec, which allows feeding stdin right from Tcl, without going through a file:
I emphasize this: not only is "<< ... another of Tcl's nice magics", but it's one poorly known even to many experienced Tcl-ers, AND its absence from exec's correspondents in Python and Perl is a considerable inconvenience.
In a later posting, Cameron says:
It's valuable to realize that exec difficulties can sometimes be circumvented by exploitation of open. In the context of avoiding exec's unescapable special characters, I prefer to use the far-too-little-understood << argument, in the manner of
set url http://ats.nist.gov/cgi-bin/cgi.tcl/echo.cgi?hello=1&hi=2
exec [file join $env(COMSPEC)] << "start \"\" \"$url\" exit"
RJ: I just saw this used on another page. Is << interpreted in Tcl or the shell? Are there other exec redirectors?
MHo: It's interpreted by Tcl...
AMG: At present, << doesn't work for data containing NUL bytes. See exec << truncates at NUL (Update: it works correctly from Tcl 8.6b3 onwards.) Among other things, this makes it unsuitable for use with external compression programs. open | doesn't have this problem, as shown in the bug report. See also TIP 259 which proposes to fix this problem with exec.
However, open | has a different problem: compression programs (and many other kinds too) don't finish writing to their stdout until they've reached EOF on their stdin. Prior to TIP 332, exec << was the only way to close a program's stdin before reading its stdout! Closing the bidirectional channel returned by open | would close it outright, preventing the script from reading anything from it. Another approach exists: redirecting the program's input and/or output from/to separate channels. But this requires both ends of each channel to be reflected into the script, which means using chan pipe (TIP 304) or chan create (TIP 219), neither of which is available in Tcl 8.4. That leaves two more options: actual named pipes in the filesystem (created with mknod p, requires Unix), or temporary files (ugly and dangerous).
For more on chan pipe redirection, see: a way to 'pipe' from an external process into a text widget.
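A minimal sketch of that approach, assuming Tcl 8.6 and input small enough to fit in the OS pipe buffer (sort is just an example filter):

lassign [chan pipe] rd wr
puts $wr "banana\napple\ncherry"
close $wr                       ;# the child will see EOF on its stdin
set sorted [exec sort <@ $rd]
close $rd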
AMG: Another issue exists with exec (and it's present in Tcl 8.6a3 as well as all older versions): It can apply encoding translations to input and output. Here's a simple demonstration:
% binary scan [exec xxd -r -p << 08090a0b0c0d0e0f] H* out; set out
08090a0b0c0a0e0f
If you look closely, you'll see that 0d was changed to 0a. This is thanks to CR/LF translation. It's not clear how to turn this feature off, at least not until TIP 259 is implemented. Using open | instead of exec avoids the issue by exposing the translation configuration options. However, open | is nowhere near as convenient as exec, and in older versions of Tcl it doesn't work with many filters due to the lack of half-closing.
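A sketch of the open | equivalent of the xxd example, configuring the channel so no translation is applied (the half-close needs Tcl 8.6 / TIP 332):

set chan [open |[list xxd -r -p] r+]
chan configure $chan -translation binary
puts $chan 08090a0b0c0d0e0f
close $chan write                  ;# half-close so xxd sees EOF on its stdin
binary scan [read $chan] H* out    ;# out is 08090a0b0c0d0e0f, 0d intact
close $chan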
samoc: This is what I have come up with to implement a binary-data-compatible exec: bexec.
PYK 2019-04-24: This is due to translation on the channel used to read data back into Tcl. Tcl redirects the data into xxd without translation.
PYK 2021-02-08: Tcl uses the current system encoding to translate the value provided via <<. Therefore, to pass byte values directly, use iso8859-1:
set encoding [encoding system]
encoding system iso8859-1
try {
    exec << $bytes
} finally {
    encoding system $encoding
}
This doesn't solve the problem for the returned value, though, because there's no way to turn off end-of-line translations: Tcl performs the standard translation for the platform it is running on. Below is a replacement for exec that avoids corrupting binary data by setting the system encoding to iso8859-1. It also allows the caller to provide the desired configuration of the output channel, e.g. -translation binary. It uses chan pipe, which was added in version 8.6:
proc cexec {config args} {
    lassign [chan pipe] pread pwrite
    set encoding [encoding system]
    encoding system iso8859-1
    try {
        try {
            try {
                chan configure $pwrite -translation binary
                chan configure $pread {*}$config
                exec {*}$args >@$pwrite
            } finally {
                close $pwrite
            }
            set data [read $pread]
        } finally {
            close $pread
        }
    } finally {
        encoding system $encoding
    }
    return $data
}
cexec only works as long as the resulting output is small enough to fit into the buffer provided by the operating system. After that, exec hangs indefinitely as the system waits for a reader to clear out the buffer. To fix this, the task of reading the result could be given to a new interpreter in its own thread:
proc cexec {config args} {
    package require Thread
    lassign [chan pipe] pread pwrite
    chan configure $pread {*}$config
    set tid [thread::create]
    thread::transfer $tid $pread
    set script [list apply [list chan {
        global res
        try {
            set res [read $chan]
        } finally {
            close $chan
        }
    }] $pread]
    thread::send -async $tid $script
    set encoding [encoding system]
    try {
        encoding system iso8859-1
        try {
            chan configure $pwrite -translation binary
            exec {*}$args >@$pwrite
        } finally {
            close $pwrite
        }
    } finally {
        encoding system $encoding
    }
    set res [thread::send $tid {set res}]
    thread::release $tid
    return $res
}
This implementation of cexec is not limited by the pipe buffer, but it's also more complex. A better solution is to use open, which returns immediately, allowing reading to commence while the external process continues to work and produce output. cexec functions like exec except that the first argument is a list of configuration arguments for the resulting output channel:
# configurable exec
proc cexec {config args} {
    set encoding [encoding system]
    encoding system iso8859-1
    try {
        set chan [open |$args]
    } finally {
        encoding system $encoding
    }
    chan configure $chan {*}$config
    try {
        set data [read $chan]
    } finally {
        close $chan
    }
    return $data
}
cexec is available in ycl.
Modifying the UNIX environment when calling exec:
I think it is the case that ALL unices (and I hope OS X is included) have the program /usr/bin/env, the purpose of which is to modify the environment for exec'd processes (including those started by Tcl's exec).
exec /usr/bin/env PATH=/my/new/path:$env(PATH) progname
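Since child processes inherit Tcl's global env array, the same effect can be had in pure Tcl; a sketch, assuming Tcl 8.6 for try (progname is a placeholder):

set saved $env(PATH)
set env(PATH) /my/new/path:$saved
try {
    exec progname
} finally {
    set env(PATH) $saved
}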
2012-08-18:
One easily forgotten feature of exec: it calls waitpid(2), thereby reaping dead child processes and zombies, which is usually what you want, except when you need a process monitor. As exec has no -dontreap option, I ended up writing my own exec as an extension.
In Eggdrop tcl, comp.lang.tcl, 2001-10-09, David Wijnants wrote:
Tcl 'reaps child processes' when exec is called, but if you've got a long-running process, with a long time between execs, the zombies will hang around for a long time. One solution is to do a pro-forma exec every couple of seconds.
proc grimreaper {} {
    catch {exec {}}
    after 3000 grimreaper
}
Just call this routine once during initialisation, and it'll do the job every three seconds. The effect on CPU seems to be non-existent.
RS 2007-11-05: exec {} throws an error after searching $PATH etc. It may be more gentle to call an existing "dummy" like exec true, as noted today on c.l.t..
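A sketch of that gentler variant, using an existing no-op program:

proc grimreaper {} {
    catch {exec true}
    after 3000 grimreaper
}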
It isn't normally (ever?) necessary to use auto_execok in conjunction with exec, which takes care of those details itself. If for some reason it gets used anyway, first, please update this page with a rationale, and second, make sure to expand the list it returns:
exec {*}[auto_execok wish] $pwd/tkcon.tcl
Or, prior to Tcl 8.5, where {*} is not available :
eval exec [auto_execok wish] [list $pwd/tkcon.tcl]
APN: It is recommended to use auto_execok, at least on Windows. For example:
% exec date /t
couldn't execute "date": no such file or directory
% exec {*}[auto_execok date] /t
Wed 11/16/2016
Here auto_execok is smart enough to know date is an internal command of cmd.exe. There may be other cases as well.
Falco Paul, PYK 2018-03-18
execpipe executes a command pipeline, sends its output to stdout, and also captures it into a variable, similar to the Unix tee command. An error is raised if the exit status of the external command is not 0.
proc execpipe {varname args} {
    upvar 1 $varname data
    lappend args 2>@stdout
    set chan [open |$args]
    fconfigure $chan -buffering line
    set data {}
    while {[gets $chan line] >= 0} {
        puts $line
        lappend data $line
    }
    close $chan
}
Example:
set script {
    foreach word {triumphant splendor on my brow} {
        puts $word
    }
}
execpipe res [info nameofexecutable] <<$script
puts [join $res]
MHo: For a similar construct, see Another BgExec.
In some cases, even simple programs like Unix rm can become interactive:
exec rm -i file1 file2
In such a case, the standard streams should be redirected so that exec doesn't try to use them for its own purposes:
catch {exec rm -i /tmp/a /tmp/b /tmp/c >@stdout 2>@stderr <@stdin} results
AM 2004-03-17: I ran into a strange phenomenon while trying to tame an external program on Windows:
set infile [open |[auto_execok run.bat]]
Note: stdout and stderr are not available in Windows when Tcl is invoked as wish, because Windows does not support stdio for GUI apps. The resulting error message will look something like:
channel "console1" wasn't opened for writing
This can be avoided when using Tk programs by invoking tclsh, and including package require Tk in the script.
MHo 2008-07-03: I still see some mysterious behaviour: if I use exec from within a starpack based on wish, then sometimes exec won't be able to catch the output from some programs. Consider, for example,
catch {exec -- query.exe termserver /continue 2>@1} r
Normally, $r would contain the output of query.exe. But if the script is running in wish or any other program that doesn't have an associated console, $r will instead be empty.
Windows starpacks that are based on wish will not have the standard channels available, and will exhibit these symptoms.
One workaround is to create a temporary .BATch file which in turn provides a console for whatever it runs. The better solution is to test that standard channels are writable before attempting to write to them.
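A sketch of that test (somecommand is a placeholder for the real program):

if {![catch {chan puts -nonewline stdout ""}]} {
    # a usable console exists: let output pass straight through
    exec somecommand >@stdout 2>@stderr
} else {
    # no usable standard channels (e.g. a wish-based starpack): capture instead
    set output [exec somecommand 2>@1]
}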
From the exec documentation:
exec will not work well with TUI applications when a console is not present, as is done when launching applications under wish. It is desirable to have console applications hidden and detached. This is a designed-in limitation as exec wants to communicate over pipes. The Expect extension addresses this issue when communicating with a TUI application.
MHo: It seems to me that when starting Win32 console-mode applications, exec does not create a new console window (this is an option of the underlying Win32 CreateProcess API; see TWAPI). By default, the executed process inherits the console of the parent process, which is sometimes not what is wanted. While the Win32 console API is absolutely Win32-specific and this topic is not of interest to Tcl programmers in general, it would be nice to sometimes have fine-grained control over such exotic, platform-specific options; unfortunately, at this stage of its development twapi isn't always an alternative yet.
LV: One place where developers may be surprised is trying to execute an external program that they expect will interact with the user. For instance:
$ cat io.c
#include <stdio.h>
int main() {
    char val1[1024], val2[1024];
    printf("please enter first number: ");
    gets(val1);
    printf("please enter second number: ");
    gets(val2);
    printf("thank you\n");
    return 0;
}
If the developer compiles this program (which, granted, does nothing useful), they might be surprised to find that coding
exec gcc io.c -o io
exec ./io
does not result in the display of the prompts. The reason for this is that Tcl's exec has created a pipe for stdout, since the general case of using exec is more likely a construct like
set results [exec ./io]
and that the program would run to completion, after which exec would take the output and assign it to the variable.
So, one has to code around the default behavior if more interactive access is needed. Over on comp.lang.tcl, in the thread 'Problem in calling c programs and compiling them in tcl/tk' from mid November 2008, Alexandre Ferrieux wrote that one needs to use something like:
exec ./io >@ stdout 2>@ stderr
(adding the catch construct around it and possibly the -ignorestderr depending on the behavior of the program).
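Putting those pieces together, a sketch:

if {[catch {
    exec ./io >@ stdout 2>@ stderr <@ stdin
} err]} {
    puts "io failed: $err"
}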
male 2004-09-14:
Again: the exit code of a piped executable
Our problem is that an external company provides a C executable (on Windows) that uses old FORTRAN functionality mapped in a DLL. This FORTRAN code returns failure codes as 4-byte integers (INTEGER*4). These failure codes are used as the exit code of the C executable.
Catching close on the blocking command channel to the C executable lets the exit code be stored in $errorCode.
So far so good!
Now the problem!
On Windows, the C function exit permits a 4-byte integer (int or int32), but we get only the last byte of the exit code.
Example: the original failure/exit code is 655, but the exit code seen in Tcl via $errorCode is 143 (655 & 255 = 143).
Does anybody have a tip on how to avoid this? Any hint or suggestion? Please bear in mind that the executable is not maintained by us; it comes from an external company! Or is this a Tcl peculiarity: being platform-independent by supporting only a 1-byte integer as the exit code on command channels?
MHo: Perhaps you can try to use get_process_exit_code from twapi?
male 2004-09-14: my own answer: It's a pity that the Microsoft Windows platform exit code is strictly reduced from 4 bytes to 1 byte!
The comp.lang.tcl thread http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&threadm=1c8670d2.0405260607.5a4c08ce%40posting.google.com&rnum=1&prev=/groups%3Fhl%3Den%26lr%3D%26ie%3DUTF-8%26selm%3D1c8670d2.0405260607.5a4c08ce%2540posting.google.com describes this.
JB: If you need to store the output of an exec into a list, split is useful. This will split up the text on newlines (\n) so you can easily manipulate the output line by line. Handy if you're trying to pull information out of a repetitive listing.
set result [split [exec nmap -sP 10.0.0.0/24] \n]
foreach l $result {
    puts stdout [lindex $l 1]
}
George Peter Staplin: I found that with my file manager there were some problems with exec and mplayer. If I started my file manager with & (in pdksh), then mplayer would block waiting for stdin, but if I started it as a foreground process it worked properly. This is how I fixed the problem (verified to work with another Tcl/Tk program and mplayer by Steve Redler IV):
proc nullexec f {
    # This is needed for applications like mplayer.
    if {![info exists ::nullexec_closed_stdin]} {
        catch {close stdin}
        open /dev/null r
        set ::nullexec_closed_stdin 1
    }

    if {[catch {eval exec -- $f >& /dev/null &} pid] && {NONE} ne $::errorCode} {
        tk_messageBox -title Error -message $pid -type ok
        return
    }
}
It works by redirecting /dev/null into stdin so that any program that wants to read stdin won't block.
LV: Recently on comp.lang.tcl, a poster asked why the Tcl script being written would hang. The example of code was
exec /path/to/xselection FREDY -- {A B C}
The command itself would run from a shell command prompt, but when run in tclsh with the above command line, it would not terminate. After discussion, Alexandre Ferrieux mentioned the program might have a requirement of running with its stdin and/or stdout connected to a terminal, and suggested trying
$ cat | /path/to/xselection FREDY -- "A B C" | cat
to see if the program worked as expected, since essentially this is how Tcl's exec runs the command. When this was tried, the program indeed appeared to hang from the command line as well.
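When a program merely needs its standard channels connected to the terminal the script was launched from (rather than a pseudo-terminal of its own), explicit redirection is enough; a program that, like this one apparently, insists on a real tty is a job for Expect. A sketch of the redirection case:

exec /path/to/xselection FREDY -- {A B C} <@stdin >@stdout 2>@stderr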
LES 2005-05-07: What if the command may take a really long time to finish and return and I want to set a time limit for a proc to return whether the exec'ed command is done or not? How?
LV: What do you want to happen in the case of a time out? I mean, with exec you can use & and just go on and do other work, but then I don't know that Tcl has a way to determine whether the command completed or not. Or you could use open "| command" r, which puts the command in the background, then do some sort of setup where you check for output occasionally, check how much time has elapsed, and close the file handle when the time limit has passed. Or perhaps you could use after to set up some sort of alarm to go off when the time limit has passed.
LES: "Set up some sort of alarm"? Yes, I would like to say: " - Hey, Tcl, remember that exec or open I just mentioned a couple of lines ago? Well, forget about it, I don't care what it returns anymore." But how? I think about it and imagine some sort of break command for exec situations, but there is no such thing.
Take a look at Another BgExec. The only problem there is that a timeout only closes the process pipeline but probably leaves the processes running, due to Tcl's lack of a built-in kill command...
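A sketch of one way to do that with open | and after (Tcl 8.5+; Unix only, since it shells out to the external kill program; run_with_timeout is not a standard command):

proc run_with_timeout {cmd ms} {
    set chan [open |$cmd r]
    chan configure $chan -blocking 0
    set ::out($chan) ""
    set ::done($chan) ""
    set timer [after $ms [list set ::done($chan) timeout]]
    chan event $chan readable [list apply {{chan} {
        append ::out($chan) [read $chan]
        if {[eof $chan]} { set ::done($chan) finished }
    }} $chan]
    vwait ::done($chan)
    after cancel $timer
    if {$::done($chan) eq "timeout"} {
        catch {exec kill {*}[pid $chan]}   ;# Tcl has no built-in kill
    }
    catch {close $chan}
    return [list $::done($chan) $::out($chan)]
}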
Guillaume Plenier 2005-07-28: I am currently working on a starpack that includes executables for which I wrote a Tk graphical interface. Everything works fine on my computer because the executable files are in my path, but if I take the starpack and run it on another computer, I get error messages like "-executable- command not found" or something close to that. I tried several things to tell exec where my executable files are, but apparently exec looks in the computer's PATH variable to find the unknown programs and not where indicated (I changed env(PATH), used $starkit::topdir, etc...)
MG 2005-07-28: You could try something like this:
set paths [list ~/path/number/1 ~/path/number/2 ../path/number/3]
set success 0
foreach x $paths {
    if {[file exists [file join $x $fileToExec]]} {
        set success 1
        exec [file join $x $fileToExec]
        break
    }
}
if {!$success} {
    # we didn't find it in any of our paths - try just exec'ing and hope for the best
    catch {exec $fileToExec}
}
Peter Newman 2005-07-29: DOS/Windows can't run EXEs (or DLLs) that are physically in the StarKit/Pack. You'll have to extract them first. MHo: Concerning DLLs, you are wrong: Doing a load in Starkits/Starpacks automatically copies the DLLs to a temporary location before loading.... execx could be a partial solution for exec...
For more info. check out the freewrap docs. There's discussion there about embedded binary files - and some techniques for dealing with them.
Guillaume Plenier 2005-07-31: I don't like the idea of extracting my executables first and will probably try to find other solutions.
AMG: I wish I could use exec to redirect to/from reflected channels, but this doesn't work. (PYK 2019-04-24: This can be done with ycl chan osin/osout, which creates a chan pipe facade around another channel.)
proc mychan {cmd chan args} {
    switch -- $cmd {
        initialize {
            return {initialize finalize watch write}
        }
        write {
            set data [lindex $args 0]
            puts "got [string length $data] bytes: >$data<"
            return [string length $data]
        }
    }
}
exec ls >@ [chan create write mychan]
This produces an error: channel "rc0" does not support OS handles. Like the error says, reflected channels don't have file descriptors, so they aren't recognized by the operating system. I'm surprised Tcl doesn't internally create anonymous pipe pairs to facilitate interfacing. Speaking of anonymous pipe pairs, if I could open anonymous pipe pairs in Tcl script, I wouldn't need reflected channels for this application. I could redirect to/from one end of the pipe and use chan event to monitor the other.
Oh wait, there's chan pipe. Forgot about that. It works just fine for what I'm doing. Also, using chan pipe lets us implement most of open | using exec.
foreach chan {stdin stdout stderr} {
    lassign [chan pipe] rd$chan wr$chan
}
set pids [exec {*}$pipeline <@ $rdstdin >@ $wrstdout 2>@ $wrstderr &]
close $wrstdout
close $wrstderr
puts $wrstdin "input text to send to pipeline stdin"
puts "received stdout [gets $rdstdout]"
puts "received stderr [gets $rdstderr]"
I've spent some time figuring out how to make all of this asynchronous, using the above code as a starting point.
This is similar to popen3 or popen4 found in other languages, returning a list of pid, stdin, stdout, and stderr of the executed process.
proc popen4 args {
    foreach chan {In Out Err} {
        lassign [chan pipe] read$chan write$chan
    }
    set pid [exec {*}$args <@ $readIn >@ $writeOut 2>@ $writeErr &]
    chan close $writeOut
    chan close $writeErr
    chan close $readIn
    foreach chan [list stdout stderr $readOut $readErr $writeIn] {
        chan configure $chan -buffering line -blocking false
    }
    return [list $pid $writeIn $readOut $readErr]
}

# Example usage.
set done false
lassign [popen4 exercise.rb] pid stdin stdout stderr
chan event $stdout readable {
    puts -nonewline [chan read $stdout]
    if {[chan eof $stdout]} { set done true }
}
chan event $stderr readable {
    puts -nonewline [chan read $stderr]
    if {[chan eof $stderr]} { set done true }
}
chan event $stdin writable {
    puts {stdin writable}
    chan puts $stdin foobar
    chan close $stdin
}
vwait done

# close the channels to avoid bugs when using popen4 many times
chan close $stdin
chan close $stdout
chan close $stderr
MHo 2019-01-27:
Here is an alternative way to call a single Powershell command using TWAPI's create_process. The advantage is that this works from within wish too, as create_process manages the otherwise missing console. I observed elsewhere that calling powershell from within wish does not work (possible workarounds are an indirect call via cmd.exe, which creates a visible black console window, or using TWAPI's allocate_console and hiding the console beforehand...).
#
# Calls a single Powershell command (blocking, hidden)
# Arg: The command to give to Powershell via the -command switch
# Ret: A list of three elements:
#      -1 ""          <errtext>    -> error from package require or create_process (twapi)
#       0 <stdouttxt> ""           -> Ok
#       1 "..."       <stderrtext> -> Maybe Ok, something written to stderr
#
proc execPowershellCmd {cmd} {
    set cmd "-command $cmd"
    foreach chan {stdin stdout stderr} {
        lassign [chan pipe] rd$chan wr$chan
    }
    if {[catch {
        package require twapi_process
        set cmd [string map [list \" \\\"] $cmd]
        twapi::create_process [auto_execok powershell] -cmdline $cmd -showwindow hidden \
            -inherithandles 1 -stdchannels [list $rdstdin $wrstdout $wrstderr]
    } ret]} {
        return [list -1 "" $ret]
    }
    chan close $wrstdin; chan close $rdstdin; chan close $wrstdout; chan close $wrstderr
    foreach chan [list $rdstdout $rdstderr] {
        chan configure $chan -encoding cp850 -blocking true; # to be further examined
    }
    set out [read $rdstdout]; set err [read $rdstderr]
    chan close $rdstdout; chan close $rdstderr
    return [list [string compare $err ""] $out $err]
}

# Selftest; call it with an arg like Get-Help
if {[info exists argv0] && [file tail [info script]] eq [file tail $argv0]} {
    label .l1 -text $argv
    pack .l1
    text .t1 -width 160 -height 40
    pack .t1 -fill both -expand 1
    button .b1 -text " (Exit) " -command exit -state disabled
    pack .b1 -fill x
    lassign [execPowershellCmd $argv] rc out err
    .t1 insert end "Rc:\n\n$rc\n\nOut:\n\n$out\n\nErr:\n\n$err"
    .t1 configure -state disabled; # readonly
    .b1 configure -state normal
}