open, a built-in Tcl command, opens a file or a command pipeline and returns the name of a channel that provides access to the contents of the opened resource.
See also: [gets], [read], [file], [puts], [catch], [stderr], [pipeline], [Example of reading and writing to a piped command], [chan pipe], [Execute in Parallel and Wait]
wb appeared in Tcl 8.5. kennykb reports in the chat room that he recalls it was also available in Tcl 7.3 days or earlier.
[https://www.tcl-lang.org/man/tcl/TclCmd/open.htm%|%official reference]
[https://www.tcl-lang.org/man/tcl8.5/tutorial/Tcl26.html%|%Running other programs from Tcl - exec, open]
[TIP] [https://www.tcl-lang.org/cgi-bin/tct/tip/183.html%|%183]: Add a Binary Flag to `open`
If b is appended to the mode, it has the same effect as fconfigure ... -translation binary.
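A minimal sketch of that equivalence (the file name is arbitrary):

set f1 [open data.bin rb]       ;# binary flag appended to the mode...

set f2 [open data.bin r]        ;# ...or the traditional two-step form
fconfigure $f2 -translation binary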
set fd0 [open simpleinput]                  ;# open the file for reading
set fd1 [open simpleinput r]                ;# open the file for reading
set fd2 [open reldirect/simpleoutput w]     ;# open the file for writing
set fd3 [open /full/path/file r+]           ;# open the file to be read and written
set fd4 [open /tmp/output {WRONLY BINARY}]  ;# since Tcl 8.5 [http://tip.tcl.tk/183]: open in binary mode for writing
set fd4b [open /tmp/output wb]              ;# should be equivalent to the fd4 example
set fd5 [open |[list simple1] r]            ;# read the stdout of simple1 as fd5, stderr is discarded
set fd6 [open |[list simple2] w]            ;# writes to fd6 are read by simple2 on its stdin
set fd7 [open |[list simple1 |& cat] r]     ;# results in stdout AND stderr being readable via fd7
set fd7a [open |[list simple1 2>@1] r]      ;# same effect as fd7
set fd8 [open |[list simple2] r+]           ;# writes to fd8 are read by simple2 on its stdin,
                                            ;# whereas reads will see the stdout of simple2,
                                            ;# stderr is discarded
set fd9 [open |[list simple1 2>@stderr] r]  ;# stdout readable via fd9, stderr passed through to the stderr of the parent
One variation on the fd8 example above, which can be useful for debugging, is:
set fd8 [open [list | tee tmp/in | simple2 | tee tmp/out 2>tmp/err] r+]
In addition to running simple2 as an asynchronous child process, this also creates three files, tmp/in, tmp/out, and tmp/err, to which everything that passes over simple2's stdin, stdout, and stderr, respectively, is written. tee here is a Unix utility that copies everything from stdin to stdout, optionally also writing all of it to the files given as arguments.
AMG: Here's a one-liner for creating an empty file:
close [open $filename w]
glennj: see also "touch" from the tcllib package, fileutil.
Command pipelines executed by open generate errors exactly as described for exec, and close raises the error when it is given the channel returned by open. However, there is no -ignorestderr option. One way to work around that is to redirect stderr, perhaps straight back to the parent's stderr with 2>@stderr.
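A minimal sketch of that workaround (the command name is hypothetical):

set chan [open |[list somecommand 2>@stderr] r]    ;# the child's stderr goes straight to our stderr
set output [read $chan]
if {[catch {close $chan} msg]} {
    # with stderr redirected away, a failure here indicates a non-zero exit
    # status or a signal, reported via $::errorCode
    puts "somecommand failed: $msg ($::errorCode)"
}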
AMG: I prefer to construct the first argument to open thus: |[list progname arg1 arg2 arg3 ...]. This prevents whitespace embedded in the arguments from futzing up the works. Even though there's no problem with your command line today, one day you might change it, perhaps to use a parameter to your proc as an argument. So I just do it "right", right from the start, to prevent forgetting to make the change in the future. It's like the problem with optional { braces } in C: they're not needed for one-line if/for/while/etc. bodies, but when you add another line you might forget to add the braces.
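For example (the file name here is hypothetical):

set fname "name with spaces.txt"
set chan [open |[list grep -n pattern $fname] r]   ;# [list] keeps $fname a single argument
# whereas [open "|grep -n pattern $fname" r] would split the name into several words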
The following procedure invokes an external command and returns the output, the diagnostic output, and the exit code of that command.
proc invoke command {
    lassign [chan pipe] chanout chanin
    lappend command 2>@$chanin
    set fh [open |$command]
    set stdout [read $fh]
    close $chanin
    set stderr [read $chanout]
    close $chanout
    if {[catch {close $fh} cres e]} {
        dict with e {}
        lassign [set -errorcode] sysmsg pid exit
        if {$sysmsg eq {NONE}} {
            #output to stderr caused [close] to fail. Do nothing
        } elseif {$sysmsg eq {CHILDSTATUS}} {
            return [list $stdout $stderr $exit]
        } else {
            return -options $e $stderr
        }
    }
    return [list $stdout $stderr 0]
}
Example:
set command {
    puts stdout {hi to stdout}
    puts stderr {hi to stderr}
    exit 42
}
lassign [invoke [list tclsh <<$command]] stdout stderr exit
puts "stdout: $stdout"
puts "stderr: $stderr"
puts "exit: $exit"
Here is a similar procedure that accomplishes almost the same thing as the first, except that in one case it leaves an artifact in the stderr output: if the child process writes nothing to stderr but exits with a status other than 0, the message "child process exited abnormally" ends up in the reported stderr:
proc invoke command {
    set fh [open |$command]
    set stdout [read $fh]
    set stderr {}
    set status [catch {close $fh} stderr e]
    if {$status} {
        dict with e {}
        lassign [set -errorcode] sysmsg pid exit
        if {$sysmsg eq {NONE}} {
            #output to stderr caused [close] to fail. Do nothing
        } elseif {$sysmsg eq {CHILDSTATUS}} {
            return [list $stdout $stderr $exit]
        } else {
            return -options $e $stderr
        }
    }
    return [list $stdout $stderr 0]
}
Another method is to redirect stderr to stdout, so that it is only necessary to capture one stream:
set fd1 [open |[list somecmd 2>@stdout] r]
Here's a quick test sequence that takes advantage of close returning the standard error for blocking pipes:
set somecmd {sh -c {
    #this is sh, not Tcl syntax
    echo "to stdout" ; echo >&2 "to stderr"
}}

set errorCode {}
puts {no redirection:}
set f [open [list | {*}$somecmd] r]
set std_out [read -nonewline $f]
catch {close $f} std_err
puts "    std_out is {$std_out}"
puts "    std_err is {$std_err}"
puts "    errorCode is {$errorCode}"

set errorCode {}
puts redirected:
set f [open [list | {*}$somecmd 2>@stdout] r]
set std_out [read -nonewline $f]
catch {close $f} std_err
puts "    std_out is {$std_out}"
puts "    std_err is {$std_err}"
puts "    errorCode is {$errorCode}"
The output should be:
no redirection:
    std_out is {to stdout}
    std_err is {to stderr}
    errorCode is {NONE}
redirected:
    std_out is {to stdout
to stderr}
    std_err is {}
    errorCode is {}
The following code is a template for executing an external program and collecting stdout, stderr, and the exit code, all asynchronously.
This example is also available in ycl::exec.
MattAdams: If you are using this, be careful with the writable fileevent for stderrin below. The way this is structured, it will be called ad nauseam, causing this process to consume more CPU time than is reasonable, and because it keeps the interpreter busy it will block any idle events from being run. It is much neater and less troublesome to have the handler close stderrin after it successfully closes stdout.
#! /bin/env tclsh

proc invoke {command {onexit {}} {onstdout puts} {onstderr {puts stderr}} {read {}}} {
    lassign [chan pipe] stderr stderrin
    lappend command 2>@$stderrin
    set stdout [open |$command]
    set handler1 [namespace current]::[info cmdcount]
    coroutine $handler1 handler $onstdout $onstderr $onexit
    fileevent $stderrin writable [list apply {{stdout stderrin} {
        if {[chan names $stdout] eq {} || [eof $stdout]} {
            close $stderrin
        }
    }} $stdout $stderrin]

    #warning: under the most recent Tcl release (8.6.1), any errors in handler1
    #will not be reported via bgerror, but will silently disrupt the program.
    #For the status of this bug, see
    #http://core.tcl.tk/tcl/tktview/ef34dd2457472b08cf6a42a7c8c26329e2cae715
    fileevent $stdout readable [list $handler1 [list stdout $stdout]]
    fileevent $stderr readable [list $handler1 [list stderr $stderr]]
    return $stdout
}

proc handler {onstdout onstderr onexit {read gets}} {
    set done {}
    lassign [yield [info coroutine]] mode chan
    while 1 {
        if {[set data [{*}$read $chan]] eq {}} {
            if {[eof $chan]} {
                lappend done $mode
                if {[catch {close $chan} cres e]} {
                    dict with e {}
                    lassign [set -errorcode] sysmsg pid exit
                    if {$sysmsg ne "CHILDSTATUS"} {
                        return -options $e $cres
                    }
                } else {
                    if {![info exists exit]} {
                        set exit 0
                    }
                }
                if {[llength $done] == 2} {
                    if {$onexit ne {}} {
                        after 0 [list {*}$onexit $exit]
                    }
                    return
                } else {
                    lassign [yield] mode chan
                }
            } else {
                lassign [yield] mode chan
            }
        } else {
            {*}[set on$mode] $data
            lassign [yield] mode chan
        }
    }
}
Example:
set command {
    puts stdout {hi to stdout}
    puts stderr {hi to stderr}
    exit 42
}
set chan [invoke [list tclsh <<$command] {apply {{code} {
    set ::exit $code
}}}]
puts "pid: [pid $chan]"
vwait exit
exit $exit
In TIP 304, Alexandre Ferrieux writes: A popular workaround for script-only purists who want to get at the stderr of a command is to spawn an external "pump" like cat in an open ... r+ and redirect the wanted stderr to the write side of the pump. Its output can then be monitored through the read side:
set pump [open |cat r+]
set f1 [open [list | cmd args 2>@ $pump] r]
fileevent $f1 readable got_stdout
fileevent $pump readable got_stderr
Now this is all but elegant of course, difficult to deploy on Windows (where you need an extra cat.exe), and not especially efficient since the "pump" consumes context switches and memory bandwidth only to emulate a single OS pipe when Tcl is forced to create two of them via open ... r+.
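With [chan pipe], which TIP 304 added in Tcl 8.6, the external pump is no longer needed. Here is a minimal sketch along the same lines as the snippet above (cmd, args, and the handler names are placeholders):

lassign [chan pipe] rderr wrerr
set f1 [open [list | cmd args 2>@$wrerr] r]
close $wrerr                         ;# the child keeps its own copy of the write side
fileevent $f1 readable got_stdout
fileevent $rderr readable got_stderr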
MSW: Note that piping stderr and stdout simultaneously won't work on all operating systems. I remember having problems on Solaris with lost data through the pipes (one filled up while the other wasn't drained?). I had to redirect the output to files and watch the files to follow it (the invoked program in question was cvs). Furthermore, buffering differences hurt when you try to simultaneously catch err and out. I did not try the exact solution proposed above. Someone with Solaris 8 and 9 might want to test this (I don't have access to one at the moment).
DKF, et al:
For mode a, Tcl seeks (seek()) to the end of the file immediately upon opening it and presumes that's good enough, but that can cause a race condition if another program then appends to the file before this one does. If the system does proper (thread-safe and process-safe) appending, as required by POSIX, it's not a problem. Good append behaviour tends to get supported early in a system's life because it is useful for not losing data from the system logs.
Here's how to detect a problematic system:
1. Open a file in append mode (with a or a+) several times and write a single line to the file each time (writing digits is a good choice; that's easy to understand).
2. Do not explicitly flush or close those file handles. Instead, exit the program; the language's shut-down code should handle the flushing and closing for you.
3. Then look at the contents of the file produced and check whether all the values you wrote actually made it to the file. Order does not matter here. If all the lines were written, that is good.
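A minimal sketch of that test (the file name and the number of handles are arbitrary); on a system with proper append semantics the file ends up with five lines, while on a problematic one the buffered writes overwrite each other:

file delete appendtest.log
for {set i 0} {$i < 5} {incr i} {
    set f($i) [open appendtest.log a]
    puts $f($i) "line $i"
}
exit    ;# no flush or close here: Tcl's shutdown code flushes the channels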
On a problematic system, using the APPEND flag, which translates into the low-level O_APPEND flag to the open() syscall, alters the semantics of writes so that a seek-to-end happens as an atomic part of the write itself. That ensures no data is ever lost, though of course applications still need to take care to only ever write complete records/lines to the file.
Example:
set f [open $theLogFile {WRONLY CREAT APPEND}] ;# instead of "a"
or
set f [open $theLogFile {RDWR CREAT APPEND}] ;# instead of "a+"
Does Tcl's current (8.4.1.1 and earlier) behavior of a have some benefit over the behavior of APPEND? Or should this difference be considered a bug to be fixed?
FW seconds this question.
LES: 2005-04-11: This page's last update was stamped "2003-09-01" when I came here now and changed it. Has that question been answered yet?
Lars H: I'll hazard an answer. Perhaps the benefit of a over APPEND is that it is less surprising?! Imagine a program that appends some data to e.g. a list of changes and then seeks to the start to write an updated timestamp. If someone went in and replaced that "a" with {WRONLY CREAT APPEND} then the surprising result would be that all timestamps ended up at the end of the file, because every puts would seek to the end, would it not?
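A small sketch of the scenario Lars H describes (the file name and contents are made up):

set f [open changes.log {WRONLY CREAT APPEND}]
puts $f "appended a change entry"
seek $f 0
puts $f "updated timestamp"   ;# with APPEND, this line still lands at the end of the file
close $f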
DKF: If you need random access to a file, you shouldn't use "a". You certainly wouldn't be doing this if you were using stdio.
PYK 2015-04-27: As of 8.6 it's possible to close only one side of a channel. Among other things, this can be used to harness another program as a data filter:
set data {one two four}
set chan [open |md5sum w+b]
puts $chan $data
close $chan write
set signature [read $chan]
close $chan
Windows features the ability to store multiple data streams into one file:
foreach stream {one two} {
    set fh [open myfile:$stream w]
    puts $fh "data in stream $stream"
    close $fh
}

foreach stream {one two} {
    set fh [open myfile:$stream]
    puts [read $fh]
    close $fh
}

file stat myfile stats
puts $stats(size)
output:
data in stream one
data in stream two
0
Since there is no data in the primary stream, the size of the file is reported to be 0.
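Continuing the example, writing via the unadorned file name fills the primary stream, which is what the reported size counts (append mode is used here so the existing file is not disturbed):

set fh [open myfile a]
puts $fh "data in the primary stream"
close $fh
file stat myfile stats
puts $stats(size)    ;# no longer 0: the primary stream now has content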
In a MS-DOS shell, specifying a file name ending in a colon causes the primary stream to be used:
rem this is cmd.exe
echo hello > one:
type one
In Tcl, an attempt to open a file whose name ends with : is an error.
TWu 2024-07-30: Hint: with Tcl/Tk 8.6.4.1 and above you can NOT open (read, create, or write) alternate data streams (ADS). Access to existing streams created by Windows 7/10/11 itself is also not possible; e.g. streams of type :Zone.Identifier:$DATA (mostly found under the Downloads folder) can only be read or changed via the cmd.exe commands type or echo. Trying to create an ADS for a file (or folder) results instead in an additional ordinary file, with a replacement character substituted for each colon in the name! A workaround is to exec PowerShell (directly or by script) or cmd.exe (directly or by batch file).
JMN 2024-08-14: Failure to read/write alternate streams seems like a regression that shouldn't have happened. I think this is a bug.
TWu 2024-08-28: Thanks to Jan, the open bug for ADS is now resolved; see the Tcl Code check-in.
fconfigure options that are specific to a certain channel type are documented on the man page for the command that creates that type of channel. Hence the -mode option, for serial ports, is documented on the reference page for open.
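For example, a minimal sketch of opening and configuring a serial port (the device name is platform-specific, and readSerial is a hypothetical handler):

set com [open /dev/ttyS0 r+]        ;# on Windows this would be something like com1:
fconfigure $com -mode 9600,n,8,1 -blocking 0 -translation binary
fileevent $com readable [list readSerial $com]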
In Alpha, there are many real-life examples of communicating with external processes using pipes, including interaction with standard shells and other external programs. There are several pages dedicated to these issues on the AlphaTcl Wiki.
See notably Alpha and Unix, which is an overview page. The problems discussed include: how to capture stderr; how to filter out prompts and transform an asynchronous interaction into a synchronous one with timeout mechanisms using vwait; how to link a process to a window; how to provide active cells in an Alpha document (a sort of worksheet interface to an external process); how to provide step functionality (stepping through a list of commands, seeing the result of each step in a console); and a check-as-you-type spellchecker (using aspell).
Recall that Alpha is a text editor, just as powerful and configurable as Emacs, but with Tcl as its extension language instead of Lisp! The AlphaTcl library (http://alphatcl.sf.net) is nearly 20000 lines of Tcl code, full of gems and useful tricks. AlphaX runs on OS X; AlphaTk (written entirely in Tcl/Tk) runs on any platform.
Attempting to open a hidden file in w mode will fail:
set fid [open testhidden w]
close $fid
file attributes testhidden -hidden 1
set fid [open testhidden w]
Output:
couldn't open "testhidden": permission denied
However, opening such a file in r+ mode will succeed. This has to do with which flags open passes to the Windows function CreateFile; see that function's documentation for the relevant details.
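Continuing the example above, one workaround (sketched here) is to open the hidden file in r+ mode and truncate it explicitly; chan truncate requires Tcl 8.5 or later:

set fid [open testhidden r+]
chan truncate $fid 0    ;# emulate "w" semantics by truncating to zero length
puts $fid "new content"
close $fid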
See also:
[http://stackoverflow.com/questions/13215716/ioerror-errno-13-permission-denied-when-trying-to-open-hidden-file-in-w-mod%|%IOError: [Errno 13] Permission denied when trying to open hidden file in “w” mode], stackoverflow, 2012-11-04:
marcDouglas: I think your problem was with the fconfigure statement. I removed the last two options (-translation crlf -eofchar {}) and now it works for me. Thanks for the little script; it works perfectly as a start for what I wanted to do.
WangFan: I use catch {open "|plink -ssh $account@$host" w+} input, in which plink is downloaded from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html . When the pipe is readable, I read from it, but I always get NULL. Does anybody know why? The Tcl script is shown below:
package require Tk

frame .t
text .t.log
pack .t.log
pack .t

set account user
set host 10.1.1.2

if [catch {open "|plink -ssh $account@$host" w+} input] {
    .t.log insert end $input\n
    return 0
} else {
    fconfigure $input -blocking 0 -buffering line
    fileevent $input readable Log
}

proc Log {} {
    global input
    if [eof $input] {
        Stop
    } else {
        gets $input line
        .t.log insert end $line\n
        .t.log see end
    }
    return
}

proc Stop {} {
    global input
    catch {close $input}
    return
}