::blt::bgexec statusVar ?options…? externalCommand ?arguments…? ?redirections…? &

See Also

Another BgExec
Another standalone bgexec
Includes an implementation of bgexec.
Matthias Hoffmann - Tcl-Code-Snippets - Misc - Bgexec
Offers a good alternative to bgexec by turning background execution into asynchronous I/O on a channel.
Standalone bgexec


Runs externalCommand (with its arguments) as a subprocess/pipeline in the background. When the subprocess terminates, bgexec sets the global variable statusVar to its exit code. Redirections are supported, just as with the normal exec. The options, which all start with a - (minus) sign, may be:

-onoutput callback
When data is available on the standard output of the subprocess, it is handed to callback: the data is appended to callback as a single extra argument and the resulting command is evaluated at global scope.
-onerror callback
When data is available on the standard error of the subprocess, it is handed to callback in the same way.
-linebuffer bool
Whether to deliver the data read from the subprocess one line at a time (when bool is true) or in whatever chunks it becomes available. Defaults to false.
-keepnewline bool
Whether to retain the trailing newline when passing read data to the callback handlers. Defaults to false.

Example of stderr capture:

proc DisplayErrors data {
    puts "stderr> $data"
}
bgexec myVar -onerror DisplayErrors programName &

Also has -onoutput.

-onerror sets up a callback so that when output appears on the stderr of programName, DisplayErrors is called with the string, and the proc can then do with it what it likes. -onoutput sets up a similar callback, watching stdout of programName.

gah writes, "What I like about 'bgexec' is that it handles all the dumb stuff under the hood--so you (the Tcl programmer) don't have to worry about it. It automatically performs non-blocking I/O. It collects data at whatever rate, without you doing anything. It raises the programming problem from 'readable' events, non-blocking I/O, and buffer lengths, to Tcl variables and procedure calls. You don't have to be an expert in I/O or process semantics. If you want to kill the pipeline, all you have to do is set a variable. It's the same whether you're on Windows or Unix or MacOSX."
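The kill-by-variable behaviour gah mentions might look like this (a sketch: `sleep 600` stands in for any long-running program, and the value written to the variable is arbitrary):

```tcl
package require BLT

# Start a long-running pipeline in the background.
::blt::bgexec ::myStatus sleep 600 &

# Setting the status variable from outside terminates the
# pipeline -- no pid bookkeeping or signal handling needed.
after 2000 {set ::myStatus dead}

# Wait until the variable is written (by us, or by normal exit).
vwait ::myStatus
```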

Vince: All wonderful reasons why this functionality really should go into Tcl's core.

US: Ceterum censeo bgexec esse integrandam ("furthermore, I am of the opinion that bgexec must be integrated"; Marcus Porcius Cato, slightly modified)

KJN: I agree with all the above. There's one further feature that I would like, which is the ability to open a file descriptor that is connected to the standard input of the first process in the pipeline, like "open |programName" does. Then bgexec would combine the input-handling facilities of open, with the very convenient output-handling facilities already available in bgexec. The new feature could be implemented as an option

  -input varName

that returns the file descriptor in varName, for use as $varName.
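Hypothetically, the proposed interface might be used like this (note: -input does not exist in stock BLT bgexec; both the option and its semantics here are only KJN's suggestion):

```tcl
# Hypothetical: -input is a proposed option, not a real BLT feature.
::blt::bgexec ::status -input fd -onoutput puts programName &

# $fd would be connected to programName's standard input:
puts $fd "first line of input"
flush $fd
close $fd   ;# signal end-of-input to the subprocess
```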

This functionality can be implemented in a roundabout way by using a pre-existing named pipe (e.g. /tmp/somepipe). The trick mentioned below by SLB is also incorporated:

proc putsout data {
    puts -nonewline $data
}
global myStatus
set pipeID [open /tmp/somepipe r+] ;# this command does not block!
bgexec myStatus -keepnewline 1 -onoutput putsout -onerror putsout programName <@$pipeID &

Now, the process programName runs in the background, and can be drip-fed input by writing to the file descriptor $pipeID; its output is collected one line at a time by putsout, which in this example writes to stdout, but can do whatever you like.

If you open a pipe for reading (i.e. with access option "r"), the open command will block unless the pipe is already open for writing (access option "w"); and vice versa! The workaround used here is to open the pipe for both reading and writing (access option "r+"), and use the same file descriptor for both operations: there may be more elegant workarounds.
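Creating the named pipe in the first place (assuming a POSIX system with mkfifo available) might look like:

```tcl
# Create the FIFO if it does not already exist (POSIX only).
if {![file exists /tmp/somepipe]} {
    exec mkfifo /tmp/somepipe
}

# Opening with r+ avoids the blocking described above: the same
# descriptor serves as both reader and writer of the pipe.
set pipeID [open /tmp/somepipe r+]
```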

This code works a lot like "open |programName r+", but it's convenient not to have to deal with fileevent and output collection - bgexec does this for us, and so our program is very short: only 6 lines of code.

DAS: I have implemented -input in my Standalone bgexec critcl wrapper of bgexec.

SLB: One point about bgexec that tripped me up recently was using a command such as this:

::blt::bgexec ::bgexecStatus -linebuffer 1 -onerror puts -onoutput puts cat file.txt &

This reported all lines in file.txt except for the blank lines.

If you want the blank lines, you can change the command to this:

::blt::bgexec ::bgexecStatus -linebuffer 1 -keepnewline 1 -onerror {puts -nonewline} -onoutput {puts -nonewline} cat file.txt &

keepnewline mode has the documented effect of adding a trailing newline to each line of output, and the undocumented effect of retaining blank lines.

bgexec.html says that keepnewline mode is the default; this seems to be incorrect.

This behaviour is present in BLT 2.4z on both Windows and Solaris.

For a pure-Tcl implementation of bgexec, see Matthias Hoffmann - Tcl-Code-Snippets. Sorry, if I had known that a bgexec already existed I would have used a different proc name...

CAU has successfully effected a pseudo-threaded remote interpreter for database migration from Ingres to SQLServer, but has encountered a problem loading BLT inside an interp. The system is as follows:

  • Source system is Solaris and running Activestate tcl 8.4.11
  • Target is Windows Server 2003 running Activestate tcl 8.4.11
  • BLT bgexec is used on both sides to run external commands (Ingres extract and bespoke data-mapping commands on the source, SQL Server BCP and SQL commands on the target).
  • Using the Tcl event loop, the source concurrently extracts 4 streams of data from the Ingres database via bgexec.
  • Data is mapped and FTP'd to target.
  • Target uses 3 interps to manage concurrent bgexec processes importing data into SQL Server.
  • Data is concurrently extracted from Ingres and Loaded into SQLServer with high throughput.

This process runs every night for building a data warehouse in SQLServer.

My problem in all this was in creating multiple interps on the target system: I was unable to get "package require BLT" to work in any form from within the slave interps running on Windows. Other packages load without any problem, including those that load compiled DLLs; tclodbc works fine. Circumventing the "package require ..." mechanism and trying to load/source the code directly still failed.

I tried this on Windows XP too with the same result. Loading BLT into an interp works fine on Solaris.

Anyway, the upshot of this is I had to load BLT in the master interp and create an alias to it from each slave interp. Not a great problem, but it does make it more difficult tracking events in the master interp.
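The alias workaround CAU describes might be sketched like this (assuming BLT loads fine in the master interp):

```tcl
package require BLT

# Load BLT only in the master; expose bgexec to a slave via an alias.
set slave [interp create]
interp alias $slave ::blt::bgexec {} ::blt::bgexec

# The slave can now call ::blt::bgexec as if the package were loaded,
# but the command actually executes in the master interp -- so the
# status variable lives in the master, which is what makes event
# tracking awkward, as noted above.
$slave eval {
    ::blt::bgexec ::done somecommand &
}
```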

Anyone ever had a similar problem loading BLT inside a slave interp?

MB: The jobexec class allows you to create background jobs. The original source code was taken from Matthias Hoffmann - Tcl-Code-Snippets - Misc - Bgexec and was extended by me. It is publicly available in the Tclrep project, in the module jobexec:

CAU: Today I hit a file size limit using bgexec:

set logfile     my_logfile.txt
set inputfile   input.dat
set outputfile  output.dat
eval set pid [::blt::bgexec triggervar /path/to/prog -o arg < $inputfile > $outputfile 2>> $logfile &]

This has been working fine for the last 5+ years, and currently runs on Solaris with Tcl 8.4.13 with BLT 2.4z. I can execute the command independently in Korn shell and the process handles files over 2GB, but when called from bgexec there seems to be a 2GB file size limit imposed by BLT or Tcl (can't work out which just now).

I can get around this problem by piping the output to another script that does the file redirection:

eval set pid [::blt::bgexec triggervar /path/to/prog -o arg < $inputfile | 2>> $logfile &]

Just hope this info helps anyone with the same problem.