Tcl provides several commands which create commands and script evaluation contexts.
The purpose of this page is to enumerate and classify those commands.
[*] apply doesn't create a Tcl command in the same way the other command generators do; it doesn't add anything to the interpreter's command tables. Rather, the "command prefix" [apply $lambda] can be used like a Tcl command in most contexts.
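A minimal sketch of that distinction (the lambda contents are illustrative):

    set lambda {{x} {expr {$x * 2}}}
    apply $lambda 21    ;# returns 42, invoked by reference to the value
    # nothing named after the lambda shows up in [info commands]:
    # no entry was added to the interpreter's command tables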
generator | cmd? | #args | invocation | destructor |
---|---|---|---|---|
proc | yes | any | by name | |
coroutine | yes | 1 | by name | |
Tcl_CreateObjCommand | yes | any | by name | see man page [L4 ] |
namespace ensemble | yes | any | by name | namespace delete |
class create | yes | any | by name | $class destroy |
$class create | yes | any | by name | $obj destroy |
interp alias | yes | any | by name | interp alias |
interp create | yes | any | $interp eval | interp delete |
thread::create | no | any | thread::send | thread::release |
apply [*] | no | any | by reference | implicit |
Legend for the above table:
Column | Description |
---|---|
generator | what command generates an instance of this form? |
cmd? | does this form construct a command? |
#args | how many args does the constructed form take? |
invocation | how is this form invoked? |
destructor | what explicit destructor disposes of this form? |
The generation of commands appears to serve two purposes: it provides a means of invoking the form, and of controlling the resources associated with the form.
There are two exceptions: in the case of apply, resources are associated with a value, and the lifetime of the generated form is tied to that of the value; in the case of thread, explicit refcounting is employed. In all other cases [rename $name {}] destroys both the command and its associated form. For namespace ensembles, that is in fact the main method of destroying the command (that the command also goes away when its namespace is destroyed is more of a side effect; note too that several ensembles can share the same namespace).
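A minimal sketch of the common case (the proc name is illustrative):

    proc greet {} { puts hello }
    rename greet {}    ;# removes the command and frees its proc body
    # invoking greet now raises "invalid command name"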
interp-created commands duplicate functionality available through interp, in order to control interp resources. It would be more consistent with the other forms if the arguments to $interp were passed directly into the interp, as if [interp eval]'d.
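A sketch of the duplication in question: the generated $interp command and interp itself expose the same operation in two ways.

    set i [interp create]
    $i eval {expr {1 + 1}}           ;# via the generated command
    interp eval $i {expr {1 + 1}}    ;# the same thing through [interp]
    interp delete $i                 ;# disposes of the interp and its command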
coroutine-created commands are the only forms restricted to a single argument.
NEM notes that coroutines are not general commands, but rather a communication and concurrency primitive. The nearest equivalent is channels, which do indeed take only a single argument (via puts).
AMG: With both coroutines and channels, you can create wrapper procs to encode/decode the communication protocol. For coroutines, the protocol is datagram-oriented: each datagram is one Tcl word. For channels, the protocol is stream-oriented.
(NEM Umm... coroutines are more like streams than datagrams).
AMG: How are they like streams? Data is delivered to coroutines exactly one packet at a time, whereas stream-oriented protocols deliver variable numbers of characters. It's possible to build one type of protocol on top of the other, but there's a fundamental difference at the low level.
NEM datagram protocols are usually fixed size packets, require explicit addressing, no prior set up of a "channel" and are unreliable. None of these things apply to coroutines. The coro command and yield are fairly direct analogues of puts and gets, with the exception that send/recv must occur in a strict interleaving.
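A sketch of the puts/gets analogy: resuming a coroutine reads the next item much as [gets] reads the next line from a channel (names illustrative):

    # a coroutine that produces successive integers, one per resume
    coroutine counter ::apply {{} {
        yield                 ;# pause after setup; [coroutine] returns here
        set i 0
        while 1 {
            yield [incr i]    ;# each resume hands back exactly one item
        }
    }}
    counter    ;# -> 1
    counter    ;# -> 2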
Lars H: Still, now that coroutines are commands, they might as well try to act like it. Had the intention been that they should only be a "communication and concurrency primitive", then an ensemble like chan with subcommands such as

    coroutine create
    coroutine interact
    coroutine eval
    coroutine yield

would seem more appropriate.
NEM is perfectly happy with that change, but would be interested in what way coroutines are not like commands, given that they are commands. The argument instead seems to be that coroutine is like proc so it should act like proc (rather than a particular proc). I reject the premise of that argument (lack of a parameter list for one).
Lars H: "Not like commands" — well, that would be CMcC's original quandry of why they should be restricted to taking only one argument, when there is no other class of commands that are so restricted. Your reason for why things should stay as they are was pretty much that one shouldn't think of coroutines as commands, and that's were I jumped on.
NEM There is the obvious class of commands that take one arg (e.g., one-argument procedures). The argument is that no command creator generates only single-arg commands, therefore coroutines should accept more than one arg. I think this is a straw man. If the argument is that the coroutine command should work more like proc and take a parameter list, then I would be interested in that. However, given that it is simple to combine a coro with a proc currently, I'd like to see some compelling use cases for conflating their functionality.
AMG: Would [coroutine create cmd ...] create a command such that [coro ?datagram?] behaves the same as [coroutine interact coro ?datagram?]? I assume coro would be the return value of [coroutine create]. Also, would it be possible to use rename to give the coroutine command a specific name? If so, would it be possible to pass the new name as the coro argument?
Lars H: In analogy with channels, I imagined coroutine create to merely return a token, not create an accompanying command. This is just a "what if coroutines had been patterned after streams rather than subroutines" fantasy, provided as a test of NEM's claim that they actually were.
I wonder about [coroutine eval]. It can currently be implemented by passing the script as the argument to the coro command, but the coroutine has to be designed to expect yield to return a script to eval. If that's not the only thing that the coroutine can do, the argument to the coro command would have to be tagged, e.g. [coro {script {puts moo}}]. If the coroutine is already designed for this, [coroutine eval] adds no new functionality; if the coroutine is not designed for this, [coroutine eval] probably doesn't make sense anyway. What happens when the eval'ed script completes? Is the interpreter result value yielded? Does this process affect the "instruction pointer" of the coroutine's execution frame? Speaking of yield, how about changing [yield] to [coroutine yield], since it's only valid inside a coroutine anyway?
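A sketch of the tagged-datagram dispatch just described (the worker name and tag set are hypothetical):

    coroutine worker ::apply {{} {
        set msg [yield]
        while 1 {
            lassign $msg tag payload
            switch -- $tag {
                script  { set result [eval $payload] }
                default { set result "unknown tag: $tag" }
            }
            set msg [yield $result]
        }
    }}
    worker {script {expr {6 * 7}}}    ;# -> 42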
coroutines are different from the other types of commands in that they cannot take more than one argument. It's possible to cram any amount of data into that one argument, but this restriction makes it impossible to use them directly in a seamless implementation of common Tcl-style ensembles, options, command prefixes, etc. To smooth over this impedance mismatch, it's necessary to wrap the coroutine in a proc that encodes its args into a list, and to do similar decoding on the return value of [yield].
Hmm, something came to mind. There's no reason why this wrapper proc has to be customized for the specific coroutine. Just do this:
    proc invokecoro {coro args} {
        # pack all arguments into a single list and resume the coroutine
        $coro $args
    }
Now any coroutine that expects [yield] to return a list can be called with multiple arguments. The "function" (in the sense described by Higher order TIP discuss and [L5 ]) is a two-element list: the word "invokecoro" followed by the word that was the first argument to [coroutine]. Here's some craziness for you:
    namespace eval frobozz_impl {
        namespace ensemble create -subcommands {print fprint crash math}
        proc print {args} {puts $args}
        proc fprint {channel args} {puts $channel $args}
        proc crash {} {error "oh no!"}
        proc math {args} {expr [concat $args]}
    }
    proc frobozz {} {
        set return [frobozz_impl {*}[yield]]
        while {true} {
            set return [frobozz_impl {*}[yield $return]]
        }
    }
    coroutine wingnut frobozz
    invokecoro wingnut print hello world
    invokecoro wingnut fprint stderr hello world
    invokecoro wingnut math 2 + 2
I think this discussion might need to be merged into multi-arg coroutines.
NEM Yes, your invokecoro is discussed on that page as apply-list, and in the original TIP (which seems not to have been read much) as resume [L6 ].
CMcC What things 'are' is the opposite of abstraction. Programs 'are' just a bunch of bits, but we don't program in binary. One of the interesting things about computers is that they allow you to simulate more abstract things using less abstract things. In this case, if we take NEM's word, one can simulate generalised commands using coroutines. Except that, (almost) uniquely, coroutines pass only one arg.
NEM Passing a single arg is not a limitation either in theory (cf. lambda calculus) or in practice. The functionality is there and works, this discussion is just about syntax.
CMcC Given that most commands in existence take more than one argument, it is impossible to simulate such commands with a command which only takes one. This is what I consider a limitation in practice.
NEM It's not impossible, it's very easy. You just need to curry your coro command when you pass it as a callback:
    # Generic procs:
    proc curry {cmd args} {
        list resume {*}$cmd {*}$args
    }
    proc resume {coro args} {
        tailcall $coro $args
    }

    # Usage:
    coroutine tracer ::apply {{} {
        while 1 {
            lassign [yield] name1 name2 op
            puts "VAR TRACE: $name1 ($name2) $op"
        }
    }}
    set foo 1
    trace add variable foo write [curry tracer]
    set foo 2    ;# a subsequent write fires the trace
Regarding "NEM notes that coroutines are not general commands"...
In retrospect, and with a few months to let the dust settle: I have never claimed that coroutines *are* general purpose commands, any more than I would claim that threads were. I merely believe that they should be able to *emulate* general purpose commands, as can each of the other command-creating-commands listed above. This more modest requirement, and its attendant great utility, to me make ::yieldm a self-evident good. Whatever coroutines are, it makes no sense to me to arbitrarily limit what they can do.
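For reference, a minimal sketch of a multi-argument yield along these lines, built on Tcl 8.6's [yieldto]; this yieldm is an illustration, not necessarily the ::yieldm referred to above:

    # yield a value; when resumed, return all resume arguments as a list
    proc yieldm {args} {
        yieldto string cat {*}$args
    }
    coroutine adder ::apply {{} {
        set sum 0
        while 1 {
            set args [yieldm $sum]          ;# receives every resume argument
            set sum [expr [join $args +]]
        }
    }}
    adder 1 2 3    ;# -> 6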