ET 2023-12-08 - (1.13h)

The Tasks module's primary goal is to make threads simple to use: as easy as calling a standard procedure. Tasks can be used in place of or together with the Threads package.

Tasks are maintained on GitHub at: https://github.com/rocketship88/Tasks.git along with a thread-enabled debugging tool. The module is in the file tasks-1.13.tm, and more information can be found in the discussion section Code source and how to load below.

A Task mimics a standard proc called with an arglist. Tasks can team up to concurrently process arglists from a single job queue to increase performance. Tasks are pure tcl and documented here on the wiki and hosted on github.

The example below demonstrates how, with Tasks, one can easily transform a proc into a team of tcl threads by adding a single T.

This example was derived from Ashok's web page on Promises [L1 ] and tasks were created using his book The Tcl Programming Language. The example computes the number of digits in 4 large fibonacci numbers using the tcl math library.

First is shown a standard sequential approach which took 4.8 seconds followed by a Tasks version that took 1.6 seconds on a 4 core cpu (timing code omitted).

    proc fibsize {num} {
        package require math
        return "$num size -> [string length [math::fibonacci $num]]"
    }
    set n -1
    foreach num {100000 100100 100200 100300} {
        set answer([incr n]) [fibsize $num]
    }
    parray answer
#   answer(0) = 100000 size -> 20899
#   answer(1) = 100100 size -> 20920
#   answer(2) = 100200 size -> 20941
#   answer(3) = 100300 size -> 20962
#   Time = 4.869001 seconds

Next we transform the sequential version into a multi-threaded program by using tasks. We replace proc with Tproc without any changes to the procedure name, args, or body. Tproc creates the same proc and also a group of tasks that each import the proc.

The procedure tgroup is then used to feed the tasks the same 4 numbers to process concurrently. The results are saved in an array (global or namespaced) variable of the same name as the Tproc.

    source /path/to/tasks-1.13.tm        ;# easiest way to load the module
#    package require tasks               ;# or use this if tasks-1.13.tm is copied to a known module directory
    namespace import tasks::* 
    Tproc fibsize {num} {
        package require math
        return "$num size -> [string length [math::fibonacci $num]]"
    }
    tgroup fibsize -run 100000 100100 100200 100300
    parray fibsize rvar*
#   fibsize(rvar,0) = 100000 size -> 20899
#   fibsize(rvar,1) = 100100 size -> 20920
#   fibsize(rvar,2) = 100200 size -> 20941
#   fibsize(rvar,3) = 100300 size -> 20962
#   Time = 1.607991 seconds

Tproc / tgroup quick start

 Quick Start

Tproc has the structure of a regular proc but has 3 pieces of information that follow the script body of the procedure:

Tproc name {arg1 arg2 ...} {
} task_count  -some_options   a_list-of-imports-and-initializers

The task count must be present if one uses the options and the list of imports/inits. These are:

task count               The number of tasks (threads) to create, given as -tasks N
options                  -once / -import_tasks / -min_import_tasks
imports & initializers   a list of procs (wildcards allowed) and -initializers

If none of the above 3 are present (as in the above example), they default to -tasks 4, no options, and no imports or initializers.

The 3 options that follow the task count (all of which can be omitted) do the following:

-once               This removes the forever loop that normally causes the Tproc to keep processing jobs
-import_tasks       This inserts a namespace import tasks::* into the generated script
-min_import_tasks   This reduces the low level tasks procs included in the script to two (twait and treturn)

There are 2 types of item in the list that follows:

proc-names      These are each a procedure name or pattern as allowed in [info procs]
initializers    These begin with a dash and are tcl statements to insert

The tgroup command format is as follows:

tgroup a_groupname    -sub_option    option_parameters

The tgroup command can also do the actual task creation (described in a later discussion block), but with Tproc it is used to feed jobs. It is called with the Tproc name (used as the groupname) as its first parameter and has several sub-options. Five of these are used to run jobs, wait for them to complete, or reset the output result array so it can be used again with new data.

These are the options used with Tproc:

-foreach arglist1 ... arglistN   This feeds any number of jobs into the input queue and can be repeated
-call arglist1 ... arglistN      This feeds exactly as many jobs as there are tasks, repeating arglists as needed
-wait all                        Wait for completion of all jobs
-reset                           Used to clear the result array so more calls begin their outputs at index 0
-run arglist1 ... arglistN       Does a -reset, -foreach, -wait all
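
As a sketch of how the sub-options compose, assuming the fibsize Tproc from the example above is already defined (-run bundles these same steps):

```tcl
# Hypothetical usage sketch, based on the sub-option descriptions above.
tgroup fibsize -reset                    ;# results will start again at index 0
tgroup fibsize -foreach 100000 100100    ;# queue two jobs
tgroup fibsize -foreach 100200 100300    ;# queue two more (-foreach can repeat)
tgroup fibsize -wait all                 ;# block until all four jobs complete
parray fibsize rvar*                     ;# fibsize(rvar,0) .. fibsize(rvar,3)
```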

Here is the above Tproc example specifying 8 tasks explicitly, using the -import_tasks option and moving the package require math outside of the forever loop, so it is done only once. An example using tgroup to feed it some jobs follows, with an alternate way to load tasks:

    package require tasks       ;# load as a tcl module
    namespace import tasks::*   ;# optional, can also use tasks::Tproc etc. 
    Tproc fibsize {num} {
        return "$num size -> [string length [math::fibonacci $num]]"
    } -tasks 8 -import_tasks [list {-package require math}]

    tgroup fibsize -run 100 200 300 ;# run all 3 async and then wait for completion
    parray fibsize rvar*  ;# results are here, as fibsize(rvar,n) for n=0..2 for the 3 values, 100, 200, 300 in order

To see more details, expand the discussions below.

There are several youtube videos, which begin with: intro to tcl tasks: [L2 ]

A video with some tips and tricks for using tasks can be found here: [L3 ]

Some details on using the Task and tgroup commands: [L4 ]

A Tasks example solution to the classic producer/consumer CS homework problem (available below) is shown running on a linux system, along with the Task Monitor and Send Command tools, followed by a complete code walk-through: [L5 ]

A debugger for Tasks (with a pdf manual) is now available (at github [L6 ]) and a video tutorial is at [L7 ]

 Tasks Introduction

Introduction to Tcl Tasks

Tasks are a compatible extension to tcl threads that provide a middle ground between full processes and Tcl Threads. Tasks comprise 5 primitives and 2 meta-procedures. They present a familiar call/return model using standard arglists with a client/server capability. The primitives plus some debugging procedures can be used alone or with the high level procedures to build and run task groups.

Tasks can call other tasks similarly to calling a local procedure, and can do so synchronously or asynchronously. The following is a description of the 5 primitives used by the 2 group-building commands Tproc and tgroup shown in the first section.

  • Task's 5 Primitives
    Task taskname ?options? ?list of imports? {
         twait -> varname1 varname2 ...
         # do something with the args $varname1 $varname2 ...
         treturn result
    }
    tcall $taskname ?-async? returnvar <- arg1 arg2 ...
    tvwait returnvar
  • Creating Tasks

A task is a tcl thread created using the Task proc, as shown above. The taskname is the name of a (global or namespace) variable assigned the thread id and also used for creating some tsv shared variables. There are several options, and any needed proc's defined in the program can be easily imported into the thread's interpreter with the optional import list arg.

The last arg to the Task proc is the script. Task creates a new thread, supplements the script with error catchers, an optional while loop, proc imports, and initializers plus some global control variables, and starts the task running. The script will typically begin with a twait call that returns with a job and its arglist. treturn sends a result back to the client caller. This is normally repeated in a forever loop to mimic a procedure call or a server request.

  • Calling Tasks

On the other side of the transaction, the procedure tcall sends an arglist to a task. If it was called with the option -async, one can then wait with tvwait until the returnvar is set. Without -async, it synchronously waits for returnvar to receive a result and returns its value as the command result.
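
Putting the primitives together, here is a minimal round-trip sketch (assuming the tasks module is loaded; the task and variable names are arbitrary):

```tcl
package require tasks
namespace import tasks::*

Task adder {
    twait -> a b                 ;# wait for a job; lassign its two args
    treturn [expr {$a + $b}]     ;# send the result back to the caller
}

tcall $adder -async sum <- 3 4   ;# queue a job asynchronously
# ... the caller is free to do other work here ...
tvwait sum                       ;# wait until the task's treturn sets sum
puts $sum                        ;# 7
```

Since the default forever loop wraps the script, adder stays alive to serve further tcalls.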

  • Multi-Tasking with Job Queues

Tasks manage their own job queues, and when several feed off the same queue, jobs run concurrently. With the task grouping utility tgroup, submitting a set of parallel jobs to run on a multi-core processor is like calling a procedure with multiple arglists. The queues are distinct from the event loop queue, while remaining compatible with its use.

  • Meta-tasking

Programs can be built using the 5 primitives alone, and/or can use the higher level procedures Tproc and tgroup which build and manage jobs for one or more tasks.

Tproc defines a procedure that is also loaded into N tasks that work in a group. Then tgroup is used to feed jobs to the group's shared queue that can run in parallel on a multi-core system. Using Tproc one can transform a standard tcl proc into a Task group by simply adding a T in front of proc.


Task Details

  • Tasks vs. Procedures

A Task resembles a procedure; however, treturn does not end the task and return like a procedure does. It returns a result to the caller task (or the main thread), and the code continues with any statement following the treturn. It might loop back to another twait (the default choice), go into some other loop, or just run straight-line code. Each twait call can have a different number and type of arguments, which is handy for task initialization followed by a work loop.

  • Tasks by default are enclosed in a forever loop

By default, the script is enclosed in a while 1 loop unless the -once option to the Task command is used, in which case the task will exit when it reaches the end of the script. If -once is not specified, a break can be used to exit the Task-supplied forever loop, and a thread exit will occur. There is no added thread::wait, since the thread script is not empty.

This loop allows a script to function like a called proc, but one that runs concurrently rather than using a call stack. At minimum, it can free up the main thread to keep a GUI responsive. It can also function as a client/server model with a shared job queue. In contrast to events, a task only works on jobs when it is ready to do so and has finished a previous job.

  • Tasks work with arglists lassign'ed to variables

Inside the script for a task, one uses the twait and treturn commands to wait for a job (an arglist) and return a result. twait can also optionally lassign the args to a list of variables.

Note: the -> used in the twait call above is actually a variable name. The -> variable (being the first arg) receives the full arglist (like the special variable args to a proc). This extra visual cue is optional, and is often used when the full arglist is not needed, but a placeholder variable is required. This variable can be named as desired; argv or args is also a good choice.
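
For example, inside a task's script (a sketch; the variable names are arbitrary):

```tcl
# After a caller does:  tcall $t rvar <- 10 20 30
twait -> first second    ;# -> receives the full arglist {10 20 30}
                         ;# first is lassign'ed 10, second is 20
```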

  • Task queues

Callers send arglists, which are saved in a sharable queue until the Task attempts to retrieve a job using twait. If the queue is shared with other tasks, then whichever task gets to the queue first will get the next request. If nothing is queued, the task(s) will block until a job request arrives; while waiting, tasks use a timer to re-enter the event loop every 50 ms (the default) until an arglist is queued by a tcall command.

A task can have its first twait accept an arglist with initialization data, and then drop into a loop calling twait/treturn for a different set of args to receive for the subsequent job requests. See the producer / consumer example that uses this technique.

  • Synchronous and Asynchronous calls

Synchronous calls use tcall to mimic a procedure call, but the procedure in this case is running concurrently in its own thread. The call returns, with a value, after the task issues its treturn statement.

    tcall $taskname resultvar  <- arg1 arg2 ... ;# returns the resultvar value as the command result 

Asynchronous calls also use tcall to mimic a concurrent procedure call that runs in its own thread, but the caller can do other things before waiting on a result. For example, this allows one to fire off multiple -async calls that run concurrently, and then wait for all to complete.

    tcall $taskname -async resultvar  <- arg1 arg2 ... 
# ... do something else ...
    tvwait resultvar 

tcall sends an arglist to a task by using the contents of taskname, which contains the thread id (set by Task). tcall requires a variable name to receive a result, which is the signal back to the caller that the job is done. tcall returns the value of the resultvar when called synchronously.

For consistency with thread::send, the option -async may be specified before the $taskname arg, i.e.

tcall -async $taskname ...

If the taskname is not a valid thread id, an error is thrown.

tvwait does a vwait on the resultvar, but only if it doesn't yet exist (tcall unsets it). This is necessary to avoid a race condition when some code uses vwait or update and enters the event loop.

Some versions of the http package use a vwait which is an example of where the event loop might be entered without the programmer being aware, and so the unset/vwait approach is needed to avoid a possible race condition.

tvwait can also be used in place of a vwait if the user program unsets a variable first. The producer / consumer example does just this, since that code uses after and vwait to simulate production time.
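
A sketch of that pattern (the unset is what makes tvwait safe as a vwait replacement):

```tcl
unset -nocomplain ::done      ;# tvwait only waits while the variable doesn't exist
after 1000 {set ::done 1}     ;# something will set it later (simulated here)
tvwait done                   ;# returns once done is set; immediate if already set
```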

                                                   - - -


Tasks are easy to add to a program

Tasks are programmed in pure tcl code. No packages other than Threads and Tk are required. They are currently just a code block that can be included in a program or placed in a file and sourced.

Note: the tasks code needs to reside in the main thread. In order to use the windows console for output, the main thread id must be known, and the code source assumes it is being run in the main thread.

Tasks vs. tcl threads

Tasks differ from simply using thread::create and thread::send (or tpool) in several ways.

  • Tasks use a share-able queuing method

Tasks implement a work queue using mutexes and condition variables. These are easy to synchronize and share across several threads (tasks). By having several tasks share the same queue, it is easy to provide single-queue multiple-server functionality where jobs can run in parallel on a multi-core processor.

To get this functionality with the Threads package, one can use the thread pool (tpool) package, which requires more steps than tasks. Tpool uses job ids, whereas Tasks keep track of jobs automatically. Tasks can incorporate traces to get notifications when jobs are done, without having to wait and keep track of job ids. Tasks automatically save result values in an array rather than requiring calls to tpool::get to retrieve the results. However, tpool does provide a job queue, similar to tasks.

  • Each task is called with an argument list

Rather than always receiving a snippet of tcl code, tasks use the more familiar method of passing data in an arglist, which all procs, methods, commands, and processes (argc/argv) use for their inputs. Tpool jobs use the standard tcl thread method of passing in a script, which is more complex than simply using arglists. Tasks can more easily mimic procedure calls, and using Tproc one can convert existing procs into task groups with ease.

  • Tasks are sequential code

Event driven programs are usually more complex to understand than code which uses a more conventional sequential model. Tasks are designed to do simple, but sometimes cpu intensive jobs rather than an interactive gui, which is best done with events. However, thread::send can still be used and if the task is waiting for work, it can receive a command.

  • Tasks include proc importing

When a task starts up, it can import into its interpreter procedures (either global or in a namespace) from the calling task (or the main thread). This makes it easy to initialize a Task with all the code it needs to do simple jobs. The main thread can do the bulk of the program calling on a few tasks running in parallel to do some processing that can utilize modern multi-core cpus.

Tasks can process one job or loop back for more work

After processing an arglist and returning a value, the task can then simply exit, or remain running in a loop waiting for the next arglist to process, similar to a client/server arrangement.

By default, a loop surrounds the script, unless the -once option is specified in the Task command. Each task has a queue, so pending asynchronous calls will be queued fifo in an orderly fashion.

Helper tasks are simple to add

One or more tasks can be helpers, (with the same or different scripts) and will all share the main task's queue implementing a single queue multiple server organization. This is a key feature of Tasks. Callers can be from the main thread or any task (except itself).

Adding helpers is quite easy. Helper tasks are created with a simple taskname convention of worker/boss as shown below. All jobs are sent to the boss task.

    Task boss ... $script
    Task worker1/boss ... $script
    Task worker2/boss ... $script

    tcall $boss ....

Caller's job requests are fanned out fifo from the (boss) queue to the boss and helper workers and any waiting task in the group can grab a job from the queue with twait. Caller requests can load up the shared queue without limit. See the example web request below that fires off 200 requests for 10 tasks.

Here's a program that will create 4 tasks for 5 jobs. Note: the task name is available to each task using the global variable t_name, created by the Task command.

Because the computation takes a second or two, 4 jobs will run concurrently, and whichever task finishes first will snag the 5th one, which was waiting in the job queue of tBoss. Due to thread scheduling, there's no guarantee which task that will be. Each task returns a list of 2 items: a text message and the result of the computation.

    package require tasks
    namespace import tasks::*
    set script {
        package require math
        twait   ->  i  number
        treturn  [list                                          \
                   "$i courtesy of $::t_name : $number"         \
                   [string length [math::fibonacci $number]]    \
                 ]
    }
    set jobs {100 200 300 400 500}

    foreach   taskname {tBoss help1/tBoss help2/tBoss help3/tBoss} {
        Task $taskname  -import  $script      ;# create the 4 tasks
    }
    foreach i $jobs {                         ;# run the 5 jobs
        tcall $tBoss -async   rvar($i)   <-  $i  [expr { $i + 100000 }]
    }
    foreach i $jobs {tvwait rvar($i)}         ;# wait for all jobs
    parray rvar

#   Each run could assign different tasks to each job, here's 2 results
#   Since there are only 4 tasks for 5 jobs, someone has to do 2 jobs:
#   rvar(100) = {100 courtesy of help1 : 100100} 20920
#   rvar(200) = {200 courtesy of tBoss : 100200} 20941
#   rvar(300) = {300 courtesy of help2 : 100300} 20962
#   rvar(400) = {400 courtesy of help3 : 100400} 20983
#   rvar(500) = {500 courtesy of help1 : 100500} 21003
#   rvar(100) = {100 courtesy of help1 : 100100} 20920
#   rvar(200) = {200 courtesy of help2 : 100200} 20941
#   rvar(300) = {300 courtesy of tBoss : 100300} 20962
#   rvar(400) = {400 courtesy of help3 : 100400} 20983
#   rvar(500) = {500 courtesy of tBoss : 100500} 21003

Tasks are compatible with other Thread functions and the event queue

  • tsv, mutexes, and condition variables

The tsv shared variables are easily incorporated, and mutexes and condition variables can also be used. A thread::send or thread::cancel can still be used as well. The taskid used in tcall is just the thread id.

During the wait for work, the event queue is checked every 50 ms by doing an update, so any pending events, such as a thread::send or updating of any gui widgets will have time to be serviced. 50 ms is the default, configurable with the global ::t_twait_timeout as mentioned in the task environment section.
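
For example, a task's script could lengthen that polling interval with a one-line setting (a sketch; the value is arbitrary):

```tcl
set ::t_twait_timeout 200    ;# while waiting for work, re-enter the event loop every 200 ms
```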

  • Communicating Tasks

Tasks are a framework within which a program can run several concurrent threads. If these task threads communicate with one another, the programmer will still need to ensure that deadlock or starvation does not occur. If a task runs only functional code, i.e. does not use resources that other tasks might be using (other than the shared job queues used with helper tasks), then provided a task does not try to recursively call itself, there should not be any deadlocks.

When several tasks share a job queue, there is no method to ensure that all tasks are assigned work. It depends on the system's thread scheduler and the number of parallel cores/threads in the cpu. Typically, helper tasks use the same script, so it shouldn't matter which task works on which jobs in the queue.

Tasks include puts-like debugging and an error handler

Tasks come with utility procedures for debugging, such as a puts-like statement (putz) that can create a Tk text widget (one per task), and a built-in error catcher that will display a tk_messageBox.

Note: due to a Tk bug on linux, putz will not create Tk windows, but rather outputs to stdout using puts, with the taskname prepended. This mode can also be used on windows to output to the console if desired. The bug is reported to be fixed in Tk 8.6.12, though I've not tested this on linux. Setting the t_debug variable to 4 can force linux to create separate Tk windows the same as on windows.

                                                        - - -

Task Procedures

 The 5 primitive Task commands

Create a Task

  • Task taskname ?-once? ?-import_tasks? ?-min_import_tasks? ?import list? script

taskname is the name of a variable that will be assigned the thread id (also called the task id). It is also the name used for some tsv shared variables that the Task system uses internally.

This taskname can also be namespace qualified, e.g. ns::taskname, where a prior namespace eval ns {...} was used in the program.

Note: the taskname is created at global scope unless an explicit namespace is used.

  • Helper Tasks

The taskname can also be a pair of names separated by a slash (e.g. helper1/main). The first name will be used for the taskname, and the second one is the name of a previously created task whose job queue will be shared, instead of creating another one for the new task. See the discussion above in the features section under helper tasks. The tasknames can also be array elements. See below in the small example discussion. Helper task names have the same local/global rule as mentioned in the previous section.

  • Task options

The -once option suppresses the forever while loop normally around the script.

The -import_tasks option is a shorthand way to add a namespace import tasks::* at the right place in the script imported into the thread. This option can be abbreviated to -import.

The -min_import_tasks option can be used to limit the tasks procedures imported to the thread to only twait and treturn. Others, such as putz can still be imported as described in the next section.

  • Importing Procedures

The import list is a list of procs that can be imported into the task's interpreter. Each list element can be a pattern as described in the [info procs] command. This can include a namespace qualifier, e.g. starkit::*, which would include all the procs from the starkit namespace. When an item includes a namespace, that namespace is eval'd, e.g. namespace eval starkit {namespace export *}, which will define the namespace and also export all the procs defined in that namespace (even if they haven't yet been defined).

  • Initialization Code

Additionally, each item in the import list can be of the form {-tcl command}, where the - is removed and the remaining item is inserted as a command at the point of the imported procedures. These commands are inserted before the script (and not inside the added forever loop), but after all the tasks::* procedures. The order of insertion is left to right. For example, suppose you need to include a package command (plus assorted other procs and inits), but don't want to place it inside the added forever loop:

Task taskname -import_tasks [list {-package require math} mystuff::* someproc -$bigscript] $script

An alternative is to provide one's own loop as shown in the next section.

  • Task Script

The script can be supplied in braces or with a $script variable, depending on preference. A variable is normally used with helper tasks when each uses the same script.

The script is situated at global level in a new thread in a new interpreter. Any simple (non-namespace, or proc/method local) variables in the script are global. While tasks are here often shown as simply twait/treturn pairs at the top and bottom of a task's script to mimic a procedure, that structure is not required.

For example, a task script can include proc's, TCLOO code, namespaces etc, and can specify -once. Tasks are usually sequential code, and not event driven and so it is often beneficial to choose a structure like this:

Task mytask -once -import_tasks ?import list? {
    package require something
    proc test {arg1 arg2 args} {
        # test something
    }

    set someglobal "some init value"

    # some TCLOO code perhaps

    while 1 {
        twait args
        # do a job
        treturn "some result"
    }
}
Another layout is presented in the producer/consumer example below, where 2 pairs of twait/treturn are used, one for initialization, and then a looped pair for job requests.

  • Event driven Tasks

A task can be event driven like a standard thread script and just make use of the Task framework to import procedures, use the debugging putz proc, and the error catching code. It can even use twait and treturn before using thread::wait to go into the thread event loop and wait for thread::send calls.

The following example imports the Time proc and defines an event triggered proc callback. While this is not much different than simply using thread::create and thread::send it does demonstrate that tasks are quite compatible with the threads package and can be used in many ways, such as this hybrid example which uses a single twait to get some init data, and also makes use of the debugging putz proc.

    package require tasks
    namespace import tasks::*
    proc Time {} {                        ;# a timestamp with 3 digit ms resolution
        set ms [clock milliseconds]
        set secs [expr {   $ms / 1000   }]
        set ms [string range $ms end-2 end]
        return [string range [clock format $secs] 11 18].$ms
    }
    Task evt -import_tasks -once {Time} {
        proc callback {args} {            ;# a proc to call as an event
            putz "[Time] inside callback with args = |$args|"
            return "send back args = |$args|"
        }
        set t_debug 2                     ;# on windows, putz goes to console, stdout on linux
        twait -> init1 init2              ;# get some args (optional)
        putz "[Time] twait received: init1= |$init1| init2= |$init2| "
        thread::wait                      ;# now becomes an event driven task
    }
    tcall $evt -async {} one two          ;# -async task call; use {} for an unneeded result variable
    wait 2000
    thread::send -async $evt [list callback three four five]   ;# use standard thread::send -async
    wait 1000
    set out [thread::send $evt [list callback 3 4 5]]          ;# not -async, so returns a value
    wait 500
    puts "[Time] out= |$out| "

# output:    
#   evt        ! 09:51:33.656 twait received: init1= |one| init2= |two| 
#   evt        ! 09:51:35.656 inside callback with args = |three four five|
#   evt        ! 09:51:36.656 inside callback with args = |3 4 5|
#   09:51:37.157 out= |send back args = |3 4 5||

  • Tasks can function like monitors

If a task (or any thread) calls thread::wait and event code re-enters the event loop (e.g. with vwait or update) then pending events can be delivered before the previous ones have completed.

When using twait/treturn and no thread::send calls, one can use sequential code that includes vwait delays and/or update calls and the task will not be interrupted. Each tcall (whether -async or not) will run strictly fifo. Thus Tasks operate like a monitor and can be used for synchronization where that method is desired.

The buffer task in the producer / consumer example below can include sequential delays (calling on the wait procedure that uses vwait or be paused by a GUI checkbox) and not be interrupted and re-entered which could cause data corruption.

  • Reviewing the generated Script

Tasks add some additional code even when -once is used; in particular, there are 2 catches around the script. To see the actual script that is generated, say for the mytask task from above, one can use the following command from a terminal:

puts [tset mytask script]

  • task environment

Task defines several global variables inside the task's thread interpreter.

    t_pid              the thread id that created the task, this need not be the caller's id
    t_name             the task's task name
    t_debug            this controls putz output, see the comments before the putz proc code
    t_debug_contents   this controls debug output of the queue contents, default is end (for lrange 0 end) -1 = none, 0 = just 1, ...
    t_twait_timeout    this controls how often twait does an update while waiting, in ms default=50

There are 2 checkboxes on the Tk putz window, controlled by 2 globals.

    t_putz_output       a toggle to turn putz output on/off
    t_task_pause        a toggle to pause the task, see tpause_check

The Task system uses 3 tsv shared variables:

    main                Stores the thread id of the main thread
    tvar                all the shared variables about a task
    tids                all the task ids with their task names

Each task saves its state in the shared tsv variable tvar. There are 12 elements per task, each named taskname,variable. These can be examined using the tdump procedure.

  taskname,cond           the conditional variable handle
          ,count          count of times that twait returned a job 
          ,error          the text of a caught error 
          ,gvar           the global return variable name
          ,mutex          the mutex handle
          ,pid            the thread id of the caller (note: not always the same as t_pid)
          ,queue          the job queue 
          ,result         the result of the last treturn
          ,script         the modified actual script being used by the task
          ,share          the name of the task that is the main task in a shared group (or null) 
          ,tid            the thread id of this task 
          ,user           a user available item which has no current use 
  • task caller id

The id of the caller of a task is saved in a tsv shared variable. To access that, after a twait call, one can use the following:

    set caller_id [tset $::t_name pid]

  • task exit or error

When a task exits, either from an error or by design, the task shared variables are not removed and can be examined using the tdump command. In some cases, an error might not be reported, but its text might be found in the tsv shared variable name,error, which can be viewed using the tdump or tset commands.

A possible future enhancement is to provide a way to clean up and create the same task again; currently, an error will be generated if the same task name is used in more than one Task command.

Task returns the newly created task's thread id (taskid).

Call a Task and Wait till Done

  • tcall $taskname ?-async? rGlobal ?<-? ?arg ...?
  • tvwait rGlobal ?taskid?

tcall sends a message to the task id specified by the contents of $taskname, passing an argument list; the provided rGlobal variable is set on completion. rGlobal can also be an array element, e.g. rGlobal($element), and can optionally include a namespace qualifier. tcall will add :: in front of rGlobal if not specified in the tcall.

tcall optionally allows, as a visual cue, a text string beginning with <- (e.g. <---- is also ok). If present, the argument is ignored. For example,

tcall $test -async rvar <-- 1 2 3

If the first real argument itself begins with <-, then this visual-cue argument becomes required, so the real argument is not mistaken for the cue and discarded.

tcall returns the result when called synchronously (i.e. no -async) and a 1 if -async. If the $taskname argument does not contain a valid thread id, an error will be thrown.

For consistency with thread::send, the option -async may be specified before the $taskname arg, i.e.

tcall -async $taskname ...

tvwait will wait for the completion of the tcall job and receive the return value. If the optional taskid is used, then tvwait will also verify that the task is still running (exists) to avoid waiting forever.

A tvwait is not required; there is no limit to the size of the input queue. In some cases it can be useful to never tvwait and do all tcall's with -async.

For example, tasks are being used in a remote control program for a streaming device that requires delays, or commands will be dropped. It is much easier to code sequentially using send and wait calls in sequence than to use event callbacks. The job queue is used to buffer up requests until they can be serviced.
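As a sketch of that pattern (the task name remote and the helper send_to_device are hypothetical, not part of the module), a series of -async tcall's can simply buffer up in the job queue:

```tcl
# Hypothetical task serving a rate-limited device; sketch only.
# Each job is sent to the device, then the task pauses before the next one.
Task remote {
    twait -> cmd                ;# wait for a job; cmd gets the first argument
    send_to_device $cmd         ;# hypothetical helper that talks to the device
    wait 250                    ;# honor the required delay between commands
    treturn ok                  ;# reply (no caller is waiting, which is fine)
}

# queue up several commands back to back; no tvwait is ever done -
# the job queue buffers the requests until the task can service them
tcall $remote -async r1 <- power_on
tcall $remote -async r2 <- volume_up
tcall $remote -async r3 <- volume_up
```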

These 2 procedures can be used from the main thread or any task. Note: Tasks are not re-entrant, and so should not call themselves recursively. A task uses only a single set of tsv shared variables to save the rGlobal name and the arglist, and these are not stacked the way a recursive proc's frames are.

Wait for a Call and Return a Value

  • twait ?argv? ?varname1 varname2 ..?
  • treturn result

This pair is used inside of a task and is analogous to a procedure's arglist and return value. However, unlike a procedure, treturn does not end the task nor pop a stack frame. Instead, this pair is normally (and by default) processed in a forever loop. That is optional, however, and a task can have code that follows a treturn. See the producer / consumer example, which has 2 sets of twait/treturn statements: the first pair gets initialization data, and the second pair is looped to retrieve job requests.

twait waits for work to do and gets the job arglist. It can then optionally lassign argv (the arglist) to the variables varname1 varname2 ...

If the argv value is not needed, it is sometimes set to -> for visibility purposes. For example,

twait -> x y z

which would set the variable -> to all the arguments (like args in a proc) and then lassign them to the arguments x, y, and z. The full args are still available in the variable -> if needed. This usage is optional, and any other variable name can be substituted for the -> such as args or argv.

A task can issue a thread::send to itself (or receive one from another task) which will cause an event. However, events in a task (or any tcl thread) work the same as in a single threaded program, and must enter the event loop at some point to be serviced. twait does this every 50 ms while waiting for job requests. The 50 ms is the default, see the global t_twait_timeout in the Task environment section to change it.

treturn signals the caller and returns a result. If the caller is waiting with a tvwait, it will resume. A treturn is not required unless the tcall-er is going to tvwait for a result.
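A minimal sketch of the pair inside a task's script (by default the module wraps the script in a forever loop):

```tcl
# wait for a tcall, lassign its two arguments, and reply with their sum
twait argv a b              ;# argv gets the whole arglist; a and b are lassign'ed
treturn [expr {$a + $b}]    ;# resumes the caller's tvwait, then loops back to twait
```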

                                              - - -

 Tproc and tgroup


  • tgroup name ?-option?

This is a task builder which can create a group of tasks sharing the same queue. See the discussion and examples below, and also the comments before this proc in the code, for more information. Here is a summary of the tgroup sub-commands, with a more detailed discussion in the examples section.

     tgroup groupname -tasks ?-?N  ?task options? ?list of imports? script (same options as the Task command)

     tgroup groupname -call    {arglist 1} {arglist 2} ... (only 1 of these per -wait)
     tgroup groupname -foreach {arglist 1} {arglist 2} ... (can be repeated followed by a -wait )

     tgroup groupname -wait all 
     tgroup groupname -run  {arglist 1} {arglist 2} ...    (does a -reset, -foreach, -wait all)

     tgroup groupname -add_tasks   ?N?                     (N is the number of tasks to add to the group, N defaults to 1)
     tgroup groupname -reset                               (reset indices to 0, clears old args and results from group array)

     groupname(rvar,*)                                     (the result variable for each job)  
     groupname(args,*)                                     (the arglist for each job)  

The -tasks option is used only once to create the tasks. Afterwards, multiple calls with -call, -foreach, and -run can be used. However, these should not be mixed. Each is somewhat different. If N is negative, then abs(N) tasks are still created, but a trace is put on each result.

The -call must be followed by a -wait, and before another -call/-wait pair is used, a -reset is required. The -call will repeat the arglists if there are fewer than the number of tasks, and throws an error if there are more.

The -call is most useful when only 1 arglist is provided. It will then try to call each task in the group, passing in that same arglist. A task can use its task name ($t_name) to determine which one it is, if desired. The task names will be name0, name1, ... name(n-1), so the number can be easily extracted using string range or a regexp command. Note: This is not perfect. If a task finishes extremely quickly, it can return to the queue and grab another job before a different task gets any work at all. There should be some starvation checks, but they don't exist yet; behavior also depends on how many cores are available and on the thread scheduler. This option is a work in progress.

The -foreach can be followed by more -foreach calls, and will continue to increment the index, and so the results will accumulate, even if a -wait all is issued. After a -reset further -foreach calls will begin over at index 0 and prior results are cleared.

A -run does a -reset, -foreach, and a -wait all in sequence. So, subsequent -run calls will all use indices 0...N-1 for the number of arglists in the -run.

Traces are optional by using a negative value for -tasks. Note that trace callbacks occur as the results become available, without the need to do a -wait all. Each trace will be a write variable trace that calls a procedure named the same as the groupname, and is delivered with 3 values, the name of the array, the element of the array, and an operation of write. To retrieve the value inside a trace callback, one does this:

proc groupname {aname element operation} { ;# uses groupname as the trace proc to call with 3 args
    set value [set ${aname}($element)]     ;# fetch the newly written result
}

Note the required braces for ${aname} here.
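Putting this together, here is a sketch of a traced group (the group name sums and its script are made up for illustration):

```tcl
# the trace proc must have the same name as the group
proc sums {aname element operation} {
    putz "got $element = [set ${aname}($element)]"   ;# fires as each result arrives
}
# -tasks -2 creates abs(-2) = 2 tasks and puts a write trace on each result variable
tgroup sums -tasks -2 {
    twait argv
    treturn [tcl::mathop::+ {*}$argv]
}
tgroup sums -run {1 2 3} {4 5 6}    ;# traces fire as the results come in
```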

The results are stored in an array of the same name as the tgroup name (which is global or an explicit namespace can be specified). The arglists are also available in the array, plus some bookkeeping used by tgroup.

The -wait all tests the resultant variables for being set. If some or all values are already set, there will be no waiting for those; waiting occurs only for the values not yet set (knowable because the variables were unset before being queued to a task). Values are set by event callbacks as each called task issues a treturn back to the caller task/thread.

The -wait all does not do a -reset. It can be used multiple times with some additional -foreach sub-commands. It will wait for all current jobs to be finished; then more jobs can be added, and another -wait all can be done to wait for the new batch to be done. For example, this code will process a total of 1000 jobs, but after each batch of 10 is queued, it will wait till they finish and then wait 10 seconds before queuing up the next 10 jobs. Finally, after the loop is done, it waits for all to finish.

    for {set m 0} {$m < 1000 } {incr m} {
        tgroup  sum_task -foreach [list 100  $m ]
        if { $m % 10  == 0} {
            tgroup  sum_task -wait all
            wait 10000
        }
    }
    tgroup  sum_task -wait all

The sub-command -add_tasks is new in the 1.13 version of tasks, which is only available at the github website. [L8 ]

This can dynamically add tasks (threads) to a group that was created with Tproc or tgroup. The default is to add 1 task. The new task will be groupnameN where N is the next higher number.

Note: -add_tasks must be run in the same thread and interpreter as the group being added to. This is because the tasks all share a groupname array that holds the results, and this array is only available in the thread that issued the Tproc or tgroup calls. The new tasks will all use the script that the main task in the group uses, which is typically the same for all tasks. The global variable t_pid will be equal to the tid of the added task, rather than the tid of the creator of the Tproc or tgroup tasks. The new tasks will immediately attempt to retrieve jobs from the shared job queue as soon as they run the twait command.
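For example, a sketch (assuming a group sum_task was created earlier with Tproc or tgroup):

```tcl
# grow the group by 2 tasks; they will be named sum_taskN, sum_taskN+1
# and will start pulling jobs from the shared queue at their first twait
tgroup sum_task -add_tasks 2
tgroup sum_task -add_tasks      ;# N defaults to 1, adding one more task
```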

  • Tproc name args body ?-options?

This transforms a regular proc into a set of tasks that share a queue. It uses tgroup to create the tasks, and the options are the same as for Task (but with no script at the end). It also creates a regular proc using the given name, args, and body.

The first 2 args after the body default to -tasks 4. To add additional options, -tasks N must first be specified explicitly. After a Tproc is created, one can use the tgroup sub-commands such as -run, -foreach, and -wait the same as if a tgroup -tasks command had been issued (because Tproc does just that).

Tproc first creates a proc using the name, args, and body of the procedure by calling proc. It then uses the tgroup command to create N tasks that each import the proc and create a thread script (normally in a loop) that looks like the following,

    twait argv
    treturn [name {*}$argv]

The options -once, -import_tasks, and -min_import_tasks are allowed here. If these are specified, then -tasks N must be explicit and occur first. All 3 of these options are sent to the tgroup -tasks command, with name replaced by the Tproc name.

This method passes any arglist along to the actual proc, which will verify that the number of arguments agrees, supporting required and defaulted arguments and the special argument args. If there is an error, it will be caught and reported in a message box, just as with any proc.

  • Tproc creates a proc and N tasks

A Tproc creates both a local proc and a tgroup of tasks. The local proc is called the same as any proc; the task group is called using tgroup. When the group is called, any of the main or helper tasks that are not busy can service the request.

A Tproc group can also be called using tcall by using the $name0 variable, where name is used to create a group of tasks (as described in tgroup). For example, below are both methods for calling a Tproc,

  • Using the Tproc generated code
Tproc test {arg1 arg2 {arg3 100}} {
    set value ...
    return $value
} -tasks 2

##################### using tcall ###########################

# a sync call is just an -async and a tvwait in one statement

tcall $test0 rvalue <-  one two                  ;#arg3 will default to 100

# or call async with unique (a requirement) return value variables 

tcall $test0 -async rvalue1 <-  one two 
tcall $test0 -async rvalue2 <-  three four 200

# do something as the 2 run, then wait for both to finish

tvwait rvalue1
tvwait rvalue2

##################### using tgroup ##########################

# using tgroup and -run, both the above can be specified in a single call with the 2 arglists:

tgroup test -run {one two} {three four 200}     ;# does a -reset, then runs both -async, then a -wait all 

# tgroup stores the results in an array of the same name as the Tproc, and can be listed:

parray test rvar,*                              ;# * will be 0..n-1 for N jobs

# one can also use tgroup -foreach to run one at at time -async (e.g. see the web extract example)

tgroup test -foreach {one two}
tgroup test -foreach {three four 200}
tgroup test -wait all

# note that -foreach can also have more than one arglist, in any combination

tgroup test -foreach {one two} {three four 200}
tgroup test -foreach {five six}
tgroup test -wait all                           ;# waits for all 3

# a tgroup -reset can be used to do another set of -foreach (the results will start over at index rvar,0)

Tproc uses tgroup, which creates a group of tasks named test0, test1, ... testN-1 for N tasks (as specified in -tasks N with tgroup). test0 is the main task, with test1 ... testN-1 as helper tasks all sharing the test0 queue. Note, these N test variables are global, or can be placed in a specified namespace.

A common error using tcall is to forget the zero'th name here, $test0, and instead use $test. This will cause an error referring to $test with the message that test is an array, because tgroup uses the name (without a number) to save state information.



  • putz ?-nonewline? "text" ?color?

On windows, this will create a Tk window for debug output, one per Task. There are several buttons and checkboxes for turning the output on/off, clearing the text, and pausing the task. The large button, which also displays some Task info, is a quick exit of the program.

putz windows now include a puts wrapper. This is in the latest github code only, 1.13c (it's 1.13, but a comment has the added c). The wrapper is also needed to use -nonewline. See the comment block in the code, which explains how the wrapper handles puts calls. The quick summary is that one can now use puts freely in tasks, including calls with stdout or stderr as the i/o channel and using -nonewline. When the channel is something else, the puts wrapper does not use putz, but instead passes the call to the saved and renamed original puts. This happens when doing a package require command, for example.

See the tags in the code for the current set of colors, and add additional ones for your own use. Color and/or font changes, as well as foreground/background colors, can be specified.

On Linux, this will go to stdout using a puts. If called from the main thread, this will also translate to a puts call.

The t_debug global variable controls putz output with an option for a debug trace. It can have 7 values,

-1 = no putz output
0  = no debug trace (default is 0x0)
1  = debug trace
2  = no debug trace, no tk windows; on linux uses puts, on windows writes to the console (in main thread)
3  = same as 2, but with a debug trace
4  = same as 0, but overrides the linux forcing and permits a tk window (n.b. this can cause tcl/tk to crash)
5  = same as 1, but overrides the linux forcing (ditto)

When the platform is not windows, a value of 0 or 1 is changed to 2 or 3 respectively. On windows, if set to 2 or 3, output goes to the console, and if the color is red (or anything except normal) then a puts to stderr is used (console stderr writes are red on windows). A value of 4 or 5 overrides this forcing to use puts, permitting a tk window instead. Until this is fixed, tcl/tk can crash if more than one thread tries to use Tk, so this should only be used during debugging (on linux). NOTE: said to be fixed in 8.6.12 (by adding locking calls).

When doing a trace, the queue contents are shown, which can be quite large. Another global, t_debug_contents, can be used to limit the output; its value is used as the to argument to an lrange, with a default of end. A value of -1 will suppress the contents totally, and other values, e.g. 3, can be used to limit the output to 4 pending queue items.

With 1.10, putz creates a toplevel window instead of using . so user programs can use the normal . toplevel. This means the . window will be created as well, so if t_debug is hex (i.e. begins with 0x) then the . window will be withdrawn. To override that, a value of 0..5 can be used. Note putz can detect whether it was given, say, 0x0 vs. 0 by using string operations. Admittedly this is ugly, but it saves using another global.

Colors can also be added by using tag configure commands from the task. NOTE: putz must be run at least 1 time first, and t_debug must be either 0 or 1. After this, there will be a putz toplevel window .taskdebug.ttttt which can be modified. For example:

set color_tag { # this script will be imported AFTER the first putz in the import list below
    .taskdebug.ttttt tag configure lblue  -foreground black -background LightBlue -font {Arial 14}
}
Tproc test {arg1 arg2} {
    putz "arg1= |$arg1| arg2= |$arg2| " lblue
    return "args were $arg1 and $arg2"

} -tasks 1 -import_tasks [list {-putz ""} -$color_tag]

tgroup test -run {one two} {three four}
  • tset taskname element ?val?

Set a Task shared variable value, or get it if ?val? is omitted.
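For example (assuming a task named sumserver exists, as in the small example below):

```tcl
set caller [tset sumserver pid]     ;# get the pid element for task sumserver
tset sumserver user "checkpoint 1"  ;# set the spare user element, visible via tdump
```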

  • tdump ?pat? ?max-width?

Debug dump of the shared variable data used by tasks. pat is a regex that defaults to .* and max-width defaults to 90.

If the pattern starts with -pat or +pat, the - or + is removed, and then only the matched lines (i.e. not the extra task info) will be output.

If the pattern starts with +pat there will be no putz output; instead, the command returns a list of 2-element lists, where the 2 items are the shared variable element (e.g. task1,count) and its value, with no maximum width.

This can be used from the main thread, but also from a task, since it uses putz for its output.

  • tname taskid

taskid (which is also the thread id) will be looked up and the taskname returned.

  • tpause_check

This is checked in twait, but can also be used by the task code. It checks the t_task_pause global, which is initialized to 0, and does a non-busy wait for the global to change back to 0 after it has been set to 1 (by clicking the checkbox widget on, or by setting the variable in code). It is connected to the pause checkbox in the Tk window that putz creates.

A task can use the initialization option in a task's import list to set this to 1 (which changes the default of 0 set earlier) to start a task paused. This can be done with an import list entry of: {-set ::t_task_pause 1}. Note that if t_debug is set to 2 or 3 (or running on linux) there will be no tk window with a checkbox to clear the pause. Only a thread::send back to the task to change this to 0 would allow the task to resume.

This proc also issues a putz when the task is paused. If t_debug is 0 or 1 and running on windows, a tk window will open if not already.


  • comma number

Adds commas to a positive integer; only used in testing.

  • wait ms

A non-busy wait using vwait and a global variable.

  • xwait ms

A busy wait, for easily testing compute-bound threads. The time is approximate.
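Plausible sketches consistent with the descriptions above (the actual module code may differ):

```tcl
# wait: non-busy - schedule a global variable write, then vwait on it,
# which services the event loop while waiting
proc wait {ms} {
    after $ms [list set ::_wait_done 1]
    vwait ::_wait_done
}

# xwait: busy - spin on the clock to simulate a compute-bound task;
# the elapsed time is approximate
proc xwait {ms} {
    set end [expr {[clock milliseconds] + $ms}]
    while {[clock milliseconds] < $end} {}
}
```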

  • tlg ?pattern? ?delimiter? ?max width?

This will dump global variables using putz. If () appears in the output, the variable is an array and its indices are output. Newlines are changed to a unicode character. The delimiter can be unicode, e.g. \u250b for a triple-dot vertical line. The maximum width displayed defaults to 80. This can be sent to any task to dump its global variables.

When a task starts, a global variable is set to the current list of globals from [info global]. When the tlg command is used with no arguments, then it will list only variables that are not in the list. To list all variables, one can specify the argument pattern as a *.

Note: since each task resides in a separate thread and thus separate interpreter, global variables in a task are not global to the entire program. Any global variables in a task are not visible to the main thread or other tasks.

  • tla array_name ?pattern? ?reverse?

This is similar to parray, but uses putz to dump its output. The reverse arg can be -1, 0, or 1; 0 is the default. If -1 or 1, the pattern is matched against the value instead of the index: 1 lists matches and -1 lists those that do not match. Pattern is a string match glob style pattern.

  • twidgets

This is a widget tree display tool for a task. It requires BWidgets. It has 3 buttons: Open, Close, and Refresh. Open expands all the tree levels, and Close reverses that. Refresh restarts the tool, in case new widgets have been created since it was started. It can be run in any task to give a look at the widgets there, and can be closed and re-run later.

Clicking on any widget item will use putz to output a 2 column list of all the configure attributes.

This can also be used in the main thread as well, but requires a value for t_debug be set first. To use in the main: set ::t_debug 0 ; twidgets.

                                                    - - -

 Task Monitor and Send Commands

Task Monitor

  • task_monitor ?table-rows? ?font-size? ?repos-rows? ?putz-window? ?maxsize-result?
  • task_monitor kill

This starts the task monitor in a separate thread and toplevel window. All the arguments are optional; use {} for any argument to take its default (if you need to specify later ones). The monitor can be closed and restarted, or killed by command. Killing stops only the monitor thread, not the process.

table-rows        The initial number of rows to use in the task data table (expands as needed)
font-size         The font size initially, can be adjusted at run time
repos-rows        The maximum number of rows to use when repositioning putz windows
putz-window       Monitor will also open a putz window if yes/true/1 etc., default is no
maxsize-result    Maximum number of characters copied to the result column, default 500

The Task monitor is a separate task (thread) that one can run to bring up a window that presents a table format for the Tasks in the program. Here is a sample format of what it looks like:

Task row count Q-len rvar share result error caller putz user
_taskmonitor 1 1 ::tasks::mon_start {max rows: 10} mainthread args: 10 9
sum_task0 2 5 7 ::sum_task(rvar,19) 119 mainthread 100 19
sum_task1 3 4 ::sum_task(rvar,20) sum_task0 120 mainthread 100 20
sum_task2 4 6 ::sum_task(rvar,21) sum_task0 117 mainthread 100 17

The column headers can be left or right mouse clicked to, respectively, narrow or widen columns. Shift clicking increases the adjustment by 5 times (e.g. 3 vs 15 chars wider/narrower per click). The scrollbars can be operated with the mousewheel for vertical scrolling, and with the shift-key for horizontal scrolling, if your version of tcl/tk supports that.

Above the data table are several controls. Each spinbox should respond to the mousewheel.

Refresh     [ # ] spinbox - time in seconds (steps of .5)
Font-size   [ # ] spinbox - integers 6-20

X-width     [ # ] spinbox - for repositioning
Y-height    [ # ] spinbox - for repositioning
Reposition  Button - Reposition all putz windows

On          Button - set pause checkbox in putz windows
Off         Button - clear pause checkbox in putz windows
Send        Button - Runs the send command task
Color       [ ] checkbox - to briefly color changed values
On-Top      [ ] checkbox - Monitor will stay on top of other windows
Pause       [ ] checkbox - Monitor will not update while paused
Exit        Button - Exits the program

Debug Window Repositioning

  • repos ?WxH? ?rows? ?x-offset? ?y-offset?

This will find all the putz windows, and re-align and position them. If the first argument is given (only) then the number of rows will be chosen based on a presumed screen size of 1080p or greater. If the rows argument is given (and not null) then that will be the number of rows. The last two arguments can be used to provide a different spacing between windows.

The task monitor has a button to call repos and can supply width and height arguments from 2 spinboxes.

Send Command

  • send_command

This will open a small window (in its own task) with 2 text entry boxes, that makes it easy to send commands (using thread::send) to any (or all) task(s).

taskname (or *pattern)    command to send to 1 or more tasks

The first text entry is for the name of the task(s) to send a command to, which can be entered manually or using the Task menu. If manually entered, it can be a string match pattern and each matching task will be sent the same command.

The second entry is the command to send. Both text boxes support up/down arrow (or mousewheel) for a history, and there are 3 additional buttons that can be used. The Send button is the same as doing a <return> followed by an <up> (in the send entry) and so can be used to send a command multiple times easily.

This entry also accepts <tab> for command and variable expansion, similar to the windows console, and <escape> for bracketing completion of square brackets, curly braces, and parens. <escape> fills in one, and shift-escape fills in all. After filling, it selects the text between the bracket pairs. The F1 key can be used to extend the selection one character on both sides.

Note: the <tab> expansion requires either a single task name or tid in the taskname entry. If there are wildcards it will lookup in the first task.

The send command window has 3 menu columns:

Task: includes the tasks names of all tasks (when the command first starts up). This fills in the taskname entry. If new tasks have started since the command was issued, this menu will refresh itself (it checks every 2 seconds for new tasks).

Commands: This will set the second entry to various commands. See the descriptions of the tlg and tla commands. The widget tree command can be used to launch a widget display utility inside any task. This and the Task menu have tearoff options. Several commands just set the command entry so it can be edited and then sent; the putz commands and the widget command send immediately as well.

The menu command, Lookup with browser, will launch the default browser on the platform (windows or linux) and pass the text in the second entry to the https://www.magicsplat.com/tcl-docs/docindex.html website.

Extra: This includes a manual refresh of the Task menu and an option to keep the send command window on top.

Note, the send uses a thread::send, so the task must enter the event loop (or use thread::wait) to process a command. If the task only does pure compute, no events can be processed while that compute is ongoing. A twait command checks the event queue periodically, so that counts as being in the event loop.

The widget tree can be closed and run again; this also applies to the putz windows. However, the task monitor and send command tasks should not be closed; rather, use minimize to hide them, since these are tasks and they can't be restarted (currently - a wish list item is to allow this). This utility requires the BWidgets package.

Trying out the examples

To demo the examples, download the source from the page referenced below, then copy an example to a file and run it with tclsh or wish. The examples require Threads and Tk.

Note: On windows, one should also include a [console show] command to see the output.

Warning: If using the single file 8.7 (windows) builds, there will be a silent error on the examples that do a package require math. It appears that these builds do not fully set up auto_path. If the thread exits from this error, one should use tdump to see if the error attribute shows this to be an error during task/thread creation.

Code source and how to load

 source and loading

Due to wiki size limitations (a history diff was no longer possible), the source code has been moved.

It's here now: https://github.com/rocketship88/Tasks.git on github.

The code source (currently tasks-1.13.tm) is a single file that can be loaded in several ways.

  • using the source command

Place the file into a location of your choosing and use the source command.

    source path/to/tasks-1.13.tm
  • as a module file with the .tm extension

Use the tcl::tm::path add command and then do a package require. If it's named: some/path/to/tasks-1.13.tm use these 2 statements:

    tcl::tm::path add some/path/to
    package require tasks
  • installing it system wide as a module

There will be a directory on the system, say,


where you can place the file. The command tcl::tm::path list can be used to find a known directory. The module needs to be named tasks-1.13.tm, and you will need su privileges to copy the file there.

There may be other modules, such as http in that directory as well. Give the file the same permissions. Then you only need to do

    package require tasks 

 Examples using only the 5 primitives

Small example

This example runs the proc sum in a separate thread, perhaps to keep the GUI responsive. It is called first synchronously, and then a second time asynchronously.

    package require tasks
    namespace import tasks::*                ;# easiest to just import them all, but could limit to tasks::\[Ttp]* etc.

    proc sum args {foreach arg $args {incr s $arg} ;return $s} ;# sum the arglist items
    Task sumserver  -import_tasks {sum} {   ;# import all the tasks::* and also sum, this is a repeating task
            set t_debug 1                   ;# turn on debug tracing
            twait argv                      ;# wait for work and get the args
                set result [sum {*}$argv]   ;# call sum with the input args (note imported proc)
                putz "result= |$result| "   ;# output some debug info
            treturn $result                 ;# send back the results, sets a variable with the result, then repeat from top
    }
#   call synchronously:
    tcall $sumserver resultvar <- 100 200 300
    puts "resultvar= |$resultvar| " 
#   call asynchronously, then wait for it after doing something else 
    tcall $sumserver -async resultvar <- 1 2 3 4 5 6 7
#      ... can do something else while it crunches in the background ...
    tvwait resultvar

    puts "resultvar= |$resultvar| " 
    tdump                                   ;#some debug info about the task(s)

Here's the output on a linux system with the debug turned on, but where the tk window cannot (currently) be used:

$ tclsh wiki-example.tcl
sumserver  ! sname(share name) = || tname(use)= |sumserver| ::t_name(me)= |sumserver| 
sumserver  ! queue not empty len= 1 contents: {tid0x7feb1e590740 ::resultvar {100 200 300}}
sumserver  ! 10:48:03.079 job 1: worklen= |3|     work= |100 200 300| parentid= tid0x7feb1e590740 globalvar= ::resultvar 
sumserver  ! args varname= |argv| rest of variables= ||
sumserver  ! result= |600| 
resultvar= |600| 
sumserver  ! sname(share name) = || tname(use)= |sumserver| ::t_name(me)= |sumserver| 
sumserver  ! queue not empty len= 1 contents: {tid0x7feb1e590740 ::resultvar {1 2 3 4 5 6 7}}
sumserver  ! 10:48:09.153 job 2: worklen= |3|     work= |1 2 3 4 5 6 7| parentid= tid0x7feb1e590740 globalvar= ::resultvar 
sumserver  ! args varname= |argv| rest of variables= ||
sumserver  ! result= |28| 
sumserver  ! sname(share name) = || tname(use)= |sumserver| ::t_name(me)= |sumserver| 
sumserver  ! queue is empty, so wait 0
resultvar= |28| 

------ Task(s) dump -----------------------------------------
tsv::names  = |main tvar tids|
tsv::tids   = |tid0x7feb1d685700 tid0x7feb1e590740|
tid/names   = |tid0x7feb1e590740 mainthread tid0x7feb1d685700 sumserver|
mainthread tid: tid0x7feb1e590740  exists: 1
sumserver  tid: tid0x7feb1d685700  exists: 1
                 (sumserver,cond)     = |cid1| 
                 (sumserver,count)    = |2| 
                 (sumserver,error)    = || 
                 (sumserver,gvar)     = |::resultvar| 
                 (sumserver,mutex)    = |mid0| 
                 (sumserver,pid)      = |tid0x7feb1e590740| 
                 (sumserver,queue)    = || 
                 (sumserver,result)   = |28| 
                 (sumserver,script)   = |#Preamble??namespace eval tasks {}?set ::t_pid tid0x7feb1e590740?set ::t_name sumserver?set| 
                 (sumserver,share)    = || 
                 (sumserver,tid)      = |tid0x7feb1d685700| 
sumserver  ! queue is empty, so wait 50
sumserver  ! queue is empty, so wait 100

This example demonstrates using a namespace and array elements for the helper tasks and the result answers.

    package require tasks
    namespace import tasks::*
    namespace eval foo {}
    set script {
        twait args                         ;# all the args in a single list, no lassign's needed
        treturn [tcl::mathop::+ {*}$args]
    }
    Task foo::main                -import_tasks $script
    Task foo::helper(1)/foo::main -import_tasks $script
    Task foo::helper(2)/foo::main -import_tasks $script
    tcall $foo::main -async foo::answer(1) <- 1 2 3
    tcall $foo::main -async foo::answer(2) <- 1 2 3 4
    tcall $foo::main -async foo::answer(3) <- 1 2 3 4 5
    foreach i {1 2 3} {
        tvwait foo::answer($i)             ;# wait for all 3 jobs to complete
    }
    parray foo::answer
    tdump -result|,tid                     ;# dump values for (last) result and thread id

# results:
#   foo::answer(1) = 6
#   foo::answer(2) = 10
#   foo::answer(3) = 15
#                  (foo::helper(1),result)     = |10| 
#                  (foo::helper(1),tid)        = |tid00004D08| 
#                  (foo::helper(2),result)     = |15| 
#                  (foo::helper(2),tid)        = |tid000037CC| 
#                  (foo::main,result)          = |6| 
#                  (foo::main,tid)             = |tid00003578| 

 Examples and details using tgroup and Tproc

The tgroup procedure is a quick way to launch a group of tasks sharing a single queue, which can then be given sets of arglists to process. They will run in parallel if possible. The general form is:

 to start a new group

     tgroup groupname -tasks ?-?N .... args to a task beginning with the options (used only 1 time to create a task group)

 to process arglists

     tgroup groupname -call    {arglist 1} {arglist 2} ... (only 1 of these per -wait and -reset)
     tgroup groupname -foreach {arglist 1} {arglist 2} ... (can be repeated)

 to wait for all jobs to be done

     tgroup groupname -wait all                            (wait for all async jobs started since a -reset)

 to combine -reset -foreach and -wait all

     tgroup groupname -run  {arglist 1} {arglist 2} ...    (can use multiple times after -tasks)

 resets the counts to state just after using -tasks

     tgroup groupname -reset                               (reset indices to 0, clears old args and results from group array)

The first argument to each tgroup call is the tasks group name which is used to create task names (with helpers): group0, group1/group0, group2/group0, .... for a total of N tasks, as given by the -tasks N option. If it's negative, then use abs(N) and set a flag for traces.
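
The naming scheme can be sketched in plain Tcl. This helper is not part of the package, just an illustration of the names tgroup builds from the -tasks N option and how a negative N doubles as a trace request:

```tcl
# Hypothetical sketch, not the package's code: reproduce the task names
# that "tgroup groupname -tasks N" would create, plus the trace flag.
proc groupTaskNames {groupname N} {
    set trace [expr {$N < 0}]              ;# a negative N requests traces
    set N [expr {abs($N)}]
    set names [list ${groupname}0]         ;# the first task owns the job queue
    for {set i 1} {$i < $N} {incr i} {
        lappend names ${groupname}$i/${groupname}0   ;# helpers share that queue
    }
    return [list $trace $names]
}
```

For example, groupTaskNames fibonacci -4 yields the trace flag 1 and the names fibonacci0 fibonacci1/fibonacci0 fibonacci2/fibonacci0 fibonacci3/fibonacci0.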

The group name can also be a qualified namespace name, e.g. ns::groupname. The namespace must have been created earlier in the program, e.g. namespace eval ns {}.

Note: The group name is always global or an explicit namespace-qualified name. Even when tgroup is used inside a local scope (e.g. inside a proc), the group name remains global, or an explicit namespace can be used.

  • -tasks

The -tasks N option starts N tasks/threads. The arguments that follow this option are identical to those in the Task procedure which follow the taskname argument. This option calls the Task procedure to create the N tasks. It is run just once per task group creation. Once created, sets of -run, -foreach, and -call can be issued, and the results found in the group array. After a -reset, these calls can be re-issued for more results.

  • -foreach and -call

The -foreach option calls the tasks -async for each arglist that follows. This option can be used with one or more arglists (and one or more times) as convenient. The results will be saved in the group array for each arglist until a -reset is used.

The -call option also calls the tasks -async for each arglist that follows. If there are fewer arglists than tasks, the list of arglists is repeated from the first one until N jobs are queued, where N is the number of tasks. Unlike -foreach, it is an error to supply more arglists than the number of tasks created with -tasks N. See the example below that uses a trace option.
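
The repetition rule can be sketched in plain Tcl. This helper is not the package's code, just an illustration of how -call pads the given arglists out to one job per task:

```tcl
# Hypothetical sketch, not the package's code: pad the arglists given to
# -call out to one job per task, repeating from the first arglist.
proc padArglists {arglists ntasks} {
    if {[llength $arglists] > $ntasks} {
        error "-call: more arglists than tasks"   ;# -foreach has no such limit
    }
    set i 0
    while {[llength $arglists] < $ntasks} {
        lappend arglists [lindex $arglists $i]    ;# wrap around from the front
        incr i
    }
    return $arglists
}
```

With 4 tasks and the two arglists {1 2 3} {4 5 6}, this queues {1 2 3} {4 5 6} {1 2 3} {4 5 6}, the same front-repeating pattern visible in the trace example's rvar values.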

With each of these two options, if the trace flag was set, a trace is put on each job result, using the group name as the callback procedure.
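
The callback mechanics are ordinary Tcl variable traces. Here is a minimal plain-Tcl sketch (the array and proc names are made up for illustration, but tgroup likewise gives the callback proc and the group array the same name):

```tcl
# Plain-Tcl sketch of the trace wiring: a write trace on a result element
# invokes a proc with 3 args: the array name, the element, and the
# operation (here always write), as in the tgroup trace callbacks.
proc mygroup {aname element op} {
    lappend ::seen [list $element $op [set ${aname}($element)]]
}
trace add variable ::mygroup(rvar,0) write mygroup
set ::mygroup(rvar,0) 6      ;# a job result arriving fires the trace
```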

  • -wait

The -wait all option will wait for all the work to complete. It is used once after the previous options have been run.

  • -run

The option -run (supplied with args) combines a -reset, -foreach arg arg... and -wait all. All the jobs are run -async. The -reset first clears any args and prior results from the group array. Each run will produce results for the args given to the -run option.

    tgroup fibonacci -tasks 4 $script      ;# create 4 tasks

    tgroup fibonacci -run 10 20 30 40      ;# run 4 jobs and wait
#        ... do something with the 4 results ...
    tgroup fibonacci -run 20 20 30         ;# run 3 more jobs and wait 
#        ... do something with the 3 results ...
    tgroup fibonacci -run 30 20            ;# run 2 more jobs and wait 
#        ... do something with the 2 results ...

#   results for each of the -run calls will be found in the group array (fibonacci in this case)
#   the group array also saves the arglists. 

  • -reset

The -reset option clears out any previous results and args and resets the job count to zero, so further saved results will use indices rvar,* where * = 0 .. n-1 jobs.
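
Conceptually, -reset amounts to something like this plain-Tcl sketch (the counter element name is made up here; the real bookkeeping is internal to tgroup):

```tcl
# Hypothetical sketch, not the package's code: what -reset does to the
# group array, conceptually - clear old args/results, rewind the counter.
proc groupReset {arrname} {
    upvar #0 $arrname grp
    array unset grp args,*        ;# forget the previous arglists
    array unset grp rvar,*        ;# forget the previous results
    set grp(jobs) 0               ;# made-up counter name: next job index is 0
}
```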

This can be used with the -foreach option above, which can be run multiple times, followed by the -wait all option to wait for all the jobs started by a -foreach to be done. It will also reset after a -call and -wait all pair, so another pair can be run on the same tasks created by the -tasks option.

For example, the following will run 3 sets of 2 jobs. It uses the one -tasks setup, then clears any previous results from -foreach and -wait all's and runs 2 jobs. It then loops back 2 more times, clearing the results first and then running another set of 2 jobs. Strictly speaking, the -reset is not needed the first time through the loop, since -tasks already did a reset, but it does no harm.

    tgroup fibonacci -tasks 4 $script             ;# create 4 tasks

# Run 3 sets of 2 jobs each using above 4 tasks

    foreach m {1 2 3} {

        tgroup fibonacci -reset                   ;# reset all counts, args, and results
        tgroup fibonacci -foreach [list 1 $m]     ;# run 1 job that gets sent 2 args
        tgroup fibonacci -foreach [list 2 $m]     ;# another job with 2 args
        tgroup fibonacci -wait all                ;# this waits for the 2 -foreach jobs

        # use results for the above 2 -foreach's found in the array fibonacci(rvar,0..1)
    }


The group name is also used to create a global array with the args and results, plus other info. Results are in rvar,* and arglists in args,* as can be seen in the following example.

In the example below, each task computes the sum of the numbers in its arglist and then does a busy wait we can monitor for cpu time. It uses the -run option, which first does a -reset, then inserts each of the arglists -async into the queue, and then does a -wait for all jobs to complete. On a multi-core system the jobs can run in parallel to improve performance.

    package require tasks
    namespace import tasks::*
;   proc sum {args} {
        xwait 2000                          ;# simulate 2 seconds of heavy compute
        return [tcl::mathop::+ {*}$args]    ;# use the multi arg add operator
    }
    tgroup summer -tasks 4  -import {sum} {   ;# create 4 tasks using the same script
            twait argv
            treturn [sum {*}$argv]
    }
    tgroup summer -run {1 2 3} {4 5 6} {7 8 9} ;# run 3 jobs and wait for them 
    parray summer args*
    puts ""
    parray summer rvar*

The result is:

summer(args,0) = 1 2 3
summer(args,1) = 4 5 6
summer(args,2) = 7 8 9

summer(rvar,0) = 6
summer(rvar,1) = 15
summer(rvar,2) = 24

Here's an example using a trace, enabled by giving a negative N for the number of tasks. The trace uses the group name as the callback proc, which is called with 3 args: varname, element, operation. This example uses -call, so the arglists are repeated to fill out the number of tasks.

    package require tasks
    namespace import tasks::*
;   proc sum {args} {
        lassign $args first
        xwait [expr {   200 * $first   }]   ;# use first number to determine busy wait time
        return [tcl::mathop::+ {*}$args]    ;# use the multi arg add operator
    }
;   proc Time {} {
        set ms [clock milliseconds]
        set secs [expr {   $ms / 1000   }]
        set ms [string range $ms end-2 end]
        return [string range [clock format $secs] 11 18].$ms
    }
    puts "[Time] starting"

    tgroup summer -tasks -8  -import {sum} { ;# setup 8 tasks, set trace flag 
        twait argv
        treturn [sum {*}$argv]
    }
;   proc summer {args} {                     ;# trace callback at each task completion
        lassign $args aname element op
        set value [set ${aname}($element)]
        puts "[Time] [format %-26s |$args| ] aname= |$aname| element= [format %-10s |$element| ] op= |$op| value= |$value| "
    }
    tgroup summer -call {1 2 3} {4 5 6} {7 8 9} {10 11 12} ;# when fewer arglists, repeat at front
    tgroup summer -wait all
    puts ""
    parray summer args*
    puts ""
    parray summer rvar*

# output:    
#    16:33:02.303 starting
#    16:33:03.006 |::::summer rvar,4 write|  aname= |::::summer| element= |rvar,4|   op= |write| value= |6| 
#    16:33:03.042 |::::summer rvar,0 write|  aname= |::::summer| element= |rvar,0|   op= |write| value= |6| 
#    16:33:04.186 |::::summer rvar,5 write|  aname= |::::summer| element= |rvar,5|   op= |write| value= |15| 
#    16:33:04.226 |::::summer rvar,1 write|  aname= |::::summer| element= |rvar,1|   op= |write| value= |15| 
#    16:33:05.033 |::::summer rvar,2 write|  aname= |::::summer| element= |rvar,2|   op= |write| value= |24| 
#    16:33:05.172 |::::summer rvar,6 write|  aname= |::::summer| element= |rvar,6|   op= |write| value= |24| 
#    16:33:05.520 |::::summer rvar,7 write|  aname= |::::summer| element= |rvar,7|   op= |write| value= |33| 
#    16:33:05.824 |::::summer rvar,3 write|  aname= |::::summer| element= |rvar,3|   op= |write| value= |33| 
#    summer(args,0) = 1 2 3
#    summer(args,1) = 4 5 6
#    summer(args,2) = 7 8 9
#    summer(args,3) = 10 11 12
#    summer(args,4) = 1 2 3
#    summer(args,5) = 4 5 6
#    summer(args,6) = 7 8 9
#    summer(args,7) = 10 11 12
#    summer(rvar,0) = 6
#    summer(rvar,1) = 15
#    summer(rvar,2) = 24
#    summer(rvar,3) = 33
#    summer(rvar,4) = 6
#    summer(rvar,5) = 15
#    summer(rvar,6) = 24
#    summer(rvar,7) = 33

This next example compares a sequential vs. a tasks computation of the total number of digits of 100 fibonacci numbers. The tasks method is more code, but it also totals the answers twice: once in a trace callback, and once by traversing the output array of answers. Both the array and the callback have the same name, fibonacci, since tgroup uses the group name both for the output array and for the trace callback.

This example runs 8 tasks (threads) which gain about a 5x speed up over a sequential method. Each task did 12 or 13 jobs (100 / 8) on a 4 core 8 hyperthread intel chip.

    package require tasks
    namespace import tasks::*
    set nCPUs  8

    set first 20001
    set last  20100
    set tm [time { ;# one thread last-first+1 jobs
############################ sequentially  ########################
        package require math
        proc fibonacci_len {n} {
            return [string length [math::fibonacci $n]]
        }
        for {set n $first} {$n <= $last } {incr n} {
            incr total1 [fibonacci_len $n]
        }
    }]
    puts "total1= |[comma $total1]| $tm"
    set tm [time { ;# one thread per cpu, last-first+1 jobs
############################# using tasks #########################
        proc fibonacci {arr element op} {               ;# trace callback
            incr ::total3 [set ${arr}($element)]        ;# sum up each as they finish
        }
        tgroup fibonacci -tasks -$nCPUs -import {fibonacci_len} { ;# set up 1 task per cpu hyperthread
            package require math
            twait ->  n
            treturn [fibonacci_len $n]
        }
        for {set n $first} {$n <= $last } {incr n} {    ;# run the task last-first+1 times
            tgroup fibonacci -foreach $n
        }
        tgroup fibonacci -wait all
        set m -1
        for {set n $first} {$n <= $last } {incr n} {    ;# sum up the answers from the array
            incr total2 $fibonacci(rvar,[incr m])
        }
    }]
    puts "total2= |[comma $total2]| $tm"    
    puts "total3= |[comma $total3]| $tm"    
    tdump -count
#   total1= |419,047| 5426745 microseconds per iteration
#   total2= |419,047| 1163442 microseconds per iteration
#   total3= |419,047| 1163442 microseconds per iteration
#                    (fibonacci0,count)   = |13| 
#                    (fibonacci1,count)   = |13| 
#                    (fibonacci2,count)   = |13| 
#                    (fibonacci3,count)   = |13| 
#                    (fibonacci4,count)   = |12| 
#                    (fibonacci5,count)   = |12| 
#                    (fibonacci6,count)   = |12| 
#                    (fibonacci7,count)   = |12| 

Below is the same computation using Tproc and tgroup. Tproc combines the creation of the fibonacci_len proc along with the tasks which each also import the procedure.

The Tproc options are to create 8 tasks that each minimally import only twait and treturn from tasks::*. A Tproc initializer is used to load the math package outside the created proc fibonacci_len so it's only done once in each task. The commented out puts can be used to view the Tproc generated thread script.

package require tasks
namespace import tasks::*

set first 20001
set last  20100

set tm [lindex [time {                              ;# timing of a tasks run using Tproc/tgroup
    Tproc fibonacci_len {n} {
        return [string length [math::fibonacci $n]]
    } -tasks 8 -min_import_tasks [list {-package require math}]
    for {set n $first} {$n <= $last } {incr n} {    ;# run the task last-first+1 times
        tgroup fibonacci_len -foreach $n
    }
    tgroup fibonacci_len -wait all
    foreach result [array names fibonacci_len rvar,* ] {
        incr total2 $fibonacci_len($result) 
    }
} 1] 0]

#puts [tset fibonacci_len0 script]
puts "total2= |[comma $total2 "_" ]| [comma $tm] microseconds"
tdump -count    

#   total2= |419_047| 1,152,205 microseconds
#                    (fibonacci_len0,count)      = |13| 
#                    (fibonacci_len1,count)      = |13| 
#                    (fibonacci_len2,count)      = |13| 
#                    (fibonacci_len3,count)      = |12| 
#                    (fibonacci_len4,count)      = |12| 
#                    (fibonacci_len5,count)      = |13| 
#                    (fibonacci_len6,count)      = |12| 
#                    (fibonacci_len7,count)      = |12| 

This next example demonstrates that the taskname can be a namespace qualified name, for those averse to using the global namespace. Note also that only 3 jobs ran and so the counts were 1 for 3 tasks and 0 for others.

    package require tasks
    namespace import tasks::*
    proc fibonacci_len {n} {
        return [string length [math::fibonacci $n]]
    }
    namespace eval fib {}
    tgroup fib::fibonacci -tasks 8 -import {fibonacci_len} { ;# set up 1 task per cpu hyperthread
        package require math
        twait ->  n
        treturn [fibonacci_len $n]
    }
    tgroup fib::fibonacci -run 10 20 30

    parray fib::fibonacci rvar*
    tdump -count
#   fib::fibonacci(rvar,0) = 2
#   fib::fibonacci(rvar,1) = 4
#   fib::fibonacci(rvar,2) = 6
#                 (fib::fibonacci0,count) = |1| 
#                 (fib::fibonacci1,count) = |1| 
#                 (fib::fibonacci2,count) = |1| 
#                 (fib::fibonacci3,count) = |0| 
#                 (fib::fibonacci4,count) = |0| 
#                 (fib::fibonacci5,count) = |0| 
#                 (fib::fibonacci6,count) = |0| 
#                 (fib::fibonacci7,count) = |0| 

 Example non-modal tk_messageBox

In a recent comp.lang.tcl posting, the topic was how to use tk_messageBox non-modally. Below is how one can use Tasks to accomplish that.

If we set up only a single task, then any further calls to tk_messageBoxNm will be queued up until the user dismisses the on screen dialog.

However, for this example, we demonstrate how one could permit up to 3 messages on screen, since the rest of the program is no longer blocked. If 3 messages are awaiting acknowledgment, then any further ones will wait in the task group queue.

This example sends 4 messages with a variety of types. It uses a 2 second delay between them (to give the demo a better audio/visual effect). When all 4 are finally dismissed, the tgroup -wait all will return and a message is written to stdout with puts.

    package require tasks
    namespace import tasks::*
    pack [ button  .path    -text "hello" -command {puts hello-world}]   -fill both -expand true
    tgroup  tk_messageBoxNm -tasks 3 -import_tasks [list {-package require Tk; wm withdraw .} ] { ;# each thread needs to load Tk 
        twait argv
        treturn [tk_messageBox {*}$argv]                                                          ;# pass along the args from the user call
    }

#   Demo with 4 messages, foobar1-4, and 4 different messageBox styles.

    foreach {message type} [list   foobar1 ok   foobar2 okcancel   foobar3 yesno   foobar4 yesnocancel] {
        tgroup tk_messageBoxNm -foreach "-message $message -type $type"   ;# -foreach sends in requests -async
        wait 2000                                                         ;# delay 2 seconds so we can hear all the bells 
    }
    tgroup tk_messageBoxNm -wait all                                      ;# now wait for all 4 to have been acknowledged
    puts "after all 4 are acknowledged" 

The next two examples demonstrate how some lower level thread commands can be used along with the Task system.

 Example to combine with thread::send

This example demonstrates how one can combine lower level thread::send calls with Tasks. The task waits for input, but does not block the event queue forever (it issues an update every 50 ms while waiting for tcall input).

    package require tasks
    namespace import tasks::*
    Task test -import  {
        twait argv a1 a2
        putz "argv= |$argv| a1= |$a1| a2= |$a2| "
        treturn   " this is a string with a1= |$a1| a2= |$a2| "
    }
    wait 1000
    thread::send -async $test [list putz "this should open a tk window on windows (or output to stdout on linux)"]
    wait 1000
    tcall   $test result some input
    puts "result= <$result> "

Here's the output (on windows)

The Tk window has this:

this should open a tk window on windows (or output to stdout on linux)
argv= |some input| a1= |some| a2= |input| 

And the console will have this:

result= < this is a string with a1= |some| a2= |input| >  

 Example to combine with thread::cancel

This example demonstrates using thread::cancel calls with Tasks.

When a cancel arrives as an event, it throws a cancel error. If you intend to use a cancel, you can catch it there, or anywhere else the task returns to the event loop, such as the wait call in this code. If the task is canceled before the treturn, the result variable will still be unset. Here's some test code.

    package require tasks
    namespace import tasks::*
    ;   proc sum {args} {
        putz "sum up: $args"
        return [tcl::mathop::+ {*}$args]    ;# use the multi arg add operator
    }
    Task test  -import {sum} {
        set t_debug 2                       ;# putz (no debug) sent back to console or stdout
        if [catch {
            twait argv                      ;# wait for work and get the args
            putz "started the job"
            wait 2000                       ;# this is a non-busy wait that can get canceled too
            putz "after the wait 2000 in the task"
        } err_code] {
            putz "task got an error = $err_code"
            break     ;# out of our hidden while loop, to exit the task/thread
        }
        set result [sum {*}$argv]   ;# call sum with the input args (note imported proc)
        putz "result= |$result| "
        treturn $result                 ;# send back the results, sets a variable with the result
    }
    tcall $test result  <- 5 10 15      ;# show that it works
    puts "result= |$result| "
    tcall $test -async result <- 10 20 30  ;# but this one will end up being cancelled
    wait 1000
    thread::cancel $test
    wait 1000
    tvwait result $test                 ;# since the task may have exited, use this
    if [catch {
        puts "result= |$result| "
    } err_code] {
        puts "error outputing the result: $err_code"
    }
    wait 5000

Here's the output to a windows console:

test       ! started the job
test       ! after the wait 2000 in the task
test       ! sum up: 5 10 15
test       ! result= |30| 
result= |30| 
test       ! started the job
test       ! task got an error = eval canceled
Task: tid0000358C does not exist, while waiting on ::result
error outputing the result: can't read "result": no such variable

------ Task(s) dump -----------------------------------------
tsv::names  = |main tvar tids|
tsv::tids   = |tid0000358C tid00002CE0|
tid/names   = |tid00002CE0 mainthread tid0000358C test|
mainthread tid: tid00002CE0  exists: 1
test       tid: tid0000358C  exists: 0
                 (test,cond)          = |cid1| 
                 (test,count)         = |2| 
                 (test,error)         = || 
                 (test,gvar)          = |::result| 
                 (test,mutex)         = |mid0| 
                 (test,pid)           = |tid00002CE0| 
                 (test,queue)         = || 
                 (test,result)        = |30| 
                 (test,script)        = |#Preamble??namespace eval tasks {}?set ::t_pid tid00002CE0?set ::t_name test?set ::t_debug | 
                 (test,share)         = || 
                 (test,tid)           = |tid0000358C| 

This example demonstrates some additional techniques that can be used with Tasks. It demonstrates the use of a direct thread::send call between tasks.

 Example producer/consumer with a buffer task

Using tasks to experiment with the N-producer/N-consumer single queue (also known as the bounded-buffer) problem.

Here we have several producer and consumer tasks which communicate through the intermediary of a buffer task.

The queue is in buffer's thread interpreter (global) memory. All of the producer/consumer tasks call the buffer task requesting some data off the queue, or to put some data on the queue (fifo). These requests are also queued fifo, as with any task. Each task includes a variable that buffer will thread::send to signal with.

Buffer returns 0/1 indicating the success of a request, which depends on the state of the queue (full, empty, or in between) and which operation (produce or consume) is desired. If not successful, the caller is put on a queue (there is one for each type), which is also stored along with the data queue in buffer's interpreter global memory.

The included variable name is used to signal back to the caller task when the queue status has changed. All waiting tasks (of the corresponding type) are signaled, and they will all then resume from waiting. Each then attempts another try at producing or consuming, but typically only one will be successful, and the others will go back to waiting.

This example runs best on windows, where all the tasks have a window where one can pause any or all of them. On linux, there will still be a bit of a gui, but most of the output will be to stdout. You can pipe through grep however.

    package require tasks
    package require Tk
    namespace import tasks::*
    proc addwaiter {id type var } {
        lappend ::waiters($type) [list $id $var]
    }
    proc signalwaiters {type} {
        while { [llength $::waiters($type)] > 0 } {                  ;# signal all waiters of same "type"
            lassign [lindex $::waiters($type) 0] id varname          ;# task id and the varname to wait on
            putz "signal $type id= |$id| [format %-13s [tname $id]] varname= |$varname| "
            thread::send -async $id [list set ::$varname $type/666]  ;# set the waiter's wait var, that resumes him
            set ::waiters($type) [lrange $::waiters($type) 1 end]    ;# remove the first waiter from queue
        }
    }
    proc dumpq {} {
        putz ""
        foreach type {produce consume} {
            putz "--- $type --- <$::waiters($type)>" green
            foreach item $::waiters($type) {
                lassign $item id var
                putz "id= |$id| [format %-15s [tname $id] ] var= |$var| "
            }
        }
        putz "--- queue   --- <$::queue>\n" green
    }
#   ########################## buffer ###############################################
    Task buffer -import {addwaiter signalwaiters dumpq} {
        twait -> qsize          ;# first time called, we just get our max queue size
        treturn ok              ;#
        putz "Buffer Queue max size= |$qsize| "
        catch {wm geom .taskdebug 1109x495+-5+6}
        set queue {}            ;# this is our queue of data produced and consumed
        set waiters(consume) {} ;# these are the lists of consumers who are blocked
        set waiters(produce) {} ;# and the producers
        package require Tk
        toplevel .top           ;# our queue text entry with the items and the length
        entry   .top.queue    -text "queue"   -textvariable ::queue        -font {courier 18} -width 60
        entry   .top.queue2   -text "length"  -textvariable ::queuesize    -font {courier 18} -width 3
        ttk::labelframe .top.delay    -text "Delay"
        ttk::spinbox .top.delay.sb    -from 0 -to 1000 -increment 25    -textvariable ::delay_buffer -width 5 -font {courier 18}
        pack    .top.delay                    -side right
        pack    .top.queue      -expand true  -side right -fill both
        pack    .top.queue2     -expand true  -side left  -fill both
        pack    .top.delay.sb   -expand true              -fill both
        wm geom .top 1255x62+374+859
        wm attributes .top  -topmost 1
        set ::delay_buffer 0
        while { 1 } {
            wait $::delay_buffer
            twait -> type data var                      ;# called with real requests, if type is consume, data is just a place holder
            set pid [tset buffer pid]                   ;# get our parent (caller) id so we can signal if needed
            putz "$pid [format %-14s [tname $pid] ] type= |$type| data= |$data| var= |$var| " green
            set ::queuesize [llength $queue]            ;# for our queue size text entry
            if       { $type eq "produce" } {
                putz "   produce:  len/max= [llength $queue] / $qsize   <$queue> before insert" red
                if { [llength $queue] >= $qsize } {     ;# is there room for another
                    addwaiter $pid produce $var         ;# no, put this guy on our producer waiting list
                    treturn 0                           ;# return 0 if the queue is full
                } else {
                    lappend queue $data                 ;# add data to the end of the queue (fifo)
                    set ::queuesize [llength $queue]    ;# for our queue size text entry
                    signalwaiters consume               ;# signal all waiting consumers that there's new data available
                    treturn 1                           ;# return 1 when data added to queue successfully
                }
            } elseif { $type eq "consume" } {
                putz "   consume:  len/max= [llength $queue] / $qsize   <$queue> before consume" red
                if { [llength $queue]  == 0} {          ;# is there anything to consume
                    addwaiter $pid consume $var         ;# no, put this guy on our consumer waiting list
                    treturn [list 0 0]                  ;# {code data} - data is just a place holder here
                } else {
                    set data    [lindex $queue 0]       ;# get the next one off the data queue
                    set queue   [lrange $queue 1 end]   ;# now remove that one
                    set ::queuesize [llength $queue]    ;# for our queue size text entry
                    putz "   remove <$data> queue now:    <$queue> "
                    signalwaiters produce               ;# signal all producers there's room now
                    treturn [list 1 $data]              ;# return code 1, and some data
                }
            } elseif { $type eq "dump" } {
                dumpq     ;# (assumed) show the waiter lists and the data queue
                wait 3000 ;# time to look at the dump
            } else {
                error "bad type"
            }
        }
    }
#   ########################### producer ####################################
    set pscript {
        twait -> bid delay geom first   ;# one time we get called with the buffer task id
        treturn ok
        putz "producer init"
        catch {wm geom .taskdebug $geom}
        set data [expr {   $first - 1   }]
        while { 1 } {
            putz "produce during [comma $delay] milliseconds" green
            wait $delay ;# simulate time to produce
            incr data   ;# this is what we produce, just keep incr'ing it
            set try 0   ;# how many tries before we can give this successfully to the buffer task
            while { 1 } {
                unset -nocomplain                   ::prod_full_var ;# in order to wait on a var, we must unset it first
                tcall $bid rvar  <-   produce $data ::prod_full_var ;# sync call to the buffer, with our signal var as a parm
                incr try                                            ;# with multiple producers, we all get a shot at the queue
                if { $rvar } {                                      ;# rvar is 1/0 for success or no room in buffer
                    putz "fits   on try number: $try    data we inserted = |$data|" red
                    break                                           ;# leave this loop and go back to producing a new data item
                } else {
                    putz "no-fit on try number: $try    try again, tvwait on ::prod_full_var"
                    tvwait ::prod_full_var      ;# the buffer task will save prod_full_var and signal us when room in queue
                }
            }
        }
    }
    Task producer  -import $pscript
    Task producer2 -import $pscript
#   ################################# consumer ####################################################
    set cscript {
        twait -> bid delay1 modulo delay2  geom ;# buffer task/thread id, 2 delays with a modulo on delay2
        treturn ok                              ;# we return only to resume the callback that started us going
        putz "consumer init"
        catch {wm geom .taskdebug $geom}
        while { 1 } {
            set try 0
            while { 1 } {
                unset -nocomplain                         ::cons_empty_var
                tcall   $bid     rvar  <-     consume 0   ::cons_empty_var   ;# returns list of {code data} code 0/1
                lassign $::rvar   code        data
                if { $code  } {
                    break           ;# the data was returned from the queue
                } else {            ;# the queue was empty, so we need to wait for a signal after a producer queues some data
                    putz  "Queue empty, wait for a signal try: [incr try]" red
                    tvwait ::cons_empty_var
                }
            }
            putz "Got one $data" red
            wait $delay1
            if { [incr counter] % $modulo == 0 } {
                catch {wm title . "delaying $delay2"}
                wait $delay2
                catch {wm title . "continue"}
            }
        }
    }
    Task consumer  -import  $cscript
    Task consumer2 -import  $cscript
    Task consumer3 -import  $cscript
#    Task consumer4 -import  $cscript
#   ################################# consume 1 button callback ####################################################
    proc consume1 {args} {
        incr ::level
        if { $::level > 1 } {
#           putz "busy in the last one, level = $::level so ignoring this one"
        } else {
            while { 1 } {
                unset -nocomplain                  ::cons_empty_var
                tcall $::buffer rvar  <- consume 0 ::cons_empty_var
                lassign $::rvar code data
                putz "consume request [format %-15s |$::rvar| ] code= |$code| data= |$data| "
                if { $code  } {
                    break           ;# got a data item, leave the retry loop
                } else {
                    tvwait cons_empty_var
                }
            }
            putz "Got one $data" red
        }
        incr ::level -1
    }
#   ##################################### some gui buttons #######################################################
    button  .consume    -text "consume" -command    consume1             ;# do just 1 consume, report to console
    button  .dump       -text "dump"    -command    dump                 ;# dump the queues in the buffer task
    pack    .consume    -fill both      -expand     true
    pack    .dump       -fill both      -expand     true
    wm geom . 369x105+1+857
#   ###################################### start up our tasks ####################################################
#   These next 6 tcalls are sync calls and we don't care about the return values, so we use <- with no result variable
    tcall $::buffer    <-   10                                       ;# send buffer his max size
    tcall $::producer  <-  $::buffer 199  792x222+1112+7   0         ;# buffer id, delay to produce, window geom, starting data value
    tcall $::producer2 <-  $::buffer 100  792x229+1119+272 10000     ;# buffer id, delay to produce, window geom start at 10k
    after 5000  {tcall  $::consumer   <- $::buffer 300 10 2000 517x220+-6+543}    ;# delay starting our consumers
    after 15000 {tcall  $::consumer2  <- $::buffer 25  30 3000 531x221+517+544}   ;# delay, modulo, delay2, geom
    after 17000 {tcall  $::consumer3  <- $::buffer 300 10 2000 521x220+1055+545}
    tdump                                                                ;# threads dump to console
    proc dump {} {
        tcall $::buffer -async xxxx <- dump                              ;# ask the buffer task for a queue dump
    }
    wait 2000

The next example demonstrates that multi-tasking can provide performance benefits even for a program that does no heavy computing, but instead issues many simultaneous web requests.

 Example web page extractions using Tproc

The example below uses twapi and https (on Windows) or tls (on Linux) to extract the titles from 200 RFCs. It runs both sequentially and with 10 tasks created by the Tproc command; the Tasks version was >8x faster. The results include the size of each RFC and its title.

  • Inside Tproc and tgroup

Tproc calls proc to create the procedure rfc::do_url and then calls tgroup to create ntasks task threads, each of which imports rfc::do_url. The sequential code calls rfc::do_url directly, with the results saved in the array ans.

The tgroup -foreach sub-command is used to place 200 arglists into the group's queue (asynchronously). Each of the 10 Tproc-created tasks concurrently takes arglists from the shared job queue and calls rfc::do_url to produce a result.

The results are sent back to the calling thread (main) and stored in an array also named rfc::do_url, using indices rvar,* for *=0..199. A tgroup -wait all then waits for all 200 jobs to complete.
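In miniature, the pattern described above looks like this (a hedged sketch: the proc work, its body, and the job count are placeholders, not part of the RFC example):

```tcl
package require tasks                ;# the Tasks module
namespace import tasks::*

Tproc work {x} {                     ;# defines proc work plus a team of 4 tasks
    return [expr {$x * $x}]
} -tasks 4

foreach x {1 2 3 4 5 6} {
    tgroup work -foreach [list $x]   ;# queue one arglist per job (async)
}
tgroup work -wait all                ;# block until every queued job completes

for {set m 0} {$m < 6} {incr m} {    ;# results land in an array named work
    puts $work(rvar,$m)              ;# indices are rvar,0 .. rvar,njobs-1
}
```

The results array has the same name as the Tproc, so no job ids need to be tracked.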

  • putz debugging and tdump

Each task uses a debugging putz call. In the main thread, putz writes to the console; in each task (because t_debug = 2) putz does a thread::send to the main thread with a puts statement, so that output also reaches the console. On Windows, a t_debug value of 0 can instead be used to create separate debugging text windows, one per task, if desired. The tdump utility command is used to show how many jobs each task worked on.

  • initializers and proc imports

Note the Tproc initializer code stored in the ilist list variable, which follows the body of the procedure and closely mimics the main-thread version. The http/https init code is thereby inserted outside of the procedure, so it runs only once per task. This also demonstrates importing the time-stamping procedure Time, setting t_debug to 2, and the use of explicit namespaces.

The use of 10 tasks doing the https requests concurrently provided the speed up.

    package require tasks
    namespace import tasks::*
    proc Time {} {                              ;# import this to each task
        set ms [clock milliseconds]             ;# format current time with 3 digit ms
        set secs [expr {   $ms / 1000   }]
        set ms [string range $ms end-2 end]
        return [string range [clock format $secs] 11 18].$ms
    }
# ------------------------------- main thread  ----------------------------------  
    package require http                        
    if { $::tcl_platform(platform) ne "windows"  } {
        package require tls
        http::register https 443 [list ::tls::socket -ssl2 0 -ssl3 0 -tls1 1 -tls1.1 1 -tls1.2 1 ]
    } else {
        console show
        package require twapi_crypto
        http::register https 443 [list ::twapi::tls_socket]
    }
# ----------------------------- Tproc init code ---------------------------------    
    set     ilist        Time                                                    
    lappend ilist        {-set ::t_debug 2 }                                     \
                         {-package require http} 
    if { $::tcl_platform(platform) ne "windows"  } {
        lappend ilist   {-package require tls}                                   \
                        {-http::register https 443 [list ::tls::socket -ssl2 0 -ssl3 0 -tls1 1 -tls1.1 1 -tls1.2 1 ]} 
    } else {
        lappend ilist   {-package require twapi_crypto}                          \
                        {-http::register https 443 [list ::twapi::tls_socket]}
    }
    set from 1
    set to   200
    set ntasks 10
    puts "ntasks= |$ntasks| " ; update
    namespace eval rfc {}                       ;# demo using a namespace
    set tm0 [lindex [time {                     ;# timing of the task create
        Tproc rfc::do_url {url n {debug no}} {
            set tok [http::geturl $url]
            set result [http::data $tok]
            set extract " ---no title found $n---"
            regexp {.*<title>(RFC [0-9]+:[^\n]+)</title>.*} $result ->  extract
            http::cleanup $tok
            if { $debug && ((($n % 50) == 0) || ($n == 1))} {
                putz "[Time] do RFC ($n) = $extract" ;# goes to console or stdout
            }
            return [list [format %6s [string length $result] ] $extract]
        }  -tasks $ntasks    -import_tasks    $ilist                          

    } 1] 0]
    set tm1 [lindex [time {                     ;# timing of a sequential run
        for {set n $from} {$n <= $to} {incr n} {
            set url "https://www.rfc-editor.org/rfc/rfc${n}.html"
            set rfc::ans($n) [rfc::do_url $url $n yes]
        }
    } 1] 0]
    for {set n $from} {$n <= $to} {incr n} {    ;# dump results from array ans
        if { $n <= 5 || $n > $to - 5 } {
                puts "== $rfc::ans($n)"
        } elseif {$n == 6} {
            puts "   ..."
        }
    }
    puts "\n--------------\n"
    set tm2 [lindex [time {                        ;# timing of a tasks run
        for {set n $from} {$n <= $to} {incr n} {
            set url "https://www.rfc-editor.org/rfc/rfc${n}.html"
            tgroup rfc::do_url -foreach [list $url $n yes] ;# launch each job -async
        }
        tgroup rfc::do_url -wait all                       ;# wait for to-from+1 jobs
    } 1] 0]
    set m -1                                       ;# tgroup output indices = rvar,0 .. rvar,$njobs-1
    for {set n $from} {$n <= $to} {incr n} {       ;# dump results from array
        incr m
        if { $n <= 5 || $n > $to - 5 } {
            puts "-- $rfc::do_url(rvar,$m)"        ;# tgroup uses same name for the array as the Tproc
        } elseif {$n == 6} {
            puts "   ..."
        }
    }
    tdump -count
    puts "[format %10s [comma $tm0] ] microseconds create $ntasks tasks"
    puts "[format %10s [comma $tm1] ] microseconds sequential"
    puts "[format %10s [comma $tm2] ] microseconds tgroup"
    puts "ratio: sequential/tasks = [expr {   $tm1 * 1.0 / ($tm0 + $tm2)   }]"

# ntasks= |10| 
# 20:58:16.560 do RFC (1) = RFC 1: Host Software 
# 20:58:23.089 do RFC (50) = RFC 50:  Comments on the Meyer Proposal 
# 20:58:29.852 do RFC (100) = RFC 100:  Categorization and guide to NWG/RFCs 
# 20:58:36.520 do RFC (150) = RFC 150:  Use of IPC Facilities: A Working Paper 
# 20:58:43.183 do RFC (200) = RFC 200:  RFC list by number 
# == { 30199} {RFC 1: Host Software }
# == { 26441} {RFC 2: Host software }
# == {  9962} {RFC 3:  Documentation conventions }
# == { 14866} {RFC 4:  Network timetable }
# == { 35063} {RFC 5:  Decode Encode Language (DEL) }
#    ...
# == { 14834} {RFC 196:  Mail Box Protocol }
# == { 15047} {RFC 197:  Initial Connection Protocol - Reviewed }
# == {  8219} {RFC 198:  Site Certification - Lincoln Labs 360/67 }
# == { 27665} {RFC 199: Suggestions for a Network Data-Tablet Graphics Protocol }
# == { 27720} {RFC 200:  RFC list by number }
# --------------
# rfc::do_url4 ! 20:58:43.367 do RFC (1) = RFC 1: Host Software 
# rfc::do_url5 ! 20:58:43.972 do RFC (50) = RFC 50:  Comments on the Meyer Proposal 
# rfc::do_url0 ! 20:58:44.650 do RFC (100) = RFC 100:  Categorization and guide to NWG/RFCs 
# rfc::do_url0 ! 20:58:45.288 do RFC (150) = RFC 150:  Use of IPC Facilities: A Working Paper 
# rfc::do_url8 ! 20:58:46.042 do RFC (200) = RFC 200:  RFC list by number 
# -- { 30199} {RFC 1: Host Software }
# -- { 26441} {RFC 2: Host software }
# -- {  9962} {RFC 3:  Documentation conventions }
# -- { 14866} {RFC 4:  Network timetable }
# -- { 35063} {RFC 5:  Decode Encode Language (DEL) }
#    ...
# -- { 14834} {RFC 196:  Mail Box Protocol }
# -- { 15047} {RFC 197:  Initial Connection Protocol - Reviewed }
# -- {  8219} {RFC 198:  Site Certification - Lincoln Labs 360/67 }
# -- { 27665} {RFC 199: Suggestions for a Network Data-Tablet Graphics Protocol }
# -- { 27720} {RFC 200:  RFC list by number }
#                  (rfc::do_url0,count)        = |20| 
#                  (rfc::do_url1,count)        = |20| 
#                  (rfc::do_url2,count)        = |20| 
#                  (rfc::do_url3,count)        = |21| 
#                  (rfc::do_url4,count)        = |20| 
#                  (rfc::do_url5,count)        = |18| 
#                  (rfc::do_url6,count)        = |20| 
#                  (rfc::do_url7,count)        = |20| 
#                  (rfc::do_url8,count)        = |21| 
#                  (rfc::do_url9,count)        = |20| 
#    184,990 microseconds create 10 tasks
# 26,823,532 microseconds sequential
#  2,854,554 microseconds tgroup
# ratio: sequential/tasks = 8.824853991256584

 example socket server

This is a server, derived from Ashok's book, that takes an input string and reverses it. Within each task it is event driven, and it allows many concurrent connections since the number of tasks is unlimited (within resource limits).

Each of these two code blocks needs to run as a separate program. Either can be run first, but if clients are run without a server, network connection errors will be reported. A 50 ms delay can be added between client connects if socket handles become exhausted; uncomment the wait near the bottom of the client to use it.

The client uses a separate task (thread) to do its work, so the gui in the main task (including the console) remains responsive at all times.

# a socket server - to use, notice the two significant comment blocks, where it reads, and then returns a value
    tcl::tm::path add d:/stuff                      ;# add to the list of module paths - change this to where you put tasks-1.x.tm
    package require   tasks                         ;# finds tasks-1.x.tm in d:/stuff highest version x
    namespace import tasks::*
    catch {console show}
    set dbg 0           ;# debug putz calls
    set Timeout  10000  ;# N ms timeout on client not behaving
    set Tasks    6      ;# servers
    set SimTime  0      ;# time we busy wait for each result to simulate server time
    set SimTime2 20     ;# extra time we just wait after each close, we want the faster ones to get more jobs
    set SimError 0      ;# simulate errors to test client 0/1
    set SimPort 10042   ;# port number to listen on, client must agree
    checkbutton .path1 -variable SimError -text "force Errors"
    button .path2 -text "Add a Task" -command {tasks::tgroup r_server -add_tasks 1}
    pack .path1 .path2 -fill both -expand true -side left
    wm geom . 383x31+35+550
    wm protocol . WM_DELETE_WINDOW exit
#   originally from server.tcl - code from Ashok's The Tcl Programming Language

    proc Time {} {                              ;# import this to each task
        set ms [clock milliseconds]             ;# format current time with 3 digit ms
        set secs [expr {   $ms / 1000   }]
        set ms [string range $ms end-2 end]
        return " \u25b7 [string range [clock format $secs] 11 18].$ms \u25c1 "
    }
    proc on_read {so client_port} {             ;# this is the event callback imported into each task
        set n -9999
        set line {no-data}
        if [catch {
######################################### This reads the data sent by the client ######################################### 
            set n [gets $so line]      ;# n is the bytes read into variable line - but with exceptions see below
                                                                           if ($::dbg) {putz "[Time] $::counts started one [string length $line] n=$n $so" green}
        } err_code] {
            catch {chan close $so}
            putz $err_code  red                 ;# output an error message 
                                                                            tset $::t_name error $err_code      ;# output to the tsv error field where task monitor can see it
                                                                            #set user "on_read: n= |$n| line= |$line| so= |$so| client_port= |$client_port| "
                                                                            #tset $::t_name user $user               ;# output to the tsv user field where task monitor can see it
            return                              ;# socket was closed on error above, nothing more to do here
        }
        if { $n < 0 } {
            if {  [chan eof $so] } {
                after cancel [list timeout $so] ;# we can cancel this now
                catch {chan close $so}          ;#must close this before doing any putz calls, since it may call update
                                                                           if {$::dbg} {putz "[Time] n < 0 and eof - Closed channel $so"}
                wait [expr {   $::SimTime2 + ($::SimTime / 4)   }]          ;# don't be in too much of a hurry to get a new request if we're slow to produce, let quicker ones get jobs
                set ::closed_io 1
#                                                                           putz $user
            } else {
                                                                           if {$::dbg} {putz "[Time] not eof yet, but n < 0  eof: [chan eof $so] $so"}
            }
        } elseif {$n > 0} {
            after cancel [list timeout $so]     ;# we can read now, so cancel any pending timeout
            xwait $::SimTime                    ;# simulate heavy compute, a busy wait for approx SimTime ms
            after $::timeout [list timeout $so] ;# now set up the next timer
            set f 0
            if { ([incr ::counts] % 100) == 0 && $::SimError} {
                set f 1                         ;# send bad data every once in a while to test client if simerror is 1
                putz "[Time] ::counts = $::counts - time to force an error" red
                wait 1000                       ;# wait a bit if we're going to force an error
            }
            if [catch {
################################################### This is the servers response to input in variable line ####################
                puts $so "[string range [string reverse $line] $f end]"      ;# this is what we serve up, a string reverse
                                                                           if {$::dbg} {putz "[Time] $::counts finished one [string length $line] $so" green}
                tset $::t_name user "[string length $line] [string range [string reverse $line] end-20 end]..." ;# write partial where monitor can see it
            } err_code] {
                                                                           if {$::dbg} {putz "[Time] error on puts back to client: $err_code $so" red}
            }
            after cancel [list timeout $so]     ;# we finished the write
            after $::timeout [list timeout $so] ;# now set up the next timer
        } else { ;# we ignore a zero length input here, Ashok would do an exit, we don't exit our threads
            #exit 0
                                                                           if {$::dbg} {putz "[Time] $::counts n=zero $so eof: [chan eof $so] " yellowonblack}
        }
    }
    proc timeout {so} {
        putz "[Time] timeout socket |$so| from: $::client_ip : $::client_port" red
        catch {chan close $so}                  ;# if it's already closed, ignore
#                                                                           putz "[Time] after chan close closed_io exist: [info exist ::closed_io]   chans: |[chan names]|"
        set ::closed_io 1
    }

# ----------------------------------
# The tgroup below creates threads (count = Tasks) which are called for each new connection to the listener
# the listener code in the main thread creates a new socket associated with the connection, but needs to detach the new socket
# from the main thread so it can pass it on to a worker thread. When the worker wakes up (twait returns with args) it has
# to attach to the socket, which is what it uses to read data. It uses the imported proc on_read to do the work when
# data has arrived. on_read gets the socket and uses it to try to read the data, which could be an eof or error
# Each time through the task loop, it handles 1 new connection, transfers the socket, and sets up a timeout 
# Each task imports 3 procs and sets global config parameters in each thread (from the globals above in the main thread)
# the main now has a gui to set the force error flag, which is sent into each task as a parameter, and is copied to the global ::SimError
# note the errors are only forced every 100 connections in EACH task
# also note if we enable any of the extra putz calls, we cannot write to the putz text windows, only the console, since it would
# re-enter the event loop and this server uses the event loop for read ready. If we did re-enter, we'd get a flood of ready callbacks
# ----------------------------------
    tgroup r_server -tasks $Tasks  -import_tasks [list Time timeout on_read "# comment here" \
                    "-set ::dbg $dbg; set ::timeout $Timeout; set ::SimTime $SimTime; set ::SimTime2 $SimTime2; set ::SimError $SimError" \
                     {-set ::counts 0;set t_debug 0x2; putz "[Time] init"} ] {
#       this script runs at global level here but in a thread's private interpreter, unlike a Tproc script, which is inside a proc
        twait -> so ::client_ip ::client_port ::SimError            ;# a tasks sync call with these 4 args
        treturn [list $::t_name $so $::client_ip $::client_port ]   ;# return anything, so caller can proceed, so send some info incl our task name
#                                                                           putz "[Time] setting a timer for timeout $so of $::timeout"
        after $::timeout [list timeout $so]                     ;# set first timer for this connection
        unset -nocomplain ::closed_io                           ;# to avoid possible race condition
            thread::attach $so
            chan configure $so -buffering line -encoding utf-8 -blocking 0 -translation lf
            chan event $so readable [list on_read $so $client_port]
#                                                                           putz "so= |$so| client_ip= |$client_ip| client_port= |$client_port| " green ;# this might do an update
        tvwait ::closed_io                                      ;# a vwait that first tests if the variable is set/unset
#       go get next job and get new connection, we're in a forever loop here (tasks add that loop)
    }

########### main thread code, sets up 1 listener, which calls on_accept for each new connection
    proc on_accept {so client_ip client_port} { ;# see Ashok's book, on quirk, need to dismiss event before detaching socket
        after 0 [list  transfer_socket $so $client_ip $client_port] ;# pass it on to transfer_socket after the event was dismissed
    }
    proc transfer_socket {so client_ip client_port} { ;#ok to now do the detach
#                                                                            puts  "             [Time] [incr ::connects] Accept - transferring: so= |$so| client_ip= |$client_ip| client_port= |$client_port| "
        thread::detach $so
        tgroup  r_server -run [list $so $client_ip $client_port $::SimError] ;# sync call to the reverse string server task
    }
    set listener [socket -server on_accept  $SimPort] ;# now accepts from anywhere (removed the -myaddr)
                                                                            puts "listener= |$listener| on port $SimPort"
    after 0 [list task_monitor [expr {   $Tasks + 1   }] {} {} -1]
    after 1000 {thread::send [tset _taskmonitor tid] {wm geom .top 1695x500+1+1}}
#    send_command
#    thread::send [tset sendcmd tid]      {wm geom . 1000x69+2520+600        ; wm geom .taskdebug 1025x350+2520+900}
#    wm geom . +2320+800
#    catch {console eval {wm geom . 129x14+5+627}}
    vwait forever

Here is the client test for the above server.

# tasks socket server test program, can run multiple times

    tcl::tm::path add d:/stuff        ;# add to the list of module paths
    package require  tasks            ;# finds tasks-1.x.tm in d:/stuff - highest version
    namespace import tasks::*

    set ServerPort 10042      ;# port number to contact server on
    set ServerIp  ;# ipaddr of server - if on another computer, may need to enable incoming firewall (e.g. if on windows)
    proc Time {} {                             ;# import this to each task
        set ms [clock milliseconds]             ;# format current time with 3 digit ms
        set secs [expr {   $ms / 1000   }]
        set ms [string range $ms end-2 end]
        return " \u25b7 [string range [clock format $secs] 11 18].$ms \u25c1 "
    }
    proc mirror {var} {
        thread::send -async $::main_tid [list set  ::$var [set ::$var]] ;# send to var in the main thread
    }
    if [catch {
        console show
        console eval {wm geom . 131x14+1+1} 
    } err_code] {
    }
    set ::ecount    0
    set ::counter   0
    set ::pause     0
    label .label1   -text           "Errors: "
    label .label2   -text           "Requests: "
    entry .path1    -textvariable   ecount
    entry .path2    -textvariable   counter
    checkbutton .path3 -variable pause  -text "pause"           -command send_dopause
    checkbutton .path4 -variable dotout -text "force timeout"   -command send_dotout
    button      .path5                  -text " Task Monitor "  -command {catch {task_monitor {} {} {} 2}} ;# we just ignore trying to run more than once
    pack .path5 .label1  .path1 .label2 .path2 .path3 .path4  -fill both -expand true -side left
    wm protocol . WM_DELETE_WINDOW exit
    puts "[Time] starting" 
# we will use a Task here, so the client code runs in a separate thread, so when it is
# blocked trying to create a socket (e.g. if say the server is not running) it will
# not block the main thread. This allows one to freely use the console and move the
# gui window or click the task_monitor button. 
# For the gui -variable and -textvariable variables in the main thread, we also
# keep a mirrored copy in the task (from when this was a single threaded example)
# Thus we need to do thread::send's from the task to the main thread to keep them in sync

Task client  -once -import_tasks [list Time mirror] { ;# import the Time proc used here and in the main
    twait -> main_tid ServerIp ServerPort   ;# so we can send updates back to the main thread
    set ::t_debug 2                         ;# output to console or stdout
    puts "main_tid= |$main_tid| ServerIP= |$ServerIp| ServerPort= |$ServerPort| "
    set ::pause     0
    set ::dotout    0
    set ::step      1
    set ::counter   0
    set ::ecount    0
    while 1 {
        incr ::counter
        mirror counter                                          ;# send to counter in the main thread, to update gui
        tset $::t_name user "counter=$counter ecount=$ecount"           ;# so we can see this in the task monitor if run
        while { $pause } {
            wait 1000   ;# this is our own local pause
        }
        tpause_check    ;# this is the task's pause (we do this so task monitor can pause us) need this since we don't use twait again
        if [catch {
            set so [socket $ServerIp $ServerPort]
        } err_code] {
            set so none
            puts "[Time] socket: $so $err_code (server running + firewall enabled?)"
            incr ecount
            mirror ecount 
        }
        if { $so eq "none" } {
            wait 10000      ;# might need time for some time_wait's to vanish
            continue        ;# no socket, go around and try to connect again
        }
        chan configure $so -buffering line -encoding utf-8 -translation lf ;# windows bug if using crlf at 4095 and incrs of 4096
        set output "[string repeat xyz [expr {  50000 + $counter   }]]" ;# here's what we send into server, which echo's a reverse
        if [catch {
            puts $so  $output   ; update      ;# send this to server
        } err_code] {
            puts "[Time] error writing to server: $err_code"
            catch {close $so}
            continue        ;# write failed, start over with a new connection
        }
        if { $dotout } {
            wait 11000  ;# this is long enough to cause a timeout if set at 10 secs
        }
        set tm [time {set len [gets $so result]}] ;# here's the reply, with a call to time
        if { $result ne [string reverse $output] } {
            puts  "[Time] $counter  not equal [string length $output] : [string length $result]" ;update
            incr ecount
            mirror ecount 
            wait 5000
        } else {
#           puts  "$counter [Time] [comma [lindex $tm 0]]  len= |$len| result= |[string range $result 0 50]| " ;update
        }
        close $so
        tset $::t_name user "counter=$::counter ecount=$::ecount"       ;# so we can see this in the task monitor if run
#       wait 50     ;# need a delay or we may chew up all the sockets allowed (in 32 bit) uncomment to use
    }
}

proc send_dopause {args} {
    thread::send -async $::client [list set pause $::pause]             ;# send mainthread's pause to the task's pause
}
proc send_dotout {args} {
    thread::send -async $::client [list set dotout $::dotout]           ;# send mainthread's dotout to the task's dotout
}

tcall $client -async rGlobal <- [thread::id] $::ServerIp $::ServerPort  ;# send in the main tid plus ip:port of server
tvwait rGlobal                                                          ;# we never wake up, since tasks does not do a treturn     

 version history

Version 1.13h includes some bug fixes and the ability to close the send_command and task_manager windows.

Version 1.13c (the c is found only in a comment) includes (for Windows only) a puts wrapper. Get it only at the github site. putz now includes, like puts, a -nonewline first arg option (no abbreviations in either puts or putz). The puts wrapper saves the old puts and defines a new one that sends any puts output to putz, which is further dependent on the t_debug parameter. Note, the tk bug is said to be fixed in 8.6.12, so tk windows can now be used safely on linux; I've not tested this however.

Version 1.13 includes the new tgroup sub-command, -add_tasks. This works in conjunction with Tproc and tgroup. It can dynamically add additional tasks that will use the same shared queue as the existing task group. tasks-1.13.tm is only found at the github site.

Version 1.12 includes new features in the debugging utilities. send_command has tab expansion (for commands and variables) and escape expansion for finishing brackets. Browser lookup is also supported using Ashok's magicsplat docs page.

Version 1.11 includes new utilities for debugging tasks (tlg, tla, twidgets, and sendcmd). tasks::send_command can be run from the program or using the button in the task monitor. The task monitor is run via: tasks::task_monitor.

Version 1.10 includes a few changes to putz windows (now their own toplevels) and the new Task Monitor and repos commands to line up putz windows.

Version 1.9 includes a few bug fixes - in particular, tgroup groupnames are always in the global or namespace scope and tgroup can now be called from a local scope (e.g. proc).

Version 1.8 includes a new global to modify the twait timeout when checking a task's queue: t_twait_timeout (default 50 ms). There are also some performance improvements. Currently, a minimal Tproc call takes about 3.5x longer than a simple thread::send with a value returned. Tproc overhead is approximately 65 usec, whereas a proc call is about 0.7 usec (4 GHz Intel i7 timing). Of course, this overhead is unimportant if the Tproc does any substantial work.

Please place any user comments below:

RLH - 20220523 - This would be a nice addition to tcllib.

ET - 20220523 - While I would be honored to have this part of tcllib, I would hope that if it's truly useful, that others would test it and provide some feedback.

arjen - 2022-05-26 09:38:12

The "simple" idea I want to implement is this:

  • I have four tasks (or whatever number) that produce a result
  • For this they require input from the other tasks
  • The tasks represent some iteration, but if they have the input from the other tasks, they can do the job independently. I can illustrate this with a script:
   while { $time < 10 } {
       task1 $state1 $input_for_task1
       task2 $state2 $input_for_task2
       # All is done, so now determine the input for the next step
       set input_for_task1 [get_input ...]
       set input_for_task2 [get_input ...]
   }

My question: how can I achieve this with the Tasks package?

ET not sure what this is doing. But it does seem that the threads will be communicating with one another, and probably the way this would be done would be by using the tsv part of the threads package that allows for shared variables between threads.

Tasks are designed to implement a single-queue multiple server setup, much like tpool, where each thread or task does the same thing as requested by the main thread. Tasks try to do it simpler by placing the accent on arglists instead of sending scripts. Tasks also save the results in an array, so they don't need to deal with job ids. And they have an import feature that is simpler than ttrace. Their best use is with the Tproc command to change a single proc into a set of tasks that can run that proc in multiple threads. But they are still called just like calling a proc (only you can call it N times concurrently).

But in those cases, the threads don't talk to one another, or even know others exist. I think in your case, you would probably be better off using standard threads where you could get much more community support.
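For reference, the tsv shared-variable mechanism mentioned above looks roughly like this (a minimal sketch using only the Threads package; the variable and proc names are illustrative):

```tcl
package require Thread

tsv::set race step 0                       ;# create a shared element in tsv "array" race
set worker [thread::create {
    package require Thread                 ;# the new thread has its own fresh interp
    proc bump {} {
        tsv::incr race step                ;# atomic increment, visible to every thread
        return [tsv::get race step]
    }
    thread::wait                           ;# enter the event loop, wait for sends
}]
thread::send $worker bump result           ;# synchronous call into the worker
puts "worker saw $result, main sees [tsv::get race step]"
```

Any thread can read or write the same tsv element, so tasks could use this to exchange state without going through their job queues.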

saito Arjen, what you describe sounds more like a "data flow" problem, in the sense that you can connect task1, task2, etc. in certain ways to identify the priorities or dependencies among them, which also helps resolve input/output data flows. Do you have more details you can share? I have done a lot of work in this area and would be happy to help if it turns out to be relevant.

ET I didn't quite follow what Arjen wanted to do, but it got me thinking of a fun weekend project, a racing simulation using plotchart. While the tasks/threads don't communicate, they run in parallel. And so it might not be that far of a stretch where they would talk to each other using tsv variables. At each interval, each could post its current progress in the race for the others to look at. Maybe that's what Arjen is trying to do.

Anyway, here's how one can use tasks in a somewhat different way. They are each given some parameters, including a starting time, so the order in which they are called shouldn't matter. I ran this on my linux system and also on windows. On windows it also shows the console.

Copy to some file.tcl and run with wish

 racing plotcharts
    package require tasks      ;# finds tasks-1.1x.tm in directory /usr/share/tcltk/tcl8.6/tcl8 where I copied it; x is the latest version
    namespace import tasks::*
    catch {console show}
set horse_script { ;# use a script variable since each task runs identical code; we can then use Task to run several, no queues are shared here
    lappend auto_path . ;# I have plotchart from somewhere, so copied it here, it's pure tcl, wasn't in the tcl release
    package require Plotchart
    set t_debug 2 ;# all putz output to the console or terminal window on linux, winner is first to write to it
    twait -> color num name fcolor strength start     ;# first and only call is our parameters
    treturn ok ;# return anything since we're being called sync; this lets the caller continue
    set seed [expr rand()] ;# this should start the random number generator, do it now not when we each start the race
    set width 1200  ;# width of each window
    set height 90 ;# height of each chart
    set hdelta 130 ;# space between horses vertically
    set hextra1 90 ;# extra height per window
    set hextra2 50 ;# extra height per canvas
    set x 100   ;# x offset from left side of screen
    set y [expr {   ($height + $hdelta) * $num +  10   }] ;# offset from top per horse number
    wait 200
    set gheight [expr {   $height + $hextra1   }]
    set geom ${width}x${gheight}+${x}+$y
    wm geom . $geom
#   putz "color= |$color| num= |$num| x= |$x| y= |$y| ->  $geom "
    frame  .f
    label  .f.b1    -text $name -bg $color -fg $fcolor
    label  .f.b2    -text "[expr {   $num+1   }]"
    canvas .c -width $width -height [expr {   $height + $hextra2   }]
    pack .f .c -side top -fill x
    pack .f.b1 .f.b2 -side left -fill x -expand 1
    set s [::Plotchart::createStripchart .c {0 5280 660} {0 10 5} ] ;# mile in feet, or 8 furlongs
    $s yticklines black
    $s dataconfig a -colour $color -filled down -fillcolour grey90
    $s ytext speed
    $s xtext furlongs/feet
    $s xconfig -format {%4.0f}
    $s yconfig -format {%2.0f}
    set position 0
#   set seed [expr {   srand($num)   }]
    set basespeed 1.0
    set interval 0
    set factor 5
    putz "We start at [comma $start], it's now [comma [clock milliseconds]]"
    while { 1 } { ;# we should all start at the same time for a fair race
        if { [clock milliseconds] > $start } break
        wait 1
    }
    putz "starting at [comma [clock milliseconds]]"
    # each task runs independently using its own rand seed
    while { 1 } {
        incr interval
        set delay 50
        wait $delay
        set random [expr {   rand() * (10. - $basespeed)   }]
        set speed [expr {  ( $basespeed + $random ) * $strength  }]
        set position [expr {    $position + $speed * $factor  }]
#       putz "delay= |$delay| random= |$random| speed= |$speed| position= |$position| interval= |$interval| "
        $s plot a $position $speed
        if { $position > 5280 } break
    }
    putz "done intervals = $interval     [format %-20s $name ] number [expr {   $num + 1   }]  strength: $strength"
    vwait ffff ;# we're done, just stop here; kill with a control-c
}

wm withdraw .

set colors {red green blue orange}
set horsesname {"Man o' War" "Seabiscuit" "Seattle Slew" "Secretariat"}
set fcolor {white white white black}
set strength {.951 .952 .950 .953}

set start [clock milliseconds]
incr start 2000 ;# start 2 seconds after we call them, should be a fair start

set n -1
foreach color $colors { ;# each task gets its own parameters; this is not a single-queue multi-server program, so it uses the low level Task command etc.
    incr n
    Task horseT($n) -import_tasks $horse_script ;# create the tasks and call each sync with their parameters
    tcall  $horseT($n) ok <- $color $n [lindex $horsesname $n ] [lindex $fcolor $n ] [lindex $strength $n ] $start
}

arjen - 2022-05-29 09:06:21

My description of the experiment I envision is indeed less than crystal clear :). But the suggestions should be helpful. I will give it a try.

ET Wow, I just noticed at the end of the Plotchart docs that arjen is the author! Great package.

arjen - 2022-06-30 07:46:51

Quick update: I am making progress with my first experiments. I have seen one or two curious things: on Windows, if you have a runtime error or messages are shown in Tk, then closing them does not end the program, even if it should (is the event loop still running?)

ET 2022-07-21 Messages that occur from errors in separate threads do not cause an exit of the entire program. After the Ok button is clicked in a tk_messageBox, the thread will issue a vwait forever.

This is intentional. One might want to see what's going on in the other still running tasks (or the failed one) before quitting, perhaps using the send command utility (or thread::send from the console). Dismissing the dialog frees up the screen a bit. If you want to truly exit, there are several buttons available that can be used instead of the Ok button.

Update on 1/15/23: The Ok message box is now a yes/no dialog; yes will exit, no will only suspend the thread, so it can be examined if desired.

 task errors

On Windows, if a simple thread gets an error there are no notification dialogs. Tasks extend a thread by adding a last chance catch with a message box, but won't do a full exit, as would occur in a normal single threaded program.

One can see the raw threads behavior:

% package require Thread
% set tid [thread::create {expr 1/0 ; thread::wait}]
% thread::exists $tid
0

The thread exited on the div/0 error (if no error, it would just be waiting, but still exist). But you don't get any notification of this.
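One way to get at least some notification from a raw thread (a sketch using only the stock Thread package; the tsv array name lasterr is illustrative) is to wrap the thread's body in catch and publish any error through a shared variable, so the thread survives and the main thread can inspect the failure:

```tcl
package require Thread

# worker: catch any error, publish it, then stay alive in the event loop
set tid [thread::create {
    package require Thread
    if {[catch {expr {1/0}} msg]} {
        tsv::set lasterr [thread::id] $msg   ;# publish the error text
    }
    thread::wait
}]

# main: give the worker a moment to run, then look at what happened
after 200 {set ::done 1} ; vwait ::done
puts "exists: [thread::exists $tid]"         ;# the thread did not die
puts "error:  [tsv::get lasterr $tid]"
```

This is essentially a hand-rolled version of the last chance catch that Tasks add automatically, minus the Tk message box.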

Task (uncaught) errors notify with a Tk message box; once it is dismissed, the task is disabled, but the task does not exit. See the note at the end.

One can click the close box on the main tk window to do a program exit. Each putz window has an exit button as well. The tasks::task_monitor also has a program full exit button. From the console window, the command tasks::tdump will output a status of all the tasks as well.

Below is an example Tproc with 2 errors in different tasks, and 4 putz debug windows, 1 for each of the 4 tasks. The monitor will show the result for the ones that succeeded, and the error for the ones that failed. Here's a good time to use the Reposition button, after setting the x/y spinboxes, e.g. 925x200 (1080p monitor assumed). Also, it becomes clear why one might want to close the message boxes. I have 3 monitors, so it's less of an issue with so many windows.

Note: the calls that have an error will not return a result, since that task will be disabled (via a vwait forever) before the return command is called. However, the failed tasks are still able to receive thread::send commands (which the send command utility uses).

For example, one can send the failed task a command to display (all or some of) its global variables. When choosing a command to show globals, one will see argc and argv, which have the last arglist received. With send_command, using the task name * will send a command to all the tasks in the task menu. To see just the globals created after startup, either enter the tasks::tlg command with no args, or set the wildcard pattern to ** (if set to just * it will show ALL variables, not just recent ones).

Notice how this example adds a script initializer, by setting the variable script and then using [list -$script] as a list initializer; the list command is required instead of just braces so the $script will be expanded correctly.

The leading - is the indicator that this is a script to insert; without the leading dash, it would be taken as a proc to lookup and insert. And lastly, as a list, there can be any number of scripts or procs (including wild cards) specified.

#   tcl::tm::path add d:/stuff              ;# add to the list of module paths as needed
    package require tasks   1.13            ;# finds tasks-1.13.tm in d:/stuff
    namespace import tasks::*
    console show 
    set script {
                                             ;# can also insert proc defs, even TCLOO code here
        set ::t_debug 0x0                    ;# no trace, 0x -> closes extra tk . windows
#       set ::t_debug 0x1                    ;# uncomment to get a tasks trace too
        set __the__globals [lsort [info globals]]
    }

    Tproc errtest {{n 1000} {m 0} } {
        putz "hello n=$n m = $m"
        putz "[lsort [info globals]]"
        set ::foo [expr {   1000/$n   }]
        putz "returning foo=$::foo"
        return $::foo
    } -tasks 4 -import_tasks [list -$script]
    tgroup errtest -foreach {100000 55} {2} {3} {4} {5} {} 0 { 1 2 3} ;# 2 errors, div/0 and wrong # args
    wait 1000                     ;# give it time to run the 8 calls
    parray errtest

NOTE: After some testing with this error, an update was sent to github so the task will not exit after the dialog is dismissed (the old behavior). The new behavior is as described above (a vwait forever).

Since the task is waiting forever, it will not return a value, and so any tvwait or tgroup -wait all commands will hang as well. To be robust, the program must catch its own errors and can't rely on these last chance errors if it is going to recover from an error. Consider an error caught this way to be a fatal error.
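One way to catch your own errors (a sketch using the Tproc/tgroup commands described above; the name safediv and the ok/error result convention are illustrative) is to wrap the risky part of the body in catch and return the error as ordinary data, so every call returns a value and tvwait or tgroup -wait cannot hang:

```tcl
Tproc safediv {{n 1000}} {
    if {[catch {expr {1000 / $n}} result]} {
        return [list error $result]   ;# report the failure as a result
    }
    return [list ok $result]
} -tasks 4 -import_tasks

tgroup safediv -run 0 5 10            ;# the div/0 call still returns a value
wait 1000                             ;# give the calls time to finish
parray safediv                        ;# every slot is filled, no task is disabled
```

Since the error never escapes the proc body, the last chance handler is never triggered and all tasks remain available for further calls.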

ET 2022-08-09 Here's a small startup example showing how to re-position and size the task_monitor and send_command windows using thread::send - before creating 4 tasks named foo0 .. foo3 that each open a debug window and write a message. It also repositions the 4 debug windows. This is set up to run on windows and assumes you have at least a 1080p monitor.

 positioning windows

Run this and then in the send command window, try entering * for the taskname, and enter tlg for the command to send (then send it). This will list recently created globals in each task. Note that argv and argc are correct as the last arglist sent to each task.

Next, try tla _<tab> in the send command entry box. It will see the * for task name, and so choose the first task (foo0) in the task menu to look up the symbols beginning with _. Next, choose __repos__ from the popup menu, which is a global array used by the repos command. Send it to see the contents of that array. This is an example of some of the latest task tool enhancements (be sure to get the latest code at the github site).

tcl::tm::path add d:/stuff              ;# add to the list of module paths - adjust as needed
package require tasks   1.13            ;# finds tasks-1.13.tm in d:/stuff
namespace import tasks::*

console show

thread::send [tset _taskmonitor tid] {wm geom .top 1095x404+720+570}
thread::send [tset sendcmd tid]      {wm geom . 1000x69+720+4        ; wm geom .taskdebug 1025x350+720+148}

Tproc foo {args} {
    putz "my name is $::t_name args = $args"
    return 666
} -tasks 4 -import_tasks

tgroup foo -run {1 1} {2 2} {3 three} 4