Traces

Summary

How to use traces in Tcl

Description

Read-Only Variable

DKF: A simple way to get a read-only global variable is to use a suitable trace:

set ROglobal someValue
trace add variable ROglobal write "[list set ::ROglobal $ROglobal];error read-only;#"

Notice the time- and hair-saving tricks:

  1. Using [list] to construct a guaranteed safe command for later execution.
  2. Using colon notation to force a reference to a global variable, whatever the context.
  3. Inserting the global name of the variable in the trace command, instead of working with its local referent.
  4. Using a trailing ";#" to trim the undesirable extra arguments from the trace command.
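
A quick way to check the behaviour (a sketch; the exact error wording is just Tcl's standard wrapping of errors raised by write traces):

# The write is rejected and the old value survives
catch {set ROglobal anotherValue} msg
puts $msg          ;# something like: can't set "ROglobal": read-only
puts $ROglobal     ;# someValue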

RWT: This is great code, but hard to find if you're thinking of a constant or constants or C's #define. (There! Now the Wiki search will find this page!)

Variable aliases

JCG: I remember having had trouble when the trace was set on ::var, yet usage was based on global var, or the other way around. Does anyone know what the exact behavior is when mixing these two approaches?

DKF: The thing to remember is that traces are always called in the context of the operation that is performing the read, write or unset, and the standard parameters passed in from the Tcl system refer to the way in which the variable was referred to in that context. Whenever you are setting up a trace on a variable that exists independently of the stack (i.e., a global or namespace variable; the two are really the same thing anyway) it is much less hassle to pass a fixed name for that variable in as one of your own parameters to the trace. (You can sort-of form closures in Tcl using appropriate scripts, and some of the more cunning lambda-evaluation schemes I've seen have been based on this.) The only time this is not going to work (in standard Tcl) is when you are dealing with variables that only exist on the call stack: procedure locals. With these, you have to carefully build an [upvar] reference to the variable and then use that name to figure out what is going on. This is not easy, but it has been adequately covered in many books, like BOOK Tcl and the Tk Toolkit, which is where I learnt all this stuff (if I can remember that far back!)

JC: Donal, you might be talking about the following:

trace add variable ::var write monitor 
proc monitor {name args} {
        upvar $name value
        puts "'[info level 1]' changes '$name' to '$value'"
}

### Change 1
proc change1 {} {
        set ::var bar
}
change1

### Change 2
proc change2 {} {
        global var
        set var foo
}
change2

### Change 3
proc change3 {} {
        upvar var local
        set local bar
}
change3

The above code produces the following output:

'change1' changes '::var' to 'bar'
'change2' changes 'var' to 'foo'
'change3' changes 'local' to 'bar'

The problem is that there is no way of knowing which variable is actually being modified; all one has access to is the alias name used in the set command that triggers the trace. For example, in monitor the parameter name is equal to local. How do you resolve that name back to the original ::var variable?

DKF: There is no standard mechanism for doing this. So I instead pass a globally-valid (fully-qualified) variable name as part of the trace script that I supply. That way, I don't care about how the variable was accessed, since I can always access it myself using a clearly defined route. And, because of the restrictions involved with the use of variables and Tk, I find sticking to only putting traces on globally-valid names to not be a problem.

Note that [namespace current] is very useful when you want to put in code to automatically figure out what the globally-valid variable name actually is.
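
For example, something along these lines (a sketch; the namespace, variable, and proc names are just for illustration) bakes the globally-valid name into the trace script at definition time:

namespace eval ::demo {
    variable counter 0
    proc watch {qualName args} {
        # qualName was fixed when the trace was created, so it works
        # no matter how the write was performed
        upvar #0 $qualName v
        puts "$qualName is now $v"
    }
    trace add variable counter write \
        [list [namespace current]::watch [namespace current]::counter]
}
set ::demo::counter 1   ;# prints: ::demo::counter is now 1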

ulis: If I understand 'there is no way of knowing which variable is actually being modified' correctly, I disagree. The manual says:

''When the trace triggers, three arguments are appended to command so that the actual command is as follows: command name1 name2 op.

Name1 and name2 give the name(s) for the variable being accessed: if the variable is a scalar then name1 gives the variable's name and name2 is an empty string; if the variable is an array element then name1 gives the name of the array and name2 gives the index into the array; if an entire array is being deleted and the trace was registered on the overall array, rather than a single element, then name1 gives the array name and name2 is an empty string. Name1 and name2 are not necessarily the same as the name used in the trace variable command: the upvar command allows a procedure to reference a variable under a different name. Op indicates what operation is being performed on the variable, and is one of read, write, or unset as defined above.''

You know exactly under which name the variable is modified and you can use this name in combination with upvar.

UK: Ah, I think you misunderstand. The issue lies in another detail:

trace add variable ::myarray write mytraceproc
proc mytraceproc {_a e op} {
    switch $e {
        limits,max {
        } limits,min {
        } current {
        } name {
        } unit {
        }
    }
}

If you now do something like

upvar ::myarray(name) name
set name "Temperature in Reactor"

then all your nice planning is broken for good.

PYK 2012-12: So don't do that :)
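
To see concretely what the callback receives in each case, here is a small sketch (names are illustrative; the comments show the kind of arguments the manual describes, with the aliased write reporting the alias name and an empty element name):

proc showargs {name1 name2 op} { puts [list $name1 $name2 $op] }
trace add variable ::myarray write showargs

set ::myarray(name) "direct"    ;# reports something like: ::myarray name write

proc poke {} {
    upvar ::myarray(name) name
    set name "via alias"        ;# reports something like: name {} write
}
poke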


DKF: If you want to set up a trace on an expression, you can implement it in the style of the following example:

Suppose you want some callback (which I'll imaginatively call callback here) to be invoked whenever a pair of variables satisfies some complex condition (either $VAR_I>0 && $VAR_B=="true" or $VAR_I>=10 && $VAR_B=="false"). You would do it like this:

trace add variable VAR_I write doTest
trace add variable VAR_B write doTest
proc doTest args {
    upvar #0 VAR_I i VAR_B b
    if {
        ($i>0 && [string equal $b "true"]) || 
        ($i>=10 && [string equal $b "false"])
    } then {
        uplevel #0 callback   ;# the condition holds, so run the callback
    }
}
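
A possible way to exercise this (a sketch; callback is assumed to be a zero-argument proc, and both variables should hold values before either trace fires, otherwise the [upvar]'d reads inside doTest hit a nonexistent variable):

proc callback {} { puts "condition met" }

# initialize in this order; && short-circuits, so $b is never read
# while VAR_B does not exist yet
set VAR_I 0      ;# doTest fires, condition not met
set VAR_B false  ;# doTest fires, condition not met

set VAR_B true   ;# still nothing: 0 > 0 fails
set VAR_I 5      ;# prints "condition met": 5 > 0 and $VAR_B is "true"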

You can't extend this to a general watch of an expression, since an expression can include a call to an arbitrary command, and working out exactly which variables such a command accesses is tough. Even if we restricted ourselves to procedures, we would still need to solve the Halting Problem (a famously insoluble problem in Computer Science, with a proof of its insolubility). Since you can usually tell straight off which variables in an expression actually matter, you can shortcut all of that and just tell Tcl exactly what to watch yourself...


DKF: Come on people, this page isn't just "Ask Dr. Donal" - fill in more of this stuff yourselves!


KCH: Here is an idea I have been batting around for using traces to create a kind of super-set of events. Yes, Tcl gives you the ability to define virtual events, but they must be defined in terms of existing X-type events. I was looking for the ability to fire off an event that is completely unrelated to any existing Tcl events, that says, for example, "you have a new meeting scheduled and the details are attached". Something like the following seems like it will probably work, but my gut tells me there is much room for refinement.

The basic idea is to use a namespace to wrap the "Uber-events", use a set of namespace variables with write traces attached to track the events and the "details" attached to an event, and namespace procs to register for, trigger, or delete the events.

namespace eval ::UberEvents {
    namespace export RegisterForEvent \
                     DeleteEvent      \
                     GetEventDetails  \
                     TriggerEvent

    variable Events
    set Events(List) {}

    #######################################################################
    #
    # RegisterForEvent
    #
    # Registers for the UberEvent named $EventName, so that Command is
    # called when the event is "triggered."
    #
    #######################################################################
    proc RegisterForEvent {EventName Command} {
        variable Events
        set ID [trace add variable ::UberEvents::$EventName {write} "[list eval $Command];#"]
        lappend Events(List) [list $EventName $ID $Command]
        return $ID
    }

    #######################################################################
    #
    # DeleteEvent
    #
    #######################################################################
    proc DeleteEvent {EventName} {
        unset ::UberEvents::$EventName
    }

    #######################################################################
    #
    # GetEventDetails
    #
    # Fetches the details of the event.
    #
    #######################################################################
    proc GetEventDetails {EventName} {
        upvar #0 ::UberEvents::$EventName Details
        return $Details
    }

    #######################################################################
    #
    # TriggerEvent
    #
    # Triggers the UberEvent named $EventName, giving it $Data
    #
    #######################################################################
    proc TriggerEvent {EventName {Data ""}} {
        set ::UberEvents::$EventName $Data
    }
}

Then you can import the namespace and use

RegisterForEvent NewMeeting MyMeetingCallback

to set up to be informed whenever the NewMeeting event triggers, and

TriggerEvent NewMeeting "Details Of The Meeting"

will trigger the NewMeeting event, and everyone registered for that event will be notified. GetEventDetails will return the data associated with the event.

DeleteEvent

is used to remove the event and all associated callbacks.
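
Putting it together, a minimal end-to-end sketch (the callback name and message are just illustrative; the callback pulls the event data itself via GetEventDetails when notified):

namespace import ::UberEvents::*

proc MyMeetingCallback {} {
    puts "New meeting: [GetEventDetails NewMeeting]"
}

RegisterForEvent NewMeeting MyMeetingCallback
TriggerEvent NewMeeting "Details Of The Meeting"
# -> New meeting: Details Of The Meeting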

This can easily be extended to have a common "event handler" which is set up to be called periodically, triggering the event when the handler detects it has occurred. The event details could be passed automatically to each of the callbacks, similar to the way that Tcl handles events, but then you would probably have cases where you would end up putting ";#" at the end of your callback the same way as was demonstrated in code at the beginning of this page. Making the extra data optional but retrievable is imho preferable....

Feedback?

jmn: While using trace here seems fine, I'm not sure exactly what it buys you. You have the command stored in a list, why not just uplevel #0 it from within TriggerEvent? (or use 'after 0' if TriggerEvent needs to return first...(?))

KCH: I want to support multiple "sinks" for an event, just as you can have multiple traces running on a variable simultaneously. I created a version which uses the following for TriggerEvent. The only differences between V1 and V2 are that V2 doesn't create the traces, and that V2 uses a foreach loop to find and execute each matching registered callback.

proc TriggerEvent {EventName {Data ""}} {
    set ::UberEvents::$EventName $Data

    foreach Event $::UberEvents::Events(List) {
        if {$EventName eq [lindex $Event 0]} {
            eval [lindex $Event 2]
        }
    }
}

Note that I keep the set of the variable so that the commands can still access the data passed to TriggerEvent. The timing (Tcl/Tk 8.4.6.0, ActiveState build, WinXP, 2.8GHz, 1G RAM) shows the trace implementation to be faster, though I haven't done enough to figure out whether the relationship is linear or something else. I suspect there may be something faster than eval, but for now the simplicity makes trace more attractive to me. The timing I got follows:

UberEvents V1
-------------
1   628
2  1263
3  1871
4  2494
5  3124
6  3757
7  4420
8  5009
9  5622
10 6149

UberEvents V2
-------------
1   647
2  1268
3  1994
4  2524
5  3288
6  3832
7  4525
8  5371
9  5779
10 6269

I might be able to get a speedup using a nested foreach to extract the sublist elements, but....
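
For reference, a harness along these lines (entirely hypothetical: a do-nothing callback named noop, with counts and iteration numbers picked arbitrarily, and the UberEvents commands imported as above) is one way to collect such numbers with [time]:

proc noop args {}

foreach N {1 2 3 4 5 6 7 8 9 10} {
    # register N callbacks against a fresh event, then time the triggers
    for {set i 0} {$i < $N} {incr i} {
        RegisterForEvent Bench$N noop
    }
    puts "$N [lindex [time {TriggerEvent Bench$N data} 10000] 0]"
}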


Ken: Just wondering: trace lets us monitor a variable and, on a write, execute commands. But the one I did only executes once. Is there any reason why that is so? If not, how must I code it so that it executes multiple times, whenever the variable is written?

Donald Arseneau: It should work each time. There must be a problem with your specific code. See my next addition for an example, though not using exec.


Donald Arseneau: Here is a toy example for synchronizing variables with external data. This method is most useful for interacting with a database server, or with instrumentation, but for this sample let's synchronize with file modification times.

proc trace_mtime { var file op } {
    upvar $var v
    switch $op {
        read {
            # read the variable, so synchronize to the file's mtime
            set v($file) [file mtime $file]
        }
        write {
            # Set the variable means to set the file's mtime
            if { ![file exists $file] } {
                # No file, create an empty one
                close [open $file w]
            }
            # Set its modification time
            file mtime $file $v($file)
        }
    }
}

trace add variable mtime {read write} trace_mtime

Report actual file modification time:

puts "File foo.bar was modified at [clock format $mtime(foo.bar)]"

Pretend file log.log was modified yesterday:

set mtime(log.log) [clock scan yesterday]

Note that there is very little error checking. Reading the mtime for a non-existent file, or setting an invalid mtime value, results in reasonable error messages:

can't set "mtime(log.log)": expected integer but got "foobar"
can't read "mtime(dddd)": could not read "dddd": no such file or directory

cstephan: A simple example of how to use trace to provide event processing:

# some implementations with 'interp -safe' utilize tcl_trace inside the interp.
set traceCommand [info command *trace]

namespace eval eventTest {

    global traceCommand
    variable processComplete {}

    namespace export whenDone

    proc whenDone {procName} {
        variable processComplete
        global traceCommand

        set callingNamespace [uplevel {namespace current}]

        $traceCommand variable processComplete w \
            [join [list $callingNamespace :: $procName] {}]
    }

    proc finally {name1 name2 op} {
        puts "proc finally() executed"
        if {[catch {puts "op:$op -- $name1($name2): [subst $[list $name1]($name2)]"}]} {
            # IS SCALAR:
            puts "op:$op - $name1:[subst $[list $name1]]"
        }
        puts {}
    }

    $traceCommand variable ::eventTest::processComplete w ::eventTest::finally

}

proc doFirst {name1 name2 op} {
  puts "proc doFirst() executed"
  # TRY RETURNING INDEX NAME2 OF ARRAY NAME1
  if {[catch {puts "op:$op -- $name1($name2): [subst $[list $name1]($name2)]"}]} {        
    # NAME1 IS NOT AN ARRAY, OUTPUT NAME1 AS SCALAR
    puts "op:$op - $name1:[subst $[list $name1]]"
  }
  puts {}
}
proc doSecond {name1 name2 op} {
  puts "proc doSecond() executed"
  # TRY RETURNING INDEX NAME2 OF ARRAY NAME1
  if {[catch {puts "op:$op -- $name1($name2): [subst $[list $name1]($name2)]"}]} {        
    # NAME1 IS NOT AN ARRAY, OUTPUT NAME1 AS SCALAR
    puts "op:$op - $name1:[subst $[list $name1]]"
  }
  puts {}
}
proc doThird {name1 name2 op} {
  puts "proc doThird() executed"
  # TRY RETURNING INDEX NAME2 OF ARRAY NAME1
  if {[catch {puts "op:$op -- $name1($name2): [subst $[list $name1]($name2)]"}]} {        
    # NAME1 IS NOT AN ARRAY, OUTPUT NAME1 AS SCALAR
    puts "op:$op - $name1:[subst $[list $name1]]"
  }
  puts {}
}

# register test events 
#(enclosed this in a proc because from tclsh they halt paste)
proc regEvents {} {
  ::eventTest::whenDone ::doFirst
  ::eventTest::whenDone ::doSecond
  ::eventTest::whenDone ::doThird
}
regEvents

Watch procedures get notified of the transaction against the processComplete variable:

% set ::eventTest::processComplete {true}

proc doThird() executed
op:w - ::eventTest::processComplete:true

proc doSecond() executed
op:w - ::eventTest::processComplete:true

proc doFirst() executed
op:w - ::eventTest::processComplete:true

proc finally() executed
op:w - ::eventTest::processComplete:true

true

glennj: See also http://www.rosettacode.org/wiki/Defining_Primitive_Data_Types#Tcl for a demonstration of creating a "primitive data type" using traces.


See Also

trace