One caching strategy is to retain the X most-recently-referenced items and discard any additional items. The examples on this page show a way of implementing this with a simple proc: a [TclOO] solution would encapsulate the MRU operation a little more tidily, and another approach might use a write [trace] to limit the MRU's size. These alternatives are left as an exercise for the interested reader (.. who might want to add them to this page!); a rough [TclOO] sketch at the end of the page offers one starting point.

** MRU List **

An MRU list is straightforward .. the only decision is whether you want items stored in chronological or reverse-chronological order.

======
# most-recent-first
proc mrulist {listName limit args} {
    upvar 1 $listName list
    incr limit -1       ;# because list indexing counts from 0
    set list [lrange [linsert $list 0 {*}$args] 0 $limit]
}

# most-recent-last
proc mrulist {listName limit args} {
    upvar 1 $listName list
    incr limit -1       ;# because list indexing counts from 0
    lappend list {*}$args
    set list [lrange $list end-$limit end]
}

# Example:
# (gets returns the length of the line read, or -1 at end of file)
while {[gets $chan line] >= 0} {
    mrulist history 10 $line
}
puts "Last 10 entries:"
puts [join $history \n]
======

** MRU Dict **

In [https://groups.google.com/d/msg/comp.lang.tcl/_3KpgMHU6jM/EOueixG0NN8J%|%"high performance" MRU-list ...], [comp.lang.tcl], 2015-01-21, [Christian Gollwitzer] suggests an implementation of '''most-recently used''' that takes advantage of the order-preserving behaviour of [dict%|%dicts]. Here is the suggestion, slightly modified to be more general:

======
proc pushmru {name limit args} {
    upvar 1 $name dict
    foreach key $args {
        # re-adding a key moves it to the (most recent) end of the dict
        dict unset dict $key
        dict set dict $key {}
        # evict the oldest keys until the limit is respected
        while {[dict size $dict] > $limit} {
            dict unset dict [lindex [dict keys $dict] 0]
        }
    }
    return $dict
}
======

'''Example'''

======
set cache {}        ;# start with an empty dict
set i 0
while {[incr i] < 20} {
    pushmru cache 4 key$i
}
puts [dict keys $cache]     ;# key16 key17 key18 key19
======

<<categories>> algorithm | caching | data structure
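** MRU via TclOO **

As a starting point for the [TclOO] exercise mentioned at the top of the page, here is one possible sketch (the MRU class name and its push/items methods are invented for illustration; it assumes Tcl 8.6 or later). It keeps the list most-recent-last, like the second mrulist proc above.

======
oo::class create MRU {
    variable limit list

    constructor {size} {
        set limit $size
        set list {}
    }

    method push {args} {
        foreach item $args {
            # remove any existing copy, then append as the newest entry
            set idx [lsearch -exact $list $item]
            if {$idx >= 0} {
                set list [lreplace $list $idx $idx]
            }
            lappend list $item
        }
        # trim the oldest entries down to the configured limit
        set list [lrange $list end-[expr {$limit - 1}] end]
        return $list
    }

    method items {} {
        return $list
    }
}

# usage: a four-element MRU
set mru [MRU new 4]
foreach key {a b c d e c f} {
    $mru push $key
}
puts [$mru items]       ;# d e c f
======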