'''[Philip Quaife] 23 Feb 2004'''. I am presently working on a constraint package and investigated [memoizing] as a possibility for dynamic caching. When I first saw it, my response was ''Woo Hoo'', another example of the essence of scripting.

My first look was at some efficiency gains in the memoize proc. I thought that

   catch {return -code return $memo($cmd)}

would be the most efficient way to dispatch if the cached item existed. This is when I discovered that [catch] prevents return from working even though no error has been generated. This may be the designed semantics of the ''catch'' command, but in this case it is definitely suboptimal. So as a compromise I settled for the following:

   if { ![catch {set x $::memo($x)}] } {
       return -code return $x
   }
   ...

This avoids the expensive ''info exists'' command, and requires only one lookup of the array variable. The above tested out as twice as fast as the original. (A generalised sketch of this guard appears at the bottom of the page.)

My thoughts then turned to specialising the code automatically, by inserting the memoization directly into the proc body. Something like

   memoize fibq

gives (for example):

   proc fibq x {
       if { ![catch {set x $::memo($x)}] } { return $x }
       set ::memo($x) \
           [expr {$x <= 1 ? 1 : [fibq [expr {$x-1}]] + [fibq [expr {$x-2}]]}]
   }

''by automatically inserting the cache and saving the expression result''.

At this point came the realisation that this is not the way you would code up this type of function, which led to:

   proc fibq-nr x {
       set i-2 1   ;# fib(0)
       set i-1 1   ;# fib(1)
       for {set i 2} {$i <= $x} {incr i} {
           set new [expr {wide(${i-1}) + ${i-2}}]
           set i-2 ${i-1}
           set i-1 $new
       }
       set i-1
   }

As we see, the above needs no specialising or memoization to achieve acceptable performance; it runs 30% faster than the ''fibq'' proc. So what have we learned?

   1. That ''fib'' is the definitive example of the fallacy of functional programming.
   1. We all post trivial examples for brevity that don't show the full potential of the technique.
   1. Tcl is not fast with recursive algorithms.
   1. Memoization is a technique that makes up for bad programming practices.

Personally I go for the last option. With well-designed routines there is no need to cache partial results in the way originally demonstrated.

----

[WHD] -- Ah! But you're assuming that the only value of caching fib(n) is to aid in computing fib(n+1). If you need fib(n) many times in one program, then caching the result rather than recomputing it each time still makes sense. In other words, [memoizing] (or [result caching]) might still be useful even with your non-recursive version.

- [RS]: See for instance [St John's chapel], where 3D points are projected to 2D; as polygons very often share points, it brings quite a speed gain to cache the projection result...

[AK]: Another place where I have seen memoization recently is the ''packrat'' parsing technique (http://www.pdos.lcs.mit.edu/~baford/packrat/). It is derived from ordinary top-down parsing with backtracking, which can take exponential time because everything after a backtrack is reparsed. By memoizing all test results obtained during parsing, any later test at the same input position, after backtracking, need not actually parse again, and the technique becomes linear-time in the length of the input string.

----

[Category Performance] | [Category Algorithm] | [Category Example]
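----

Below is a minimal, self-contained sketch of the catch-based cache guard discussed above, packaged as a generic wrapper so that any command's result can be cached. It is an illustration only: the names ''memo::call'' and ''slow-square'' are invented here, and the trade-off between [catch] and ''info exists'' is best measured with [time] on your own workload.

   namespace eval memo {
       variable cache
       proc call {args} {
           variable cache
           # One catch-guarded read: avoids [info exists] followed
           # by a second lookup when the entry is already cached.
           if { ![catch {set r $cache($args)}] } { return $r }
           # Cache miss: run the command in the caller's scope and
           # remember the result, keyed by the full command line.
           set cache($args) [uplevel 1 $args]
       }
   }

Used like this:

   proc slow-square x { after 100; expr {$x * $x} }
   memo::call slow-square 7   ;# computed, then cached
   memo::call slow-square 7   ;# answered from the cache

As [WHD] notes above, the cache pays off whenever the same call recurs, whether or not the underlying command is recursive.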