Philip Quaife 23 Feb 2004.
I am presently working on a constraint package and investigated memoizing as a possibility for dynamic caching.
When I first saw it, my response was: Woo Hoo, another example of the essence of scripting. My first look was at some efficiency gains in the memoize proc. I thought that:
    catch {return -code return $memo($cmd)}
would be the most efficient way to dispatch if the cache item existed.
This is when I discovered that catch prevents return from working, even though no error has been generated. This may be the designed semantics of the catch command, but for this case it is definitely suboptimal.
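To see the behaviour concretely, here is a minimal demonstration (my sketch, not from the original page). catch intercepts the TCL_RETURN completion code just as it intercepts errors, so the proc falls through instead of returning:

    proc demo {} {
        # catch intercepts the return's completion code (2, TCL_RETURN),
        # so the proc does not return here; execution continues below
        set code [catch {return -code return "cached"} msg]
        puts "caught code $code, result: $msg"
        return "fell through"
    }
    puts [demo]    ;# prints: caught code 2, result: cached -- then: fell through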
So as a compromise I settled for the following:
    if {![catch {set x $::memo($x)}]} {return -code return $x} ...
This avoids the expensive info exists command and requires only one lookup of the array variable.
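For context, here is how that dispatch might slot into a self-contained memoize proc, along the lines of the one on the Memoizing page (a sketch: the memoized proc calls memoize as its first command, and the info level guard lets the re-evaluation run the real body):

    proc memoize {} {
        global memo
        set cmd [info level -1]    ;# the caller's own invocation, e.g. "fibq 30"
        # when memoize itself re-evaluates $cmd below, skip the dispatch
        # so the proc body actually runs
        if {[info level] > 2 && [lindex [info level -2] 0] eq "memoize"} return
        # catch-based dispatch: one array lookup instead of [info exists]
        if {![catch {set x $memo($cmd)}]} {return -code return $x}
        set memo($cmd) [eval $cmd]
        return -code return $memo($cmd)
    }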
The above tested out at twice the speed of the original. My thoughts then turned to specialising the code: having memoize insert the caching code directly into the proc body. Something like:
    memoize fibq
That gives (for example):
    proc fibq x {
        if {![catch {set x $::memo($x)}]} {return $x}
        set ::memo($x) \
            [expr {$x <= 1 ? 1 : [fibq [expr {$x-1}]] + [fibq [expr {$x-2}]]}]
    }
The transformation automatically inserts the cache lookup and saves the expression result.
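The page does not show the transforming memoize itself. As an illustration only, here is a wrapper-based sketch that achieves the same caching effect by renaming rather than rewriting the body (limited to one-argument procs; all names are mine):

    proc memoize {procName} {
        set formals [info args $procName]
        if {[llength $formals] != 1} {error "memoize: only 1-arg procs supported"}
        set arg [lindex $formals 0]
        # keep the original under a hidden name; recursive calls inside it
        # resolve to the caching wrapper defined below
        rename $procName __memo_$procName
        proc $procName $arg [string map [list %P $procName %A $arg] {
            if {![catch {set r $::memo(%P,$%A)}]} {return $r}
            set ::memo(%P,$%A) [__memo_%P $%A]
        }]
    }

    # usage: define a plain recursive proc, then wrap it
    proc fib x {expr {$x <= 1 ? 1 : [fib [expr {$x-1}]] + [fib [expr {$x-2}]]}}
    memoize fib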
At this point came the realisation that this is not the way you would code up this type of function, which led to:
    proc fibq-nr x {
        set i-2 1
        set i-1 2
        for {set i 3} {$i <= $x} {incr i} {
            set new [expr {wide(${i-1}) + ${i-2}}]
            set i-2 ${i-1}
            set i-1 $new
        }
        set new
    }
As we see, the above needs no specialising or memoization to achieve acceptable performance. It runs 30% faster than the fibq proc.
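A quick cross-check and rough timing comparison (my addition; numbers vary by machine). Note that fibq-nr is only valid for x >= 3, since new is never set when the loop body does not run:

    for {set n 3} {$n <= 20} {incr n} {
        if {[fibq $n] != [fibq-nr $n]} {error "mismatch at n=$n"}
    }
    puts "fibq:    [time {fibq 25} 1000]"
    puts "fibq-nr: [time {fibq-nr 25} 1000]"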
So what have we learned?
Personally, I go for the last option. With well-designed routines there is no need to cache partial results in the way originally demonstrated.
KPV My take on the above is instead that a better algorithm is often the best optimization. If I wanted the utmost speed to compute Fibonacci numbers, I'd go with the following code (taken from Fibonacci numbers):
    proc Fib4 n {expr {round(pow(1.61803398875, $n) / 2.2360679775)}}
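One caveat worth adding (my note): this is Binet's formula, round(phi^n/sqrt(5)), so it is only exact while the accumulated floating-point error stays below 0.5. A quick probe against the exact iterative version shows where the truncated constants start to drift (note the indexing is shifted by one: Fib4(n+1) == fibq-nr(n)):

    for {set n 3} {$n <= 70} {incr n} {
        if {[Fib4 [expr {$n+1}]] != [fibq-nr $n]} {
            puts "Fib4 drifts from the exact value at n=[expr {$n+1}]"
            break
        }
    }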
WHD -- Ah! But you're assuming that the only value of caching fib(n) is to aid in computing fib(n+1). If you need fib(n) many times in one program, then caching the result rather than recomputing it each time still makes sense.
In other words, memoizing (or result caching) might still be useful even in your non-recursive version. - RS: See for instance St John's chapel, where 3D points are projected to 2D; as polygons very often share points, caching the projection result brings quite a speed gain...
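A sketch of the kind of cache RS describes (the names, including the project routine, are stand-ins of mine): each distinct 3D point is projected once, and polygons sharing a vertex reuse the stored 2D result.

    proc projectCached {x y z} {
        # one array lookup via the catch dispatch discussed above;
        # [project] is a hypothetical 3D-to-2D projection routine
        if {![catch {set p $::proj2d($x,$y,$z)}]} {return $p}
        set ::proj2d($x,$y,$z) [project $x $y $z]
    }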
AK: Another place where I have seen memoization recently is the packrat parsing technique (http://www.pdos.lcs.mit.edu/~baford/packrat/ ). It is derived from regular top-down parsing with backtracking, which can be exponential-time because everything is reparsed after backtracking. By memoizing all test results used during parsing, any further tests at the same location after some backtracking do not need to actually parse things again, and the technique becomes linear-time in the length of the input string.
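In outline (my sketch, not code from the paper), every parse test is keyed by grammar rule and input position, so work is never repeated after backtracking:

    # parse$rule stands for hypothetical per-rule procs that try one
    # grammar rule at position $pos and return a match result
    proc tryRule {rule pos} {
        if {![catch {set r $::parsed($rule,$pos)}]} {return $r}
        set ::parsed($rule,$pos) [parse$rule $pos]
    }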
PWQ Thanks for the examples. I could elaborate my position on both of those, but instead I will leave you with this thought.
Real world problems involve much larger computation spaces than most examples. A real 3D application would more likely have 300,000 vertices; a graph search space could contain 30,000 nodes and millions of arcs. This means that the caches could occupy 20-100 MB or more. While this is still less than the Java VM requires just to print "Hello World!", it can impact performance. If implemented naively (as is the case with memoization), they could consume all available memory before the processing has completed.
When caching is applied to these situations, you will need to consider how to bound the memory the cache itself consumes.
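For example, a size-capped cache with simple FIFO eviction (my sketch; a real application might prefer LRU, or weigh entries by size):

    proc memoPut {key val {limit 100000}} {
        global memo memoKeys
        if {![info exists memoKeys]} {set memoKeys {}}
        if {![info exists memo($key)]} {
            if {[llength $memoKeys] >= $limit} {
                # evict the oldest entry so memory use stays bounded
                unset memo([lindex $memoKeys 0])
                set memoKeys [lrange $memoKeys 1 end]
            }
            lappend memoKeys $key
        }
        set memo($key) $val
        return $val
    }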
RHS Real world examples also go to the other extreme. While there is always a memory/speed tradeoff, the cost in memory can be small for a huge gain in speed. I recently had a real life example where my dataset had hundreds of thousands of elements. When I added caching, it turned out to have only ~650 distinct values. By memoizing the results, I was able to realize huge speed gains for that portion of the program with relatively little memory cost. (Note: the memory cost may even have been a memory gain, had I been able to use a dict for caching instead of an array, due to object re-use.)
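For illustration, a dict-based variant of such a cache (Tcl 8.5; lookup and compute are placeholder names of mine; dict values are Tcl_Obj references, which relates to the object re-use RHS mentions):

    proc lookup {key} {
        global cache
        if {![info exists cache]} {set cache [dict create]}
        if {![dict exists $cache $key]} {
            dict set cache $key [compute $key]   ;# compute is hypothetical
        }
        return [dict get $cache $key]
    }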
NEM One area from my experience where caching (in a similar manner to memoizing) is definitely a big win is in large-scale pattern matching algorithms, as used in Expert Systems (and dozens of other application areas). All the main algorithms (Rete, Treat, Leaps etc) do some sort of caching to a greater or lesser extent. If you have a large number of patterns, and a large number of objects to match (i.e. a combinatorial problem), but only a few changes per cycle, it really makes sense to cache partial matches. Designing better algorithms will only get you so far: if your computation is fundamentally expensive, then you need to think about caching/memoization.
AMG: Philip, Tcl recursion isn't fast? Look at [ftw_8] in Directory recursion. It outperforms all the other depth-first traversal procedures.