Memoizing - Is it a good idea

Updated 2004-02-23 07:32:35

Philip Quaife, 23 Feb 2004.

I am presently working on a constraint package and investigated memoizing as a possibility for dynamic caching.
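For readers without the original page at hand, the general shape of the technique under discussion is roughly as follows. This is a sketch, not the original memoize proc (which used [info level] introspection so that procs needed no explicit key); the helper name and the explicit cache key here are illustrative assumptions:

```tcl
# Sketch of the idea: a helper that, on a cache hit, forces its calling
# proc to return the cached value; on a miss, the caller computes and
# stores the result itself.
proc check-memo {key} {
    if {[info exists ::memo($key)]} {
        # "-code return" makes the *calling* proc return this value.
        return -code return $::memo($key)
    }
}

proc fib x {
    check-memo fib:$x
    # Cache miss: compute, store, and return the result.
    return [set ::memo(fib:$x) \
        [expr {$x <= 1 ? 1 : [fib [expr {$x-1}]] + [fib [expr {$x-2}]]}]]
}
```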

When I first saw it, my response was: woo hoo, another example of the essence of scripting. My first thought was to look for some efficiency gains in the memoize proc. I thought that:

        catch {return -code return $memo($cmd)}

would be the most efficient way to dispatch if the cache item existed.

This is when I discovered that catch prevents return from working, even though no error has been generated. This may be the designed semantics of the catch command, but in this case it is definitely suboptimal.
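The behaviour is easy to demonstrate: catch intercepts the TCL_RETURN result code just as it intercepts errors, so a return inside catch does not terminate the enclosing proc.

```tcl
# catch traps the TCL_RETURN code, so execution falls through
# to the next statement instead of returning from the proc.
proc demo {} {
    catch {return "early"}
    return "fell through"
}
puts [demo]   ;# prints "fell through", not "early"
```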

So as a compromise I settled for the following:

        if {![catch {set x $::memo($x)}]} {
                return -code return $x
        }
        ...

This avoids a call to the comparatively expensive info exists command, and requires only one lookup of the array element.

The above tested out as twice as fast as the original. My thoughts then turned to specialising procs automatically, by inserting the memoizing code directly into the body of the proc. Something like:

        memoize fibq

That gives (for example):

 proc fibq x {
        if {![catch {set x $::memo($x)}]} {
                return $x
        }
        return [set ::memo($x) \
                [expr {$x <= 1 ? 1 : [fibq [expr {$x-1}]] + [fibq [expr {$x-2}]]}]]
 }

by automatically inserting the cache and saving the expression result.
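The memoize proc that would perform this rewriting is not shown above. One hedged sketch of how it might look, wrapping the proc rather than literally editing its body, and assuming exactly one argument (the names here are illustrative only):

```tcl
# Hypothetical sketch of [memoize name]: rename the proc and install a
# caching wrapper in its place. Because the original body still calls
# the proc by its public name, recursive calls also go through the cache.
proc memoize {name} {
    rename $name ${name}:orig
    proc $name x [string map [list @ORIG@ ${name}:orig] {
        if {![catch {set r $::memo($x)}]} {
            return $r
        }
        ;# a real version would include the proc name in the cache key
        return [set ::memo($x) [@ORIG@ $x]]
    }]
}
```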

At this point came the realisation that this is not the way you would code up this type of function anyway, which generated:

 proc fibq-nr x {
        if {$x <= 1} {return 1}   ;# base cases the loop below never reaches
        set i-2 1
        set i-1 2
        set new 2                 ;# result for x == 2
        for {set i 3} {$i <= $x} {incr i} {
                set new [expr {wide(${i-1}) + ${i-2}}]
                set i-2 ${i-1}
                set i-1 $new
        }
        set new
 }

As we can see, the above needs no specialising or memoization to achieve acceptable performance. It runs 30% faster than the fibq proc.
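Comparisons like the 30% figure are easy to re-check with Tcl's built-in time command, which reports the average microseconds per iteration; for example (the argument and iteration count here are arbitrary):

```tcl
# Note that fibq is memoized, so after the first call this mostly
# measures cache hits rather than the recursive computation.
puts "fibq:    [time {fibq 25} 1000]"
puts "fibq-nr: [time {fibq-nr 25} 1000]"
```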

So what have we learned?

  1. The fib function is the definitive example of the fallacy of functional programming.
  2. We all post trivial examples for brevity that don't show the full potential of the technique.
  3. Tcl is not fast with recursive algorithms.
  4. Memoization is a technique that makes up for bad programming practices.

Personally, I go for the last option. With well-designed routines there is no need to cache partial results in the way originally demonstrated.


Category Performance | Category Algorithm | Category Example