Arjen Markus (10 may 2004) Playing with mathematical expressions, I thought I'd make a small procedure to deal with mathematical *limits*:

```
         sin(x)
   lim   ------  =  1
   x->0    x
```

As I have never seen a textbook handling that kind of thing, here is a home-made **numerical** method. It has not been tested extensively and the parameter values are not the result of a thorough analysis - quite the opposite: I just figured they would be nice values ...

GWM 14 Oct 04 Of course a numerical method is very welcome, but you may only need L'Hôpital's rule to produce a better formula. [L1 ]
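For the example above, a single application of L'Hôpital's rule already settles it: numerator and denominator both vanish at x = 0, so differentiating each once gives the limit directly:

```
   lim   sin(x)/x  =  lim   cos(x)/1  =  cos(0)  =  1
   x->0               x->0
```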

```tcl
# limit.tcl --
#     Tentative method to determine the limit of some mathematical
#     expression with one variable nearing a limit value
#
# Note:
#     This is very much a do-it-yourself method - I have
#     never seen a treatise on the subject of numerically
#     calculating limits.
#
# General idea:
#     We can use this in the script for evaluating mathematical
#     expressions:
#     math {
#         y = lim x->0 sin(x)/x
#         z = lim x->inf (sqrt(x+2)-sqrt(x+1))/(sqrt(x+1)-sqrt(x))
#     }
#
#     Other "special" forms:
#         y = deriv x=0 log(1+x*x)
#         y = int x=0 to 1 sin(x)
#         y = sum x=1 to 10 1.0/pow(x,2)
#
namespace eval ::math::calculus {
    namespace export limit
    namespace eval v {} ;# Private namespace
}

# limit --
#     Determine the limit of an expression as the variable nears
#     the given limit
#
# Arguments:
#     varname       Name of the variable to reach the limit
#     limval        The limit value ("inf" is infinite)
#     expression    The expression to evaluate (make sure it
#                   is enclosed in braces!)
#     level         Optional (private use only): number of
#                   levels to go up
# Result:
#     Estimated limit
# Note:
#     The expression is evaluated in the caller's scope,
#     so that a call like:
#         limit x 0 {sin($a*$x)/$x}
#     with "a" as a "free" parameter in the calling procedure
#
# Todo:
#     It is not yet dealing with the possibility that the
#     numerical evaluation may fail due to the large value
#     of an argument (say: exp(1.0e30))
#
proc ::math::calculus::limit {varname limval expression {level 1}} {
    #
    # Prepare the expression
    #
    set expression [string map \
        [list \$$varname \$::math::calculus::v::$varname] $expression]

    #
    # It is easiest to just evaluate the expression
    # in the caller's scope and do all logic here
    #
    set dvar   0.25
    set factor 0.5
    set base   1.0
    set limv2  $limval

    if { $limval == 0.0 } {
        set limv2 1.0
        set base  0.0
    }
    if { $limval == "inf" } {
        set limv2  1.0   ;# start carefully
        set factor 16.0
    }
    if { $limval == "-inf" } {
        set limv2  -1.0  ;# start carefully
        set factor 16.0
    }

    set count   0
    set fvalues {}
    set vvalues {}
    while { $count < 10 } {
        set v::$varname [expr {$limv2*($base+$dvar)}]
        if { [catch {
            lappend fvalues [uplevel $level [list expr $expression]]
            lappend vvalues [set v::$varname]
        }] } {
            break
        }
        incr count
        set dvar [expr {$dvar*$factor}]
    }

    #
    # Next step:
    # Estimate the limiting value via extrapolation
    # (use a quadratic polynomial: y = a + b x + c x**2)
    #
    foreach {x1 x2 x3} [lrange $vvalues end-2 end] {break}
    foreach {y1 y2 y3} [lrange $fvalues end-2 end] {break}

    if { $limval == "inf" || $limval == "-inf" } {
        set x1 [expr {1.0/$x1}]
        set x2 [expr {1.0/$x2}]
        set x3 [expr {1.0/$x3}]
        set limval 0.0
    }

    set yy [expr {($y1-$y2)*($x1-$x3)-($y1-$y3)*($x1-$x2)}]
    set c  [expr {$yy/(($x1-$x2)*($x1-$x3)*($x2-$x3))}]
    set b  [expr {(($y1-$y2)-$c*($x1*$x1-$x2*$x2))/($x1-$x2)}]
    set a  [expr {$y1-$b*$x1-$c*$x1*$x1}]

    return [expr {$a+$b*$limval+$c*$limval*$limval}]
}

#
# Test
#
namespace import ::math::calculus::*

set a 2.0
puts "[limit x 0 {sin($a*$x)/$x}] (expected: $a)"
puts "[limit x inf {atan($x)}] (expected: [expr {acos(-1)/2.0}])"
puts "[limit x 0 {log($x)*$x}] (expected: 0.0)"
puts "[limit x 1 {log($x)/($x-1)}] (expected: 1.0)"
set b 10
puts "[limit x inf {(sqrt($x+$b)-sqrt($x+1))/(sqrt($x+1)-sqrt($x))}] (expected: [expr {$b-1.0}])"
puts "Likely failure:"
puts "[limit x inf {exp($x)/(exp($x)-1)}]"
```
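The quadratic extrapolation at the heart of the proc is just a divided-difference fit. A small Python transcription of that step (illustrative, not part of the original script; the function name is mine) may make the algebra easier to follow:

```python
import math

def quad_extrapolate(xs, ys, x_target):
    """Fit y = a + b*x + c*x**2 through three samples and evaluate
    the polynomial at x_target - the same formulas as in the Tcl proc."""
    x1, x2, x3 = xs
    y1, y2, y3 = ys
    yy = (y1 - y2) * (x1 - x3) - (y1 - y3) * (x1 - x2)
    c = yy / ((x1 - x2) * (x1 - x3) * (x2 - x3))
    b = ((y1 - y2) - c * (x1 * x1 - x2 * x2)) / (x1 - x2)
    a = y1 - b * x1 - c * x1 * x1
    return a + b * x_target + c * x_target * x_target

# Mimic the last three samples the limit proc would take for sin(2x)/x
xs = [0.25 * 0.5 ** k for k in (7, 8, 9)]
ys = [math.sin(2 * x) / x for x in xs]
print(quad_extrapolate(xs, ys, 0.0))   # compare with the exact limit 2
```

Because the fit is exact for quadratics, the extrapolation error comes only from the higher-order terms of the function near the limit point.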

KBK 10 May 2004 - There is a wealth of literature on the topic. If you look on a search engine for "Richardson extrapolation," the first few results are likely to be highly relevant.

The following code (which isn't quite as efficient as it should be, but if you want efficiency, Tcl isn't your language) is similar to AM's example above, although its interface is a little bit different. It uses Richardson extrapolation to estimate the value of a function *f(x)* as *x* approaches zero from above. It's easy to make it use arbitrary limits by making simple changes of variable; the test examples show how, since they copy AM's examples. Note that the last example, which is categorized as "likely failure" above, not only does not fail but returns nearly the requested eight digits of significance. Only the *f3* example requires more than ten function evaluations to converge to the requested precision.

I'll let someone else clean up the API for ::math::calculus. I'm too lazy today.

AM I must confess not to be familiar with this method: I did mean it when I said I had not come across the topic (just finished two books on numerical methods, from two different backgrounds and neither mentioned Richardson's method).

One thing though: approaching zero is much easier than approaching any other number if you are dealing with floating-point arithmetic. The reason is of course the accursed machine epsilon: 1+eps/2 is indistinguishable from 1. With 10 evaluations this is probably not really a problem... but if you are serious about numerical analysis, then you have to consider the oddest possibilities :)
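The epsilon remark is easy to demonstrate; in IEEE double precision, adding half of the machine epsilon to 1.0 is simply rounded away:

```python
import sys

eps = sys.float_info.epsilon       # 2**-52 for IEEE doubles
print(1.0 + eps / 2 == 1.0)        # eps/2 vanishes next to 1.0
print(1.0 + eps == 1.0)            # eps itself does not
```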

I do think it is a worthwhile addition to ::math::calculus!

```tcl
set tcl_precision 17

# richardson --
#
#     Use Richardson extrapolation to estimate the value of
#     a function f(0) by evaluating f(1), f(0.5), f(0.25), ...
#
# Parameters:
#     f       - Function to evaluate.
#     abstol  - Desired absolute error in the result
#     reltol  - Desired relative error in the result
#     maxiter - Maximum number of times to evaluate f
#
# Results:
#     Returns an estimate of the limit of f(x) as x approaches zero
#     from above.
#
# Side effects:
#     None.

proc richardson {f {abstol 1e-08} {reltol 1e-08} {maxiter 100}} {
    set l [list]
    set x 1.0
    for { set i 0 } { $i < $maxiter } { incr i } {
        set y [$f $x]
        set l2 [list $y]
        set pow2 1.
        foreach oldy $l {
            set pow2 [expr { $pow2 * 2. }]
            set y [expr { $y + ( $y - $oldy ) / ( $pow2 - 1. ) }]
            lappend l2 $y
        }
        if { $i >= 5 } {
            set err [expr { abs( $y - [lindex $l2 end-1] ) }]
            if { $err < $abstol } break
            if { $err < abs($y) * $reltol } break
        }
        set l $l2
        set x [expr { $x / 2. }]
    }
    return $y
}

# Test cases

proc f1 { x } { expr { sin( 2. * $x ) / $x } }
puts "[richardson f1] should be 2."

proc f2 { x } { expr { atan( 1.0 / $x ) } }
puts "[richardson f2] should be [expr { acos(-1) / 2. }]"

# The following function causes the extrapolation to misestimate the
# error term (returning its square root instead of the term itself).
proc f3 { x } { expr { log($x) * $x } }
puts "[richardson f3 1e-20 1e-20] should be 0.0"

proc f4 { x } { expr { log( 1. + $x ) / $x } }
puts "[richardson f4] should be 1.0"

proc f5 { t } {
    set x [expr { 1.0 / $t }]
    set b 10
    return [expr { ( sqrt( $x + $b ) - sqrt( $x + 1 ) )
                   / ( sqrt( $x + 1 ) - sqrt( $x ) ) }]
}
puts "[richardson f5] should be 9.0"

proc f6 { x } { expr { exp( 1. / $x ) / ( exp( 1. / $x ) - 1. ) } }
puts "[richardson f6] should be 1.0"
```

TV *(May 11, 2004)* Ehm, ehr, one wants to 'try' the function near the extremum, and see what happens, right? I mean, Richardson or not, one would sit down with a calculator of sufficient numerical accuracy and suspected reliability and try to compute function values near the extremum and see if they make sense, converge to some number, or such.

For the general wiki reader this is maybe something to be pointed at: I'd say the theoretical description, or simply the method, is a bit shallow, though it suggests to casual readers (as I was) a lot of heavily named math, while in fact not *that* much happens...

Not that it isn't fun to try function extrema and figure out where the machine math goes wrong, or do a Taylor development (a local estimate of a function through an n-th order approximation, using a number of derivatives) around an extremum and see what that gives, or use the Limit Rule or something.

One with calculator in hand wants to make sure a function doesn't resonate its way out of a decent limit, and that there are no Dirac-like pulses near the extremum (by visual inspection of the formula, or function analysis)!

KBK - Well, yes. I didn't put the theory here because the Wiki is an atrocious medium for typing mathematics. You can find a basic development at [L2 ]. I did suggest using a search engine, since "Richardson extrapolation" is a specific enough term that there are few false positives among the returned results.

Essentially, what Richardson does is successively reduce the distance to the limit by a factor of Q (Q==2 in the implementation). It then uses finite differences to fit a polynomial to the function values. This polynomial is used as an implicit Taylor series; its value at zero is returned as the limiting value. The precision of the result is estimated as the difference between the results for a polynomial of degree N/2 and those for one of degree N.
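For readers who want to play along, here is a Python sketch of that scheme - a transcription of KBK's Tcl proc above, with the same Q == 2 halving and the same next-to-last-column error estimate (the Python naming is mine):

```python
import math

def richardson(f, abstol=1e-8, maxiter=30):
    """Estimate the limit of f(x) as x -> 0+ by evaluating
    f(1), f(0.5), f(0.25), ... and extrapolating the tableau."""
    row = []
    x = 1.0
    for i in range(maxiter):
        y = f(x)
        new_row = [y]
        pow2 = 1.0
        for old in row:
            pow2 *= 2.0
            y += (y - old) / (pow2 - 1.0)   # cancel the next error term
            new_row.append(y)
        # Error estimate: difference between the last two tableau columns
        if i >= 5 and abs(y - new_row[-2]) < abstol:
            return y
        row = new_row
        x /= 2.0
    return y

print(richardson(lambda x: math.sin(2 * x) / x))   # compare with 2
```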

Richardson extrapolation is most commonly encountered in the contexts of:

- Romberg integration (which uses Richardson to extrapolate successive doublings of the number of trapezoids in the Trapezoid Rule to an infinite number).
- numerical differentiation (one common technique for evaluating a derivative is to evaluate (f(x+h)-f(x-h))/(2h), successively halving h, and extrapolate to h==0).
- the Bulirsch-Stoer algorithm for ODEs.
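The differentiation case can be sketched in a few lines of Python (an illustration under my own naming, not code from this page): since the central difference has an error expansion in even powers of h, halving h multiplies each error term by 1/4, so the elimination factor is 4 rather than 2.

```python
import math

def richardson_deriv(f, x, levels=6):
    """Estimate f'(x): central differences (f(x+h)-f(x-h))/(2h) with h
    halved each step, Richardson-extrapolated to h == 0."""
    h = 0.5
    row = []
    for _ in range(levels):
        d = (f(x + h) - f(x - h)) / (2.0 * h)
        new_row = [d]
        pow4 = 1.0
        for old in row:
            pow4 *= 4.0                  # even powers of h: ratio 2 -> factor 4
            d += (d - old) / (pow4 - 1.0)
            new_row.append(d)
        row = new_row
        h /= 2.0
    return d

print(richardson_deriv(math.sin, 1.0))   # compare with cos(1)
```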

I've also seen it used to accelerate the convergence of series approximations to special functions. For example, it can make the naive computation of pi as 4-4/3+4/5-4/7... practicable. It yields 15 digits of significance after evaluation of 1024 terms of this series of notoriously slow convergence (at which point simple summation of the series has given only four digits of significance).
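A Python sketch of that series acceleration (illustrative, with hypothetical names): extrapolate the partial sums at n = 2, 4, ..., 1024 terms, treating 1/n as the step that gets halved, since the truncation error of the alternating series has an expansion in powers of 1/n.

```python
import math

def leibniz_partial(n):
    """4 * (first n terms of 1 - 1/3 + 1/5 - 1/7 + ...)."""
    return 4.0 * sum((-1) ** j / (2 * j + 1) for j in range(n))

def accelerated_pi(kmax=10):
    """Richardson-extrapolate the partial sums at n = 2, 4, ..., 2**kmax
    terms; doubling n plays the role of halving the step."""
    row = []
    y = 0.0
    for k in range(1, kmax + 1):
        y = leibniz_partial(2 ** k)
        new_row = [y]
        pow2 = 1.0
        for old in row:
            pow2 *= 2.0
            y += (y - old) / (pow2 - 1.0)
            new_row.append(y)
        row = new_row
    return y

print(accelerated_pi(), math.pi)
```

Plain summation of 1024 terms is still wrong in the fourth digit; the extrapolated value is accurate to many more.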

I find simple, shallow routines like this useful to handle the common, well-behaved cases. I prefer to save my time as a numerical analyst for solving the problems where the simple techniques break down.

TV As a side remark, the sinc function is well known in (electrical) engineering and related fields as the Laplace (or Fourier) transform of a rectangular wave, or the back transform of a rectangular (continuous) frequency distribution. See [L3 ]. It is very important in signal sampling/(perfect) reconstruction, and probably fairly easy to analyse around x=0 using traditional math.