Newton-Raphson Equation Solver

This technique can be used to solve many equations of the form f(x) = constant. There is a math procedure called [::math::calculus::newtonRaphson func deriv initval], but it is bulky: you must supply two function names to the procedure (and write the two procs), and you need to be able to differentiate the function you are solving. Here is simpler code which takes the function as a plain mathematical expression; for example [newtrap pow(x,4) 256] or [newtrap x*x*x*x 256] will solve x*x*x*x = 256.

Newton-Raphson uses an iterative scheme to solve the equation, with a better guess at the solution found from guess = guess - (f(guess)-rhs)/f'(guess). Newton-Raphson breaks down if a very small derivative is encountered (e.g. at x=0 in the x*x*x*x example). This method uses a numerical differentiation proc (which requires multiple evaluations of the function), so it is less efficient, but it does not require writing a proc to evaluate the derivative and will work for any function that can be numerically differentiated. NB I have discovered that this technique is called the Secant Method, although I think it is morally equivalent to Newton-Raphson. Mathematicians will immediately produce functions where the actual derivative is not the limit of dy/dx just to upset me, of course, but such functions are rarely involved in Physics.
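To make the update rule concrete, here is a single hand-checked iteration for x*x*x*x = 256 starting from a guess of 3; the analytic derivative 4*x^3 is used here purely for illustration (the code below on this page uses a numerical derivative instead):

```tcl
set f   {$guess*$guess*$guess*$guess - 256}  ;# the equation in "lhs - rhs = 0" form
set guess 3.0
set fg    [expr $f]                          ;# f(3) - 256 = -175
set deriv [expr {4.0*$guess*$guess*$guess}]  ;# analytic derivative 4*x^3 = 108
set guess [expr {$guess - $fg/$deriv}]       ;# 3 - (-175/108), i.e. about 4.62
puts $guess
```

A few more iterations of the same rule home in on the exact root x = 4.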


 proc derivative {f x} { ;# calculate a numerical central-difference derivative for f
        # NB: the step is relative to x, so this breaks down when x == 0
        set guess [expr {0.999*$x}]
        set y1 [expr $f]
        set guess [expr {1.001*$x}]
        set y2 [expr $f]
        return [expr {($y2-$y1)/(0.002*$x)}] 
 }

 proc newtrap {f rhs} {
        # convert the lhs of the equation into Newton-Raphson form (lhs-rhs=0)
        # and convert all occurrences of x to the adjustable variable 'guess'
        set f [regsub -all "x" "$f-$rhs" "\$guess"]
        set guess $rhs ;# one possible first guess, not suitable for all equations
        set dd 9999 ;# dd is the change in solution on each iteration; a large value means it has not converged
        set nit 0 ;# limit the number of iterations
        while {$dd>0.00001 || $dd<-0.00001} {
                set dd [expr double([expr $f])/[derivative $f $guess]] ;# change the guess by this
                set guess [expr {$guess-$dd}] ;# new guess
                incr nit
                if {$nit>50} break ;# time to end
        }
        return $guess
  }

 # Examples, solving 3 different equations

  foreach f {"x*x" "x*x-3*x" "pow(x,3)-2*pow(x,2)-5*x" } {
        set x 1
        while {$x<32} {
            if {[catch {
                set as [newtrap $f $x ]
                puts " solution of $f =$x is at $as"
            } errmsg]} { puts "at $x $f ($x) $errmsg"}
            set x [expr {$x+1}]
        }
  }

 # more examples solving some inverse trig functions.
  foreach f {asin(x) atan(x) acos(x)} {
          set x 0.02
          while {$x<1.000} {
            if {[catch {
                set as [newtrap $f $x ]
                puts " solution of $f =$x is at $as"
            } errmsg]} { puts "at $x $f ($x) $errmsg"}
            set x [expr {$x+.02}]
        }
  }

AM Interesting idea - but rather than an arbitrary 0.001, it is better to use a factor that depends on the accuracy - sqrt(epsilon) is a much-advocated value - and you need to take care of "zero": if the starting estimate is 0, your script will fail.

GWM Good points - I was mainly interested in getting solutions to equations involved in map projections such as the Mollweide projection of the earth [1 ] which need a solution to the equation:

2*θ + sin(2*θ) = π*sin(latitude)

There are a number of other projections requiring similar non-linear equations, and in my case an accuracy of 0.001 is fine for an angle in the range 0 to π/2 (latitude never goes beyond ±90°). I have used this technique successfully for maps of the world, and will wikify the maps when I am happy with them. The projections never (I think) have a zero slope in these equations, since zero slope would imply two latitudes at the same paper (or canvas) coordinate, which would be confusing.
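As a self-contained sketch of solving the Mollweide equation above (the 45° test latitude is my own choice, and for brevity the analytic derivative 2 + 2*cos(2θ) is used rather than the numerical one):

```tcl
# Solve 2*theta + sin(2*theta) = pi*sin(lat) for theta by Newton-Raphson
set pi  [expr {acos(-1)}]
set lat [expr {45.0*$pi/180.0}]      ;# 45 degrees, in radians
set rhs [expr {$pi*sin($lat)}]
set guess $lat                       ;# the latitude itself is a reasonable first guess
for {set i 0} {$i < 50} {incr i} {
    set fg    [expr {2.0*$guess + sin(2.0*$guess) - $rhs}]
    set deriv [expr {2.0 + 2.0*cos(2.0*$guess)}]
    set dd    [expr {$fg/$deriv}]
    set guess [expr {$guess - $dd}]
    if {abs($dd) < 1e-10} break
}
puts "theta = $guess"                ;# theta is about 0.634 radians
```

The derivative 2 + 2*cos(2θ) only vanishes at θ = ±π/2 (the poles), which is why the iteration is well behaved for ordinary latitudes.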

The other errors (derivative calculation at x=0, and some equations require a different choice of starting value to ensure that the equation does not explode in the first few iterations) should be attended to at some point. "Let the User beware!"

I think it would be useful to have a number of mathematical procedures which used a similar 'natural language' interface to defining their inputs. Perhaps a simplified mathematica or maple?

AM Don't make me think of the possibilities! :) I have dreamed about something like that for a long time now ...


EKB Here's a version that does a couple of the things AM suggested. It uses a smaller (but user-adjustable) eps and it jitters around the current guess if the derivative is too small. (I also went ahead and called it "secant" even though I agree -- morally it's Newton-Raphson. But it helps to distinguish it from the Newton-Raphson procedure, and lets users know that they don't need to supply a derivative.)

 proc derivative {f x {eps 1.0e-7}} { ;# calculate a numerical derivative for f
        set deltax [expr {$eps * $x}]
        set guess [expr {$x - $deltax}]
        set y1 [expr $f]
        set guess [expr {$x + $deltax}]
        set y2 [expr $f]
        return [expr {($y2-$y1)/(2.0 * $deltax)}]
 }

 proc secant {f rhs {eps 1.0e-7}} {
        set bigeps [expr {sqrt($eps)}]
        # convert the lhs of the equation into Newton-Raphson form (lhs-rhs=0)
        # and convert all occurrences of x to the adjustable variable 'guess'
        set f [regsub -all "x" "$f-$rhs" "\$guess"]
        set guess $rhs ;# one possible first guess, not suitable for all equations
        # Have we made a lucky guess?
        if {abs([expr $f]) < $eps} { ;# $f already has $rhs subtracted into it
            return $guess
        }
        set dd 1.0 ;# dd is the change in solution on each iteration; a value greater than eps means it has not converged
        set nit 0 ;# limit the number of iterations
        while {abs($dd) > $eps} {
               # Jump back and forth, increasing the step each time, if we're too close to a small derivative
               # (note: $f already includes -rhs, so [expr $f] is the residual)
               set step $bigeps
               while {abs([set deriv [derivative $f $guess]]) < $bigeps * abs([expr $f])} {
                   set guess [expr {$guess + $step}]
                   set step [expr {-2.0 * $step}]
               }
               set dd [expr double([expr $f])/$deriv] ;# change the guess by this
               set guess [expr {$guess-$dd}] ;# new guess
               incr nit
               if {$nit>50} break ;# time to end
        }
        return $guess
  }

 # Problem examples -- cases with a zero derivative, or a guess that starts at the root
 puts [secant "x * (x-1.0)" 0.5]

 puts [secant "x * x" 0.0]

AM (10 july 2007) I cannot resist a challenge like this, so here is a version that is capable of solving systems of equations:

 # nrsolve.tcl --
 #     Solving systems of equations using Newton-Raphson
 #
 package require math::linearalgebra

 namespace eval ::math::newtonraphson {
 }

 # nrsolve --
 #     Primary procedure for setting up the system and solving it
 #
 # Arguments:
 #     coords         Names of the coordinate variables (e.g. {x y})
 #     equations      List of expressions representing the equations
 #     initial        Initial vector for the solution procedure
 #
 # Result:
 #     Solution vector or error if no convergence was achieved
 #
 proc ::math::newtonraphson::nrsolve {coords equations initial} {

     #
     # Sanity check
     #
     set dim [llength $coords]
     if { [llength $equations] != $dim } {
         return -code error "Number of equations must be equal to the number of coordinates"
     }
     if { [llength $initial] != $dim } {
         return -code error "Initial estimate must match the coordinates"
     }

     #
     # Set up auxiliary procedures
     #
     set body "set result {}\n"
     foreach eq $equations c $coords {
         append body "lappend result \[expr {$eq}\]\n"
     }
     append body "return \$result"
     proc func $coords $body

     proc jacobian $coords [string map \
         [list COORDS $coords VALUES \$[join $coords " $"]] {
         set eps 1.0e-7
         set mat {}
         foreach c {COORDS} {
             set v1 [func VALUES]
             set cc [set $c]
             set $c [expr {(1.0+$eps)*[set $c]}]
             set v2 [func VALUES]
             set $c $cc ;# restore this coordinate before perturbing the next
             lappend mat [::math::linearalgebra::scale [expr {1.0/($eps*$cc)}] \
                             [::math::linearalgebra::sub $v2 $v1]]
         }
         # each list appended above is a column of the Jacobian, so transpose
         return [::math::linearalgebra::transpose $mat]
     }]

     #
     # Solve the system
     #
     set previous $initial
     set current  $initial
     set count    0
     while {1} {
         set curvalue [eval func     $current]
         set curjac   [eval jacobian $current]
         set current  [::math::linearalgebra::sub $current \
                          [::math::linearalgebra::solveGauss $curjac $curvalue]]
         set diff     [::math::linearalgebra::norm \
                          [::math::linearalgebra::sub $current $previous]]
         if { $diff < 1.0e-6 || $count > 100 } {
             break
         } else {
             puts $current
             set previous $current
             incr count
         }
     }
     return $current
 }

 #
 # A simple example
 #
 puts [::math::newtonraphson::nrsolve {x y} {{$x*$y-1} {$x*$x+$y*$y-3}} {0.1 1.}]

EKB That is nifty! I liked seeing the trajectory of the guess as it iterates toward the solution. Actually, given how difficult it can be finding roots to multidimensional nonlinear functions, it could be useful to see the trajectory of guesses if it fails to converge.


Lars H, 2007-07-13: Regarding the secant method versus Newton-Raphson terminology, you're right to call this NR. The secant method takes the slope from two successive guesses [2 ] instead of the derivative. It requires fewer computations per iteration step than Newton-Raphson does, but more steps to achieve the same accuracy.
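The true secant method Lars H describes can be sketched as follows (the proc name secant2 is my own invention); it keeps the last two guesses and never evaluates a derivative at all, using the same '$guess in an expression string' convention as the code above:

```tcl
# True secant method: slope taken from the two most recent guesses
proc secant2 {f x0 x1 {eps 1.0e-10}} {
    set guess $x0; set y0 [expr $f]
    set guess $x1; set y1 [expr $f]
    for {set i 0} {$i < 100} {incr i} {
        if {$y1 == $y0} break          ;# flat secant line, cannot proceed
        set x2 [expr {$x1 - $y1*($x1 - $x0)/double($y1 - $y0)}]
        set x0 $x1; set y0 $y1
        set guess $x2
        set x1 $x2; set y1 [expr $f]
        if {abs($x1 - $x0) < $eps} break
    }
    return $x1
}
puts [secant2 {$guess*$guess - 2} 1.0 2.0]  ;# approximates sqrt(2), about 1.4142136
```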


uniquename 2012oct27 - Perhaps the 'secant' proc should be renamed 'newtraphkempben' --- for Newton-Raphson-Kemp-Benedict :-) --- This is a reference to the statement above of EKB (Eric Kemp-Benedict) ... that his 'secant' proc is more like a Newton-Raphson procedure.

In cases where one wants to be sure to converge to a solution, it may be good to consider the 'bisection method' [5 ] or Brent's method [4 ] as an alternative to the 'secant method' [3 ].
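For completeness, here is a minimal sketch of the bisection method (the proc name bisect is my own); it is slower than Newton-Raphson but guaranteed to converge once the root is bracketed by a sign change:

```tcl
# Bisection: requires f(a) and f(b) to have opposite signs
proc bisect {f a b {eps 1.0e-10}} {
    set guess $a; set fa [expr $f]
    set guess $b; set fb [expr $f]
    if {$fa * $fb > 0} { error "f(a) and f(b) must have opposite signs" }
    while {abs($b - $a) > $eps} {
        set guess [expr {($a + $b)/2.0}]
        set fm [expr $f]
        if {$fa * $fm <= 0} {
            set b $guess; set fb $fm   ;# root lies in the lower half
        } else {
            set a $guess; set fa $fm   ;# root lies in the upper half
        }
    }
    return [expr {($a + $b)/2.0}]
}
puts [bisect {$guess*$guess*$guess*$guess - 256} 0.0 10.0]  ;# about 4.0
```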

P.S. I think I have a use for the EKB procedure. ... And I think I would have use for an extension of these kinds of single-function procs for scalar f(x)=0 (bisection or secant method) to a proc for solving pairs of scalar functions f(x,y)=0, g(x,y)=0 in scalar variables x,y. I.e. I am talking about a vector function f(x) with vector variable x, where the vectors are 2-dimensional. I would like to avoid all the calls to external procs that one sees in nrsolve.tcl, above.

RLE (2012-10-27): You might consider taking a look at the Sugar macro system. You might be able to make use of it to keep your code clear, but have all the proc calls become inline code at runtime automatically.