Arjen Markus (1 October 2020) "Algorithmic differentiation" is a technique used with numerical programs to determine how sensitive the solution is to one or more parameters. There are several methods for doing this, the simplest being finite differences:

- Calculate the solution with the nominal values for the parameters
- Calculate the solution with slightly different values for the parameters
- Calculate the difference

As a trivial example, consider the linear equation 10*x - b = 0. The solution is of course x = b/10, and you can see that a change in the parameter "b" leads to a change in the solution. What you can also do is differentiate the equation with respect to b:

10*dx/db - 1 = 0 ==> dx/db = 1/10

So the sensitivity of x wrt b is 1/10: a change in b leads to a change in x of 1/10 times that change in b.
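The naïve finite-difference recipe above can be written down directly for this trivial equation. A minimal sketch (the proc solve and the perturbation h are arbitrary choices for this illustration):

```tcl
# Finite-difference sensitivity of the solution of 10*x - b = 0,
# i.e. x = b/10, with respect to the parameter b
proc solve {b} {
    expr {$b / 10.0}
}

set b 2.0
set h 1.0e-6    ;# small perturbation of the parameter

set x1 [solve $b]
set x2 [solve [expr {$b + $h}]]

set sensitivity [expr {($x2 - $x1) / $h}]
puts "dx/db ~ $sensitivity (exact: 0.1)"
```

For this linear equation the difference quotient matches the exact derivative 1/10 to within rounding, but as the note in the program below explains, choosing a suitable h is much trickier for less benign equations.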

Things get more complicated when we are dealing not with a single linear equation, but with a set of differential equations or a system of linear equations. The principles of algorithmic differentiation also hold in those cases - and we can automate the process of constructing a "derivative" of the program. That is the purpose of the simple experiment below: how can we achieve this in Tcl?
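The automation rests on a simple piece of bookkeeping: every quantity is carried around as a pair {value derivative}, and every arithmetic operation updates both components according to the usual differentiation rules. A tiny illustration of that idea (the proc name tlmul is ad hoc for this sketch):

```tcl
# Tangent-linear bookkeeping: every quantity is a pair {value derivative}.
# Multiplication follows the product rule: (uv)' = u'v + uv'
proc tlmul {u v} {
    lassign $u uval uder
    lassign $v vval vder
    list [expr {$uval * $vval}] [expr {$uder * $vval + $uval * $vder}]
}

# f(x) = x*x, differentiated wrt x: seed the derivative of x with 1
set x {3.0 1.0}
set f [tlmul $x $x]
puts $f    ;# prints: 9.0 6.0 - the value x*x and the derivative 2*x
```

The program below does exactly this by hand for the right-hand side of the differential equation, propagating {value derivative} pairs through the Euler steps.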

# autoderiv.tcl --
#     SEE: notes
#
#     Simple experiment with algorithmic differentiation:
#     the tangent-linear mode, as that is much easier to implement
#
#     Note: Inspired by a series of master classes on the subject
#     organised by NAG.
#
#     Consider the differential equation:
#         dx/dt = 1 - kx
#     with:
#         x = x0 at t = 0
#
#     We have two parameters, k and x0, and we want to know how
#     sensitive the solution is to each of these parameters.
#     A naïve numerical approach would be to solve it several
#     times with slightly different values of the parameters
#     and then determine the difference.
#
#     Problems with this approach:
#     - it may be very hard to determine the right sort of
#       difference for the two parameters. With a linear system
#       like the above, a deviation from an initial value x0 = 1
#       may not be a real problem: using x0 = 1.00001 or x0 = 1.1
#       as alternative values will still give the same result,
#       but that is certainly not true if the equation gets
#       a trifle more complicated. Even for the parameter k you
#       should handle the choice with care.
#
#     - it is possible to enter a different régime of solutions
#       with non-linear differential equations. And you also
#       have an approximation at best.
#
#     The technique of algorithmic differentiation makes it possible
#     to get the results without having to choose the small
#     difference for each of the parameters involved. It comes in
#     two modes: tangent-linear and adjoint. The latter is preferable
#     when you have a lot of input parameters and a relatively small
#     number of output parameters. But it is also quite complicated
#     to implement.
#
#     The experiment here simply uses the tangent-linear mode. The
#     philosophy is:
#     - select the parameter or parameters of interest
#     - differentiate the equations with respect to these parameters
#     - solve the problem (not necessarily an ordinary differential
#       equation!) using the ordinary equation and the differential
#       form
#     - you then have a result with "exact" derivatives
#
#     In this implementation the Tcllib module "symdiff" is not used,
#     but it would certainly reduce the amount of tedious and error-prone
#     work. That step is reserved for a next implementation.
#
#     The model equation has the solution:
#
#         x = (x0 - 1/k) exp(-kt) + 1/k
#
#     so we can (in principle) determine the sensitivity of the value of x
#     at time t = T with respect to k. Unfortunately it is a rather complicated
#     expression. The sensitivity wrt x0 is much easier:
#
#         d(x(T))/dx0 = exp(-kT)
#
#     So let's try it.
#

# deriv --
#     Calculate the derivative (tangent-linear mode)
#
# Arguments:
#     t        Time
#     x        Dependent variable (plus derivative)
#     k        Decay coefficient (plus derivative)
#
# Result:
#     Time derivative of x and its derivative wrt x and k
#
proc deriv {t x k} {
    lassign $x xt dx
    lassign $k kt dk

    #
    # Derivative of x wrt t - the differential equation itself
    #
    set dxdt [expr {1.0 - $kt * $xt}]

    #
    # Derivatives of the above wrt x and k
    #
    set dxdx [expr {- $kt * $dx}]
    set dxdk [expr {- $xt * $dk}]

    return [list $dxdt [expr {$dxdx + $dxdk}]]
}

# step --
#     Perform an Euler step in the solution of the differential equation
#
# Arguments:
#     deltt    Time step
#     t        Time
#     x        Dependent variable (plus derivative)
#     k        Decay coefficient (plus derivative)
#
# Result:
#     New vector {xt dx}
#
proc step {deltt t x k} {
    set d [deriv $t $x $k]

    lassign $x xt dx
    lassign $k kt dk
    lassign $d dxdt ddx

    set xnew  [expr {$xt + $deltt * $dxdt}]
    set dxnew [expr {$dx + $deltt * $ddx}]

    return [list $xnew $dxnew]
}

# main --
#     Solve the equation over a certain period
#
set dt 0.01
set x0 {1.0 1.0}    ;# First we look at the sensitivity of the result for x0
set k  {0.3 0.0}    ;# The derivative for k should therefore be 0
set k0 [lindex $k 0]
set period 3.0

#
# Since we have a simple formula for the sensitivity of x wrt x0,
# we can check the result
#
puts "Sensitivity with respect to the initial condition:"
set t   0.0
set x   $x0
set cnt 0
while { $t < $period } {
    set x [step $dt $t $x $k]
    set t [expr {$t + $dt}]

    incr cnt
    if { $cnt % 25 == 0 } {
        puts "$t $x [expr {exp(-$k0*$t)}]"
    }
}

#
# The sensitivity of x wrt k is a bit more complicated ...
# So just solve the equation and report
#
# NOTE:
#     If you let it run for a longer period (say period = 50), then
#     the solution becomes almost constant and the sensitivity wrt k
#     becomes -11.11107... This is in line with the expectation:
#     the solution becomes 1/k, hence differentiation wrt k gives
#     a sensitivity of -1/k**2. With k = 0.3, that would be -11.111...
#
puts "Sensitivity with respect to the parameter k:"
set x0 {1.0 0.0}    ;# x0 is fixed now
set k  {0.3 1.0}    ;# The derivative for k should now be unequal 0
set k0 [lindex $k 0]

set t   0.0
set x   $x0
set cnt 0
while { $t < $period } {
    set x [step $dt $t $x $k]
    set t [expr {$t + $dt}]

    incr cnt
    if { $cnt % 25 == 0 } {
        puts "$t $x"
    }
}

#
# Here is a solution using the symdiff package
#
puts "Sensitivity determined via the Jacobian (automatically determined)"
package require math::calculus::symdiff

#
# We use the Jacobian to determine the sensitivity of the solution to the parameter k.
# Because the parameter itself remains constant, the equation for k is: dk/dt = 0.
# And we use a fixed initial value.
#
set equation {1.0 - $k*$x}
set jacobian [::math::calculus::symdiff::jacobian [dict create x $equation k {0.0}]]

set t   0.0
set k   0.3
set x   1.0
set dk  1.0
set ddx 0.0
set dvector [list $ddx $dk]
set cnt 0
while { $t < $period } {
    set dx [expr $equation]

    #
    # We have two parameters, x and k, so the Jacobian is a 2x2 matrix,
    # but the second parameter is a constant, so ignore the second row.
    #
    foreach row $jacobian {
        foreach col $row v $dvector {
            set dxcol [expr $col]
            set ddx   [expr {$ddx + $dxcol * $v * $dt}]
        }
        break ;# We have only one equation ...
    }

    set x [expr {$x + $dx * $dt}]
    set t [expr {$t + $dt}]

    incr cnt
    if { $cnt % 25 == 0 } {
        puts "$t $x $ddx"
    }

    lset dvector 0 $ddx
}

#
# NOTE:
#     We can also do it the pedestrian way ...
#     Run with k = 0.3 and k = 0.3001
#
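The "pedestrian way" mentioned in the final note can be sketched as follows: solve the plain equation twice with slightly perturbed k and divide the difference by the perturbation. The proc solvePlain and the step 1.0e-4 are arbitrary choices for this sketch; for these settings the analytic sensitivity at t = 3 is roughly -3.75, so the tangent-linear and the finite-difference results should approximately agree.

```tcl
# Plain Euler solution of dx/dt = 1 - k*x with x(0) = 1, no derivatives
proc solvePlain {k dt period} {
    set x 1.0
    set t 0.0
    while { $t < $period } {
        set x [expr {$x + $dt * (1.0 - $k * $x)}]
        set t [expr {$t + $dt}]
    }
    return $x
}

set dt     0.01
set period 3.0
set dk     1.0e-4   ;# perturbation of the parameter k

set xa [solvePlain 0.3 $dt $period]
set xb [solvePlain [expr {0.3 + $dk}] $dt $period]

set sens [expr {($xb - $xa) / $dk}]
puts "Finite-difference sensitivity wrt k: $sens"
```

Note that this check inherits both the Euler discretisation error and the error from the choice of dk, whereas the tangent-linear result only carries the former.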

Note that the program is only an experiment: it shows the principle. From this an easy-to-use package can be constructed ...