Matrix determinant

Richard Suchenwirth 2004-03-14 - The determinant of a square matrix is a scalar number that can be used for characterizing it (see http://mathworld.wolfram.com/Determinant.html : "If the determinant of a matrix is 0, the matrix is said to be singular, and if the determinant is 1, the matrix is said to be unimodular"). Here's some weekend fun code to compute the determinant of a matrix, represented as a list of lists:

proc det {matrix} {
    if {[llength $matrix] != [llength [lindex $matrix 0]]} {
        error "non-square matrix"
    }
    switch [llength $matrix] {
        1 {
            lindex $matrix 0 0
        }
        2 {
            foreach {a b c d} [join $matrix] break
            expr {$a*$d - $b*$c}
        }
        default {
            set i 0
            set mat2 [lrange $matrix 1 end]
            set res 0
            foreach element [lindex $matrix 0] {
                if {$element != 0} {
                    set sign [expr {$i%2? -1: 1}]
                    set res [expr {$res + $sign*$element*[det [cancelcol $i $mat2]]}]
                }
                incr i
            }
            set res
        }
    }
}
proc cancelcol {n matrix} {
    set res {}
    foreach row $matrix {
        lappend res [lreplace $row $n $n]
    }
    set res
}

if 0 {Some tests, to see whether this code agrees with the examples in my math book:

% det {{5 -3 2} {1 0 6} {3 1 -2}}
-88
% det {{1 0 0} {0 1 0} {0 0 1}}
1
% det {{1 2} {3 4}}
-2
% det {{2 4} {1 3}}
2
% det {{1 2 -1} {-2 1 3} {3 0 -2}}
11
% det {{1 2 -1 0} {-2 1 3 1} {3 0 -2 2} {1 2 3 4}}
-20
% det {{1 1 1} {1 1 1} {1 1 1}}
0

TV Not wanting to impose, but the above results can also be checked by using the fact that the determinant is the product of the eigenvalues.

Artur Trzewik Many years ago I also played with linear algebra. This algorithm is correct, but needs n! operations. Because many sub-determinants are computed multiple times, it is possible, by caching the results, to get this down to roughly 2^n. Here is my old Tk program that also computes determinants (also for arbitrary-precision n/m rational numbers): http://www.xdobry.de/tkmatrix/index.html It can compute the determinant of matrices bigger than 30×30. The math part is programmed in C++ as a Tcl extension.

But this one is of course very elegant (as everything from RS).

Thank you :) Re performance: if the matrix isn't very sparse or repetitive, I expect that most of the sub-matrices used in calls to det will be different. But I'm only experimenting with small toy matrices anyway... Re eigenvalues: my math dictionary gives only a brief mention of them, too short to serve me as a "cooking recipe". Who knows better, please teach us, on a Wiki page!
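The caching idea Artur mentions can be sketched like this (the proc name mdet and the global cache array are my invention, not part of the page's code): the same cofactor expansion as det above, but every sub-determinant is stored keyed by the sub-matrix's list value, so a repeated cofactor is computed only once.

```tcl
# Hypothetical memoizing variant of det: same cofactor expansion,
# but each sub-determinant is cached in a global array, keyed by the
# sub-matrix's list representation, so it is computed only once.
proc mdet {matrix} {
    global detCache
    if {[info exists detCache($matrix)]} {
        return $detCache($matrix)
    }
    if {[llength $matrix] == 1} {
        return [set detCache($matrix) [lindex $matrix 0 0]]
    }
    set res 0
    set i 0
    set rest [lrange $matrix 1 end]
    foreach element [lindex $matrix 0] {
        if {$element != 0} {
            # delete column $i from the remaining rows
            set sub {}
            foreach row $rest {lappend sub [lreplace $row $i $i]}
            set sign [expr {$i % 2 ? -1 : 1}]
            set res [expr {$res + $sign * $element * [mdet $sub]}]
        }
        incr i
    }
    set detCache($matrix) $res
}
```

It returns the same values as det on the examples above, e.g. -2 for {{1 2} {3 4}} and -20 for the 4×4 example.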

Sarnold Eigenvalues[1] l_i of a matrix A have the following property:

For each i = 1, ..., n there is a vector v_i such that A.v_i = l_i.v_i

The eigenvalues are the values along the diagonal of a diagonalised or triangular matrix, as described below. For example:

{ {1 0 0}
  {0 2 0}
  {0 0 1} }

has eigenvalues 1, 2 and 1, the eigenvalue 1 being degenerate (it occurs twice).

GWM 25.10.06 There are very few algorithms for finding all the eigenvalues of a matrix, and they are very intensive compared to Gauss elimination. Mathematica can calculate the eigenvalues, but in doing so it effectively solves the system of equations N times. The largest eigenvalue can be found by iteratively applying the matrix to a trial vector: gradually the vector becomes constant in direction, and grows by a factor of the eigenvalue each time the iteration is applied. If you are very lucky, you may of course find another eigenvector by chance!

package require math::linearalgebra
set m1 {{1 0 0} {0 2 0} {0 0 1}}
set v {1 1 0}
for {set i 0} {$i < 200} {incr i} {
    set v  [::math::linearalgebra::matmul $m1 $v]
    set ev [::math::linearalgebra::norm $v]
    set v  [::math::linearalgebra::unitLengthVector $v]
}
puts "Eigenvector of $m1 is approx $v, largest eigenvalue is $ev"

Result: Eigenvector is approx 9.53674316406e-007 1.0 0.0, largest eigenvalue is 2.0

If you use starting vectors v such as "set v {0 0 1}" or "set v {1 0 1}" you will see that there are other eigenvectors, but because the eigenvalues are degenerate there are infinitely many eigenvectors. Try set v {24 .0001 13} to get

Eigenvector of  {1 0 0}  {0 2 0}  {0 0 1}  is approx 1.49352366679e-055 1.0 8.08991986122e-056, largest eigenvalue is 2.0

Even though the original vector is near another eigenvector, the algorithm still finds the eigenvector with the largest eigenvalue (as this direction is the most amplified by the matrix multiplication).

Lars H: The fastest[1,2] way of computing the determinant of a matrix (that isn't very small or sparse) is actually to use good old Gaussian elimination. The determinant of a triangular matrix is simply the product of the diagonal elements. Every matrix can be reduced to a triangular matrix through elementary row operations, and all of these change the determinant in an easily predictable manner: adding a multiple of one row to another doesn't change the determinant at all, exchanging two rows changes the sign of the determinant, and multiplying a row by a scalar multiplies the determinant by that same scalar. The asymptotic complexity is thus O(n^3).

 OK, you need to invert nonzero matrix elements, so if they're not from a field then it's not quite this simple. That is however not an issue when dealing with Tcl numbers.

 Actually, one shouldn't quite do Gaussian elimination (which is an algorithm, and thus has a fixed complexity) but rather LU factorization (which is a problem, and thus able to take advantage of new and better algorithms). The asymptotic complexity of LU factorization happens to be the same as that of matrix multiplication (Bunch & Hopcroft: Triangular factorization and inversion by fast matrix multiplication. Mathematics of Computation 28 (1974), 231-236), and thus it is in theory possible to compute the determinant of a matrix in O(n^2.376) operations, although for reasonable matrix sizes you're better off with Gaussian elimination.
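A minimal Tcl sketch of the elimination approach Lars H describes (the proc name detGauss is made up for this example): reduce to upper-triangular form, swap rows when a pivot is zero while flipping the sign, and multiply the pivots.

```tcl
# Hypothetical detGauss: determinant via Gaussian elimination, O(n^3).
proc detGauss {matrix} {
    set n [llength $matrix]
    set det 1.0
    for {set col 0} {$col < $n} {incr col} {
        # find a row with a nonzero entry in this column
        set pivot $col
        while {$pivot < $n && [lindex $matrix $pivot $col] == 0} {incr pivot}
        if {$pivot == $n} {return 0.0}       ;# no pivot: singular matrix
        if {$pivot != $col} {
            # exchanging two rows changes the sign of the determinant
            set tmp [lindex $matrix $col]
            lset matrix $col [lindex $matrix $pivot]
            lset matrix $pivot $tmp
            set det [expr {-$det}]
        }
        set p [lindex $matrix $col $col]
        set det [expr {$det * $p}]
        # subtracting row multiples leaves the determinant unchanged
        for {set row [expr {$col + 1}]} {$row < $n} {incr row} {
            set f [expr {double([lindex $matrix $row $col]) / $p}]
            for {set c $col} {$c < $n} {incr c} {
                lset matrix $row $c \
                    [expr {[lindex $matrix $row $c] - $f * [lindex $matrix $col $c]}]
            }
        }
    }
    return $det
}
```

On the page's examples it agrees with det up to floating-point rounding, e.g. about -88 for {{5 -3 2} {1 0 6} {3 1 -2}}.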

GWM LU is very useful if the matrix is to be used again with different data: Gauss elimination is fine for one-off solutions of a set of linear equations, but an LU-decomposed matrix can be applied many times. One solution method for PDEs involves solving

M.v = r

where M is a (usually sparse) N*N matrix, v a 'vector' of N values, and r a vector of right-hand sides which may depend on v. A solution method is to apply the inverse of M to r repeatedly: this gives a value for v, which is inserted into r, and the process is repeated until it converges.

Inverting a huge sparse matrix is slow, but if we decompose M into L*U, each solve reduces to two triangular systems, which are trivial: first solve L.y = r by forward substitution, then U.v = y by back substitution.
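The factor-once, solve-many idea can be sketched as follows (hypothetical names luFactor and luSolve; a plain Doolittle factorization without pivoting, so it assumes the pivots encountered are nonzero and is only a demonstration, not production code):

```tcl
# Hypothetical Doolittle LU sketch: luFactor is called once, luSolve
# can then be reused for many right-hand sides at O(n^2) each.
proc luFactor {M} {
    set n [llength $M]
    for {set k 0} {$k < $n} {incr k} {
        set p [lindex $M $k $k]          ;# pivot (assumed nonzero)
        for {set i [expr {$k + 1}]} {$i < $n} {incr i} {
            set f [expr {double([lindex $M $i $k]) / $p}]
            lset M $i $k $f              ;# store L's multiplier below the diagonal
            for {set j [expr {$k + 1}]} {$j < $n} {incr j} {
                lset M $i $j [expr {[lindex $M $i $j] - $f * [lindex $M $k $j]}]
            }
        }
    }
    return $M   ;# L (unit diagonal, strictly below) and U (diagonal and above), packed
}
proc luSolve {LU r} {
    set n [llength $LU]
    # forward substitution: L.y = r
    set y {}
    for {set i 0} {$i < $n} {incr i} {
        set s [lindex $r $i]
        for {set j 0} {$j < $i} {incr j} {
            set s [expr {$s - [lindex $LU $i $j] * [lindex $y $j]}]
        }
        lappend y $s
    }
    # back substitution: U.v = y
    set v $y
    for {set i [expr {$n - 1}]} {$i >= 0} {incr i -1} {
        set s [lindex $y $i]
        for {set j [expr {$i + 1}]} {$j < $n} {incr j} {
            set s [expr {$s - [lindex $LU $i $j] * [lindex $v $j]}]
        }
        lset v $i [expr {$s / double([lindex $LU $i $i])}]
    }
    return $v
}
```

Once luFactor has run, each additional right-hand side costs only O(n^2), which is the point of the remark above.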

GWM Recent Tcllib provides a linear algebra package (see LinearAlgebra) for the solution of such systems of equations.