Markov Matrices

A Markov matrix (also called a stochastic matrix), a construct in linear algebra, is a square matrix whose entries are all non-negative and in which each row adds up to 1.
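
For example, written in the list-of-lists form that the script below expects, a two-state chain could look like this (a made-up illustration, not part of the script):

# Row i holds the probabilities of moving from state i to each
# state: from state 0 the chain stays put with probability 0.9
# and moves to state 1 with probability 0.1; from state 1 both
# states are equally likely.
set matrix {
    {0.9 0.1}
    {0.5 0.5}
}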

See Also

Finite state machines
Markov
Markov algorithms

Description

Arjen Markus 2003-01-17: The script below implements (without any argument checking) a type of stochastic process due to A. A. Markov. The idea is that a system changes from one (discrete) state to another based only on the current state and the probability of the transition to that other state.

For instance: the example models a traffic light. Such a system cycles through three states: from green to orange to red and back to green (in some countries there is a variation: from red to orange to green, but not in The Netherlands). Of course, the time it remains in any one state depends on the circumstances. The example therefore assumes that it remains in the green or red state for a while, but makes a quick transition out of the orange state. This is reflected in the magnitudes of the probabilities.

One thing: each row of the matrix must add up to 1. Otherwise the chain could drop out of the set of valid states.
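
Because the script performs no argument checking at all, a small helper along the following lines could verify a matrix before it is handed to makeMarkov. This is only a sketch; the name isStochastic and the tolerance argument are additions of mine, not part of markov.tcl:

proc isStochastic {matrix {tol 1.0e-9}} {
    set n [llength $matrix]
    foreach row $matrix {
        # The matrix must be square ...
        if { [llength $row] != $n } {
            return 0
        }
        # ... with non-negative entries in each row adding up to 1
        set sum 0.0
        foreach p $row {
            if { $p < 0.0 } {
                return 0
            }
            set sum [expr {$sum + $p}]
        }
        if { abs($sum - 1.0) > $tol } {
            return 0
        }
    }
    return 1
}

The tolerance absorbs floating-point round-off in the row sums; an exact comparison with 1.0 would reject perfectly reasonable matrices.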

A reference for this type of mathematical technique is: Murray R. Spiegel et al., Schaum's Outline of Probability and Statistics.

Here [L1] is a pretty extensive introduction to the whole concept of Markov chains, stochastic matrices (as the matrices used here are called) and so on.


# markov.tcl --
#    Simple implementation of Markov chains as a model for stochastic
#    processes
#

namespace eval ::markov {
    namespace export makeMarkov
}

# makeMarkov --
#    Create a command that behaves as a Markov chain
#
# Arguments:
#    name     Name of the command
#    matrix   Matrix of transition probabilities (list of lists)
#    init     Initial value
#
# Result:
#    Name of the command
#
# Side effect:
#    A new command is generated that returns a new state at each call
#
# Note:
#    There is NO argument checking whatsoever in this simple
#    implementation
#
proc ::markov::makeMarkov {name matrix init} {
    #
    # Bake the name and the matrix into an alias: each call
    # to $name then runs MarkovImpl with both prefilled
    #
    interp alias {} $name {} ::markov::MarkovImpl $name $matrix
    $name $init

    return $name
}

# MarkovImpl --
#    Determine a new state for the given Markov chain
#
# Arguments:
#    name     Name of the command
#    matrix   Matrix of transition probabilities (list of lists)
#    init     Initial value (optional, to reset)
#
# Result:
#    New state
#
# Side effect:
#    The state of the chain is updated
#
proc ::markov::MarkovImpl {name matrix {init {}}} {
    variable state_$name

    #
    # (Re)initialise or determine the next state via the
    # transition matrix
    #
    if { $init ne {} } {
        set state_$name $init
    } else {
        set probabilities [lindex $matrix [set state_$name]]
        set prob          [expr {rand()}]

        #
        # Select the next state: subtract the row's transition
        # probabilities from the random number in turn; state i
        # ends up being chosen with probability matrix[state][i]
        #
        set next 0
        foreach trans $probabilities {
            if { $prob < $trans } {
                break
            } else {
                set prob [expr {$prob-$trans}]
                incr next
            }
        }

        #
        # Simple precaution - "should not be necessary", but
        # floating-point round-off in the row sum could leave
        # the random number unconsumed
        #
        if { $next >= [llength $probabilities] } {
            set next 0
        }

        set state_$name $next
    }
    return [set state_$name]
}

#
# Demonstration: a traffic light
#
if { [file tail [info script]] eq [file tail $::argv0] } {

    #
    # The state changes from 0 (green) to 1 (orange) to 2 (red),
    # but may remain the same for a while.
    # The numbers in each row must add up to 1.
    #
    ::markov::makeMarkov light {
         {0.8  0.2   0.0}
         {0.0  0.2   0.8}
         {0.3  0.0   0.7}
    } 0

    for { set i 0 } { $i < 20 } { incr i } {
        puts [light]
    }

    puts "Counting the states' frequencies:"

    set count(0) 0
    set count(1) 0
    set count(2) 0
    for { set i 0 } { $i < 2000 } { incr i } {
        incr count([light])
    }

    puts "State 0: $count(0)"
    puts "State 1: $count(1)"
    puts "State 2: $count(2)"
}
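
The counts printed at the end should settle near the chain's long-run (stationary) frequencies. As a cross-check, these can be estimated by repeatedly applying the transition matrix to a distribution vector. The sketch below does just that; stepDistribution is a helper of my own, not part of markov.tcl:

proc stepDistribution {matrix dist} {
    # One step of the chain applied to a whole distribution:
    # new[j] = sum over i of dist[i] * matrix[i][j]
    set n [llength $matrix]
    set new {}
    for { set j 0 } { $j < $n } { incr j } {
        set sum 0.0
        for { set i 0 } { $i < $n } { incr i } {
            set sum [expr {$sum + [lindex $dist $i] * [lindex $matrix $i $j]}]
        }
        lappend new $sum
    }
    return $new
}

# The traffic light matrix from the demonstration
set matrix {
    {0.8  0.2   0.0}
    {0.0  0.2   0.8}
    {0.3  0.0   0.7}
}

# Start in state 0 with certainty; the distribution converges
# quickly for this matrix
set dist {1.0 0.0 0.0}
for { set i 0 } { $i < 100 } { incr i } {
    set dist [stepDistribution $matrix $dist]
}
puts "Long-run frequencies: $dist"

For this matrix the frequencies converge to roughly 0.52, 0.13 and 0.35, so out of 2000 samples one would expect counts in the neighbourhood of 1040, 260 and 700.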