Elman networks are a form of recurrent neural network with connections from the hidden layer back to a special copy (or context) layer. This means that the function learnt by the network can depend on the current inputs plus a record of the network's previous state(s) and outputs. In other words, an Elman net is a finite state machine that learns what state to remember (i.e., what is relevant). The copy layer is treated as just another set of inputs, so standard back-propagation learning techniques can be used (something that isn't generally possible with recurrent networks).
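As a concrete sketch of that architecture (the layer sizes and random weights here are arbitrary assumptions, purely for illustration), a single time step can be written so that the copy layer's activations are simply concatenated onto the ordinary inputs:

```python
import numpy as np

# Hypothetical sizes: 4 inputs, 3 hidden units, 2 outputs.
n_in, n_hid, n_out = 4, 3, 2
rng = np.random.default_rng(0)

# The hidden layer sees the real inputs AND the copy layer,
# so its weight matrix has n_in + n_hid columns.
W_hid = rng.standard_normal((n_hid, n_in + n_hid)) * 0.1
W_out = rng.standard_normal((n_out, n_hid)) * 0.1

def step(x, context):
    """One time step: the copy layer holds last step's hidden activations."""
    h = np.tanh(W_hid @ np.concatenate([x, context]))
    y = W_out @ h
    return y, h          # h becomes the next step's context

context = np.zeros(n_hid)                    # copy layer starts empty
for x in rng.standard_normal((5, n_in)):     # a short input sequence
    y, context = step(x, context)
```

The only recurrence is the `context` variable being carried from one call of `step` to the next, delayed by one time unit.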
As an example (adapted from one in Artificial Intelligence: A New Synthesis), consider an intelligent agent navigating a grid-world of cells. It can sense the cells to the north, south, east, and west of it directly, and can move in one of these directions. However, in order to know what is in the diagonally adjacent cells (i.e., north-east, etc.), the agent will need to remember the values from its last position (which will give some of the missing information, but not all). An Elman network can be devised that will accomplish this. Consider the following (non-recurrent) network for deciding on an action based on the inputs from the sensors:
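To see why last step's readings help, here is a tiny grid sketch (the grid contents and coordinates are made up for illustration): after the agent moves north, its previous east reading is exactly the value of the cell now to its south-east, which none of its current N/S/E/W sensors can see.

```python
# One occupied cell in an otherwise free grid (hypothetical layout).
grid = {(1, 0): "wall"}

def cell(p):
    return grid.get(p, "free")

def sense(x, y):
    """Readings for the four cells directly adjacent to (x, y)."""
    return {"N": cell((x, y + 1)), "S": cell((x, y - 1)),
            "E": cell((x + 1, y)), "W": cell((x - 1, y))}

x, y = 0, 0
before = sense(x, y)       # east sensor sees the wall at (1, 0)
y += 1                     # move north to (0, 1)
after = sense(x, y)        # no current sensor sees (1, 0) any more

south_east = cell((x + 1, y - 1))   # the diagonal cell is (1, 0)
# before["E"] and south_east are the same cell's value
```

The remembered reading supplies the south-east cell; the north-east cell at the new position, by contrast, was never adjacent to the old position, which is why memory gives some of the missing information but not all.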
We can convert this into an Elman network that remembers previous state by adding a new set of inputs which are fully connected by recurrent links to the hidden layer outputs (but delayed by one unit of time):
This should then (with proper training) be able to learn a suitable function from inputs and stored state to action.
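To make the training claim concrete, here is a minimal sketch (not the grid-world network itself; the sizes, learning rate, and toy task are all assumptions). The net is trained with ordinary one-step back-propagation, treating the copy layer as fixed extra inputs, on the task of reproducing its previous input, something it can only do by exploiting the remembered state:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, lr = 2, 8, 0.05
W_h = rng.standard_normal((n_hid, n_in + n_hid)) * 0.3
W_o = rng.standard_normal((n_in, n_hid)) * 0.3

def run_epoch(xs, learn=True):
    """One pass over a sequence; the target at step t is the input at t-1."""
    context = np.zeros(n_hid)
    prev = np.zeros(n_in)
    total = 0.0
    for x in xs:
        z = np.concatenate([x, context])   # copy layer as extra inputs
        h = np.tanh(W_h @ z)
        y = W_o @ h
        err = y - prev                     # task: reproduce previous input
        total += float(err @ err)
        if learn:
            # Standard one-step back-propagation; no gradient flows
            # back through the copy layer, which is treated as fixed.
            grad_h = np.outer((W_o.T @ err) * (1 - h**2), z)
            W_o[:] -= lr * np.outer(err, h)
            W_h[:] -= lr * grad_h
        context = h
        prev = x
    return total / len(xs)

xs = rng.choice([0.0, 1.0], size=(200, n_in))
loss_before = run_epoch(xs, learn=False)
for _ in range(50):
    run_epoch(xs)
loss_after = run_epoch(xs, learn=False)    # should fall below loss_before
```

Because the copy layer is "just another set of inputs", the weight updates are the ordinary feed-forward ones, which is exactly the simplification the first paragraph describes.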
See also Hopfield networks for another form of recurrent network.