Intelligent Agents

An Intelligent Agent is, broadly, a computer system situated in some environment that can act with some degree of autonomy in order to achieve its design objectives (or goals), usually on behalf of human individuals, corporate entities, or organisations. Research into the development of Intelligent Agents (IAs) integrates and builds on many strands of work from Artificial Intelligence (AI), but often with more of an emphasis on building complete autonomous entities and societies of such entities (as in multi-agent systems). This page attempts to collect information on techniques and resources relevant to the development of intelligent agent software using Tcl.

IA technologies can be divided into broad categories based on the relative abilities of the resulting agents:

Reactive Agents

The most basic ability of any intelligent agent is to react to events occurring within the environment in which it is situated. Tcl's wonderful event loop is a useful mechanism for notifying an agent when an interesting event has occurred in its environment. Once an event has been detected, however, the agent has to decide on an appropriate response. A reactive (or reflexive, or stimulus-response) agent chooses its action as a function of the currently sensed state of the environment, possibly together with some stored description of previous states of the environment and previous actions taken (e.g., to ensure consistency of actions, or to smooth readings from noisy or inaccurate sensors). There are many techniques that can be used to compute this function, including:

Reactive agents can exhibit surprisingly complex behaviour; partially this is a reflection of the complexity of the environments within which they are situated. See Braitenberg Vehicles for a demonstration of the variety of behaviours that can be implemented by simple reactive agents.
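The stimulus-response idea can be sketched in a few lines (shown here in Python for concreteness; the structure translates directly to Tcl, and all names and thresholds are illustrative): the action is a function of the current percept plus a small memory of the previous reading, used to smooth a noisy sensor.

```python
# Illustrative stimulus-response agent (names and thresholds invented):
# the chosen action depends only on the (smoothed) current sensor reading.

class ReactiveAgent:
    def __init__(self):
        self.last_reading = None  # stored previous state, used for smoothing

    def act(self, reading):
        # Smooth the noisy sensor value by averaging with the previous one.
        if self.last_reading is not None:
            reading = (reading + self.last_reading) / 2
        self.last_reading = reading
        # Stimulus-response rules keyed on the smoothed reading.
        if reading > 0.7:
            return "retreat"
        elif reading > 0.3:
            return "turn"
        return "advance"

agent = ReactiveAgent()
print(agent.act(0.1))  # advance
print(agent.act(0.9))  # smoothed to 0.5 -> turn
```

The smoothing step is the only "memory" the agent has; everything else is a direct mapping from stimulus to response.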

Deliberative Agents and Planning

Deliberative agents extend reactive agents by attempting to plan ahead: predicting future states of the environment and developing long-term courses of action to achieve given design objectives. We can also make a distinction between deliberation -- choosing which goals to commit to -- and planning (means-end analysis), where an agent works out how to achieve those goals. Another name for a deliberative agent is a proactive agent. An agent can be deliberative without doing planning (e.g., if it just selects a pre-canned plan for achieving the goal from a plan library). Choosing between goals may be accomplished by feasibility analysis (i.e., adopting those goals that the agent has the resources to actually achieve), by reasoning from higher-level goals or motives (e.g., an agent may have some fixed higher-level goals and deliberates to choose sub-goals), or by some decision-theoretic notion of the utility of goals (i.e., a real-valued function that maps goals to some measure of how useful achieving each goal would be to the agent's overall aims). Such deliberation usually involves having an internal model of the world and predicting what the effect of achieving a goal will be. Planning can then use this model to determine what sequence of actions will achieve the goal.
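Utility-based deliberation of the kind described above might be sketched as follows (Python for concreteness; the goals, costs, and utilities are invented for the example): infeasible goals are filtered out first, and the agent then commits to the feasible goal with the highest utility.

```python
# Illustrative utility-based deliberation: feasibility analysis followed
# by a utility-maximising choice among the surviving candidate goals.

def deliberate(goals, resources):
    """goals: list of (name, cost, utility); resources: available budget."""
    feasible = [g for g in goals if g[1] <= resources]   # feasibility analysis
    if not feasible:
        return None                                      # nothing worth adopting
    return max(feasible, key=lambda g: g[2])[0]          # highest-utility goal

goals = [("explore", 5, 2.0), ("recharge", 2, 3.5), ("build-base", 20, 9.0)]
print(deliberate(goals, 10))  # recharge: build-base is infeasible at this budget
```

A real deliberator would derive the cost and utility figures from its world model rather than take them as given.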

There are problems associated with deliberation and planning, such as mapping complex and often noisy information about the environment onto an often idealised internal representation, and keeping such an internal model up-to-date while the environment may be constantly changing in unpredictable ways. Rodney Brooks summed up these problems with the slogan "the world is its own best model" in a series of influential papers (e.g. Intelligence without Reason [L1 ], PostScript). Any goal we adopt is based on information that is becoming increasingly out of date, and plans made far into the future have little hope of still being suitable when we actually come to execute them. For some tasks, however, such as those requiring careful management of resources over the long term, deliberative planning techniques can be worth the effort. A common approach to planning is to formulate a description of the environment or problem as a directed graph of states (a state-space) in which we can search for a solution (a path to a goal-state) by applying operators which connect states. A number of pages discuss different search strategies:
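Planning as state-space search can be sketched with a breadth-first search over such a graph (Python for concreteness; the states and operators are a toy example). The returned plan is the sequence of operators connecting the start state to the goal-state.

```python
from collections import deque

# Illustrative state-space planner: states are graph nodes, operators are
# labelled edges, and breadth-first search finds a shortest operator sequence.

def plan(start, goal, operators):
    """operators: dict mapping state -> list of (action, next_state)."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path                      # sequence of actions to the goal
        for action, nxt in operators.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None                              # no plan exists

ops = {
    "at-home":   [("walk", "at-shop")],
    "at-shop":   [("buy", "have-food"), ("walk-back", "at-home")],
    "have-food": [("cook", "fed")],
}
print(plan("at-home", "fed", ops))  # ['walk', 'buy', 'cook']
```

Swapping the queue for a stack gives depth-first search, and a priority queue keyed on cost plus a heuristic gives A*; the surrounding machinery is unchanged.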

Beyond deliberative agents, there are ideas such as meta-reasoning/reflection where an agent employs reactive or deliberative methods to reason about its own operation (e.g. to avoid repetitive behaviour), and hybrid/three-layer architectures which consist of a low-level behaviour-based (reactive) layer that takes care of particular behaviours (e.g., "find food", "avoid obstacles" etc), a sequencing/control layer that decides which behaviours should be active, and a planning/deliberative layer which can be called on to plan longer-term activity. Such 3-layer architectures differ in which layer is in overall control of the agent.
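The three-layer pattern might be sketched like this (a deliberately simplified Python illustration; the behaviours, the priority ordering, and the stub planner are all invented). Here the sequencing layer is in overall control: it tries the reactive behaviours in priority order and falls back to the deliberative layer only when no behaviour fires.

```python
# Illustrative three-layer (hybrid) architecture with the sequencing
# layer in overall control. All behaviours and actions are invented.

def reactive_layer(percept, behaviour):
    # Each behaviour is a simple stimulus-response rule; it returns an
    # action if it applies to the current percept, or None otherwise.
    rules = {
        "avoid-obstacles": lambda p: "swerve" if p.get("obstacle") else None,
        "find-food":       lambda p: "graze" if p.get("food") else None,
    }
    return rules[behaviour](percept)

def deliberative_layer(percept):
    # Stand-in for a real planner: called only when no behaviour applies.
    return "plan-route-to-food"

def sequencing_layer(percept):
    # Fixed priority ordering decides which behaviour is active.
    for behaviour in ("avoid-obstacles", "find-food"):
        action = reactive_layer(percept, behaviour)
        if action:
            return action
    return deliberative_layer(percept)

print(sequencing_layer({"obstacle": True}))  # swerve
print(sequencing_layer({}))                  # plan-route-to-food
```

Putting the deliberative layer in charge instead (having it invoke the lower layers) gives a different member of the same family of architectures, as the paragraph above notes.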

Knowledge Representation and Reasoning

Some intelligent agent applications can involve representing and reasoning with large amounts of complex information about a problem domain, or general "common-sense" knowledge:

Knowledge-based planning mechanisms include those based on the Situation Calculus and STRIPS-style planning operators.
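A STRIPS-style operator can be sketched as a set of preconditions plus add and delete lists over a state represented as a set of propositions (Python for concreteness; the operator itself is a toy example):

```python
# Illustrative STRIPS-style operator application: a state is a set of
# propositions; an operator applies when its preconditions hold, and its
# effect is to delete some propositions and add others.

def applicable(state, op):
    return op["pre"] <= state            # all preconditions hold in state

def apply_op(state, op):
    assert applicable(state, op)
    return (state - op["delete"]) | op["add"]

move_a_to_b = {
    "pre":    {"at-A", "path-A-B"},
    "delete": {"at-A"},
    "add":    {"at-B"},
}

state = {"at-A", "path-A-B"}
print(sorted(apply_op(state, move_a_to_b)))  # ['at-B', 'path-A-B']
```

A STRIPS planner searches for a sequence of such operator applications leading from the initial state to one satisfying the goal, which connects this representation directly to the state-space search described earlier.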

Multi-Agent Systems

Often multiple agents may have to interact or even actively cooperate and coordinate in order to achieve tasks in a shared environment. Development of multi-agent systems is an area of much current research effort. Topics include:

  • Modal Logic to reason about the beliefs and states of other agents;
  • Speech Acts to influence the beliefs of other agents;
  • Use of Game Theory and economic and political theory to reason about interactions with other agents;
  • Language techniques, such as a Parser using recursive descent, to understand messages from other agents;
  • General communication protocols and representations, such as XML and SOAP;
  • Agent communication languages: KIF, KQML, and the FIPA agent languages;
  • Natural Language Processing (NLP);
  • Cooperation: Task Sharing, Joint Intentions, Partial Global Planning, etc.
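As a flavour of what an agent communication language looks like, a KQML-style performative can be composed as an s-expression of keyword/value pairs (a simplified sketch in Python; real agent platforms provide much richer message handling, and the agent names and content here are invented):

```python
# Illustrative composition of a KQML-style message string. Keyword
# parameters become :keyword value pairs inside the performative.

def kqml(performative, **params):
    fields = " ".join(f":{k.replace('_', '-')} {v}" for k, v in params.items())
    return f"({performative} {fields})"

msg = kqml("tell",
           sender="agent-a",
           receiver="agent-b",
           language="KIF",
           content='"(price widget 42)"')
print(msg)
# (tell :sender agent-a :receiver agent-b :language KIF :content "(price widget 42)")
```

The performative (`tell`) conveys the speech act, while the `:content` field carries the actual proposition, here expressed in KIF.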

PS-I (originally standing for "Political Science-Identity") is a sophisticated but easy-to-use toolkit for producing powerful agent-based or computational simulation models.

SensorWare is an implementation of a mobile agent environment for wireless sensor networks. Scripts can move their code and data from node to node, autonomously. The distributed algorithms are realized as Tcl scripts that are autonomously replicated or migrated to the “proper” sensor.

In SensorWare, nodes are configured, based on application requirements, as sets of services to be exploited by agents. Services are abstraction layers over the operating system and hardware resources, offering pre-defined operations bundled into packages; they can be re-configured at runtime as new requirements arrive from applications. Application requests are distributed through the network in the form of agents that execute their tasks using the services available at each node. Multiple agents can reside on a single node, waiting for execution, as state machines.

Swarm is a platform for agent-based models (ABMs) that includes:

  • A conceptual framework for designing, describing, and conducting experiments on ABMs;
  • Software implementing that framework and providing many handy tools; and
  • A community of users and developers that share ideas, software, and experience.

Swarm includes a Tcl/Tk GUI and an Objective-C/Tcl library that allows interaction with Objective-C code from the Tcl interpreter.

Swarm is available on Linux, Windows, and Mac.

Books, Journals and Websites

In addition to the resources on the Artificial Intelligence page, there are also some resources dedicated to intelligent agents and multi-agent systems in particular: