Paolo Petta | Austrian Research Institute for Artificial Intelligence, Vienna, Austria | Paolo.Petta (at) ofai.at
Carlos Pinto-Ferreira | Instituto de Sistemas e Robótica, Instituto Superior Técnico, Lisboa, Portugal | cpf (at) isr.ist.utl.pt
Rodrigo Ventura | Instituto de Sistemas e Robótica, Instituto Superior Técnico, Lisboa, Portugal | yoda (at) isr.ist.utl.pt
Fundamental requirements for long-lived autonomous systems include the ability to cope with unexpected external or internal events (which may, e.g., call for a prompt response) and with long-term changes of their environment. In addition, they have to manage the pursuit of multiple goals efficiently with limited resources.
To achieve reliability under such circumstances is obviously a very challenging task: although traditional controllers can be fine-tuned to achieve optimal performance under specific circumstances, they generally lack long-term robustness. In contrast, biological systems manage to survive in a complex, unpredictable and aggressive (in the sense of threatening their survival) environment. In some cases this is achieved through full individual autonomy (e.g., spiders), while in others organisms live in a community (e.g., ants, although an ant colony as a whole can in turn be considered a single autonomous being). It seems reasonable to assume that the principles (if not the very mechanisms) lying at the heart of biological autonomy could be successfully transposed to the AI field.
Natural intelligence has evolved maintaining two distinct modes of interaction with the environment. The older mode (from the perspective of species evolution) has existed since the first animals and corresponds to reactive behavior. In this mode there is neither world representation nor reasoning ability, the sensor space has low dimensional complexity, and the actuators respond reactively to external stimuli: behavior is driven mostly by solicitations from the external environment.
To cope with increasingly complex, competitive and aggressive environments, evolution simultaneously followed two parallel paths. The first is that of sensor complexity (e.g., the appearance of vision systems). But as the sensor space grew considerably and it was not physically viable to evolve reactive systems able to cope with this complexity, a second, alternative route had to be found: intelligence¹.
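To make the contrast concrete, the following is a minimal sketch of the purely reactive mode just described: a fixed mapping from stimuli to actuator responses, with no world representation or reasoning in between. The stimulus and response names are illustrative assumptions of ours, not taken from the literature.

    # Sketch of the older, purely reactive mode: no world model,
    # no reasoning -- just a fixed stimulus-response mapping.
    # Stimulus and response names are illustrative assumptions.

    REACTIVE_RULES = {
        "light_gradient_left": "turn_left",
        "light_gradient_right": "turn_right",
        "contact_front": "back_off",
    }

    def reactive_step(stimulus: str) -> str:
        """Behavior is driven directly by environmental solicitations."""
        return REACTIVE_RULES.get(stimulus, "idle")

    print(reactive_step("contact_front"))  # -> "back_off"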
One of the difficulties when applying the lessons learnt from Nature is that a shift of perspective has to be undertaken: from a descriptive representation --- how things are supposed to happen in biological agents --- to a prescriptive model --- what processes should be implemented in order to reach similar agent competence, at least from a behavioral point of view.
When designing autonomous systems, special attention has to be paid to the issue of what shall be prescribed in the system. If long-term autonomy is sought, the control system has to be capable of dealing with dramatic changes in the controlled one: this means that if the controller contains a model of the controlled system, this model may lose its validity unexpectedly. Consequently, the preferable approach is for the control system to acquire a model by itself, rather than to rely on the designer. This consideration seriously constrains the kind of knowledge² that can usefully be built into the controller.
For this reason we propose to shift the focus of what the built-in knowledge models away from the controlled system and towards issues that concern the controller itself.
In [Ventura:Pinto-Ferreira:1998, Ventura:etal:1998a, Ventura:etal:1998b] we discussed the idea of bootstrapping an agent with some built-in associations between certain environmental situations and the corresponding responses, thereby ensuring that the agent is fit for survival (in terms of staying clear of or getting away from predators, getting food, and reproduction). These associations are analogous to what Damasio calls "primary emotions":
To what degree are emotional reactions wired in at birth? ... One possibility ... is that we are wired to respond with an emotion, in preorganized fashion, when certain features of stimuli in the world or in our bodies are perceived, alone or in combination. Examples of such features include size (as in large animals); large span (as in flying eagles); types of motion (as in reptiles); certain sounds (such as growling); certain configurations of body state (as in the pain felt during a heart attack) ... Note that in order to cause a body response, one does not even need to "recognize" the bear, snake, or eagle, as such, or to know what, precisely, is causing pain. All that is required is that early sensory cortices detect and categorize the key feature or features of a given entity (e.g. animal, object), and that structures such as the amygdala receive signals concerning their conjunctive presence. A baby chick in a nest does not know what eagles are, but promptly responds with alarm and hiding its head when wide-winged objects fly overhead at a certain speed. [Damasio 1994, pp.131-132]

On top of this built-in basic knowledge, the agent should learn new associations which "tune" the controller in order to adapt itself to changes in the environment. These two complementary approaches to coping with complexity bear similarity to the distinction between genotype and phenotype in living beings.
For that reason, the built-in knowledge has to address issues that do not depend on the predicted dynamics of the system, but rather on its external restrictions (e.g., stability regions, safety) and performance measures (e.g., energy consumption, efficiency). It can be assumed that, at least as a first approach, it is easier to characterize these variables, as they form a limited set of necessary associations between world features and the corresponding responses.
In other words, the built-in knowledge amounts to a mapping from certain features extracted from what is sensed of the world to a small set of relevant situations. Such a situation could be, for instance, the system approaching an instability region in state space. An agent could learn to anticipate these situations, but the basic pre-wired evaluation is required to bootstrap the agent.
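A minimal sketch of this bootstrapping idea follows; the feature and response names and the learning interface are hypothetical assumptions of ours, not the architecture of the cited papers. A fixed, genotype-like table of pre-wired, survival-relevant associations is complemented by phenotype-like associations learned at run time:

    # Sketch: pre-wired "primary" associations (the genotype-like part)
    # plus associations learned at run time (the phenotype-like part).
    # All feature and response names here are illustrative assumptions.

    BUILT_IN = {
        "large_looming_shape": "flee",        # cf. the chick and the eagle
        "energy_reserve_low": "seek_food",
        "near_instability_region": "retreat", # external restriction, not dynamics
    }

    class BootstrappedController:
        def __init__(self):
            self.learned = {}  # tuned by experience as the environment changes

        def learn(self, feature, response):
            # e.g. acquire a precursor feature that anticipates a pre-wired one
            self.learned[feature] = response

        def respond(self, feature):
            # Built-in evaluations bootstrap the agent; learned ones extend it.
            return BUILT_IN.get(feature) or self.learned.get(feature, "explore")

    c = BootstrappedController()
    c.learn("growing_shadow", "flee")   # anticipates "large_looming_shape"
    print(c.respond("growing_shadow"))  # -> "flee"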
Furthermore, within the limitations of this short abstract we can only point out that such an architecture also meets the requirements for the implementation of "functional representations", addressing in particular the system's capability to detect its own errors (see e.g. [Bickhard:1993, Bickhard:1998] and the discussion of cognizant failure in [Firby:1989] and [Gat:1992]).
In agents with non-trivial functionalities, most "interesting" behaviors are not carried out synchronously under full-time supervision, e.g. because limitations in available resources make it necessary to continuously redirect them, or because of real-time requirements that make it necessary to employ automatic fast reactive servo systems.
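As a toy illustration of this division of labor (a sketch under assumptions of our own, with hypothetical names; the cited architectures are not implemented here), the controller below never issues motor commands itself, but only re-sets the setpoint of a fast servo loop that runs unsupervised:

    # Sketch: a fast reactive servo loop runs continuously; the agent
    # exerts only recessed, "regulatory" control by adjusting its setpoint.
    # Gains and names are illustrative assumptions.

    class HeadingServo:
        def __init__(self, setpoint=0.0):
            self.setpoint = setpoint

        def step(self, measured_heading):
            # Proportional correction toward the setpoint: fast, automatic,
            # and not supervised by the high-level controller.
            return 0.5 * (self.setpoint - measured_heading)

    class Agent:
        def __init__(self):
            self.servo = HeadingServo()

        def regulate(self, desired_heading):
            # Regulatory control: change the goal of the loop,
            # rather than steering the actuators directly.
            self.servo.setpoint = desired_heading

    a = Agent()
    a.regulate(90.0)
    print(a.servo.step(60.0))  # the servo works toward 90.0 on its own -> 15.0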
In this sense, the agent (i.e., the controller) is effectively separated from its environment, and mostly exerts recessed high-level "regulatory" control (as opposed to direct steering). This is just one out of a number of reasons why, as e.g. succinctly observed in [McFarland:Bösser:1993], the pursuit of multiple goals and actions is the rule, not the exception. This in turn raises the issue of how to cope with the critique of goal-directed behavior that follows directly:
"The result is that behavior is directed, not by any goal representation, but in a holistic manner that takes into account all relevant aspects of the animal's internal state and of the perceived external situation." [ibd. p.184]
In this context the functional (i.e., prescriptive, see Introduction) appraisal theory of emotions offers an interesting model, introducing levels of indirection between sensory uptake and its interpretation³ as "action tendencies", as well as the translation of the current set of action tendencies into an actual action [Frijda:1986, Staller:Petta:1998, Petta:Staller:1998]. Among other facets, this architecture opens up additional dimensions of plasticity (adaptivity, learning) beyond the learning (identification) of new behaviors and the learning of the applicability and frequency of behaviors, namely the adaptation of the strength of these action tendencies [Frijda:1986, Rolls:1995].
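The following sketch shows, under assumptions of our own, the two levels of indirection just named: appraisal recodes an event into activations of action tendencies (whose strengths are the additional, adaptable parameters), and only the resulting set of tendencies is translated into an actual action. The tendency names, the features appraised, and the linear activation rule are hypothetical, not Frijda's or TABASCO's actual formulation.

    # Sketch: stimulus -> appraisal -> action tendencies -> action.
    # Names, features and the activation rule are illustrative assumptions.

    class Tendency:
        def __init__(self, name, action, strength):
            self.name = name          # e.g. "avoidance"
            self.action = action      # action it yields if it prevails
            self.strength = strength  # adaptable gain: a dimension of plasticity
            self.activation = 0.0

    def appraise(event, tendencies):
        # First indirection: the event is not acted upon directly, but
        # recoded (e.g. its "suddenness") into tendency activations.
        for t in tendencies:
            t.activation = t.strength * event.get(t.name, 0.0)

    def act(tendencies):
        # Second indirection: the whole current set of tendencies is
        # translated into one actual action.
        return max(tendencies, key=lambda t: t.activation).action

    tendencies = [Tendency("avoidance", "withdraw", 1.0),
                  Tendency("approach", "engage", 0.6)]
    appraise({"avoidance": 0.8, "approach": 0.9}, tendencies)
    print(act(tendencies))  # -> "withdraw" (0.8 > 0.54)

    # Plasticity beyond learning new behaviors: adapt the strengths.
    tendencies[1].strength = 1.2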
Bickhard M.H.: Representational Content in Humans and Machines, Journal of Experimental and Theoretical Artificial Intelligence, 5(4), 1993.
Bickhard M.H.: Robots and Representations, in Pfeifer R., et al.(eds.), From Animals to Animats 5: Proceedings of the Fifth International Conference on Simulation of Adaptive Behavior, MIT Press/Bradford Books, Cambridge/London, pp.58-66, 1998.
Damasio A.R.: Descartes' Error, Grosset/Putnam, New York, 1994.
Firby J.R.: Adaptive Execution in Complex Dynamic Worlds, Department of Computer Science, Yale University, New Haven, CT, Ph.D.Thesis, Yale University Technical Report, YALEU/CSD/RR #672, 1989.
Frijda N.H.: The Emotions, Cambridge University Press, Editions de la Maison des Sciences de l'Homme, Paris, 1986.
Gat E.: Integrating Planning and Reacting in a Heterogeneous Asynchronous Architecture for Controlling Real-World Mobile Robots, in Proceedings of the Tenth National Conference on Artificial Intelligence, AAAI Press/MIT Press, Cambridge/Menlo Park, pp.809-815, 1992.
Horswill I.: Visual Architecture and Cognitive Architecture, Journal of Experimental and Theoretical Artificial Intelligence, Special Issue: Software Architectures for Hardware Agents, 9(2/3), 277-292, 1997.
LeDoux J.E.: The Emotional Brain, Simon & Schuster, New York, 1996.
McFarland D., Boesser T.: Intelligent Behavior in Animals and Robots, MIT Press/Bradford Books, Cambridge/London, 1993.
Petta P., Staller A.: TABASCO: a Tractable Appraisal-Based Architecture for Situated Cognizers, AAAI Fall Symposium: Emotional and Intelligent: The Tangled Knot of Cognition, Orlando, FL, Poster, 1998.
Pfeifer R.: Building "Fungus Eaters": Design Principles of Autonomous Agents, in Maes P., et al.(eds.), From Animals to Animats 4, MIT Press/Bradford Books, Cambridge/London, pp.3-12, 1996.
Rolls E.T.: A Theory of Emotion and Consciousness, and Its Application to Understanding the Neural Basis of Emotion, in Gazzaniga M.S. (ed.): The Cognitive Neurosciences, MIT Press, Cambridge, MA, 1091-1106, 1995.
Staller A., Petta P.: Towards a Tractable Appraisal-Based Architecture for Situated Cognizers, in Canamero D., et al.(eds.), Grounding Emotions in Adaptive Systems, Workshop Notes, 5th International Conference of the Society for Adaptive Behaviour (SAB98), Zurich, Switzerland, August 21, pp.56-61, 1998.
Ventura R., Pinto-Ferreira C.: Emotion-based Agents, Proceedings of AAAI-98, AAAI Press, p.1204, 1998.
Ventura R., Custodio L., Pinto-Ferreira C.: Emotions--The Missing Link? in Emotional and Intelligent: The Tangled Knot of Cognition, Proc. of 1998 AAAI Fall Symposium, Orlando, FL, AAAI Technical Report FS-98-03, pp.170-175, 1998.
Ventura R., Custodio L., Pinto-Ferreira C.: Artificial Emotions --- Goodbye Mr. Spock!, Progress in Artificial Intelligence, Proc. of IBERAMIA'98, Lisbon, Portugal, Colibri, pp.395-402, 1998.
¹ The term "intelligence" is used here in a broad sense, as the ability to interact and cope with a given environment of some non-trivial complexity.
² "Knowledge" is taken to include models of architectures and laws of how to adapt them in time.
³ Including the recoding of relevant information, e.g. the "suddenness" of events, into more persistent physiological states.