Artificial Intelligence (AI) aims to synthesize intelligence in artefacts. However, two families of approaches exist, disagreeing in their notion of what intelligence actually means [Ziemke1997], [Franklin1995]. On the one hand, Top-Down AI considers intelligence as the capacity to form and manipulate internal representational models of the world. On the other hand, Bottom-Up AI (or Autonomous Agents) considers intelligence as a biological feature [Maturana and Varela1980]; this notion is often referred to as Enactivism.
There is a vast number of papers dealing with Autonomous Agents. Our aim is not to go through all of them in depth, but rather to briefly introduce the notions to which our work is related. Autonomous agents are by definition considered to be embodied systems (for the different forms of embodiment, see for instance [Brooks1991], [Lerena and Courant1996], [Robert et al.] and [Nwana1996]). They are designed to fulfill internal or external goals through their own actions, in continuous, long-term interaction with the (possibly unpredictable and dynamic) environment in which they are situated.

Dealing with interactions leads naturally to the concept of emergence of behavior and/or functionality. Emergence indeed offers a bridge between the necessity of complex and adaptive behavior at a macro level and the mechanisms of multiple competences and situation-based learning at a micro level. A system's behavior can be considered emergent if it can only be specified using descriptive categories that do not apply to the behavior of its constituent components, as the sketch below illustrates.
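To make this notion concrete, consider the following minimal sketch (not part of the original work; a standard boids-style simulation in the spirit of Reynolds' flocking model, with parameter values chosen here for illustration). Each agent obeys only local rules (cohesion, alignment, separation with respect to nearby agents), yet the population as a whole exhibits "flocking", a descriptive category that has no meaning for any single agent. The macro-level order parameter `polarization` is a hypothetical helper defined here to quantify that category:

```python
import math
import random

class Agent:
    """An agent with a position and a unit-speed heading on a 100x100 torus."""
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        angle = random.uniform(0, 2 * math.pi)
        self.vx, self.vy = math.cos(angle), math.sin(angle)

    def step(self, neighbors):
        """Micro-level rules: each agent reacts only to its local neighbors."""
        if neighbors:
            # Cohesion: steer toward the local center of mass.
            cx = sum(n.x for n in neighbors) / len(neighbors)
            cy = sum(n.y for n in neighbors) / len(neighbors)
            self.vx += 0.01 * (cx - self.x)
            self.vy += 0.01 * (cy - self.y)
            # Alignment: nudge heading toward the neighbors' average heading.
            avx = sum(n.vx for n in neighbors) / len(neighbors)
            avy = sum(n.vy for n in neighbors) / len(neighbors)
            self.vx += 0.05 * (avx - self.vx)
            self.vy += 0.05 * (avy - self.vy)
            # Separation: move away from neighbors that are too close.
            for n in neighbors:
                d = math.hypot(n.x - self.x, n.y - self.y)
                if 0 < d < 2.0:
                    self.vx -= (n.x - self.x) / d
                    self.vy -= (n.y - self.y) / d
        # Renormalize to unit speed and move (world wraps around).
        speed = math.hypot(self.vx, self.vy) or 1.0
        self.vx, self.vy = self.vx / speed, self.vy / speed
        self.x = (self.x + self.vx) % 100
        self.y = (self.y + self.vy) % 100

def polarization(agents):
    """Macro-level order parameter: near 1.0 when all headings are aligned,
    near 0.0 for random headings. 'Flocking' is visible only at this level."""
    vx = sum(a.vx for a in agents) / len(agents)
    vy = sum(a.vy for a in agents) / len(agents)
    return math.hypot(vx, vy)

agents = [Agent() for _ in range(50)]
for t in range(500):
    for a in agents:
        near = [b for b in agents
                if b is not a and math.hypot(b.x - a.x, b.y - a.y) < 10]
        a.step(near)
    if t % 100 == 0:
        print(f"t={t:3d}  polarization={polarization(agents):.2f}")
```

Running the sketch, one should see the polarization rise from near zero toward one as purely local interactions produce a globally aligned flock: the macro-level description ("the flock moves coherently") cannot be reduced to a statement about any individual rule, which is precisely the sense of emergence used above.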