
Systemic Intelligence for GrowiNg up Artefacts that Live

Deliverables

Workshop: On Growing up Artifacts that Live, Basic Principles and Future Trends

One of the most challenging features of living artifacts is the ability to grow. One of the most interesting features of growing artifacts is the special capability to grow up.

Aim and Scope

The aim of the workshop is to elucidate the basic principles and fundamental requirements for artifacts that can grow up. To "grow up" means that the system starts with a basic, pre-structured set of functionalities and develops its individual capabilities during its lifetime in close interaction with the environment. A schedule of temporal development drives the artefact through a well-defined sequence of stages, from an infancy state to an individually matured entity. Along this sequence the artefact learns with respect to, and in interaction with, the environment, piling up experience and reaching qualitatively new stages of behaviour. Besides adequate learning and adaptation rules, the organisation of the memory and the modular structure of the system must be designed to enable this ontogenetic process of development. Below is a brief summary of theses and principles that are held to lead to a living, up-growing artefact:
- One of the most challenging features of living artifacts is the ability to grow.
- One of the most interesting features of a growing artefact is the special capability of growing up.
- Growing up means the evolution from an infant-like, pre-defined state to a fully matured entity.
- Growing up requires a special organisational structure of the entire artefact that allows it to grow up.
- Growing up requires interaction with the environment, including interaction with other "living artifacts".
- Growing up requires the capability of learning from the experience acquired in interaction with the environment.
- Learning from experience requires a specialised structure of the underlying system.
- The specialised structure (e.g. a systemic architecture) covers adaptive structures, learning schemes, and the organisation of memory and reasoning.

Fundamentals from psychology, memory organisation, the theory of learning (machine learning and psychology), underlying systemic architectures enabling the required capabilities, cognitive science and behavioural knowledge, and further principles are within the scope of the workshop. The workshop will address, but not be limited to, the topics listed below:
- Internal models and representation;
- Architectures for autonomous agents;
- Behavioural sequencing;
- Learning and development;
- Psychology of learning;
- Motivation and emotion;
- Emergent structures and behaviours;
- Evolutionary and co-evolutionary approaches.

Not only the state of the art, but also current and novel ideas and future trends are the focus of this workshop. Unconventional, blue-sky ideas in particular are welcome and will be considered valuable for presentation and discussion within the workshop. An open brainstorming discussion will therefore be part of the workshop. The talks and posters will be on an open basis, encouraging scientists to present even unusual ideas.

Programme / Scientific Committee
- Alois Knoll, Technical University Munich (TUM), Germany
- Andy M. Tyrrell, The University of York, United Kingdom
- Horst-Michael Gross, Ilmenau Technical University, Germany
- Tim Pearce, University of Leicester, United Kingdom
- Ulrich Rückert, University of Paderborn, Germany
- Giulio Sandini, University of Genova, Italy
- Thomas Christaller, Fraunhofer Institute AiS, Germany
- Bruno Apolloni, University of Milan, Italy
- Peter Ross, School of Computing, Napier University, Edinburgh, Scotland (UK)
- Georg Dorffner, Austrian Research Institute for Artificial Intelligence (ÖFAI), Austria
- Erich Prem, Austrian Research Institute for Artificial Intelligence (ÖFAI), Austria
- David Willshaw, Institute for Adaptive and Neural Computation, The University of Edinburgh, Scotland (UK)
- Giovanna Morgavi, Istituto per i Circuiti Elettronici, National Research Council (ICE-CNR), Italy
- Nils Goerke, Neuroinformatics, University of Bonn, Germany
Biological systems live and grow. Many aspects are inherent to the concept of living, such as adaptation, interaction with the environment, and the ability to deal with limited resources. Living systems present multiple levels of organization, with elements at one level interacting and aggregating to create more complex behaviour at a higher level. In recent years, many new techniques for investigating spatio-temporal activity in living beings have demonstrated the presence of features common to the behaviour of self-organizing dynamical systems. Thus a question arises: is this chaos useful for modelling living beings? The answer is very difficult to find. Many experimental data support the dynamic chaotic modelling of living systems. Complex behaviours such as perceiving, intending, acting, learning, and remembering arise as metastable spatio-temporal patterns of brain activity that are themselves produced by the cooperative interactions among neural clusters. In this paper we present and discuss that question and try to give indications for a possible answer, with the aim of defining the basic features of a behavioural kernel for living artefacts. The paper also analyses the literature on self-organization in living beings. Living organisms display dynamic phenomena that are essential aspects of self-organization, such as self-maintenance, self-transformation and self-transcendence. Many studies have shown that living organisms exhibit chaotic behaviour when they create new structures and new patterns of behaviour. Every living system functions as a whole, manifesting properties that are not evident in its parts. The whole is more than the sum of its parts. A human being is something more than just a conglomerate of carbon, oxygen, and water, mixed with a few other minerals. A human is even more than a conglomerate of cells and tissues.
Emergent phenomena have features that were not previously observed in the complex system under observation. This novelty is the source of the claim that features of emergent phenomena are neither predictable nor deducible from lower or micro-level components. This overview suggests that Nature draws many advantages from chaotic processes. Chaos seems to be essential for this creation of information. It may have an important neurological function: some researchers have speculated that it could provide a flexible and rapid means for the brain to discriminate between different perceptual stimuli. Some experiments have shown that chaotic processes allow a certain elasticity during the recognition and learning phases. Unfortunately, chaos models are very difficult to treat. The numerous attempts at building chaotic Artificial Neural Networks, however fascinating, proved really difficult to train. Nevertheless, all the self-organizing phenomena found in nature suggest that dynamic chaotic models show characteristics essential to explaining and building emergent behaviours and growing up in living artefacts.
Learning Robot Control based on an Internal Value System: The underlying architectural concept behind the control structure is the Systemic Architecture approach developed within the SIGNAL research project (IST-2000-29225). This approach implements the controller as a set of modules with distinct functionality. These modules are organised in layers, where higher layers implement higher functions. In addition, the architecture is organised into two main branches: one branch runs from low-level sensory data upwards to higher sensory capabilities (sensory upstream); the other is directed from high-level, highly sophisticated control schemes down to low-level motor functions (actuatory downstream). Although this architecture does not aim to cover all possible control schemes, a wide variety of modern controller designs can easily be realised with it. The hierarchical structure of the robot control system is divided into five major functional sections:
- The robot in its environment; this can be a real robot or a simulation of both.
- The sensory upstream, with stages of sensory functions of increasing complexity.
- The actuatory downstream, with stages of decreasing action or behavioural complexity.
- An Internal Value System (IVS; "drives", "emotions") based on sensory values to govern the action selection mechanism. The IVS values serve as input both to the action selection and to the action selection learning modules.
- The action selection mechanism, activating actions, action programs and complete behaviours of different complexity. The action selection is designed to be learnable.

Getting autonomous robots to do the things we want them to do is a challenge. Even defining the parameters of a given controller type is very hard to realise for interesting robotic tasks, and designing controllers that are easily configured in an adequate way is far more complicated. The idea of making a controller learn an adequate set of parameters and functions for a given task is not completely new, but it is still not solved sufficiently. The developed system is a widely usable approach to intelligent control of autonomous robots and autonomous agents.
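The IVS-governed action selection described above can be illustrated with a minimal sketch. All names and numbers here are hypothetical, not taken from the SIGNAL implementation: drive levels are low-pass filtered sensory values, and the action whose drive-weighted utility is highest wins (the utility table being the learnable part).

```python
def ivs_update(drives, sensors, decay=0.95):
    """Update drive levels ("energy", "curiosity", ...) from sensory values.
    Each drive is a slow, low-pass filtered trace of its stimulus."""
    new = {}
    for name, level in drives.items():
        stimulus = sensors.get(name, 0.0)  # sensory value feeding this drive
        new[name] = decay * level + (1.0 - decay) * stimulus
    return new

def select_action(drives, utilities):
    """Pick the action with the highest drive-weighted utility.
    utilities[action][drive] is the learnable part of the mechanism."""
    def score(action):
        return sum(drives[d] * w for d, w in utilities[action].items())
    return max(utilities, key=score)

drives = {"energy": 0.2, "curiosity": 0.8}
utilities = {
    "recharge": {"energy": 1.0},
    "explore":  {"curiosity": 1.0},
}
drives = ivs_update(drives, {"energy": 0.1, "curiosity": 0.9})
action = select_action(drives, utilities)
```

With the curiosity drive dominant, the sketch selects "explore"; a learning module would adjust the utility weights from experience.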
The result presents an approach to symbol anchoring based on mapping sequences of distance measurements from simple sensors. The sensory space of the mobile robot is pre-structured according to its experiences when it first moves around unexplored environments. Such pre-structuring depends not only on environmental features, but also on the type of behaviour the robot exhibits. Object representations correspond to streams of sensory signals that are mapped onto this sensory space and classified by a sequence detection mechanism. We report novel experimental results with this technique, comparing variants of the approach and simpler methods. We present data from experiments with varied parameters and input data types, such as motor and distance sensor information. In our real mobile robot scenario, the robot successfully discriminates a number of objects that can then be anchored in a second step using input from a human supervisor.
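The core idea, a pre-structured sensory space plus sequence detection, can be sketched in a toy form. This is not the paper's implementation; the bin thresholds, signatures and object names are invented for illustration: distance readings are quantized onto a small pre-structured space, and an object is recognized when its stored signature sequence occurs in the quantized stream.

```python
def quantize(readings, bins=(0.2, 0.5, 1.0)):
    """Map raw distance readings onto a small, pre-structured sensory space."""
    def level(r):
        for i, b in enumerate(bins):
            if r < b:
                return i
        return len(bins)
    return [level(r) for r in readings]

def detect(stream, signatures):
    """Return the names of all objects whose signature occurs in the stream."""
    found = []
    for name, sig in signatures.items():
        n = len(sig)
        if any(stream[i:i + n] == sig for i in range(len(stream) - n + 1)):
            found.append(name)
    return found

# e.g. a near obstacle followed by open space
stream = quantize([0.1, 0.15, 0.3, 2.0, 2.0])
```

In a second, supervised step, a human could attach symbols to the detected signatures, which is the anchoring stage described above.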
The paper describes some concepts important for the creation of autonomous agents capable of growing up. Growing up means that living systems, starting from a pre-structured set of functions, develop competence to adapt ever better to the environment throughout life, from childhood to maturity. A living artefact grows up when its capabilities, abilities and knowledge shift to a further level of complexity, i.e. the complexity rank of its internal capabilities takes a step forward. We want to define an architecture containing mechanisms that play the same role for autonomous agents as the mechanisms that make humans so successful [1]. In the attempt to define an architecture for autonomous growing-up agents, we have been investigating the abstraction process in children as a natural part of a cognitive system. We studied deliberative and non-deliberative (emergent) mental adaptive and growing-up mechanisms. A list of functional requirements based on these concepts is then proposed. Key-Words: growing up, living artifacts, abstraction mechanisms, intelligent architecture, adaptable agents, epigenetic robotics.
A fast and simple robot simulator (version 1.8, http://www.dcs.napier.ac.uk/~peter/sw/rs1.8.tar.gz) in C++, using the Fast Light Tool Kit for the user interface. It runs under Linux or Windows. Think of it as a step forward from Olivier Michel's old Khepera simulator. This one has a separate world editor and allows circular and coloured objects; so, for example, you can draw objects in the background colour and thus bite arcs and holes out of other objects. The robot has noisy IR sensors as well as colour sensors, and can sense the colour it is standing on. Anything grey (that is, equal R, G and B) counts as carpet that the robot can cross. It can affect its world by dropping, detecting and picking up grey blobs, and the user can drag such blobs around too. On my 2 GHz PC the robot will do around 10,000 steps per second, where a step involves a movement, collision detection, any necessary skidding or changes due to collisions, and then sensor updating.
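The anatomy of one simulation step (movement, collision detection, collision resolution, sensor update) can be sketched as follows. This is an illustrative outline, not the simulator's actual C++ API; the dictionary fields and the crude "stay in place" collision response are assumptions.

```python
import math

def step(robot, world, dt=0.01):
    """One simulation step: move, detect collisions, resolve, update sensors."""
    # 1. movement: simple differential-drive kinematics
    v = 0.5 * (robot["v_left"] + robot["v_right"])
    robot["theta"] += (robot["v_right"] - robot["v_left"]) * dt
    nx = robot["x"] + v * math.cos(robot["theta"]) * dt
    ny = robot["y"] + v * math.sin(robot["theta"]) * dt
    # 2. collision detection against circular obstacles (x, y, radius)
    collided = any(math.hypot(nx - ox, ny - oy) < r + robot["radius"]
                   for ox, oy, r in world["obstacles"])
    # 3. resolution: here the robot simply stays put on collision
    if not collided:
        robot["x"], robot["y"] = nx, ny
    # 4. sensor update: e.g. the ground colour under the robot
    robot["ground_colour"] = world["ground"](robot["x"], robot["y"])
    return collided

robot = {"x": 0.0, "y": 0.0, "theta": 0.0,
         "v_left": 1.0, "v_right": 1.0, "radius": 0.1}
world = {"obstacles": [], "ground": lambda x, y: "grey"}
hit = step(robot, world)
```

Running such a loop tens of thousands of times per second is what makes the 10,000 steps-per-second figure above plausible even for richer worlds.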
The ROLFs (Regional and Online Learnable Fields) clustering algorithm combines the advantages of the k-means and k-nearest-neighbour clustering algorithms. In addition, the following improvements are realized:
- Representatives reduce memory: the learnt neurons represent the input data. Only the neurons are stored, not the original input data.
- Contiguous areas are detected: since the clustering algorithm detects contiguous areas within the input space, it is useful for detecting partitions, i.e. well-separated clusters, within the input space.
- Online adaptable: the net learns while patterns are presented one by one.
- Perceptive fields detect new input patterns: the algorithm is capable of detecting patterns belonging to new clusters.

The ROLFs use special artificial neurons that have a perceptive area defined by their position or centre (C) and by their width (sigma). During the learning phase, the centre C adapts towards the mean of the input data covered by the perceptive area, while sigma adapts towards the standard deviation. Adaptation rules known from self-organising maps are transferred to the k-means clustering algorithm to make it online-adaptable. The perceptive area of the neurons serves as a novelty detector for patterns. Thus the net is able to grow and to build representatives that reduce the input data for the epsilon-nearest-neighbour step. The ROLFs are a clustering method for unsupervised data clustering, information management and data mining.
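The ROLF neuron and its online growth rule can be sketched compactly. Learning rates, the initial sigma and the perceptive-radius multiplier rho are illustrative choices, not the published values: a pattern covered by no neuron's perceptive area spawns a new neuron, otherwise the closest covering neuron's centre and width adapt towards it.

```python
import math

def rolf_train(patterns, eta_c=0.1, eta_s=0.1, rho=2.0, sigma_init=1.0):
    """Sketch of ROLF clustering: each neuron is [centre, sigma], and its
    perceptive area is the region within rho * sigma of the centre.
    Uncovered patterns create new neurons, so the net grows online."""
    neurons = []
    for x in patterns:
        # find the closest neuron whose perceptive area covers x
        best = None
        for n in neurons:
            d = math.dist(n[0], x)
            if d <= rho * n[1] and (best is None or d < best[1]):
                best = (n, d)
        if best is None:
            neurons.append([list(x), sigma_init])  # novelty: grow the net
        else:
            n, d = best
            # centre moves toward the pattern, sigma toward the distance
            n[0] = [c + eta_c * (xi - c) for c, xi in zip(n[0], x)]
            n[1] += eta_s * (d - n[1])
    return neurons
```

Presenting two well-separated point clouds yields two neurons, whose centres then serve as the stored representatives of the input data.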
Within this structure several RLAs are connected within a network. Each RLA consists of a sensory condition C, a robot action command A and an expectation E of the forthcoming sensory input. Each RLA therefore represents a possible state of the robot within its environment, described by the current sensory input C. The robot state directly results in a possible action A connected to the sensory input C within the same RLA node. To generate RLAs, a short-term memory is needed to store short sequences of recent and interesting (i.e. significantly changing) sensory and motor events. This short-term memory contains the raw material for building new RLAs. But since an RLA node contains an expectation E concerning the consequences of the action A, RLA nodes can only be built at certain moments, when the system has received the information about the consequence E of its action A. Each RLA node contains short-term knowledge of the robot's environment: the actual robot state C, a possible action A and the sensor expectation E. Transferring the RLA nodes from the RLA pool into the RLA network and connecting them in a meaningful way extends the knowledge the robot has about its environment. After the learning process of the RLA network has finished, the network stores the long-term knowledge of the environment, distributed over the whole network as short-term knowledge in the nodes. Depending on the training method, the RLA structure can show different kinds of behaviour, such as wall following or obstacle avoidance. Figure 45 shows the result of such a training process done with a Khepera simulator. Based on such basic first-level movements, the RLA structure can be arranged in a hierarchical manner, where second-level RLAs can switch the behaviour or extract more complex information about the robot's environment.
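The (C, A, E) node structure and a simple linking rule can be sketched as follows. The linking criterion used here, connect node i to node j when i's expectation matches j's condition, is an illustrative reading of "connecting them in a meaningful way", not necessarily the exact rule used in the project.

```python
from dataclasses import dataclass, field

@dataclass
class RLA:
    """One node: sensory condition C, action A, expected next sensing E."""
    C: tuple
    A: str
    E: tuple
    successors: list = field(default_factory=list)

def build_network(short_term_memory):
    """Build RLA nodes from (sensing, action, next_sensing) episodes held
    in the short-term memory, then link i -> j whenever i.E == j.C."""
    nodes = [RLA(C=c, A=a, E=e) for c, a, e in short_term_memory]
    for i in nodes:
        for j in nodes:
            if i.E == j.C:
                i.successors.append(j)
    return nodes

nodes = build_network([((1,), "fwd", (2,)), ((2,), "left", (3,))])
```

Chains of such links are what let the finished network act as distributed long-term knowledge: following successors replays a feasible action sequence through the environment.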
M-SOMs, Multi Self-Organising (feature) Maps: The Multi-SOM (M-SOM) approach is a new variant of the Self-Organizing Map (SOM) with the intriguing capability of combining supervised and self-organised learning: self-organised-supervised M-SOM learning. Multi-SOMs consist of a set of partner SOMs that are trained simultaneously and in competition with each other. The different partner SOMs adapt to different classes. M-SOMs are well suited to data mining and self-organised data clustering. The underlying properties of the provided data are processed by the M-SOM and thereby classified. The size, shape and location of these classes are determined by the self-organising features of the M-SOM. Each detected class is represented by a symbol of its own. The M-SOM can then be used to classify a given state of the system.
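A stripped-down sketch of the partner-SOM idea follows. It is an assumption-laden simplification: neighbourhood adaptation is omitted (only the winning unit moves), and the unit counts and learning rate are invented, but it shows the key mechanism of one SOM per class trained concurrently, with classification by the overall winning unit.

```python
import math
import random

def msom_train(samples, labels, n_classes, units=3, epochs=30, eta=0.2, seed=0):
    """One small partner SOM (a list of units) per class; each labelled
    sample adapts the winning unit of its own class's SOM."""
    rng = random.Random(seed)
    maps = []
    for c in range(n_classes):
        own = [s for s, l in zip(samples, labels) if l == c]
        maps.append([list(rng.choice(own)) for _ in range(units)])
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            w = min(maps[y], key=lambda u: math.dist(u, x))  # winner unit
            for i in range(len(w)):
                w[i] += eta * (x[i] - w[i])
    return maps

def msom_classify(maps, x):
    """A pattern gets the class of the SOM owning the overall winning unit."""
    return min(range(len(maps)),
               key=lambda c: min(math.dist(u, x) for u in maps[c]))

maps = msom_train([(0, 0), (0.2, 0.1), (0.1, 0.3), (5, 5), (5.2, 4.9), (4.8, 5.1)],
                  [0, 0, 0, 1, 1, 1], n_classes=2)
```

The supervision enters only through which partner SOM a sample may adapt; within each map the placement of units remains self-organised.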
SAM: The Sensory-Actuatory Map for autonomous robots is an alternative approach to building topological maps. SAM is based on the Systemic Architecture approach of the SIGNAL project (IST-2000-29225, www.ist-signal.org). SAM is a mapping method based on the combination of topologically connected sensory and action information. The map has been realised within the SIGNAL project for an autonomous robot, and has been successfully evaluated on the real six-wheeled autonomous robot within a testing environment. The proposed sensory-actuatory map (SAM) represents the environment by linking successive sensory items (SI) together using those actions (A) that lead to the respective SI sequence. This interconnection graph contains the topology of the environment in a form that can be used for planning and navigation. The sensory information is acquired while the robot is moving through the world to be mapped. Thus, the robot gains experience and information about the environment. Within the stream of sensory information, the algorithm identifies self-contained fractions of it as special Sensory Items (SI). These sensory items should stand for the real-world items to be mapped. They must be easy to recognise and clearly distinguishable from each other. The action commands (A) are the output of the robot controller that is guiding the robot through the environment. A wall-following algorithm, for example, is well suited to this task, but other control algorithms that provide reproducible movements apply as well. SAM implements the following properties:
- It must contain and represent spatial knowledge, e.g. distances and interconnections.
- It must contain information about special items, e.g. real-world objects, metric positions, landmarks.
- It must associate the spatial knowledge with these items, and vice versa.
- It must reflect the existing topological properties between the items.
- It must be as specific as the mapped region is.
- The content of the map must be accessible.
- The map must allow navigation tasks.
- There must be an algorithm to build the map.
- Map making must work even if the circumstances are not ideal.

SAM is a widely usable approach to building maps, not only for autonomous robots but also for all other conceivable active agents (web browsers, ...).
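The SI-and-action graph described above can be sketched as a small data structure. The alternating-stream encoding and the item names are illustrative assumptions; the point is that linking successive sensory items by the action between them yields a graph over which standard search gives navigation plans.

```python
from collections import deque

def build_sam(experience):
    """Link successive Sensory Items (SI) with the action (A) that led
    from one to the next. `experience` alternates SI, A, SI, A, SI, ..."""
    sam = {}
    for i in range(0, len(experience) - 2, 2):
        si, a, nxt = experience[i], experience[i + 1], experience[i + 2]
        sam.setdefault(si, {})[a] = nxt
    return sam

def plan(sam, start, goal):
    """Breadth-first search over the map returns an action sequence."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        si, path = queue.popleft()
        if si == goal:
            return path
        for a, nxt in sam.get(si, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [a]))
    return None

sam = build_sam(["door", "fwd", "corner", "turn", "corridor"])
```

Because edges are labelled with the controller's own reproducible actions, a plan found in the graph is directly executable by replaying those actions.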
The approach presents a system theoretic framework inspired by biosemiotics for the study of semantics in autonomous artificial sign users. The approach uses an ethological view of analysing animal-environment interaction. We first discuss semiotics with respect to the meaning of signals taken up from the environment of an autonomous agent. We then show how semantic issues arise in a similar way when studying adaptive sign users. Anticipation and adaptation play the important role of defining purpose, which is a necessary concept in ALife semiotics. The proposed focus on sign acts leads to a semantics in which meaning is defined as the anticipated outcome of sign-based interaction. Finally, we argue that a novel account of semantics based on indicative acts is compatible with merely indicative approaches in more conventional semiotic frameworks.
The ideas of systemic intelligence provide a set of methodologies and paradigms that are, among other advantages, suitable for constructing control systems capable of growing up. In particular, the promising methods of Systemic Architecture, Schedule of Structural Development, Memory Organization and Rules for Learning and Adaptation are presented and discussed with respect to growing up an artifact. Of special interest is the concept of "growth" in the sense of "growing up" from a kind of infantile stage to a fully matured entity. To grow up an artifact from an infantile stage, via a sequence of learned abilities, to a fully matured entity is still a feature of life not yet sufficiently transposed onto technical systems. To enable artifacts to "grow up", a set of methodologies and principles is presented in this paper. The developed methodologies are already implemented in physically existing test beds that operate, adapt (and grow up) in real time and in the real world, proving that the proposed approach is feasible under real conditions. Two realizations (robot control, audio signal processing) of a systemic architecture for an up-growing system are working. Systemic intelligence is an idea for building a bridge between subsymbolic representations of knowledge (neural networks, fuzzy control, fuzzy logic, rules, differential equations, ...) and symbolically described capabilities (goals, reasoning, behaviour, intention, ...). The intelligent behaviour shown by living systems and some technical artifacts is neither the consequence of the symbolic description of their tasks nor the consequence of the subsymbolic representation of information. We postulate that intelligent behaviour arises only if the system has been designed adequately.
The major key features of systemic intelligent system design are listed below:
- Systemic architecture,
- Adaptive building blocks,
- Schedule of structural development,
- Rules for learning and adaptation,
- Memory organization for knowledge and reasoning.

Although the building blocks interact strongly, each one is responsible for a specialized task and will therefore be trained using an individually shaped learning and adaptation scheme with respect to the knowledge and experience acquired by the artifact. The complete process is governed by the schedule of structural development, with respect to the interaction with the environment the artifact resides in and to the task the artifact is supposed to learn. The initial design of this Systemic Architecture together with the schedule of development is the "genotype" defining the potential of the artefact. Individual experience during the process of growing leads to individual "phenotypes" of that entity. Growing up means that the artefact undergoes a process of ontogenesis throughout its operation time. The concept of Systemic Intelligence, and its realisation for control (Systemic Architecture), is a novel and powerful approach to building intelligently designed systems.
A Supervised, Edge-Adaptive Map (SEAM) is defined similarly to a Self-Organizing Map (SOM), but in a SEAM the neighbourhood-building edges are provided together with the input data: the input specifies the lengths of the edges, not the positions of the neurons. A SEAM will therefore not learn a representation of input data points, but a topological representation of the provided distances. The adaptation algorithm of a SEAM can be used to learn lower-dimensional representations of high-dimensional input data, or it can be applied to determine the positions of landmarks when only the distances between the landmarks are known.
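The landmark use case can be illustrated with a spring-like sketch. This is an assumed stand-in for the SEAM adaptation rule, with invented parameter values: neuron positions start at random and each provided edge pulls or pushes its two endpoints until the edge attains its given length.

```python
import math
import random

def seam_fit(n, edges, dim=2, epochs=500, eta=0.05, seed=0):
    """Place n neurons so that each edge (i, j, target_length) attains
    its provided length; positions are free, only distances are given."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    for _ in range(epochs):
        for i, j, target in edges:
            d = math.dist(pos[i], pos[j]) or 1e-9
            # move both endpoints along the edge to shrink the length error
            corr = eta * (d - target) / d
            for k in range(dim):
                delta = corr * (pos[j][k] - pos[i][k])
                pos[i][k] += delta
                pos[j][k] -= delta
    return pos

# three landmarks with pairwise distance 1: an equilateral triangle
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
pos = seam_fit(3, edges)
```

The recovered layout is unique only up to rotation, translation and reflection, which is exactly the invariance one expects when only distances are supplied.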
