The Action aims to develop a better understanding of sub-symbolic artificial intelligence and its application to sensorimotor tasks. Sub-symbolic AI uses representations that need no programmer-specified relationships to link them in an arbitrary (conceptual) way to the objects they represent. Two examples are:
-Analogical representations: In these, relationships between the parts of a problem domain are captured directly in the representation. An example is a visual buffer, which is literally just a picture of a scene. The difference is that the picture is 'active': dynamic processes such as diffusion and reaction with objects enable an active object (a moving car in a traffic scene, a moving robot arm) to plan movements, because it 'knows' about the scene from the results of the initiated dynamic processes. Analogical representations are therefore explored in the form of a visual buffer, which is used both to store expectations about a visual scene and to act as the medium in which sensorimotor actions are planned and monitored.
-Neural representations: In these, learning algorithms change the weights in a network in order to execute some task (classification, choosing a direction to move, finding features in an image). The resulting computation is sub-symbolic because, although the system performs the task, no one told it how to, and the way in which it does so is often not easy to understand, as with real neural systems. Neural mechanisms are therefore used to learn and compute generalisations about scenes with moving or still objects, so that concepts can be used to describe the scene and hence give verbal instructions, or so that expectations can be generated by recognising the scene as an instance of a general class.
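The 'active' visual buffer idea can be sketched in miniature. The toy below is not the Action's implementation: the grid, rates and iteration counts are all invented for illustration. Activation diffuses outward from a goal cell over the free space of a small map, and a simulated agent plans by repeatedly stepping to the neighbouring cell with the highest activation:

```python
# Hedged sketch of planning in an 'active' visual buffer: activation
# diffuses from the goal cell over free space; the agent plans by
# climbing the resulting gradient. All values are illustrative.

def diffuse(grid, goal, iterations=200, rate=0.25, decay=0.1):
    """Leaky diffusion from a clamped source at `goal`.
    grid: 2D list, 0 = free cell, 1 = obstacle."""
    rows, cols = len(grid), len(grid[0])
    field = [[0.0] * cols for _ in range(rows)]
    for _ in range(iterations):
        new = [[0.0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] == 1:
                    continue  # obstacles carry no activation
                neigh = [field[r + dr][c + dc]
                         for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                         if 0 <= r + dr < rows and 0 <= c + dc < cols
                         and grid[r + dr][c + dc] == 0]
                lap = sum(neigh) - len(neigh) * field[r][c]
                # the decay factor makes activation fall off with
                # distance from the goal, so the gradient never flattens
                new[r][c] = (field[r][c] + rate * lap) * (1.0 - decay)
        new[goal[0]][goal[1]] = 1.0  # goal cell is a clamped source
        field = new
    return field

def plan(grid, start, goal, field):
    """Greedy gradient ascent on the diffused field."""
    path, pos = [start], start
    while pos != goal and len(path) < 100:
        r, c = pos
        free = [(r + dr, c + dc)
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                if 0 <= r + dr < len(grid) and 0 <= c + dc < len(grid[0])
                and grid[r + dr][c + dc] == 0]
        pos = max(free, key=lambda p: field[p[0]][p[1]])
        path.append(pos)
    return path

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],   # a wall the path must go around
        [0, 0, 0, 0]]
goal = (0, 3)
field = diffuse(grid, goal)
route = plan(grid, (2, 0), goal, field)  # detours around the wall
```

The planner never needs an explicit model of the wall: the diffusion process simply fails to pass through obstacle cells, so the gradient steers the agent around them.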
Methods of sub-symbolic computing are explored, with applications that include robotics and vision. The sub-symbolic paradigms come from neural network learning algorithms and from analogical representations mapped directly from the problem domain. The Action pursues several approaches in parallel, detailed under APPROACH AND METHODS below: a robot arm controlled by reaction-diffusion dynamics on an internal map; autonomous wheeled robots grounded in real sensory input; trajectory planning for traffic scenes; reaction-diffusion spatial filtering of images; path planning with coupled oscillators and with reinforcement learning; a novel learning algorithm inspired by real neurons; and self-organising maps for scene analysis.
APPROACH AND METHODS
The Action employs a number of approaches:
-A robot arm connected to a workstation grabs and moves target objects under the control of an internally maintained map. Reaction-diffusion dynamics on the map define the relationships between the different objects and the arm, thus performing the sub-symbolic computations the tasks require.
-Autonomous robots on wheels have been developed. These 'agents' are 'grounded' in the real world, receiving a richness of dynamical sensory input unattainable in simulation. This provides a new framework for exploring adaptive algorithms and 'situated' intelligence.
-An internal map of a street scene is maintained and used to plan trajectories for traffic. The computational operations performed on the map include those of generalising, from examples, the generic forms of spatio-temporal events in order to build up sub-symbolic schemata for planning purposes.
-A reaction-diffusion dynamics is executed on images to perform spatial filtering. Edges can be found and noise removed using this novel technique.
-A 2D network of coupled oscillators is used for path planning. The "waves" interact with openings in "walls" in order to enable the simulated robot to choose a path suitable for its size.
-A reinforcement connectionist learning algorithm is used for robotic path planning. Two stages are involved: the building of an internal model of the workspace, and the construction of plans from particular starting configurations.
-A novel learning algorithm is developed, based on a proposal for sub-cellular and circuit learning in real neurons. This algorithm enables circuits to self-organise to expect certain inputs and to conduct 'internal simulations': dynamical processes analogous to the input dynamics.
-Self-organising maps are explored, with application to scene analysis. The maps are used to evolve basic feature detectors appropriate to the images, and also in the construction of higher-level descriptions. Multi-layer maps are developed for representations with a hierarchy of resolutions.
-The theoretical foundations of the approach are being analysed and clarified. The ULB group has ongoing work on the application of dynamics for computation.
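The spatial-filtering idea above can be sketched in one dimension. The Action's actual reaction-diffusion scheme operates on 2-D images and is richer than this; the signal and parameters below are invented for illustration. A few diffusion steps damp high-frequency noise, after which the discrete gradient of the smoothed signal peaks at the surviving edge:

```python
# Hedged 1-D sketch of diffusion as spatial filtering. Diffusion kills
# high-frequency noise quickly while the step edge survives, so the
# gradient of the smoothed signal peaks at the edge location.

def diffuse_1d(signal, steps=10, rate=0.25):
    """Discrete heat equation with reflecting boundaries."""
    s = list(signal)
    for _ in range(steps):
        s = [s[i] + rate * ((s[max(i - 1, 0)] - s[i]) +
                            (s[min(i + 1, len(s) - 1)] - s[i]))
             for i in range(len(s))]
    return s

noisy_step = [0.0, 0.1, -0.1, 0.05, 1.0, 0.9, 1.1, 0.95]  # edge between 3 and 4
smooth = diffuse_1d(noisy_step)
gradient = [abs(smooth[i + 1] - smooth[i]) for i in range(len(smooth) - 1)]
edge = max(range(len(gradient)), key=gradient.__getitem__)  # steepest change
```

The same principle carries over to 2-D: diffusion acts as a low-pass filter, and the locations where large gradients survive smoothing are the edges.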
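The reinforcement-learning route to path planning can be given in flavour with a generic tabular Q-learning agent on a one-dimensional corridor. This is standard Q-learning, not the two-stage model-building algorithm described above, and every parameter here is illustrative:

```python
import random

# Hedged sketch: a tabular Q-learning agent learns to reach the goal
# end of a short corridor from reward alone, with no programmed plan.
# Corridor size, learning rate and exploration rate are illustrative.

random.seed(0)

N = 6                     # corridor states 0..5; goal at state 5
ACTIONS = (1, -1)         # step right / step left
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(state, action):
    """Move in the corridor; reward 1 only on reaching the goal."""
    nxt = min(max(state + action, 0), N - 1)
    return nxt, (1.0 if nxt == N - 1 else 0.0)

for episode in range(300):
    s = 0
    while s != N - 1:
        # epsilon-greedy action selection (epsilon = 0.2)
        if random.random() < 0.2:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        # Q-learning update: alpha = 0.5, gamma = 0.9
        Q[(s, a)] += 0.5 * (r + 0.9 * best_next - Q[(s, a)])
        s = nxt

# After learning, the greedy policy steps right toward the goal
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)]
```

The learned Q-values fall off with distance from the goal, so the greedy policy encodes a plan even though no trajectory was ever programmed in.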
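The self-organising map mechanism can likewise be sketched with a minimal one-dimensional Kohonen map on synthetic 2-D inputs. The Action's maps are multi-layer and operate on image data; the map size and learning schedules below are invented for this toy version, which shows only the core mechanism:

```python
import math
import random

# Hedged sketch of a 1-D Kohonen self-organising map on synthetic 2-D
# inputs drawn from the unit square. Each input moves the winning unit
# and, to a lesser degree, its neighbours on the map, so the units
# spread to cover the input distribution in map order.

random.seed(1)

UNITS = 10
weights = [[random.random(), random.random()] for _ in range(UNITS)]

def winner(x):
    """Index of the unit whose weight vector is closest to input x."""
    return min(range(UNITS),
               key=lambda i: (weights[i][0] - x[0]) ** 2 +
                             (weights[i][1] - x[1]) ** 2)

STEPS = 2000
for t in range(STEPS):
    x = [random.random(), random.random()]  # uniform input sample
    lr = 0.5 * (1.0 - t / STEPS)            # decaying learning rate
    sigma = 3.0 * (1.0 - t / STEPS) + 0.5   # shrinking neighbourhood
    w = winner(x)
    for i in range(UNITS):
        h = math.exp(-((i - w) ** 2) / (2.0 * sigma ** 2))
        weights[i][0] += lr * h * (x[0] - weights[i][0])
        weights[i][1] += lr * h * (x[1] - weights[i][1])
```

After training, units that are neighbours on the map tend to respond to nearby regions of the input space, which is the property the Action exploits when building feature detectors and higher-level descriptions.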
PROGRESS AND RESULTS
The results of the Action come from the nine tasks above, developed in parallel with continuous cross-fertilisation of ideas. The diverse problem domains (image processing, navigation) are united by a bottom-up approach: letting the complexity of the world create the complexity of the organism.
The VUB group has had robots on display in various places, notably at the Seville Expo 92 and the Esprit Conference. The adaptive dynamics algorithm is becoming known to neuroscientists as a rationale for learning at the level of ion channels in the brain. Hamburg has implemented software which plans trajectories in traffic scenes and performs feature extraction in images. The ULB has a Transputer-based demo of path planning using the wave method. The Spanish group has a program showing a simulated robot learning complex path planning using reinforcement learning. The Finnish group has developed a sophisticated scene-processing environment and can show its self-organising algorithms learning to extract regularities.
A number of papers describing several of these methods have started to appear, some in journals and some in conference proceedings.
The short-term potential lies in contributing new techniques for robotics, planning and scene analysis which emphasise the generation, rather than the programming, of solutions. In the longer term, only approaches of this kind are expected to scale beyond a certain level of sophistication: only by understanding how dynamic computation can interact with a world in flux can one hope to build robust, adaptable machines.