Objective
A system for memory-guided perception will be built on the basis of neuromorphic VLSI technology. It will combine multiple VLSI chips with a unique communication infrastructure (address-event representation, AER) that provides a technical realisation of biological many-to-many connectivity. The aim of the system is to perform classification and match-to-sample tasks in real time on selected types of natural images and sounds. The system will emulate the mutually beneficial interaction between attention and learning found in humans: attention accelerates learning ('attend-to-learn') and learning guides attention ('learn-to-attend'). To verify these and other emergent behaviours, synthetic stimuli and benchmark tests adapted from visual and auditory psychophysics will be used. It will thus be possible to compare system and human performance directly at every stage.
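To make the role of AER concrete, the following minimal sketch (plain Python; the class name, table layout, and event format are illustrative assumptions, not the project's actual protocol) shows how spike events encoded as source addresses can be fanned out through a routing table to many destination synapses, which is the many-to-many connectivity the infrastructure is meant to provide.

```python
# Minimal sketch of address-event representation (AER) routing.
# Assumption: events are (source_address, timestamp) pairs and a routing
# table maps each source to many (target_neuron, weight) entries.
from collections import defaultdict

class AERBus:
    def __init__(self):
        # many-to-many connectivity: one source address -> many targets
        self.routing_table = defaultdict(list)

    def connect(self, src, dst, weight=1.0):
        self.routing_table[src].append((dst, weight))

    def broadcast(self, events):
        """Deliver a list of (src_address, t) spike events to all targets."""
        deliveries = []
        for src, t in events:
            for dst, w in self.routing_table[src]:
                deliveries.append((dst, t, w))
        return deliveries

bus = AERBus()
bus.connect(src=3, dst=10, weight=0.5)
bus.connect(src=3, dst=11, weight=0.2)   # the same source fans out to many targets
print(bus.broadcast([(3, 0.001), (3, 0.004)]))
```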
OBJECTIVES
- Design visual and auditory feature spaces that efficiently represent certain types of dynamic natural images and sounds (e.g. rippling wave patterns, speech);
- Conduct psychophysical experiments with human observers to demonstrate human 'object attention' in the context of synthetic and natural stimuli that are well represented by the chosen feature spaces (visual and auditory);
- Develop a saliency network that approaches human performance on figure-ground segregation and multiple-object tracking;
- Develop an associative network whose memory performance suffices for complex synthetic and natural images as represented by the chosen feature spaces;
- Couple saliency and associative networks to achieve emergent behaviours 'attend-to-learn', 'learn-to-attend', and 'multi-stable perception' with synthetic benchmark stimuli;
- Achieve and disseminate a stable and versatile communication infrastructure for neuromorphic, analogue VLSI devices;
- Reach a new level of complexity and performance with neuromorphic, analogue VLSI, and demonstrate the technological promise of this approach;
- Approach human performance on classification and match-to-sample tasks with a plurality of natural images and sounds that are well represented by the chosen feature spaces.
DESCRIPTION OF WORK
Management and documentation (WP0): Coordination and documentation.
Visual (WP1) and auditory (WP2) input and feature spaces: Collect natural images and sounds, prepare benchmark tests, measure human performance. Visual features are sensitive to orientation and direction of motion (~300 total). Auditory features are sensitive to pitch, formants, and spectral drift.
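As a purely illustrative aside, a visual feature of the orientation- and direction-sensitive kind mentioned above might be sketched as a Gabor-like filter; the toy filter bank below (numpy, with made-up sizes and parameters) is only a sketch under that assumption and says nothing about the project's actual ~300 features or the auditory pitch, formant, and drift features.

```python
import numpy as np

def gabor_kernel(size, theta, wavelength, sigma):
    """Orientation-tuned Gabor filter (illustrative parameters only)."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate into the preferred orientation
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# A toy bank of 8 orientations; feature activity = rectified filter response.
bank = [gabor_kernel(size=15, theta=k * np.pi / 8, wavelength=6.0, sigma=3.0)
        for k in range(8)]
image_patch = np.random.rand(15, 15)             # stand-in for a natural image patch
features = [max(np.sum(k * image_patch), 0.0) for k in bank]
print(np.round(features, 2))
```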
Saliency network - simulation, testing (WP3) and hardware (WP7): Simulate, test, and build 'saliency network' of ~1000 integrate-and-fire neurones and ~16000 static synapses to represent feature activity. Recurrent excitation and inhibition mediate 'saliency', 'attention tracking' and 'onset binding'. Produce 2 or 3 chip generations, joining 4+ chips with AER communication.
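A minimal software caricature of such a saliency mechanism, assuming leaky integrate-and-fire dynamics with local recurrent excitation and uniform inhibition (network size, time constants, and weights below are placeholder values, far smaller than the planned ~1000 neurones and ~16000 synapses), might look as follows: the most strongly driven location comes to dominate the spike output.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 50, 200, 1.0          # toy sizes: 50 LIF neurones, 200 time steps
tau, v_th, v_reset = 20.0, 1.0, 0.0

# Static synapses: local recurrent excitation plus uniform inhibition.
W = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if 0 < abs(i - j) <= 2:
            W[i, j] = 0.08        # excite near neighbours
W -= 0.02                         # global inhibitory background

v = np.zeros(N)
drive = 0.04 + 0.02 * rng.random(N)
drive[20] = 0.09                  # one 'salient' location receives extra input
spike_counts = np.zeros(N)

for t in range(T):
    spikes = v >= v_th
    v[spikes] = v_reset
    spike_counts += spikes
    v += dt / tau * (-v) + drive + W @ spikes.astype(float)

print("winner:", int(np.argmax(spike_counts)))   # expected near index 20
```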
Associative network - simulation, testing (WP4) and hardware (WP8): Simulate, test, and build 'associative network' of ~1000 integrate-and-fire neurones and ~30000 dynamic synapses. Hebbian potentiation and homosynaptic depression associate features of sensory objects. Produce 2 or 3 chip generations.
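A plasticity rule of the kind described could be sketched as follows (numpy; the stochastic binary-synapse model and all rates are assumptions made for illustration, not the chip's actual circuit): synapses between co-active neurones are potentiated, synapses from an active presynaptic to an inactive postsynaptic neurone are depressed, and a degraded cue then retrieves the stored pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 30
J = rng.random((N, N)) < 0.1          # dynamic binary synapses (True = potentiated)
p_pot, p_dep = 0.2, 0.05              # stochastic learning rates (assumed values)

def hebbian_update(J, pattern):
    """pattern: binary activity vector of the associative network."""
    pre = pattern[np.newaxis, :].astype(bool)    # presynaptic activity
    post = pattern[:, np.newaxis].astype(bool)   # postsynaptic activity
    # Hebbian potentiation: both pre- and postsynaptic neurone active.
    J |= (pre & post) & (rng.random((N, N)) < p_pot)
    # Homosynaptic depression: presynaptic active, postsynaptic inactive.
    J &= ~((pre & ~post) & (rng.random((N, N)) < p_dep))
    return J

stored = (rng.random(N) < 0.3).astype(int)       # one feature pattern to associate
for _ in range(50):
    J = hebbian_update(J, stored)

# Recall: a degraded cue retrieves the stored pattern via the learned synapses.
cue = stored.copy()
cue[:5] = 0
recalled = (J @ cue > 0.5 * cue.sum()).astype(int)
print("overlap with stored pattern:", int(recalled @ stored))
```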
Link module and emergent behaviours - simulation, testing (WP5) and hardware (WP6): Develop, build, and test a reciprocal link between saliency and associative networks. Test alternative architectures for emergent behaviours ('attend-to-learn', 'learn-to-attend', 'multi-stable perception'). Improve AER infrastructure and consider alternative architectures.
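Purely as a software caricature of the reciprocal link (every gain and update rule below is invented for illustration), one can picture an alternating loop in which the saliency network gates which features reach the associative network ('attend-to-learn') while the recalled pattern feeds back as a bias on saliency ('learn-to-attend').

```python
import numpy as np

rng = np.random.default_rng(2)
N = 40
memory = (rng.random(N) < 0.3).astype(float)     # pattern assumed already stored
stimulus = memory + 0.3 * rng.random(N)          # noisy input containing the pattern

saliency = np.zeros(N)
assoc = np.zeros(N)
for step in range(20):
    # Saliency = bottom-up drive plus top-down bias from associative recall.
    saliency = stimulus + 1.5 * assoc            # 'learn-to-attend' feedback
    gate = (saliency > np.percentile(saliency, 70)).astype(float)
    # Associative network sees only the attended (gated) features: 'attend-to-learn'.
    assoc = memory * gate                        # stand-in for attractor recall

attended = np.flatnonzero(gate)
print("attended features in memory:", int(memory[attended].sum()), "of", len(attended))
```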
Natural images and response decoding (WP9): Compare system and human performance on natural images and sounds in classification and match-to-sample tasks. Predict system performance and information flow from response distributions.
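How performance could be predicted from response distributions can be illustrated with a toy maximum-likelihood decoder (the Poisson response model and all firing rates below are assumptions made only for this sketch): given per-class distributions of output spike counts, the decoder picks the class whose distribution makes the observed response most likely, and the resulting error rate summarises the information carried by the responses.

```python
import numpy as np

rng = np.random.default_rng(3)
# Assumed Poisson spike-count tuning of 5 output neurones for 3 stimulus classes.
rates = np.array([[12.0,  3.0,  3.0,  8.0,  2.0],
                  [ 3.0, 11.0,  4.0,  2.0,  9.0],
                  [ 5.0,  4.0, 12.0,  6.0,  5.0]])

def ml_decode(counts):
    """Poisson log-likelihood per class; the count-factorial term cancels across classes."""
    loglik = (counts * np.log(rates) - rates).sum(axis=1)
    return int(np.argmax(loglik))

# Estimate classification performance from simulated response distributions.
trials, correct = 2000, 0
for _ in range(trials):
    true_class = rng.integers(3)
    counts = rng.poisson(rates[true_class])
    correct += (ml_decode(counts) == true_class)
print("fraction correct:", correct / trials)
```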
Technology dissemination (WP11): Provide support, documentation, and hardware components to other groups. Improve AER infrastructure and increase bandwidth.
Topic(s)
Call for proposal: Data not available
Funding Scheme: CSC - Cost-sharing contracts
Coordinator: 39106 MAGDEBURG, Germany