Content archived on 2024-04-16

Innovative Architectures for Neurocomputing Machines and VLSI Neural Networks

Objective

Neurocomputing algorithms are very efficient at information processing but consume large amounts of computational resources. The key research question is how to design neurocomputers suited for simulation, and application-specific integrated neurocircuits suited for real-time applications: which architecture for which algorithm.
All these questions are addressed through concrete application problems, by means of classical or original neural network models: robot path planning, computer vision, blind separation of sources, classification, and high-dimensional data analysis for industrial processes.
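Blind separation of sources, one of the application problems listed above, admits a compact software illustration. The sketch below is a minimal, hypothetical example in the spirit of recurrent anti-Hebbian separation networks; the signals, mixing matrix, learning rate and nonlinearities (f(y) = y^3, g(y) = tanh y) are all illustrative assumptions, not the project's actual algorithm.

```python
import numpy as np

# Two independent toy sources (illustrative signals, not project data)
t = np.linspace(0, 1, 5000)
s = np.vstack([np.sin(2 * np.pi * 7 * t),             # sinusoid
               np.sign(np.sin(2 * np.pi * 13 * t))])  # square wave

A = np.array([[1.0, 0.6],
              [0.5, 1.0]])   # unknown mixing matrix
x = A @ s                    # observed mixtures

# Recurrent separation network: y = x - C y, i.e. y = (I + C)^-1 x
C = np.zeros((2, 2))
mu = 0.001                   # small adaptation step (assumed)
for k in range(x.shape[1]):
    y = np.linalg.solve(np.eye(2) + C, x[:, k])
    # Anti-Hebbian update of the off-diagonal couplings only
    dC = mu * np.outer(y ** 3, np.tanh(y))
    np.fill_diagonal(dC, 0.0)
    C += dC

y = np.linalg.solve(np.eye(2) + C, x)  # separated outputs
```

As the couplings adapt, the network decorrelates higher-order statistics of its outputs, driving them towards the original sources up to permutation and scaling.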
The theoretical tools and technical means for designing algorithms, machines and very large scale integration (VLSI) circuits for neurocomputing have been developed. Research is proceeding into connectionist algorithms and architectures for data processing with learning and recognition capabilities; the design of high-speed parallel neurocomputing machines for distributed algorithms; and the design of application-specific integrated neurocircuits with analogue and/or digital features.
APPROACH AND METHODS
The work has been divided into five research packages:
-Neural architectures and algorithms, looking at the visual processing of text and at silicon implementation constraints and their implications, in particular the effect of limited accuracy.
-Language and software tools, focusing on developing a high-level specification language suited for parallel machines such as T-Node and SMART.
-Neuro-coprocessors and architectures for neurocomputers, examining systolic circuits, general-purpose neurocomputers, and neural network implementations on massively parallel computers.
-Application-specific integrated neurocircuits, studying the design of associative memories, source separation circuits, and self-organising maps.
-Cells and technology, addressing: the design of pulse-stream synapses; building blocks for analogue implementation; and design technology for analogue synapse memory.
Each task results in state-of-the-art reports and demonstration products.
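One of the models studied in the neurocircuit package, the self-organising map, can be sketched in a few lines of code. The data set, map size and training schedule below are illustrative assumptions, not the project's design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: points on the unit circle (illustrative only)
theta = rng.uniform(0, 2 * np.pi, 1000)
data = np.column_stack([np.cos(theta), np.sin(theta)])

# One-dimensional self-organising map with 20 units
n_units = 20
w = rng.uniform(-0.1, 0.1, (n_units, 2))      # weight vectors

for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)               # decaying learning rate
    sigma = max(0.5, 5 * (1 - epoch / 50))    # shrinking neighbourhood
    for x in data:
        # Best-matching unit: nearest weight vector to the input
        bmu = np.argmin(np.linalg.norm(w - x, axis=1))
        d = np.arange(n_units) - bmu
        h = np.exp(-d ** 2 / (2 * sigma ** 2))  # neighbourhood function
        w += lr * h[:, None] * (x - w)
```

After training, the chain of units unfolds along the data manifold, giving a topology-preserving quantisation of the input space.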
PROGRESS AND RESULTS
Deliverables and publications on the results of the following tasks are available:
-Visual processing of text: the general architecture has been specified, and the front-end processing of bigrams through retinotopic learning maps has been assessed. A model of eye movements has proved to be a key feature for word perception enhancement.
-Silicon implementation constraints: the main problem of accuracy in the sum-of-products operation has been evaluated in various application problems, ranging from content-addressable memories to multi-layer perceptrons. Results give scaling in relaxation and learning phases for each kind of network.
-High-level specification language: definition and specification of the language has been completed, and the grammar will follow shortly.
-Architectures for neurocomputers: both small and efficient neuro-accelerator boards and a more general-purpose neurocomputer, SMART, have been specified. Prototypes are available.
-Application-specific integrated neurocircuits: a number of test chips have been realised for associative memories or source separation; both analogue and digital techniques are available.
-Cells and technology: a library of techniques is available as building blocks for pulse-stream synapses, cascadable synaptic matrices and analogue synaptic memories.
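The limited-accuracy issue reported for the sum-of-products operation can be made concrete with a small numerical experiment: quantising weights and inputs to a fixed number of bits and comparing the result against full precision. The fixed-point quantiser, bit widths and vector size below are illustrative assumptions, not the evaluation actually performed in the project.

```python
import numpy as np

rng = np.random.default_rng(2)

def quantize(v, bits, scale=1.0):
    """Round values to a symmetric signed fixed-point grid with `bits` bits."""
    levels = 2 ** (bits - 1) - 1
    return np.clip(np.round(v / scale * levels), -levels, levels) * scale / levels

# Random weights and inputs for a single neuron (illustrative sizes)
w = rng.uniform(-1, 1, 256)
x = rng.uniform(-1, 1, 256)

exact = w @ x
for bits in (4, 6, 8, 12):
    approx = quantize(w, bits) @ quantize(x, bits)
    print(bits, "bits -> error", abs(approx - exact))
```

Repeating this over the relaxation and learning phases of different network models shows how the tolerable bit width scales with network type, which is the question the silicon-constraints task addressed.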
POTENTIAL
The NERVES Action has proved to be a source of techniques and knowledge suitable for researchers in the field of artificial neural networks. Each of these techniques is illustrated by demonstrations and applications in various fields: signal and image processing, control of industrial processes, robotics and data analysis.
A number of applications have been addressed in different fields, such as pattern recognition, speech signal enhancement, colour image processing, path planning, and monitoring of complex industrial processes.
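The associative memories realised as test chips belong to the content-addressable family: presented with a corrupted pattern, the network relaxes to the nearest stored one. A software sketch of a Hopfield-style network shows the principle; the patterns, sizes and synchronous update rule below are illustrative assumptions, not the chips' actual design.

```python
import numpy as np

# Two stored bipolar (+1/-1) patterns of 16 units (illustrative, orthogonal)
p1 = np.array([1] * 8 + [-1] * 8)
p2 = np.array([1, -1] * 8)
patterns = np.vstack([p1, p2])

# Hebbian weight matrix with zero self-connections
W = (patterns.T @ patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(x, steps=10):
    """Synchronous relaxation towards the nearest stored pattern."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

# Corrupt p1 in three positions and let the network clean it up
noisy = p1.copy()
noisy[[0, 5, 12]] *= -1
recovered = recall(noisy)   # relaxes back to p1
```

The relaxation phase of this recall is exactly the sum-of-products operation whose accuracy requirements were studied in the silicon-constraints task.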

Coordinator

Institut National Polytechnique de Grenoble
Address
46 avenue Félix Viallet
38031 Grenoble
France



Participants (8)