A Low-Power Artificial Intelligence Framework based on Vector Symbolic Architectures

Periodic Reporting for period 2 - devSAFARI (A Low-Power Artificial Intelligence Framework based on Vector Symbolic Architectures)

Reporting period: 2022-01-09 to 2023-01-08

The project addresses two major challenges in Artificial Neural Networks (ANNs), the dominant approach to Artificial Intelligence (AI): ANNs require significant computational resources, and they lack transparency. The project tackles these challenges through the development of Hyperdimensional Computing, also known as Vector Symbolic Architectures (HD/VSA): a transparent, bio-inspired framework for AI with the potential to implement algorithms at low power consumption on emerging computing hardware.

In terms of the AI challenges, the overall aim is to improve the understanding of computing principles in high-dimensional spaces with HD/VSA, and to advance the theory and design principles of simple AI algorithms implementable on low-power computing hardware. This research aim comprises the following research objectives (see "MSCA_scheme.png"):
O1. To evaluate the effect of using HD/VSA as a computational paradigm and prove their universality for emerging low-power computing hardware; (1st AI challenge). Implemented by work package 1;
O2. To advance the capacity theory of HD/VSA and recurrent ANNs by introducing new computational approaches into the theory and expanding the theory to include methods for decoding information from HD/VSA; (2nd AI challenge). Implemented by work package 2;
O3. To study the connections between ANNs and VSAs via the mathematics of high-dimensional spaces; (2nd AI challenge). Implemented by work package 2;
O4. To systematize state-of-the-art methods and develop new methods for mapping data from original representations into VSAs; (VSAs Encoding problem). Implemented by work package 3;
O5. To originate a novel concept of computing in superposition. Implemented by work package 3.

The innovative aspect of the action lies in facilitating the vision of a digital society: the obtained results inform the design of the novel intelligent devices needed to achieve this vision.
Four main directions were pursued within the project:

1. Exploring computational universality of HD/VSA and collecting the primitives for representing data structures (work package 1; addresses O1).
1.1: The main result for computational primitives is a comprehensive collection of ways of representing data structures such as sets, sequences, graphs, trees, finite state automata, stacks, and histograms.
1.2: Computational universality was explored by demonstrating the Turing completeness of HD/VSA, using elementary cellular automata and Turing machines as the computational models to be emulated. HD/VSA can emulate the elementary cellular automaton rule 110 as well as a Turing machine, even in the presence of strong noise.
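As a minimal illustration of such data-structure primitives (a sketch, not the project's exact construction), a bipolar VSA can represent a short sequence by cyclically permuting each symbol's random hypervector according to its position and bundling the results; the alphabet, dimensionality, and helper names below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (illustrative choice)

# Random bipolar codebook for a hypothetical four-symbol alphabet
symbols = ["a", "b", "c", "d"]
codebook = {s: rng.choice([-1, 1], size=D) for s in symbols}

def encode_sequence(seq):
    """Bundle position-tagged symbols: the symbol at position i is
    cyclically shifted i times (permutation), then all are summed and
    thresholded (bundling)."""
    return np.sign(sum(np.roll(codebook[s], i) for i, s in enumerate(seq)))

def decode_position(hv, i):
    """Undo the position-i permutation, then find the nearest codebook
    entry by dot-product similarity (cleanup memory)."""
    probe = np.roll(hv, -i)
    return max(codebook, key=lambda s: probe @ codebook[s])

hv = encode_sequence(["b", "a", "d", "c"])
decoded = [decode_position(hv, i) for i in range(4)]
```

With D = 10,000 and only four bundled items, the crosstalk noise is far below the signal, so decoding recovers the original sequence with near certainty.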

2. Extension of the capacity theory of HD/VSA, which also allows predicting the expected accuracy of deep ANNs and echo state networks (work package 2; addresses O2 & O3).
2.1: A taxonomy of decoding techniques for distributed representations formed via HD/VSA was proposed. The best-performing techniques increased the information rate of the representations to up to 1.4 bits per dimension.
2.2: We explained numerous variants of echo state networks through the lens of the "capacity theory".
2.3a: The "capacity theory" was extended so that the expected accuracy of deep ANNs can be predicted (image "ImageNet_Norm.png").
2.3b: Demonstration of the connections between feed-forward and recurrent randomly connected ANNs and HD/VSA.

3. Computing in superposition and mappings (work package 3; addresses O4 & O5)
3.1: A taxonomy of mappings was presented in Part I of the two-part survey on HD/VSA.
3.2: The Torchhd software library includes standard HD/VSA primitives and models as well as methods for mapping data from original representations into HD/VSA space.
3.3: The conceptual principle behind the computing in superposition was formulated. It was applied to integer factorization and to computing higher-order features.
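The idea behind computing in superposition can be sketched as follows (a hedged numpy illustration, not the project's formulation): several bound key-value pairs are held in a single hypervector, and unbinding with one key recovers its value, with the other pairs contributing only crosstalk noise. The key and value names below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # hypervector dimensionality (illustrative choice)

# Hypothetical key and value codebooks of random bipolar hypervectors
keys = {k: rng.choice([-1, 1], size=D) for k in ["x", "y", "z"]}
vals = {v: rng.choice([-1, 1], size=D) for v in ["p", "q", "r"]}

# One vector holds three bound key-value pairs in superposition:
# binding is elementwise multiplication, bundling is addition.
record = keys["x"] * vals["p"] + keys["y"] * vals["q"] + keys["z"] * vals["r"]

# Unbinding with a key recovers its value plus crosstalk from the other
# superposed pairs; nearest-neighbor cleanup removes the noise.
probe = record * keys["y"]
retrieved = max(vals, key=lambda v: probe @ vals[v])
```

Because all pairs live in one vector, a single unbinding operation effectively queries the whole record at once, which is the essence of operating on superposed data.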

4. Webportal for HD/VSA (www.hd-computing.com).
This effort is much broader and more ambitious than a project website. It is highly necessary, as it will act as a unification platform connecting groups in the area and supporting newcomers.

The dissemination activities included: a dedicated course "Computing with High-Dimensional Vectors" and guest lectures in similar courses at Sapienza University of Rome and UC San Diego; involvement in the final session of the "Applied Artificial Intelligence" course; a tutorial at DATE22; a keynote at a DATE23 workshop; numerous talks at the "Online Speakers' Corner on HD/VSA"; organization of the "Midnight Sun Workshop on VSA"; and participation in the IJCNN and NICE conferences, among other activities.
Regarding the above objectives, the following improvements beyond the state of the art were achieved:
O1:
- "Cookbook" of primitives for representing data structures with HD/VSA was reported in "Vector Symbolic Architectures as a Computing Framework for Nanoscale Hardware";
- An approach for efficiently rematerializing random codebooks used in HD/VSA was presented in "Cellular Automata Can Reduce Memory Requirements of Collective-State Computing" (image "Scheme.png");
- Two Turing complete systems were emulated with HD/VSA (reported in "Vector Symbolic Architectures as a Computing ...").
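The rematerialization idea can be sketched in a few lines (a minimal numpy illustration under assumed cyclic boundaries and illustrative sizes, not the paper's exact procedure): cellular automaton rule 90, where each cell becomes the XOR of its two neighbors, expands a short random seed into a long pseudo-random hypervector that can be regenerated on demand instead of stored:

```python
import numpy as np

rng = np.random.default_rng(3)

def ca90_step(state):
    """Rule 90: each cell becomes the XOR of its left and right
    neighbors (cyclic boundary conditions)."""
    return np.roll(state, 1) ^ np.roll(state, -1)

# Expand a short random binary seed into a long vector by concatenating
# successive CA90 states; only the seed needs to be kept in memory.
seed = rng.integers(0, 2, size=64)
states = [seed]
for _ in range(15):
    states.append(ca90_step(states[-1]))
hv = np.concatenate(states)  # 1024 bits regenerated from a 64-bit seed
```

The design point is the memory/compute tradeoff: storage shrinks from the full codebook to the seeds, at the cost of rerunning the automaton whenever a hypervector is needed.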

O2:
- “Efficient Decoding of Compositional Structure in Holistic Representations” is an article that a) taxonomizes techniques for decoding information from HD/VSA (attached image), b) introduces novel decoding methods reaching up to 1.4 bits per dimension, and c) presents the tradeoff between information capacity and the required amount of computation;
- “Towards a Comprehensive Theory of Reservoir Computing” is a manuscript under preparation that a) extends the “capacity theory” to analytically predict the behavior of HD/VSA models and reservoir computing models and b) uses the extended theory to characterize the working memory of echo state networks (a family of recurrent ANNs).

O3:
- The various types of interplay between ANNs and HD/VSA were overviewed in “A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part II”;
- Connections between HD/VSA and randomized ANNs were presented in “Density Encoding Enables Resource-Efficient Randomly Connected Neural Networks” and “Integer Echo State Networks: Efficient Reservoir Computing for Digital Hardware” demonstrating designs of computationally efficient ANNs;
- Prediction of the expected accuracy of deep ANNs using the “capacity theory” was described in “Perceptron Theory Can Predict the Accuracy of Neural Networks”.

O4:
- Taxonomy of HD/VSA mappings was presented in "A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part I";
- Fractional power encoding for numeric data was presented in "Computing on Functions Using Randomized Vector Representations (in brief)".
- A new mapping for sequences was presented in "Recursive Binding for Similarity-Preserving Hypervector Representations of Sequences".
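Fractional power encoding, one of the mappings above, admits a short sketch (a hedged numpy illustration with assumed parameters, not the paper's exact construction): a scalar x is encoded by raising a fixed random phasor base vector to the power x elementwise, which makes the similarity between encodings decay smoothly with the distance between the encoded scalars:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 4096  # dimensionality (illustrative choice)
theta = rng.uniform(-np.pi, np.pi, size=D)  # random phases of the base vector z

def fpe(x):
    """Fractional power encoding: elementwise power z**x of the phasor
    base z = exp(i * theta)."""
    return np.exp(1j * theta * x)

def sim(a, b):
    """Normalized real inner product between two phasor hypervectors."""
    return np.real(np.vdot(a, b)) / D

# Similarity falls off smoothly as the encoded scalars move apart
s_near = sim(fpe(1.0), fpe(1.1))  # close scalars -> high similarity
s_far = sim(fpe(1.0), fpe(3.0))   # distant scalars -> similarity near zero
```

This smooth similarity structure is what lets HD/VSA represent continuous quantities, in contrast to the quasi-orthogonal codebooks used for discrete symbols.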

O5:
- The concept of "computing in superposition" was laid out in "Vector Symbolic Architectures as a Computing ...";
- It was used for integer factorization problems in "Integer Factorization with Compositional Distributed Representations".
- The concept was also applied to efficiently compute feature spaces to forecast dynamical systems (under preparation).
Figure captions:
- "Scheme.png": Basic scheme for expanding distributed representations with CA90 from a short initial seed.
- Attached image: The taxonomy of techniques for decoding from distributed representations.
- "MSCA_scheme.png": Overview of the research objectives of the action.
- "ImageNet_Norm.png": Accuracy of 15 deep neural networks on the ILSVRC 2012 dataset against the theory predictions.