Algorithmic Bias Control in Deep learning

Periodic Reporting for period 1 - A-B-C-Deep (Algorithmic Bias Control in Deep learning)

Reporting period: 2022-06-01 to 2024-11-30

The "Algorithmic Bias Control in Deep Learning" project addresses significant challenges in the field of artificial intelligence (AI). AI, through Deep learning (DL) models, have achieved remarkable performance across various domains, but their success comes with substantial costs, including large datasets, extensive training times, and high resource consumption. Additionally, modifications to improve DL models often result in unexplained degradations in performance on unseen data ("generalization") due to changes in algorithmic bias.
Our project aims to develop a rigorous theory of algorithmic bias in DL and apply it to alleviate these practical bottlenecks. The overall objectives are:
1. Identify the algorithmic biases affecting DL training.
2. Understand how these biases influence the functional and generalization capabilities of DL models.
3. Control these biases to enhance various practical aspects, such as training efficiency, credibility, and applicability of DL models in new domains.
By achieving these objectives, we expect to significantly advance the field of DL, making it more efficient and reliable for real-world applications. This project will contribute to reducing the resource demands of DL, improving model robustness, and expanding the applicability of DL to new areas, ultimately enhancing the impact of AI on society.
Our ERC project, "Algorithmic Bias Control in Deep Learning," aimed to improve artificial intelligence (AI) systems by understanding and managing hidden preferences in AI algorithms. We made significant progress in three main areas:
1. Identifying Hidden Preferences in AI Systems
- Discovered how the training process of AI models can unintentionally favor certain outcomes.
- Showed that common training methods naturally lead AI models to find simpler solutions (illustrated in the first sketch after this list).
2. Understanding the Impact of These Preferences
- Explored how these hidden preferences affect the AI's ability to make predictions and solve problems.
- Found that even if we randomly draw model parameters until they fit the training data, the resulting AI models tend to work well on unseen data (see the second sketch after this list).
3. Improving AI Systems Based on These Insights
- Enhanced AI training on multiple computers: Specifically, we created DropCompute, a method that makes AI training faster and more reliable when using many computers at once.
- Made AI models more efficient: Specifically, we developed ways to reduce the size and complexity of AI models without losing performance, and improved how AI models classify objects in the real world by better encoding their 'class' information.
- Taught AI new skills: Specifically, we created AI models for image recognition that are not affected by shifts of the image.
- Helped AI learn multiple tasks over time: Specifically, we studied how AI can learn new tasks without forgetting previously learned ones.
- Applied our findings to new areas: Specifically, we used our discoveries to improve AI in areas such as reinforcement learning and noise reduction in images.
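
To make the "simpler solutions" point above more concrete, the following is a minimal sketch (illustrative code, not the project's own) of one well-known form of algorithmic bias: when a linear model has more parameters than data points, many weight vectors fit the data exactly, yet plain gradient descent started from zero selects the simplest one, the minimum-norm solution. All numbers and names in the sketch are illustrative choices.

```python
# Minimal illustration (not project code) of implicit algorithmic bias:
# with more parameters than data points, infinitely many weight vectors fit
# the data exactly, yet plain gradient descent started from zero picks out
# the "simplest" one -- the minimum-L2-norm interpolating solution.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                       # fewer samples than parameters
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)                      # start from zero
lr = 0.01
for _ in range(20000):               # plain gradient descent on squared loss
    w -= lr * X.T @ (X @ w - y) / n

w_min_norm = X.T @ np.linalg.solve(X @ X.T, y)   # minimum-norm interpolator
print("training residual:", np.linalg.norm(X @ w - y))                   # ~0
print("distance to min-norm solution:", np.linalg.norm(w - w_min_norm))  # ~0
```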
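The "randomly drawn parameters" finding can be pictured with a toy guess-and-check experiment. The sketch below is only a schematic analogue of the project's result, which concerns neural networks and "narrow teacher" data: it samples random linear classifiers, keeps those that fit a small training set, and checks that the kept models already predict held-out data far better than chance. The dataset sizes and sampling budget are arbitrary illustrative choices.

```python
# Toy "guess and check" analogue (not the project's experiments): sample
# random parameters, keep only the models that fit the training set, and
# measure how well those interpolating models do on held-out data.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])                    # ground-truth linear rule

def make_data(n):
    x = rng.standard_normal((n, 2))
    return x, np.sign(x @ w_true)

x_train, y_train = make_data(8)
x_test, y_test = make_data(1000)

test_accs = []
for _ in range(50000):
    w = rng.standard_normal(2)                    # random candidate model
    if np.all(np.sign(x_train @ w) == y_train):   # keep only interpolators
        test_accs.append(np.mean(np.sign(x_test @ w) == y_test))

print(f"kept {len(test_accs)} of 50000 random models; "
      f"mean test accuracy {np.mean(test_accs):.2f} (chance is 0.50)")
```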

Our project has significantly advanced our understanding of AI systems, leading to improvements in their efficiency, accuracy, and usefulness across various fields. These innovations contribute to making AI more reliable and effective, addressing key challenges in the field and paving the way for smarter, more capable AI systems that can better serve society's needs.
Key Results and Potential Impacts:
1. Understanding and Controlling Algorithmic Biases:
- By identifying and characterizing various algorithmic biases, our project has provided new insights into how these biases influence DL model performance. This understanding is crucial for developing more reliable and efficient DL models.
- The finding that neural networks typically generalize well (with narrow teachers), despite having many more parameters than datapoints, is a surprising new perspective on generalization.
2. Key Technological Advancements:
- DropCompute: Enhances the stability and efficiency of distributed DL training, making it easier to scale up models using distributed computing resources. This advancement is crucial for training large models more efficiently and reliably.
- Lower Bit-Width Accumulators: Optimizes inference, particularly in resource-constrained environments like edge computing and mobile devices, making DL applications more practical and scalable.
- 4-bit Matrix Multiplications: Demonstrates that accurate neural training can be achieved with low-precision arithmetic, significantly reducing computational costs and resource requirements (a schematic sketch follows this list).
- Alias-free convnets: a model that is completely invariant to shifts of its input image, even if they are fractional (e.g. half a pixel); the second sketch below illustrates the underlying anti-aliasing principle.
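
As a rough illustration of the low-precision results above (the project's accumulator and 4-bit training schemes are more involved than this), the following sketch quantizes two matrices to signed 4-bit integers, multiplies them with integer accumulation, and rescales the result; the product stays close to its full-precision counterpart. Matrix shapes and the quantization scheme are illustrative assumptions.

```python
# Illustrative sketch only (not the project's algorithms): symmetric 4-bit
# quantization of two matrices, integer multiply-accumulate, then rescaling
# back to floating point. Real low-precision schemes add many refinements
# (per-channel scales, rounding strategies, accumulator bit-width control, ...).
import numpy as np

def quantize_4bit(x):
    """Map a float matrix to signed 4-bit integers in [-7, 7] plus a scale."""
    scale = np.max(np.abs(x)) / 7.0
    q = np.clip(np.round(x / scale), -7, 7).astype(np.int32)
    return q, scale

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128)).astype(np.float32)
B = rng.standard_normal((128, 32)).astype(np.float32)

qA, sA = quantize_4bit(A)
qB, sB = quantize_4bit(B)

C_lowprec = (qA @ qB) * (sA * sB)   # integer multiply-accumulate, then dequantize
C_ref = A @ B                       # full-precision reference

rel_err = np.linalg.norm(C_lowprec - C_ref) / np.linalg.norm(C_ref)
print(f"relative error of the 4-bit product: {rel_err:.3f}")
```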
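The alias-free convnet result builds on classical anti-aliasing: low-pass filtering a signal before subsampling makes the downsampled output less sensitive to shifts of the input. The sketch below illustrates only this generic principle on a 1-D signal; the project's architecture goes further and also treats the nonlinearities, so that the network output is invariant even to fractional shifts. The filter kernel and signal here are illustrative choices.

```python
# Generic anti-aliasing illustration (not the project's architecture):
# low-pass filtering before subsampling makes the downsampled output
# less sensitive to a shift of the input signal.
import numpy as np

def subsample(x):
    return x[::2]                                    # naive stride-2 sampling

def blur_subsample(x):
    kernel = np.array([1, 4, 6, 4, 1]) / 16.0        # simple low-pass filter
    return np.convolve(x, kernel, mode="same")[::2]

rng = np.random.default_rng(0)
signal = rng.standard_normal(256)
shifted = np.roll(signal, 1)                         # shift input by 1 sample

for name, f in [("naive subsample", subsample), ("blur + subsample", blur_subsample)]:
    change = np.linalg.norm(f(signal) - f(shifted)) / np.linalg.norm(f(signal))
    print(f"{name:>16}: relative output change under a 1-sample shift = {change:.2f}")
```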


To ensure the further uptake and success of these results, several key actions are needed:
1. Further Research: Continue exploring the implications of identified biases and developing methods to control them in more complex and diverse DL models.
2. Demonstration and Validation: Conduct extensive real-world testing and validation of the proposed methods to demonstrate their efficacy and reliability in various applications.
3. International Collaboration: Foster collaborations with international research institutions and industry partners to enhance the impact and adoption of the project's outcomes.
In summary, the "Algorithmic Bias Control in Deep Learning" project has made significant strides in understanding, controlling, and leveraging algorithmic biases to advance DL technology. These achievements not only push the boundaries of current DL research but also pave the way for more efficient, reliable, and scalable AI systems, with wide-ranging impacts on various industries and society as a whole.