
Robust, Explainable Deep Networks in Computer Vision

Project description

Helping computers see things better

The creation of convolutional neural networks (CNNs, a class of deep learning algorithms) has revolutionised computer vision by enabling computers to 'see' and react to visual input. However, CNNs have not solved every problem. For instance, training still requires large amounts of labelled data, which are not available in all potential application areas. Moreover, most deep networks used in computer vision offer little explainability. The EU-funded RED project will work to advance the robustness and explainability of deep networks in computer vision. It will explore structured network designs, probabilistic methods, and hybrid generative/discriminative models. It will also advance research on how to assess robustness and aspects of explainability through dedicated datasets and metrics, with particular attention to the challenges of 3D scene analytics.

Host institution

TECHNISCHE UNIVERSITAT DARMSTADT
Net EU contribution
€ 1 999 814,00
Address
Karolinenplatz 5
64289 Darmstadt
Germany

Region
Darmstadt > Darmstadt, Kreisfreie Stadt
Activity type
Higher or Secondary Education Establishments
Other funding
€ 0,00

Beneficiaries (1)

TECHNISCHE UNIVERSITAT DARMSTADT
Germany