CORDIS - EU research results

Transparent, Reliable and Unbiased Smart Tool for AI

Deliverables

Management Report

This deliverable concerns a status report on the technical achievements of TRUST in the first nine months of the project. A brief description of the development of each task will be provided, including documentation of procedures, screenshots, preliminary results and identified risks.

Data management plan V2

Saliency measures for identifying causally relevant variables of explanations

This report will present the saliency measures and code for identifying causally relevant variables of human-like explanations. The relevant variables are those to be considered during the discovery and communication of causal explanations. These variables will be formalised and measurable in terms of their specificity, insensitivity, proximity and other characteristics known to be preferred by humans.
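As an illustration only (not taken from the deliverable), below is a minimal Python sketch of how such characteristics might be scored for a candidate explanation, assuming a tabular instance, a counterfactual candidate and simple distance-based proxies for proximity, specificity (sparsity) and insensitivity; all function names and weights are hypothetical.

    import numpy as np

    def proximity(x, x_cf):
        # Proximity proxy: how close the counterfactual stays to the original instance (higher is better).
        return -np.abs(np.asarray(x, float) - np.asarray(x_cf, float)).sum()

    def specificity(x, x_cf):
        # Specificity proxy: prefer explanations that change as few variables as possible.
        return -np.count_nonzero(np.asarray(x) != np.asarray(x_cf))

    def insensitivity(predict, x_cf, noise=0.01, trials=100, seed=0):
        # Insensitivity proxy: stability of the prediction under small perturbations of the counterfactual.
        rng = np.random.default_rng(seed)
        x_cf = np.asarray(x_cf, float)
        base = predict(x_cf)
        perturbed = x_cf + noise * rng.standard_normal((trials, x_cf.size))
        return float(np.mean([predict(p) == base for p in perturbed]))

    def saliency_score(predict, x, x_cf, weights=(1.0, 1.0, 1.0)):
        # Weighted combination of the proxies; the weights are arbitrary placeholders,
        # not the formalisation used in the project.
        w_p, w_s, w_i = weights
        return (w_p * proximity(x, x_cf)
                + w_s * specificity(x, x_cf)
                + w_i * insensitivity(predict, x_cf))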

User studies on the realization of explanations

The deliverable reports the results of the qualitative studies on the best way to present the explanation content produced in WP3 and provides recommendations for Task 2.2.

Framework requirements document

This deliverable will describe the functional and non-functional requirements of the framework, as well as the interactions and dependencies between the building blocks. The use case needs will also be detailed and reported in this document, ensuring that the framework is adequate for different problems and sectors.

Communication & Dissemination plan

This report will present the Communication & Dissemination Plan of TRUST, where the strategy to raise public awareness about the project outcomes will be detailed and scheduled. In addition to academic publications and conferences, the plan includes events, promotion, participation in working groups and online forums, and educational content creation such as courseware and webinars. The plan will include KPIs and their target values, as well as the Partner responsible for each communication/dissemination method.

Evaluation with healthcare experts of learned models

This deliverable concerns a formal validation of the AI models developed for the first simplified version of the healthcare problem. These models will be designed by NWOI and validated by medical experts from LUMC. The report will present the first insights on the models' results and suggestions for modifications.

Initial validation of the explainable AI models from business experts

This deliverable concerns a formal validation of the AI models developed for the first simplified version of the online retail problem. These models will be designed by LTP and validated by practitioners from Sonae. INESC will coordinate the development and validation process.

Data management plan

This deliverable presents the Data Management Plan of TRUST-AI, detailing the types of data generated/collected, how they will be exploited and protected, and the standards to be considered.

Initial validation of the explainable AI models from energy experts

This deliverable concerns a formal validation of the AI models developed for the first simplified version of the energy problem. These models will be designed by POLIS21 and validated by practitioners from the industry.

Project website

This deliverable will present the specification, organization and features of the TRUST-AI website. The DNS, the URL to access it and screenshots of each page will also be presented.

Publications

Multi-objective Genetic Programming for Explainable Reinforcement Learning

Authors: Videau, Mathurin; Ferreira Leite, Alessandro; Teytaud, Olivier; Schoenauer, Marc
Published in: EUROGP - 25th European Conference on Genetic Programming, part of EvoStar 2022, Issue 25, 2022, Page(s) 278-293, ISBN 978-3-031-02055-1
Publisher: Springer Verlag
DOI: 10.1007/978-3-031-02056-8_18

Building data models and data sharing. Purpose, approaches and a case study on explainable demand response

Authors: Nikos Sakkas, Ch. Chaniotaki, Nikitas Sakkas, Costas Daskalakis
Published in: Emerging Concepts for Sustainable Built Environment, 2022
Publisher: SBEfin 2022 Conference

Multi-modal multi-objective model-based genetic programming to find multiple diverse high-quality models

Authors: Sijben, Evi; Alderliesten, Tanja; Bosman, Peter
Published in: GECCO '22: Genetic and Evolutionary Computation Conference, 2022, ISBN 978-1-4503-9237-2
Publisher: Association for Computing Machinery, New York, NY, United States
DOI: 10.48550/arxiv.2203.13347

Memetic Semantic Genetic Programming for Symbolic Regression

Authors: Alessandro Leite and Marc Schoenauer
Published in: 26th EuroGP - Part of EvoStar 2023, Issue 26, 2023, Page(s) 198-212, ISBN 978-3-031-29572-0
Publisher: Springer Verlag LNCS-13986
DOI: 10.1007/978-3-031-29573-7_13

Explanatory World Models via Look Ahead Attention for Credit Assignment

Authors: Oriol Corcoll and Raul Vicente
Published in: 2022, ISSN 2640-3498
Publisher: Proceedings of Machine Learning Research

Emergence of Adaptive Circadian Rhythms in Deep Reinforcement Learning

Authors: Labash, Aqeel; Fletzer, Florian; Majoral, Daniel; Vicente, Raul
Published in: ICML'23: Proceedings of the 40th International Conference on Machine Learning, Issue 18, 2023
Publisher: JMLR.org
DOI: 10.48550/arxiv.2307.12143

Real time Data and Application Sharing and Collaboration for the Building Energy Domain

Authors: N. Sakkas, M. Papadopoulou, D. Sakkas
Published in: WDBE 2021, 2021
Publisher: World of Digital Built Environment WDBE 2021

Evolvability degeneration in multi-objective genetic programming for symbolic regression

Authors: Dazhuang Liu, Marco Virgolin, Tanja Alderliesten, Peter A. N. Bosman
Published in: GECCO '22: Genetic and Evolutionary Computation Conference, 2022
Publisher: Association for Computing Machinery, New York, NY, United States
DOI: 10.1145/3512290.3528787

Mind the gap: challenges of deep learning approaches to Theory of Mind

Authors: Aru, Jaan; Labash, Aqeel; Corcoll, Oriol; Vicente, Raul
Published in: Artificial Intelligence Review, Issue 3, 2023, ISSN 0269-2821
Publisher: Kluwer Academic Publishers
DOI: 10.1007/s10462-023-10401-x

Open data or open access? The case of building data.

Authors: Sakkas, N., Yfanti, S.
Published in: Academia Letters, 2021, ISSN 2771-9359
Publisher: Academia.edu
DOI: 10.20935/al3629

Deep neural networks using a single neuron: folded-in-time architecture using feedback-modulated delay loops

Authors: Stelzer, Florian; Röhm, André; Vicente, Raul; Fischer, Ingo; Yanchuk, Serhiy
Published in: Nature Communications, 2021, ISSN 2041-1723
Publisher: Nature Publishing Group
DOI: 10.48550/arxiv.2011.10115

Quantifying Reinforcement-Learning Agent’s Autonomy, Reliance on Memory and Internalisation of the Environment

Authors: Anti Ingel, Abdullah Makkeh, Oriol Corcoll and Raul Vicente
Published in: Entropy, 2022, ISSN 1099-4300
Publisher: Multidisciplinary Digital Publishing Institute (MDPI)
DOI: 10.3390/e24030401

Interpretable Forecasting of Energy Demand in the Residential Sector

Authors: Nikos Sakkas; Sofia Yfanti; Costas Daskalakis; Eduard Barbu; Marharyta Domnich
Published in: Energies, Issue 1, 2021, ISSN 1996-1073
Publisher: Multidisciplinary Digital Publishing Institute (MDPI)
DOI: 10.3390/en14206568

Do Deep Reinforcement Learning Agents Model Intentions?

Authors: Tambet Matiisen; Aqeel Labash; Daniel Majoral; Jaan Aru; Raul Vicente
Published in: Stats, Vol. 6, Issue 1, Page(s) 50-66, 2022, ISSN 2571-905X
Publisher: MDPI
DOI: 10.3390/stats6010004

Drivers of and counterfactuals for the final energy and electricity consumption in EU industry

Authors: Sakkas, N., Athanasiou, N.
Published in: Academia Letters, 2021, ISSN 2771-9359
Publisher: Academia.edu
DOI: 10.20935/al3451
