
IMAGINE – Informing Multi-modal lAnguage Generation wIth world kNowledgE

Publications

Multi3Generation: Multi-task, Multilingual, Multi-Modal Language Generation.

Authors: Anabela Barreiro, José C. de Souza, Albert Gatt, Mehul Bhatt, Elena Lloret, Aykut Erdem, Dimitra Gkatzia, Helena Moniz, Irene Russo, Fabio Kepler, Iacer Calixto, Marcin Paprzycki, François Portet, Isabelle Augenstein, Mirela Alhasani
Published in: Proceedings of the 23rd Annual Conference of the European Association for Machine Translation, Issue 1-3 June, 2022, Page(s) 347-348
Publisher: European Association for Machine Translation

Latent Variable Model for Multi-modal Translation

Authors: Iacer Calixto, Miguel Rios, Wilker Aziz
Published in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Issue July 2019, 2019, Page(s) 6392–6405
Publisher: Association for Computational Linguistics
DOI: 10.18653/v1/p19-1642

Can Wikipedia Categories Improve Masked Language Model Pretraining?

Authors: Diksha Meghwal, Katharina Kann, Iacer Calixto, Stanislaw Jastrzebski
Published in: Proceedings of the Fourth Widening Natural Language Processing Workshop, 2020, Page(s) 78-78
Publisher: Association for Computational Linguistics
DOI: 10.18653/v1/2020.winlp-1.19

VisualSem: a high-quality knowledge graph for vision and language

Authors: Houda Alberts, Teresa Huang, Yash Deshpande, Yibo Liu, Kyunghyun Cho, Clara Vania, Iacer Calixto
Published in: Proceedings of the 1st Workshop on Multilingual Representation Learning, 2021, Page(s) 138–152
Publisher: Association for Computational Linguistics
DOI: 10.18653/v1/2021.mrl-1.13

Are Scene Graphs Good Enough to Improve Image Captioning?

Authors: Victor Siemen Janusz Milewski, Marie-Francine Moens, Iacer Calixto
Published in: Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, Issue Dec. 2020, 2020, Page(s) 504–515
Publisher: Association for Computational Linguistics

Seeing past words: Testing the cross-modal capabilities of pretrained V&L models on counting tasks

Authors: Letitia Parcalabescu, Albert Gatt, Anette Frank, Iacer Calixto
Published in: Proceedings of the 1st Workshop on Multimodal Semantic Representations (MMSR), 2021
Publisher: Association for Computational Linguistics

English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too

Authors: Jason Phang, Iacer Calixto, Phu Mon Htut, Yada Pruksachatkun, Haokun Liu, Clara Vania, Katharina Kann, Samuel R. Bowman
Published in: Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, Issue Dec. 2020, 2020, Page(s) 557–575
Publisher: Association for Computational Linguistics

VALSE: A Task-Independent Benchmark for Vision and Language Models Centered on Linguistic Phenomena

Authors: Letitia Parcalabescu, Michele Cafagna, Lilitta Muradjan, Anette Frank, Iacer Calixto, Albert Gatt
Published in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 2022
Publisher: Association for Computational Linguistics

Endowing Language Models with Multimodal Knowledge Graph Representations

Authors: Ningyuan (Teresa) Huang, Yash R. Deshpande, Yibo Liu, Houda Alberts, Kyunghyun Cho, Clara Vania, Iacer Calixto
Published in: arXiv, Issue 27/06/2022, 2022
Publisher: arXiv
