IMAGINE – Informing Multi-modal lAnguage Generation wIth world kNowledgE

Publications

Multi3Generation: Multi-task, Multilingual, Multi-Modal Language Generation.

Author(s): Anabela Barreiro, José C. de Souza, Albert Gatt, Mehul Bhatt, Elena Lloret, Aykut Erdem, Dimitra Gkatzia, Helena Moniz, Irene Russo, Fabio Kepler, Iacer Calixto, Marcin Paprzycki, François Portet, Isabelle Augenstein, Mirela Alhasani
Published in: Proceedings of The 23rd Annual Conference of the European Association for Machine Translation, 1-3 June, 2022
Publisher: European Association for Machine Translation

Latent Variable Model for Multi-modal Translation

Author(s): Iacer Calixto, Miguel Rios, Wilker Aziz
Published in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, July 2019, Page(s) 6392–6405
Publisher: Association for Computational Linguistics
DOI: 10.18653/v1/p19-1642

Can Wikipedia Categories Improve Masked Language Model Pretraining?

Author(s): Diksha Meghwal, Katharina Kann, Iacer Calixto, Stanislaw Jastrzebski
Published in: Proceedings of the Fourth Widening Natural Language Processing Workshop, 2020, Page(s) 78
Publisher: Association for Computational Linguistics
DOI: 10.18653/v1/2020.winlp-1.19

VisualSem: a high-quality knowledge graph for vision and language

Author(s): Alberts, Houda; Huang, Teresa; Deshpande, Yash; Liu, Yibo; Cho, Kyunghyun; Vania, Clara; Calixto, Iacer
Published in: Proceedings of the 1st Workshop on Multilingual Representation Learning, 2021, Page(s) 138–152
Publisher: Association for Computational Linguistics
DOI: 10.18653/v1/2021.mrl-1.13

Are Scene Graphs Good Enough to Improve Image Captioning?

Author(s): Victor Siemen Janusz Milewski, Marie-Francine Moens, Iacer Calixto
Published in: Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, December 2020, Page(s) 504–515
Publisher: Association for Computational Linguistics

Seeing past words: Testing the cross-modal capabilities of pretrained V&L models on counting tasks

Author(s): Letitia Parcalabescu, Albert Gatt, Anette Frank, Iacer Calixto
Published in: Proceedings of the 1st Workshop on Multimodal Semantic Representations (MMSR), 2021
Publisher: Association for Computational Linguistics

English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too

Author(s): Jason Phang, Iacer Calixto, Phu Mon Htut, Yada Pruksachatkun, Haokun Liu, Clara Vania, Katharina Kann, Samuel R. Bowman
Published in: Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, December 2020, Page(s) 557–575
Publisher: Association for Computational Linguistics

VALSE: A Task-Independent Benchmark for Vision and Language Models Centered on Linguistic Phenomena

Author(s): Letitia Parcalabescu, Michele Cafagna, Lilitta Muradjan, Anette Frank, Iacer Calixto, Albert Gatt
Published in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 2022
Publisher: Association for Computational Linguistics

Neural Natural Language Generation: A Survey on Multilinguality, Multimodality, Controllability and Learning

Author(s): Erkut Erdem, Menekse Kuyu, Semih Yagcioglu, Anette Frank, Letitia Parcalabescu, Barbara Plank, Andrii Babii, Oleksii Turuta, Aykut Erdem, Iacer Calixto, Elena Lloret, Elena-Simona Apostol, Ciprian-Octavian Truică, Branislava Šandrih, Sanda Martinčić-Ipšić, Gábor Berend, Albert Gatt, Gražina Korvel
Published in: Journal of Artificial Intelligence Research, 73, 2022, ISSN 1076-9757
Publisher: Morgan Kaufmann Publishers, Inc.
DOI: 10.1613/jair.1.12918

VisualSem: a high-quality knowledge graph for vision and language

Author(s): Alberts, Houda; Huang, Teresa; Deshpande, Yash; Liu, Yibo; Cho, Kyunghyun; Vania, Clara; Calixto, Iacer
Published in: arXiv preprint, 2020
Publisher: arXiv