MultiMT is a project led by Prof. Lucia Specia, funded by an ERC (European Research Council) Starting Grant. The project aims to devise data, methods and algorithms to exploit multi-modal information (images, speech, metadata, etc.) for context modelling in Machine Translation and other cross-lingual and multilingual tasks. The project is highly interdisciplinary, drawing upon research fields such as NLP, Computer Vision, Speech Processing and Machine Learning.
[20/12/2017] A paper on Multimodal Lexical Translation accepted at LREC 2018!
[07/09/2017] A paper on using speech information for NMT accepted at ASRU 2017!
[08/07/2017] Our system paper for the WMT Multimodal Machine Translation Shared Task is available.
[16/06/2017] A paper on fine-tuning with auxiliary data accepted at Interspeech 2017!
[20/05/2017] A paper investigating the contribution of image captioning to MMT accepted at EAMT 2017!
IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2017
Conference on Machine Translation (WMT), 2017
We participated in the Multimodal Machine Translation (MMT) shared task of translating image descriptions from English to German/French given the corresponding image. Can the use of object posterior predictions instead of lower-level image features help MMT?
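As a rough illustration of the object-posterior idea, a classifier's softmax output over object classes can be projected into the decoder's hidden space in place of raw convolutional features. The sketch below assumes PyTorch; the class name and sizes are hypothetical, not the actual shared-task system.

```python
# Minimal sketch: feeding object posterior predictions, rather than
# lower-level convolutional features, into an NMT decoder.
import torch
import torch.nn as nn

class ObjectPosteriorEncoder(nn.Module):
    """Projects a vector of object-class posteriors (e.g. the softmax
    output of an ImageNet classifier) into the decoder's hidden space."""
    def __init__(self, num_classes: int = 1000, hidden_size: int = 512):
        super().__init__()
        self.proj = nn.Linear(num_classes, hidden_size)

    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        posteriors = torch.softmax(logits, dim=-1)  # object posterior predictions
        return torch.tanh(self.proj(posteriors))    # e.g. decoder initial state

# Usage: logits would come from any pretrained image classifier.
logits = torch.randn(4, 1000)                  # a batch of 4 images
init_state = ObjectPosteriorEncoder()(logits)  # shape: (4, 512)
```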
Annual Conference of the International Speech Communication Association (Interspeech), 2017
How do we fine-tune RNN Language Models with auxiliary features for a specific domain?
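One way to condition an RNN LM on auxiliary features is to concatenate the feature vector to each word embedding, then continue training on in-domain data at a reduced learning rate. The sketch below assumes PyTorch; the model, sizes and checkpoint name are illustrative assumptions, not the published system.

```python
# Minimal sketch: an RNN LM conditioned on an auxiliary feature vector.
import torch
import torch.nn as nn

class AuxRNNLM(nn.Module):
    def __init__(self, vocab=10000, emb=256, aux=32, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.LSTM(emb + aux, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, tokens, aux_feats):
        # tokens: (batch, seq); aux_feats: (batch, aux), repeated over time
        e = self.embed(tokens)
        a = aux_feats.unsqueeze(1).expand(-1, e.size(1), -1)
        h, _ = self.rnn(torch.cat([e, a], dim=-1))
        return self.out(h)  # next-word logits at every position

# Domain fine-tuning: start from generic-domain weights and continue
# training on in-domain text with a reduced learning rate.
model = AuxRNNLM()
# model.load_state_dict(torch.load("generic_lm.pt"))  # hypothetical checkpoint
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
```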
European Association for Machine Translation (EAMT), 2017
What are the contributions of image captioning and neural machine translation systems in Multimodal Machine Translation? Can we use the output of image captioning systems to rerank neural machine translation hypotheses?
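For intuition, caption-based reranking can be sketched as interpolating each hypothesis's NMT model score with a simple word-overlap measure against the captioning system's output. The weight alpha and the overlap measure below are illustrative assumptions, not the method of the EAMT paper.

```python
# Minimal sketch: rerank NMT hypotheses using the output of a
# target-language image captioning system.
def rerank(hypotheses, caption, alpha=0.7):
    """hypotheses: list of (translation, nmt_score) pairs;
    caption: string produced by the image captioning system."""
    cap_words = set(caption.lower().split())

    def overlap(hyp):
        # Fraction of hypothesis words that also appear in the caption.
        words = hyp.lower().split()
        return sum(w in cap_words for w in words) / max(len(words), 1)

    scored = [(alpha * score + (1 - alpha) * overlap(hyp), hyp)
              for hyp, score in hypotheses]
    return max(scored)[1]  # translation with the best combined score

best = rerank([("ein hund läuft im park", -0.9),
               ("eine katze läuft im park", -1.1)],
              caption="ein hund läuft durch den park")
```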
For project/collaboration-related queries, please contact Prof. Lucia Specia.
For website-related issues, please contact Josiah Wang.