MultiMT is a project led by Prof. Lucia Specia, funded by an ERC (European Research Council) Starting Grant. The aim of the project is to devise data, methods and algorithms to exploit multi-modal information (images, speech, metadata, etc.) for context modelling in Machine Translation and other cross-lingual and multilingual tasks. The project is highly interdisciplinary, drawing upon different research fields such as NLP, Computer Vision, Speech Processing and Machine Learning.
[01/10/2018] The project and the team are moving to Imperial College London. More soon!
[01/09/2018] Pranava will be at BMVC 2018 in Newcastle upon Tyne.
[29/05/2018] Pranava and Josiah will be at NAACL 2018. Please say hi and visit our talk (Sunday morning) and poster (Monday afternoon)!
[03/05/2018] Chiraag will be at LREC 2018. Please visit our poster!
[03/03/2018] Josiah and Pranava will be at the Conference for the European Network on Integrating Vision and Language (IVL 2018) in Tartu, Estonia (barring weather issues!). Come talk to us!
[01/03/2018] A short paper on ‘de-foiling’ foiled image captions accepted for an oral presentation at NAACL 2018!
[14/02/2018] A long paper on using explicit object detection outputs for image captioning accepted at NAACL 2018!
[20/12/2017] A paper on Multimodal Lexical Translation accepted at LREC 2018!
[07/09/2017] A paper on using speech information for NMT accepted at ASRU 2017!
[08/07/2017] Our system paper for the WMT Multimodal Machine Translation Shared Task is available.
[16/06/2017] A paper on fine-tuning with auxiliary data accepted at Interspeech 2017!
[20/05/2017] A paper on investigating the contribution of image captioning for MMT accepted at EAMT!
British Machine Vision Conference (BMVC), 2018
North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT), Long Paper, 2018
Natural Language Engineering, 24 (3): 415-439, 2018
IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2017
Conference on Machine Translation (WMT), 2017
We participated in the Multimodal Machine Translation (MMT) shared task of translating image descriptions from English to German/French given the corresponding image. Can the use of object posterior predictions instead of lower-level image features help MMT?
How do we fine-tune RNN Language Models with auxiliary features for a specific domain?
European Association for Machine Translation (EAMT), 2017
What are the contributions of image captioning and neural machine translation systems in Multimodal Machine Translation? Can we use the output of image captioning systems to rerank neural machine translation hypotheses?
For project or collaboration related queries, please contact Prof. Lucia Specia.
For website related issues, please contact Josiah Wang.