About

MultiMT is a project led by Prof. Lucia Specia and funded by an ERC (European Research Council) Starting Grant. The aim of the project is to devise data, methods and algorithms to exploit multimodal information (images, speech, metadata, etc.) for context modelling in Machine Translation and other cross-lingual and multilingual tasks. The project is highly interdisciplinary, drawing upon research fields such as NLP, Computer Vision, Speech Processing and Machine Learning.

News

[05/09/2017] Lucia, Pranava and Chiraag will be at WMT/EMNLP 2017 in Copenhagen. Please visit our poster for the WMT Multimodal MT shared task!

[01/09/2017] Lucia will be giving a talk on Multimodal Machine Translation at the MT Marathon in Lisbon on 1st Sept 2017. The slides are available here.

[08/07/2017] Our system paper for the WMT Multimodal Machine Translation Shared Task is available.

[16/06/2017] A paper on fine-tuning with auxiliary data accepted at Interspeech 2017!

[20/05/2017] A paper on investigating the contribution of image captioning for MMT accepted at EAMT!

Team

Members


Lucia Specia

Leader

(Machine Translation)


Pranava Madhyastha

Postdoctoral researcher

(Machine Learning/NLP)


Josiah Wang

Postdoctoral researcher

(Computer Vision/NLP)


Salil Deena

Postdoctoral researcher

(Speech Processing)


Chiraag Lala

Ph.D. student


Thales Bertaglia

Ph.D. student

Former Members


Raymond Ng

Postdoctoral researcher

(Speech Processing)


Abigail Smith

Research Intern

Publications

Sheffield MultiMT: Using Object Posterior Predictions for Multimodal Machine Translation

Pranava Madhyastha*, Josiah Wang*, Lucia Specia (* denotes equal contribution)

Conference on Machine Translation (WMT), 2017

We participated in the Multimodal Machine Translation (MMT) shared task of translating image descriptions from English to German/French given the corresponding image. Can the use of object posterior predictions instead of lower-level image features help MMT?

Paper

Semi-supervised Adaptation of RNNLMs by Fine-tuning with Domain-specific Auxiliary Features

Salil Deena, Raymond Ng, Pranava Madhyastha, Lucia Specia, Thomas Hain

Interspeech, 2017

How do we fine-tune RNN Language Models with auxiliary features for a specific domain?

Paper

Unraveling the Contribution of Image Captioning and Neural Machine Translation for Multimodal Machine Translation

Chiraag Lala, Pranava Madhyastha, Josiah Wang, Lucia Specia

European Association for Machine Translation (EAMT), 2017

What are the contributions of image captioning and neural machine translation systems in Multimodal Machine Translation? Can we use the output of image captioning systems to rerank neural machine translation hypotheses?

Paper Poster

Contact

For project- or collaboration-related queries, please contact Prof. Lucia Specia.

For website-related issues, please contact Josiah Wang.