Title

Connectionist Multi-Sequence Modelling and Applications to Multilingual Neural Machine Translation

Abstract

Deep (recurrent) neural networks have been shown to successfully learn complex mappings between arbitrary-length input and output sequences, a task known as sequence-to-sequence learning, within the effective framework of encoder-decoder networks. This thesis investigates extensions of sequence-to-sequence models that handle multiple sequences at the same time within a single parametric model, and proposes the first large-scale connectionist multi-sequence modeling approach. The proposed multi-sequence modeling architecture learns to map a set of input sequences into a set of output sequences through the explicit parametrization of a shared medium, an interlingua.

The proposed multi-sequence modeling architecture is applied to machine translation tasks, tackling the problem of multilingual neural machine translation (MLNMT). We explore the applicability and benefits of MLNMT on (1) large-scale machine translation tasks, covering ten language pairs within a single model; (2) low-resource language-transfer problems, where data for a given pair is scarce, measuring the transfer-learning capabilities of the model; (3) multi-source translation tasks, where multi-way parallel data is available and complementary information across input sequences is leveraged while mapping them into a single output sequence; and finally (4) zero-resource translation tasks, where no aligned data is available between a given source-target pair.
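
To make the shared-medium idea concrete, the following is a minimal PyTorch sketch (not the thesis implementation; all names such as MultiWayNMT, emb_dim, and hid_dim are illustrative) of a multi-way encoder-decoder in which every source language has its own encoder, every target language its own decoder, and a single attention mechanism is shared across all language pairs, acting as the common medium:

import torch
import torch.nn as nn

class MultiWayNMT(nn.Module):
    """Multi-way multilingual encoder-decoder: one encoder per source
    language, one decoder per target language, and a single attention
    module shared across all language pairs (the shared medium)."""

    def __init__(self, vocab_sizes_src, vocab_sizes_tgt,
                 emb_dim=64, hid_dim=128):
        super().__init__()
        # Language-specific encoders (embedding + GRU).
        self.src_embs = nn.ModuleList(
            [nn.Embedding(v, emb_dim) for v in vocab_sizes_src])
        self.encoders = nn.ModuleList(
            [nn.GRU(emb_dim, hid_dim, batch_first=True)
             for _ in vocab_sizes_src])
        # Language-specific decoders (embedding + GRU cell + readout).
        self.tgt_embs = nn.ModuleList(
            [nn.Embedding(v, emb_dim) for v in vocab_sizes_tgt])
        self.decoders = nn.ModuleList(
            [nn.GRUCell(emb_dim + hid_dim, hid_dim)
             for _ in vocab_sizes_tgt])
        self.readouts = nn.ModuleList(
            [nn.Linear(hid_dim, v) for v in vocab_sizes_tgt])
        # The single attention module shared by every language pair.
        self.att_enc = nn.Linear(hid_dim, hid_dim, bias=False)
        self.att_dec = nn.Linear(hid_dim, hid_dim, bias=False)
        self.att_v = nn.Linear(hid_dim, 1, bias=False)

    def attend(self, dec_state, enc_states):
        # Additive attention over encoder states; these parameters are
        # shared across all encoders and decoders.
        scores = self.att_v(torch.tanh(
            self.att_enc(enc_states)
            + self.att_dec(dec_state).unsqueeze(1)))     # (B, T, 1)
        weights = torch.softmax(scores, dim=1)
        return (weights * enc_states).sum(dim=1)         # (B, H) context

    def forward(self, src, src_lang, tgt, tgt_lang):
        # Encode the source with its language-specific encoder.
        enc_states, _ = self.encoders[src_lang](self.src_embs[src_lang](src))
        state = enc_states.new_zeros(src.size(0), enc_states.size(-1))
        logits = []
        # Decode with teacher forcing, querying shared attention each step.
        for t in range(tgt.size(1)):
            emb = self.tgt_embs[tgt_lang](tgt[:, t])
            ctx = self.attend(state, enc_states)
            state = self.decoders[tgt_lang](torch.cat([emb, ctx], -1), state)
            logits.append(self.readouts[tgt_lang](state))
        return torch.stack(logits, dim=1)                # (B, T, V_tgt)

# Toy usage: any encoder can be paired with any decoder at run time.
model = MultiWayNMT(vocab_sizes_src=[100, 100], vocab_sizes_tgt=[100, 100])
src = torch.randint(0, 100, (2, 7))   # batch of 2, source length 7
tgt = torch.randint(0, 100, (2, 5))   # target length 5 (teacher forcing)
out = model(src, src_lang=0, tgt=tgt, tgt_lang=1)
print(out.shape)  # torch.Size([2, 5, 100])

Because the attention parameters are shared across all pairs, any encoder can be routed to any decoder at run time, and supporting a new language adds only one encoder and/or decoder rather than one full model per pair.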

Biography

Supervisor(s)

Orhan Firat

Date and Location

2017-07-12, 09:00, A101

Category

PhD Thesis