Two Novel Approaches for Unsupervised Machine Translation
Two recent publications, both under review at ICLR 2018, propose overlapping methods for unsupervised machine translation: translating between languages without any parallel corpora. To me, at least on first glance, this result seems highly unintuitive. So I’m going to use this blog post to summarize the two approaches and discuss their high-level differences.
Note that this post is not going to delve into a quantitative comparison of the results between these papers. Instead, I’m just trying to describe the method-level differences between the techniques.
The first paper focuses on building neural machine translation models from monolingual corpora alone, meaning there are no parallel corpora between the languages. Below are the fundamental steps needed to build this model.
Unsupervised Cross-Lingual Embeddings
One of the key techniques that this approach uses is cross-lingual embeddings. To train these, you first train embeddings in the individual languages using monolingual corpora, and then learn a linear transformation that maps them into a shared space based on a bilingual dictionary. The authors also discuss how adversarial training and self-learning extensions can remove the dependence on the bilingual dictionary, making this step unsupervised as well. Armed with these cross-lingual embeddings, the authors can embed input sequences from different languages into a shared representation space. This is crucial for the “encoder” step in their model.
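To make this step a bit more concrete, here is a minimal sketch of the dictionary-based mapping, assuming you already have monolingual embedding matrices and a small seed dictionary of (source index, target index) pairs. The function names and the NumPy implementation are mine, not the authors’ code; the self-learning extension mentioned above would alternate this fit with re-inducing a larger dictionary from the mapped embeddings.

```python
import numpy as np

def learn_orthogonal_mapping(src_emb, tgt_emb, seed_pairs):
    """Fit a linear map W so that src_emb[i] @ W is close to tgt_emb[j]
    for every (i, j) in the seed dictionary (orthogonal Procrustes)."""
    src_idx, tgt_idx = zip(*seed_pairs)
    X = src_emb[list(src_idx)]   # (n_pairs, dim) source word vectors
    Y = tgt_emb[list(tgt_idx)]   # (n_pairs, dim) target word vectors
    # Closed-form solution: W = U V^T, from the SVD of X^T Y.
    u, _, vt = np.linalg.svd(X.T @ Y)
    return u @ vt

def nearest_target_word(src_word_id, W, src_emb, tgt_emb):
    """Translate a source word by cosine nearest neighbor in the mapped space."""
    query = src_emb[src_word_id] @ W
    scores = (tgt_emb @ query) / (
        np.linalg.norm(tgt_emb, axis=1) * np.linalg.norm(query) + 1e-8)
    return int(np.argmax(scores))
```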
Core Model: Encoder-Decoder with Global Attention, Based on a Bidirectional GRU
For the core model, the authors use a fairly standard encoder-decoder with attention. Here are some important attributes of this model (a small sketch of the architecture follows the list):
Shared encoder: Only one encoder is needed, since the authors are using cross-lingual embeddings.
Fixed Pretrained Cross-Lingual Embeddings: The authors use pretrained unsupervised cross-lingual embeddings and fix them during training. This way, the cross-lingual mapping does not change as the rest of the model is trained.
One decoder per language: While there is a shared encoder (due to the cross-lingual embeddings), there is a different decoder for each language.
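Schematically, the architecture might look like the following PyTorch sketch. The module and parameter names are mine, attention is omitted for brevity, and it is only meant to illustrate the shared encoder, the frozen cross-lingual embeddings, and the per-language decoders.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """One encoder for all languages, fed fixed cross-lingual embeddings."""
    def __init__(self, cross_lingual_emb, hidden_size):
        super().__init__()
        # freeze=True keeps the cross-lingual mapping from drifting during training.
        self.embed = nn.Embedding.from_pretrained(cross_lingual_emb, freeze=True)
        self.rnn = nn.GRU(cross_lingual_emb.size(1), hidden_size,
                          bidirectional=True, batch_first=True)

    def forward(self, token_ids):
        outputs, hidden = self.rnn(self.embed(token_ids))
        # Concatenate forward/backward final states to initialize a decoder.
        init_state = torch.cat([hidden[0], hidden[1]], dim=-1).unsqueeze(0)
        return outputs, init_state

class LanguageDecoder(nn.Module):
    """One of these per target language (attention omitted for brevity)."""
    def __init__(self, vocab_size, emb_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_size)
        self.rnn = nn.GRU(emb_size, 2 * hidden_size, batch_first=True)
        self.out = nn.Linear(2 * hidden_size, vocab_size)

    def forward(self, prev_tokens, state):
        output, state = self.rnn(self.embed(prev_tokens), state)
        return self.out(output), state

# decoders = {"l1": LanguageDecoder(...), "l2": LanguageDecoder(...)}  # one per language
```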
Unsupervised Training
To actually do the training, the authors use a host of techniques. Here is a summary of them; a sketch of the resulting training loop follows the list:
Denoising: This is a key aspect of the model. To make training unsupervised, the model is trained to reconstruct its own input: it takes an input sequence, encodes it, and decodes it back into the same language. Implemented naively, this is just a copying task. To prevent that from occurring, the authors add noise to the inputs, and the model learns to denoise them. This is inspired by denoising autoencoders.
Backtranslation: Denoising alone only teaches the model to reconstruct each language, not to translate between languages. To get translation, the authors use backtranslation. In short, this means you encode a noisy input and decode it (greedily) into the other language, which creates a “pseudo-parallel” corpus. Then, you train the system to predict the original sentence from this predicted translation.
Iterative Training: They alternate the above training objectives. First they perform one batch of denoising for language 1 (L1), then one batch of denoising for language 2 (L2), then one batch of backtranslation from L1 to L2, and then one batch of backtranslation from L2 to L1.
Optional small parallel corpus: If you have a small parallel corpus, you can also alternate training on it alongside the pseudo-parallel corpus. This makes the overall approach semi-supervised rather than unsupervised.
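Putting this together, the alternating schedule might look roughly like the following. Everything here is a sketch under assumed interfaces: `add_noise`, `model.denoising_loss`, `model.translate`, and `model.translation_loss` are hypothetical helpers standing in for the losses described above, not the authors’ API.

```python
import torch

def backtranslation_loss(model, batch, src_lang, tgt_lang):
    """Build a pseudo-parallel pair by greedy translation, then train the model
    to recover the original sentences from their pseudo-translations."""
    with torch.no_grad():
        pseudo = model.translate(batch, src=src_lang, tgt=tgt_lang)  # greedy decode
    return model.translation_loss(pseudo, batch, src=tgt_lang, tgt=src_lang)

def train_epoch(model, mono_l1, mono_l2, optimizer):
    """Alternate batches: denoise L1, denoise L2, backtranslate L1->L2, then L2->L1."""
    for batch_l1, batch_l2 in zip(mono_l1, mono_l2):
        step_losses = [
            model.denoising_loss(add_noise(batch_l1), batch_l1, lang="l1"),
            model.denoising_loss(add_noise(batch_l2), batch_l2, lang="l2"),
            backtranslation_loss(model, batch_l1, src_lang="l1", tgt_lang="l2"),
            backtranslation_loss(model, batch_l2, src_lang="l2", tgt_lang="l1"),
        ]
        for loss in step_losses:  # one optimizer step per objective
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```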
And that’s basically it. In short, this works pretty well, approaching the performance of supervised NMT, and adding semi-supervision makes it do extremely well. Refer to the paper for the implementation details, hyperparameters, optimization methods, and in-depth results.
With that method defined, let’s now turn to the other publication.
Like the first paper, this one also seeks to perform unsupervised machine translation by using only monolingual corpora. However, the methods it uses are different. In this section, I will describe the overall approach that this paper uses, and then in the final sections of this post, I will assess the similarities and differences between the two models.
The authors use a single encoder and a single decoder for all languages; the only language-specific component is the embedding lookup table for each language’s vocabulary.
No fixed embeddings: the authors use language-specific embeddings that are updated during training (a small sketch of this setup follows).
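In code, the contrast with Paper 1 might look something like the sketch below: one encoder and one decoder shared across languages, with only the embedding tables being language-specific and trainable. The class and parameter names are mine, and the choice of recurrent cell and sizes is purely illustrative.

```python
import torch.nn as nn

class SharedSeq2Seq(nn.Module):
    """Single encoder and decoder for both languages; only the embedding
    lookup tables differ by language, and they are trained rather than fixed."""
    def __init__(self, vocab_sizes, emb_size, hidden_size):
        super().__init__()
        self.embeddings = nn.ModuleDict({
            lang: nn.Embedding(size, emb_size) for lang, size in vocab_sizes.items()
        })
        self.encoder = nn.LSTM(emb_size, hidden_size,
                               bidirectional=True, batch_first=True)
        self.decoder = nn.LSTM(emb_size, 2 * hidden_size, batch_first=True)

    def encode(self, token_ids, lang):
        # The only language-dependent step is choosing the lookup table.
        return self.encoder(self.embeddings[lang](token_ids))

# model = SharedSeq2Seq({"l1": 50_000, "l2": 50_000}, emb_size=300, hidden_size=300)
```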
Unsupervised Training
Denoising autoencoder: The authors use a strategy inspired by denoising autoencoders. They add noise to the input sentences and then learn to reconstruct the input through the encoder-decoder model. This is similar to the method in Paper 1; for the details of the noise model, see the Differences section below.
Cross-domain training: This is essentially the same as the backtranslation in the first paper. The authors sample a sentence \(x \in D_{l_1}\) and apply the current translation model \(M\) to it to generate \(y = M(x)\) in language \(l_2\). Then, they add noise to \(y\) and pass it back through the translation model to recover a reconstruction of \(x\). The difference between the original and the reconstruction is a loss term.
Adversarial training: Intuitively, we would like the distribution of encoder outputs to be the same regardless of the input language. To enforce this, the authors add an adversarial loss term on the encoder outputs during the forward translation and backtranslation steps. They train a neural network (the discriminator) to distinguish between the encodings of source and target sentences, and the encoder is additionally trained to fool the discriminator. This is analogous to the training process in a GAN (see this NIPS 2016 tutorial for a review of GANs).
Loss: The final objective function is simply the sum of the denoising autoencoder losses for each language, the cross-domain training losses between the two languages (in both directions), and the adversarial loss. At the same time, the discriminator’s own loss is minimized to update the discriminator. Equation (4) in the paper writes this objective down formally (see also the sketch after this list).
Training Process: Iterative training from an initial translation model. Start by inferring a bilingual dictionary from the monolingual data alone; the initial model is simply word-by-word translation using this inferred dictionary. Then use the current model to translate each dataset, update the model and discriminator with the above loss functions via gradient descent, and repeat until satisfied with the result.
Hyperparameter Selection: Translate from language 1 to language 2 and back, and then calculate the BLEU score between the original inputs and their reconstructions. Do this in both directions and take the average. The authors show that this metric correlates well with the actual BLEU score computed on a parallel validation corpus.
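Schematically, the combined objective and the separate discriminator update might look like this. The helper methods (`denoising_loss`, `cross_domain_loss`, `encode`, and the two discriminator losses) and the weight `lambda_adv` are placeholders of mine standing in for the terms of Equation (4), not the paper’s notation.

```python
def translator_loss(model, disc, batch_l1, batch_l2, lambda_adv=1.0):
    """Denoising + cross-domain terms for both languages, plus an adversarial
    term that rewards encoder outputs the discriminator cannot tell apart."""
    loss = (
        model.denoising_loss(batch_l1, lang="l1")
        + model.denoising_loss(batch_l2, lang="l2")
        + model.cross_domain_loss(batch_l1, src="l1", tgt="l2")
        + model.cross_domain_loss(batch_l2, src="l2", tgt="l1")
    )
    enc_l1 = model.encode(batch_l1, lang="l1")
    enc_l2 = model.encode(batch_l2, lang="l2")
    # The encoder is trained to fool the discriminator about the input language.
    return loss + lambda_adv * disc.fooling_loss(enc_l1, enc_l2)

def discriminator_loss(model, disc, batch_l1, batch_l2):
    """Separately, the discriminator learns to tell the two languages'
    encodings apart; detach() keeps this update from reaching the encoder."""
    enc_l1 = model.encode(batch_l1, lang="l1").detach()
    enc_l2 = model.encode(batch_l2, lang="l2").detach()
    return disc.classification_loss(enc_l1, enc_l2)
```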
And that’s basically it. Again, there are implementation details worth looking into if you are trying to reproduce this result. In short, they get pretty good results, again approaching the level of supervised NMT. They also include a number of additional analyses to assess how well the model is actually learning to translate.
Similarities Between the Papers
Now that I have described the two papers, we can compare them for similarities:
Obviously, they both use monolingual corpora.
Both use denoising autoencoders. However, the way they add noise is somewhat different. Check out the Differences section for more.
Both seem to use backtranslation. As far as I could tell, there were no major differences in the method of doing so.
Both use a bidirectional RNN in the encoder, and a unidirectional RNN in the decoder.
Paper 1 shows how semi-supervision can help dramatically. I would imagine this applies to Paper 2 as well.
Differences Between the Papers
While there are many similarities, there are some important differences:
Adversarial Training vs Cross-lingual Embeddings. Paper 2 uses adversarial training to make the encoder distributions match for each language, which I found pretty clever. Paper 1 circumvents the need for this by using pretrained and fixed unsupervised cross-lingual embeddings.
Denoising Method: The way each of the papers adds noise is somewhat different. In Paper 1, the authors make random swaps of contiguous words, where for a sequence of N elements, they make N/2 random swaps. In Paper 2, the authors drop each word in the input sentence with some probability, and they “slightly” shuffle the input sentence, with the amount of shuffling tunable by a hyperparameter. (Both noise models are sketched after this list.)
Iterative training: Though it’s not exactly clear, Paper 2 does not appear to be as explicit about alternating between the denoising step and the cross-domain step; their pseudocode implies the losses are combined in a single training step. Paper 1 indicates that denoising and backtranslation are performed as separate, alternating batches.
Single decoder vs language-specific decoder: Paper 1 learns one decoder per language, whereas Paper 2 learns a shared decoder which operates using a language-specific lookup table. My understanding is that this distinction is necessary because of the use of cross-lingual embeddings in Paper 1.
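To make the denoising difference concrete, here is a sketch of the two noise models as I read them from the papers’ descriptions. Both functions operate on a list of tokens; the function names and the default parameter values are illustrative choices of mine.

```python
import random

def swap_noise(tokens):
    """Paper 1 style: random swaps of contiguous words,
    roughly N/2 swaps for a sequence of N tokens."""
    tokens = list(tokens)
    if len(tokens) < 2:
        return tokens
    for _ in range(len(tokens) // 2):
        i = random.randrange(len(tokens) - 1)
        tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]
    return tokens

def drop_and_shuffle_noise(tokens, p_drop=0.1, k=3):
    """Paper 2 style: drop each word with probability p_drop, then apply a
    'slight' shuffle in which no word moves more than about k positions."""
    kept = [t for t in tokens if random.random() > p_drop]
    # Add bounded random jitter to each position index and sort by the noisy keys.
    keys = [i + random.uniform(0, k) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept), key=lambda pair: pair[0])]
```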
Conclusions
Overall, these papers present two highly overlapping but distinct methods for training a reasonably accurate neural machine translation model without any parallel corpora. On first glance, this result seems highly unintuitive. However, after diving into the methods, we see some fundamentals that make sense in this context: adversarial losses and cross-lingual embeddings to unify the encoder distributions. For me, the surprising part remains that the denoising component works as well as it does, since both papers use relatively simple forms of noise.
Thanks for reading! If you have any thoughts or suggestions, please reach out via email or Twitter.