I love being a data scientist working in Natural Language Processing (NLP) right now. The breakthroughs and developments are occurring at an unprecedented pace. From the super-efficient ULMFiT framework to Google’s BERT, NLP is truly in the midst of a golden era.

And at the heart of this revolution is the concept of the Transformer. It has transformed the way we data scientists work with text data, and you’ll soon see how in this article.

Want an example of how useful the Transformer is? Take a look at the paragraph below:
The highlighted words refer to the same person – Griezmann, a popular football player. It’s not that difficult for us to figure out the relationships among such words spread across the text. However, it is quite an uphill task for a machine.

Capturing such relationships and the sequence of words in sentences is vital for a machine to understand a natural language. This is where the Transformer concept plays a major role.

Note: This article assumes a basic understanding of a few deep learning concepts.

Sequence-to-sequence (seq2seq) models in NLP are used to convert sequences of Type A to sequences of Type B. For example, translating English sentences to German sentences is a sequence-to-sequence task.

Recurrent Neural Network (RNN) based sequence-to-sequence models have garnered a lot of traction ever since they were introduced in 2014. Most of the data in the world today comes in the form of sequences: it can be a number sequence, a text sequence, a video frame sequence or an audio sequence.

The performance of these seq2seq models was further enhanced with the addition of the Attention Mechanism in 2015.
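To make the encoder-decoder idea concrete, here is a minimal sketch of a bare-bones RNN-based seq2seq model in PyTorch, without attention. The vocabulary sizes, dimensions and tensors below are made up purely for illustration; this is a sketch of the general pattern, not a reference implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sizes, chosen only for illustration.
SRC_VOCAB, TGT_VOCAB, EMB_DIM, HID_DIM = 8000, 6000, 256, 512

class Encoder(nn.Module):
    """Reads the source sequence and compresses it into a single context vector."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(SRC_VOCAB, EMB_DIM)
        self.rnn = nn.GRU(EMB_DIM, HID_DIM, batch_first=True)

    def forward(self, src):                  # src: (batch, src_len) token ids
        _, hidden = self.rnn(self.embed(src))
        return hidden                        # (1, batch, HID_DIM) context vector

class Decoder(nn.Module):
    """Generates the target sequence, seeded by the encoder's context vector."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(TGT_VOCAB, EMB_DIM)
        self.rnn = nn.GRU(EMB_DIM, HID_DIM, batch_first=True)
        self.out = nn.Linear(HID_DIM, TGT_VOCAB)

    def forward(self, tgt, hidden):          # tgt: (batch, tgt_len) token ids
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden      # logits over the target vocabulary

# Usage: encode a (toy) German batch, then decode an English batch with teacher forcing.
src = torch.randint(0, SRC_VOCAB, (4, 10))
tgt = torch.randint(0, TGT_VOCAB, (4, 12))
context = Encoder()(src)
logits, _ = Decoder()(tgt, context)
print(logits.shape)                          # torch.Size([4, 12, 6000])
```

Notice that the encoder squeezes the entire source sentence into a single fixed-size vector. That bottleneck is exactly what the attention mechanism, and later the Transformer, was designed to relieve.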
How quickly advancements in NLP have been happening in the last 5 years – incredible!

These sequence-to-sequence models are pretty versatile and are used in a variety of NLP tasks, such as machine translation and text summarization.

Let’s take a simple example of a sequence-to-sequence model. Check out the illustration below:

German to English Translation using seq2seq

The above seq2seq model converts a German phrase to its English counterpart. Let’s break it down.

Despite being so good at what it does, there are certain limitations of seq2seq models with attention.

The Transformer in NLP is a novel architecture that aims to solve sequence-to-sequence tasks while handling long-range dependencies with ease. It was proposed in the paper Attention Is All You Need, which is recommended reading for anyone interested in NLP.

Quoting from the paper:

The Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution.

Here, transduction means the conversion of input sequences into output sequences. The idea behind the Transformer is to handle the dependencies between input and output entirely with attention, doing away with recurrence completely.

Let’s take a look at the architecture of the Transformer below. It might look intimidating, but don’t worry, we will break it down and understand it block by block.

The Transformer – Model Architecture
(Source: https://arxiv.org/abs/1706.03762)

The above image is a superb illustration of the Transformer’s architecture. Let’s first focus on the Encoder and Decoder parts only.

Now focus on the image below. The Encoder block has one Multi-Head Attention layer followed by a Feed Forward Neural Network layer. The Decoder, on the other hand, has an extra Masked Multi-Head Attention layer.
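To see how those pieces fit together, here is a rough sketch of a single encoder block using PyTorch’s built-in nn.MultiheadAttention. The residual connections, layer normalization and sizes (512-dimensional model, 8 heads, 2048 feed-forward units) follow the base model from the paper, but the code itself is only an illustration, not the reference implementation.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One encoder block: multi-head self-attention followed by a feed-forward network,
    each wrapped in a residual connection and layer normalization."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                    # x: (batch, seq_len, d_model)
        attn_out, _ = self.attn(x, x, x)     # queries, keys and values all come from x
        x = self.norm1(x + attn_out)         # residual connection + layer norm
        return self.norm2(x + self.ff(x))    # same pattern around the feed-forward layer

# Usage: a batch of 2 sequences, 10 tokens each, already embedded into 512 dimensions.
tokens = torch.randn(2, 10, 512)
print(EncoderBlock()(tokens).shape)          # torch.Size([2, 10, 512])
```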
The encoder and decoder blocks are actually multiple identical encoders and decoders stacked on top of each other. Both the encoder stack and the decoder stack have the same number of units.

The number of encoder and decoder units is a hyperparameter. In the paper, 6 encoders and 6 decoders are used.
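If you want to reproduce this stacking without writing the block yourself, PyTorch also ships ready-made modules. The snippet below is just a sketch using the base-model hyperparameters from the paper and a made-up input tensor.

```python
import torch
import torch.nn as nn

# A stack of 6 encoder layers with identical architecture, mirroring the base model in the paper.
layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, dim_feedforward=2048, batch_first=True)
encoder_stack = nn.TransformerEncoder(layer, num_layers=6)

tokens = torch.randn(2, 10, 512)        # (batch, seq_len, d_model), already embedded
print(encoder_stack(tokens).shape)      # torch.Size([2, 10, 512])
```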
Let’s see how this setup of the encoder and decoder stacks works:
An important thing to note here – in addition to the self-attention and feed-forward layers, the decoders also have one more layer: Encoder-Decoder Attention. This helps the decoder focus on the appropriate parts of the input sequence.

You might be thinking – what exactly does this Self-Attention layer do in the Transformer? Excellent question! This is arguably the most crucial component in the entire setup, so let’s understand this concept.

According to the paper:

Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence.
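In code, that definition boils down to a few matrix multiplications. Here is a minimal single-head, scaled dot-product self-attention sketch in NumPy; the projection matrices are random stand-ins for learned weights, and the input is a toy sequence rather than real embeddings.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    return np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention: every position of x attends to every position of x."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v        # queries, keys and values all come from the same sequence
    scores = q @ k.T / np.sqrt(k.shape[-1])    # similarity of each position with every other position
    return softmax(scores) @ v                 # each output row is a weighted mix of the value vectors

seq_len, d = 5, 8                              # a toy "sentence": 5 positions, 8-dimensional vectors
x = np.random.randn(seq_len, d)
w_q, w_k, w_v = (np.random.randn(d, d) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # (5, 8)
```

Every output position is a weighted combination of all positions in the same sequence, which is what lets the model relate words spread far apart in the text, like the references to Griezmann in the opening example.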