Question

I'm doing some research for a summarization task and found that BERT is derived from the Transformer model. Every blog post about BERT that I have read focuses on explaining what a bidirectional encoder is, so I assume this is what makes BERT different from the vanilla Transformer. But as far as I know, the Transformer reads the entire sequence of words at once, so it should be considered bidirectional too. Can someone point out what I'm missing?

Solution

The name provides a clue: BERT stands for Bidirectional Encoder Representations from Transformers. So, essentially, BERT = the Transformer minus the decoder.

BERT ends with the final representations of the tokens once the encoder stack has finished processing them.

In the original Transformer, those encoder outputs are then consumed by the decoder. That piece of the architecture is not present in BERT.
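
To make the difference concrete, here is a minimal PyTorch sketch (not from the original answer; the layer sizes are toy values, not BERT's real configuration) that builds only an encoder stack, which is the part BERT keeps:

```python
import torch
import torch.nn as nn

# Encoder-only stack: the part of the Transformer that BERT keeps.
# d_model / nhead / num_layers are toy values for illustration.
encoder_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

# Stand-in for a batch of 10 embedded tokens: (batch, sequence, embedding).
tokens = torch.randn(1, 10, 64)

# Every position attends to every other position -> "bidirectional".
hidden_states = encoder(tokens)
print(hidden_states.shape)  # torch.Size([1, 10, 64]) -- BERT stops here

# The original Transformer would now feed hidden_states into a decoder
# (nn.TransformerDecoder) to generate an output sequence; BERT has no
# decoder, so these per-token representations are its final output.
```

In BERT, those final hidden states are passed to a task-specific head (e.g. a classifier), whereas the original Transformer passes them to a decoder to generate an output sequence.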
