Question

I'm doing some research for a summarization task and found out that BERT is derived from the Transformer model. Every blog post about BERT that I have read focuses on explaining what a bidirectional encoder is, so I think this is what makes BERT different from the vanilla Transformer model. But as far as I know, the Transformer reads the entire sequence of words at once, so it is bidirectional too. Can someone point out what I'm missing?


Solution

The name provides a clue: BERT stands for Bidirectional Encoder Representations from Transformers. So, essentially, BERT = Transformer minus the decoder.

BERT ends with the final representations of the words once the encoder stack has finished processing them.

In the original Transformer, those representations are then fed into the decoder. That piece of the architecture is not present in BERT.
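
As a concrete illustration, here is a minimal sketch (assuming the Hugging Face `transformers` library and the public `bert-base-uncased` checkpoint; the input sentence is arbitrary) showing that a BERT forward pass ends with per-token encoder representations and nothing more:

```python
import torch
from transformers import BertTokenizer, BertModel

# Load the encoder-only BERT model and its tokenizer
# (assumes the `bert-base-uncased` checkpoint is available).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("BERT is an encoder-only Transformer.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# BERT stops here: one contextual vector per input token, produced by the
# encoder stack. Shape: (batch_size, sequence_length, hidden_size),
# with hidden_size = 768 for bert-base.
print(outputs.last_hidden_state.shape)

# BertModel has no decoder; in the original Transformer, these encoder
# outputs would be passed on to a decoder stack via cross-attention.
```

An encoder-decoder model like the original Transformer would cross-attend to these vectors from its decoder while generating an output sequence; BERT instead hands them directly to whatever task-specific head you attach on top.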
