
They cite the paper where the architecture was introduced. If you go to that paper, you'll see that it mostly consists of a very detailed and careful comparison with Transformer-XL.

In the new paper, they plug their memory system into vanilla BERT. This makes the resulting model essentially nothing like Transformer-XL, which is a strictly decoder-only generative language model.
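For readers unfamiliar with the distinction: the key architectural difference is the attention mask. Here is a minimal sketch in plain PyTorch (the names and sequence length are illustrative, not taken from either paper) contrasting the two:

    import torch

    seq_len = 5

    # BERT-style encoder: every position may attend to every other
    # position, in both directions.
    bidirectional_mask = torch.ones(seq_len, seq_len, dtype=torch.bool)

    # Transformer-XL-style decoder: position i may attend only to
    # positions <= i (a causal mask), which is what makes it a strictly
    # left-to-right generative language model.
    causal_mask = torch.ones(seq_len, seq_len).tril().bool()

    print(bidirectional_mask.int())  # all ones
    print(causal_mask.int())         # lower-triangular

Because the masking regimes differ this much, a comparison against Transformer-XL says little about how the memory system behaves inside a bidirectional encoder like BERT.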

