
I attended one of Sam Bowman's talks (1). It was about "task-independent language understanding," and he also discussed GLUE and SuperGLUE; he mentioned that some models now surpass the average person on those benchmarks. His group also ran probing experiments to understand what BERT learns (2) (similar in spirit to the 'NLP's Clever Hans Moment' article), but they arrived at a different answer to the question of what BERT really knows, so he was skeptical of drawing strong conclusions. Check these out if you're interested.

(1) https://www.nyu.edu/projects/bowman/TILU-talk-19-09.pdf

(2) https://arxiv.org/abs/1905.06316
