The paper also proves that this capability, one unlikely to arise naturally, does not help on tasks that require sequentially dependent chains of reasoning, which is a significant limitation. At least not without overturning what we currently believe in theoretical computer science (TCS).

> A single fundamental breakthrough

Then we'd no longer be talking about transformers. That something unpredicted could happen is trivially true.

> immergent capability

It's specifically trained in, requires heavy supervision, and is hard to learn. It's surprising that Transformers can achieve this at all, but it's not emergent.
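(A toy sketch of what "trained in" means here. The task, strings, and format below are invented for illustration and are not the paper's actual dataset; the point is only that a filler target carries no legible intermediate computation, so a model benefits from it only if it is explicitly supervised on this format.)

    # Hypothetical contrast between a chain-of-thought target and a
    # filler-token target for the same toy question. Not the paper's data.
    question = "3SUM variant: do any three of [2, 5, 9, 4] sum to 0 mod 10?"

    # Explicit chain-of-thought target: the intermediate sums are written out.
    cot_target = "2+5+9=16->6 2+5+4=11->1 2+9+4=15->5 5+9+4=18->8 ANSWER False"

    # Filler-token target: the same number of tokens, but every intermediate
    # token is content-free. Any useful computation has to happen in the
    # hidden states above the filler positions, and models do not pick that
    # up without being trained on exactly this kind of target.
    filler_target = ". " * (len(cot_target.split()) - 2) + "ANSWER False"

    print(question)
    print("CoT target:   ", cot_target)
    print("Filler target:", filler_target)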




Look...

You are taking literally 2-4 token phrases from my comment and attacking them without context. I'll spend time on the latter quote. You quote 'emergent capability'.

A) appreciate you correcting my spelling

B) 'The narrow test tube environment in which we see better performance hints at the unknown which when better understood could promise further yields down the road.

To my mind, the idea that filler tokens might promote immergent capability leading to broader task complexity'

C) Now that we have actual context... I'll leave the rest to the thoughtful reader. I said the following key words: 'hints', 'could', 'might'

D) Who asserted this behavior was emergent?

Recommend slowing down next time. You might get a clearer picture before you attack a straw man. Expect no further exchange. Best of luck.



