
I think you're right when it comes to CNNs on words. It's the max-pooling usually combined with CNNs that provides the translational invariance, less so the CNN filters themselves (if it were a full convolution, a shift of the input would show up only in the complex phase).
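To make that concrete, here's a tiny PyTorch sketch (the filter and input values are made up for illustration) showing that max-over-time pooling gives the same response no matter where the pattern sits in the sequence, even though the per-position conv outputs differ:

    import torch
    import torch.nn.functional as F

    # One filter of width 3 that "detects" the pattern [1, 2, 1].
    filt = torch.tensor([[[1.0, 2.0, 1.0]]])        # (out_channels, in_channels, width)

    def max_pooled_response(seq):
        x = torch.tensor(seq).view(1, 1, -1)        # (batch, channels, time)
        conv = F.conv1d(x, filt)                    # filter response at every position
        return conv.max(dim=2).values.item()        # max over the time axis

    early = [1.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]   # pattern at the start
    late  = [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 1.0]   # same pattern at the end

    print(max_pooled_response(early), max_pooled_response(late))  # identical: 6.0 6.0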

I think it makes more sense when CNNs are applied at the character level. The filter banks then activate on specific character n-gram patterns, like certain prefixes, suffixes, and root words, so the higher-level LSTMs are relieved of having to learn that level of structure. Also, tokenization is hard, and is especially likely to go wrong on media with heavy grammatical abuse like Twitter; character-level input avoids that janky preprocessing. See: http://arxiv.org/abs/1508.06615
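Roughly the shape of what I mean, as a loose PyTorch sketch (not the linked paper's exact hyperparameters; the character vocabulary size, filter widths, and dimensions below are made-up placeholders): character embeddings go through 1-D conv filter banks, max-over-time pooling per word produces a word vector, and those vectors feed an ordinary LSTM, so no tokenizer is involved.

    import torch
    import torch.nn as nn

    class CharWordEncoder(nn.Module):
        def __init__(self, n_chars=100, char_dim=16, filter_widths=(2, 3, 4), n_filters=32):
            super().__init__()
            self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
            # One filter bank per width; each filter can learn to fire on a character
            # n-gram (e.g. a prefix, suffix, or root-like substring).
            self.convs = nn.ModuleList(
                nn.Conv1d(char_dim, n_filters, kernel_size=w) for w in filter_widths
            )
            self.out_dim = n_filters * len(filter_widths)

        def forward(self, char_ids):
            # char_ids: (batch * words, max_word_len) of character indices
            x = self.char_emb(char_ids).transpose(1, 2)                   # (N, char_dim, word_len)
            pooled = [conv(x).max(dim=2).values for conv in self.convs]   # max over time, per word
            return torch.cat(pooled, dim=1)                               # (N, out_dim) word vectors

    # The word vectors are reshaped to (batch, seq_len, out_dim) and fed to the LSTM,
    # which never sees raw tokens.
    encoder = CharWordEncoder()
    lstm = nn.LSTM(encoder.out_dim, hidden_size=128, batch_first=True)

    chars = torch.randint(1, 100, (4 * 7, 12))      # 4 sentences x 7 words x 12 chars (dummy data)
    words = encoder(chars).view(4, 7, -1)
    outputs, _ = lstm(words)
    print(outputs.shape)                            # torch.Size([4, 7, 128])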



