
The task is not to compress general data by a factor of 200; the task is to compress a very domain-specific kind of data by a factor of 200. Presumably the hope is that this data has far lower entropy than, e.g., the Hutter Prize data.

If I tell you to write an image compression algorithm, you aren't going to be able to do much with a bitmap of uniformly random pixels. But if I tell you that in the domain I'm working in there are only two colors, white and black, you can immediately reduce each pixel from 24 bits to 1 bit, a factor of 24. If I tell you further that >99% of pixels are going to be black, more compression tricks become possible, etc.
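
A rough sketch of the idea (Python with numpy; the 1000x1000 size, the 99.9% black fraction, and the ~5 bytes per run are made-up numbers purely for illustration, not anyone's actual codec): pack the two-color image to 1 bit per pixel, then run-length encode the mostly-black stream.

    import numpy as np

    def pack_binary(pixels):
        # Two colors only, so 1 bit per pixel instead of 24-bit RGB: a 24x reduction.
        return np.packbits(pixels.astype(np.uint8))

    def run_length_encode(pixels):
        # With >99% of pixels black, long runs of zeros dominate, so storing
        # (value, run_length) pairs beats even the 1-bit-per-pixel packing.
        flat = pixels.ravel()
        change = np.flatnonzero(np.diff(flat)) + 1          # indices where the value flips
        starts = np.concatenate(([0], change))
        lengths = np.diff(np.concatenate((starts, [flat.size])))
        return list(zip(flat[starts].tolist(), lengths.tolist()))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Hypothetical domain image: 1000x1000, roughly 99.9% black, 0.1% white.
        img = (rng.random((1000, 1000)) > 0.999).astype(np.uint8)

        raw_bytes = img.size * 3                      # 24-bit RGB baseline
        packed_bytes = pack_binary(img).size          # 1 bit per pixel
        rle_bytes = len(run_length_encode(img)) * 5   # crude estimate: ~5 bytes per run

        print(f"raw 24-bit RGB : {raw_bytes} B")
        print(f"1-bit packed   : {packed_bytes} B ({raw_bytes / packed_bytes:.0f}x)")
        print(f"run-length     : {rle_bytes} B ({raw_bytes / rle_bytes:.0f}x)")

Not how a real codec would do it, but it shows how each domain constraint you learn can buy roughly another order of magnitude.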

I don't have expertise in this particular problem, but dismissing it a priori by comparing it to the Hutter Prize is not valid.



