
Ok? How are you comparing these systems to the benchmark so they might be considered relevant? "Lots of small files" describes an infinite variety of workloads. To get anything close to the benchmark's numbers you'd need to compress only small files, in a single directory, with a similar average size. And even the contents of those files would have large implications for expected performance...
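As a rough illustration of the content point (a minimal sketch, not the benchmark under discussion; the file count, file size, and directory names are made up):

    # Sketch: two synthetic "lots of small files" corpora with identical
    # file count and file size, differing only in content. Parameters are
    # illustrative placeholders, not the benchmark's actual setup.
    import os, io, tarfile, secrets

    def make_corpus(root, n_files=1000, size=1024, random_content=True):
        os.makedirs(root, exist_ok=True)
        for i in range(n_files):
            data = secrets.token_bytes(size) if random_content else b"A" * size
            with open(os.path.join(root, f"{i}.dat"), "wb") as f:
                f.write(data)

    def gzipped_tar_size(root):
        # Pack the directory into an in-memory gzip-compressed tar
        # and report its compressed size.
        buf = io.BytesIO()
        with tarfile.open(fileobj=buf, mode="w:gz") as tar:
            tar.add(root)
        return buf.tell()

    make_corpus("corpus_random", random_content=True)
    make_corpus("corpus_repetitive", random_content=False)
    print("random:    ", gzipped_tar_size("corpus_random"))
    print("repetitive:", gzipped_tar_size("corpus_repetitive"))

The incompressible random corpus and the highly repetitive one have identical shapes, yet compress to wildly different sizes.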



My comment is not making any claims about that. It's just a correction that filesystems with "81k 1KB files" are indeed common.


If that were true, surely it would make sense to demonstrate this directly rather than with a contrived benchmark? The issue is not the preponderance of small files but rather the distribution of data shapes.
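A direct demonstration could look something like this (a minimal sketch; the root path and bucket boundaries are arbitrary): measure the actual size distribution of a real tree instead of synthesizing one.

    # Sketch: histogram of file sizes under a real directory tree, to
    # check whether "lots of ~1KB files" actually describes the workload.
    # The default root and the bucket edges are arbitrary placeholders.
    import os, sys
    from collections import Counter

    def size_histogram(root):
        buckets = Counter()
        edges = [1 << 10, 1 << 12, 1 << 16, 1 << 20]  # 1KB, 4KB, 64KB, 1MB
        labels = ["<=1KB", "<=4KB", "<=64KB", "<=1MB", ">1MB"]
        for dirpath, _, names in os.walk(root):
            for name in names:
                try:
                    size = os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    continue  # skip broken symlinks, vanished files, etc.
                for edge, label in zip(edges, labels):
                    if size <= edge:
                        buckets[label] += 1
                        break
                else:
                    buckets[">1MB"] += 1
        return buckets

    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for label, count in size_histogram(root).items():
        print(f"{label:>7}: {count}")

If the resulting distribution on a representative filesystem matches the benchmark's, the benchmark is relevant; if not, the objection stands.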



