I am just a user and occasional patch submitter.

With CouchDB, you front-load all of your disappointment. In exchange, everything that CouchDB can do has compelling big-O performance. For example, all queries finish in logarithmic time, including one-to-many, one-to-one, and merge-joins. Map-reduce is not a job you run; it is a living data set that always exists and always reflects the latest changes to your data. (Updating a map-reduce result takes linear time in the number of updates, if I recall correctly.)
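
To make that concrete, here is a minimal sketch against CouchDB's HTTP API; the database name (mydb) and the document fields (type, customer_id, total) are invented for illustration. You define the view once in a design document, and CouchDB maintains its result from then on:

    import requests

    # Hypothetical design document. CouchDB stores the map/reduce
    # source with the database and keeps the view's B-tree current
    # incrementally as documents change.
    ddoc = {
        "_id": "_design/orders",
        "views": {
            "total_by_customer": {
                "map": (
                    "function (doc) {"
                    "  if (doc.type === 'order') {"
                    "    emit(doc.customer_id, doc.total);"
                    "  }"
                    "}"
                ),
                "reduce": "_sum",  # built-in reduce: sum the values
            }
        },
    }
    requests.put("http://localhost:5984/mydb/_design/orders", json=ddoc)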

Plus, the BigCouch builds allow you to specify your redundancy needs. The preceding paragraph still holds true. Nothing has changed. You just get to throw hardware at the problem to guard against machine failures.
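
By way of illustration only (host and database names invented, and assuming BigCouch's extra query parameters): you choose the replica count n and shard count q when you create the database, and can tune read/write quorums r and w per request.

    import requests

    base = "http://db1.example.com:5984"

    # 8 shards, each stored on 3 nodes.
    requests.put(f"{base}/mydb", params={"n": 3, "q": 8})

    # Write returns after 2 replicas acknowledge; read requires
    # agreement from 2 replicas.
    requests.put(f"{base}/mydb/doc1", params={"w": 2}, json={"hello": "world"})
    requests.get(f"{base}/mydb/doc1", params={"r": 2})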

CouchDB is slow. Its VM is pokey. Its disk format is bulky. Its protocol is bloated.

CouchDB is fast. Everything that you can do, you can do in logarithmic time.
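
That is the point of the always-materialized views: a lookup is a B-tree descent. Continuing the hypothetical view above, fetching one customer's total never scans the data set:

    import requests

    # Keyed query against the reduced view: a logarithmic B-tree
    # walk, not a scan, no matter how many documents exist.
    # Note: view keys are passed JSON-encoded.
    r = requests.get(
        "http://localhost:5984/mydb/_design/orders/_view/total_by_customer",
        params={"key": '"customer-42"', "group": "true"},
    )
    print(r.json())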

CouchDB is neither slow nor fast, but predictable. Fun fact: the entire CouchDB Erlang code base is almost the same size as the NodeJS standard library (roughly 20k lines of apples vs. 15k lines of oranges).

To answer your question, snappy compression and view optimizations will be a welcome boost for the other speed question: speed of development, time to market. If you think a compile step is a time sink, rebuilding an index over all of your data is just untenable. So, the optimizations will improve the day-to-day experience, but they will not change CouchDB's fundamental value proposition.
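
If you want to try the compression setting yourself, a minimal sketch, assuming the 1.2-era _config API and a local server:

    import requests

    # Hedged: in the 1.x line, file compression is a live server
    # setting; "snappy" trades a little CPU for smaller .couch files.
    requests.put(
        "http://localhost:5984/_config/couchdb/file_compression",
        json="snappy",
    )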



