
I regularly use DuckDB on datasets of 1B+ rows, with nasty string columns that can exceed 10MB per value in the outliers. Mostly it just works, and fast too! When it doesn't, I'll usually just dump to Parquet and hit it with Spark SQL, but that's the exception rather than the rule.
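
For anyone curious what that handoff looks like, here's a minimal sketch in Python. The file name 'events.parquet' and the 'payload'/'id' columns are made up for illustration, not from my actual setup:

    import duckdb

    con = duckdb.connect()

    # Common case: query the big dataset directly in DuckDB.
    # strlen() counts bytes, which is what matters for outlier values.
    con.sql("""
        SELECT id, strlen(payload) AS payload_bytes
        FROM 'events.parquet'
        ORDER BY payload_bytes DESC
        LIMIT 10
    """).show()

    # Fallback: export the working set to Parquet so Spark SQL
    # (or anything else) can take over.
    con.sql("""
        COPY (SELECT * FROM 'events.parquet' WHERE strlen(payload) > 10000000)
        TO 'big_payloads.parquet' (FORMAT PARQUET)
    """)

The nice part is that both engines speak Parquet, so the fallback is just writing a file and pointing Spark at it.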




