Glad it works for you, but this sounds like bad advice for most people. Why switch to an outlier toolchain for some percentage increase in performance and raw data capacity, when you admit yourself that many Big Data sets won't fit there anyway? No computer on my desk has less than 32GB of RAM now -- pandas works well with that.
I think neither "outlier toolchain" nor "some percentage increase" is fair. This benchmark [0] shows significant speedups while lowering memory needs. You still need to reach for dask/spark for really big data, where you need a cluster of beefy machines for your tasks.
If you use an r5d.24xlarge-like[1] instance, you can skip spark/dask for most workflows, as 768 GB is plenty. On top of that, polars will efficiently use the 96 available cores when you're computing your joins, groupbys, etc.
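A minimal sketch of what that looks like in polars' lazy API (file and column names are made up for illustration): the scan, join, and groupby below all run multithreaded across available cores when the plan is collected.

    import polars as pl

    users = pl.scan_csv("users.csv")    # lazy: nothing is read yet
    events = pl.scan_csv("events.csv")

    result = (
        events
        .join(users, on="user_id")      # multithreaded hash join
        .group_by("country")            # parallel groupby
        .agg(pl.col("amount").sum().alias("total_amount"))
        .collect()                      # the query plan executes here
    )
    print(result)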
I think it depends on whether you use it for operations or for data analysis. Speed is only one concern, and it isn't always the most relevant one.
A statistician or data scientist wrangling data and making plots won't care whether loading a CSV file takes one second or one microsecond, because they may only do it a handful of times over a project.
A data engineer has different requirements and expectations. They may need to implement an operational component that processes CSV files billions of times a day.
If your use case is the latter, then pandas is probably not for you.
I love pandas and work with quite small datasets for EDA (10^(3..6) rows most of the time), but even then I run into slowdowns. I don't really mind, as I'm pursuing my own research rather than satisfying an employer/client, and figuring out why something is slow often turns into a useful learning experience (the canonical rite of passage for new pandas users is switching from explicit loops to lambda functions with df.apply; see the sketch below).
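A toy illustration of that rite of passage, with made-up column names: three ways to compute the same derived column, roughly from slowest to fastest. The usual next lesson is that df.apply is itself slow compared to vectorized operations.

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({
        "price": np.random.rand(100_000),
        "qty": np.random.randint(1, 10, 100_000),
    })

    # 1. Explicit Python loop: slowest, one Python-level iteration per row.
    totals = []
    for _, row in df.iterrows():
        totals.append(row["price"] * row["qty"])
    df["total_loop"] = totals

    # 2. df.apply with a lambda: tidier, but still a Python call per row.
    df["total_apply"] = df.apply(lambda r: r["price"] * r["qty"], axis=1)

    # 3. Vectorized: the whole column at once in C, orders of magnitude faster.
    df["total_vec"] = df["price"] * df["qty"]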
I've definitely procrastinated on some analyses, or on turning prototypes into dashboards, because of the potential for small slowdowns to turn into big ones, so it's nice to have other options available. I'm very interested in Dask but have also been apprehensive about doing something stupid and incurring a huge bill by failing to think my problem through.
Some percentage? I haven't used polars, but I ditched pandas after seeing a 20x speedup from just rewriting a piece of code against plain csv.reader. Pandas is unacceptably slow for medium data, period.
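For anyone curious, a sketch of the kind of rewrite I mean (file name and column position are made up): if all you need is one streaming pass, the stdlib never builds a DataFrame at all.

    import csv

    total = 0.0
    with open("transactions.csv", newline="") as f:
        reader = csv.reader(f)
        next(reader)                # skip the header row
        for row in reader:          # rows are lists of strings
            total += float(row[2])  # assume column 2 holds the amount
    print(total)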