If this will read a CSV that has columns with mixed integers and nulls without converting all of the numbers to float by default, it will replace pandas in my life. 99% of my problems with pandas arise from ints being coerced into floats when a null shows up.
The problem is not that it can't be done; it's that I'll read one dataset and write a script that behaves as expected (using `head` here and there to check things as the script progresses), then come back to it later with a new dataset that now has nulls mixed in with the numbers. The script starts behaving differently or breaks in a subtle way, and it's not always obvious why. After lots of experience, I have learned to check for int mangling every time a new DataFrame is read or two DataFrames are merged. It is enough of a frustration that I am willing to look for a viable alternative, because I think it's a bit absurd that Int64 isn't the default for columns that are clearly meant to be integers mixed with nulls, or that I can't set a flag to tell pandas to stop int mangling.
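For anyone who hasn't hit this, here's a minimal sketch of the failure mode (the column names and values are made up for illustration). The same column reads as int64 or float64 depending on whether the incoming file happens to contain a null, and outer merges do the same thing:

```python
import io
import pandas as pd

# Same schema, but the second file has a missing value in `score`.
clean = "id,score\n1,10\n2,20\n"
dirty = "id,score\n1,10\n2,\n"

print(pd.read_csv(io.StringIO(clean))["score"].dtype)  # int64
print(pd.read_csv(io.StringIO(dirty))["score"].dtype)  # float64 -- silently coerced

# Merges mangle ints the same way: unmatched rows introduce NaN,
# which drags previously-int columns to float64.
left = pd.DataFrame({"key": [1, 2], "a": [10, 20]})
right = pd.DataFrame({"key": [2, 3], "b": [30, 40]})
merged = left.merge(right, on="key", how="outer")
print(merged.dtypes)  # `a` and `b` are both float64 now

# The opt-in workaround: the nullable extension dtype keeps the
# integers and stores the hole as pd.NA instead of NaN.
fixed = pd.read_csv(io.StringIO(dirty), dtype={"score": "Int64"})
print(fixed["score"].dtype)  # Int64
```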
It's not absurd that Int64 isn't the default, because:
1. nullable Int64 was only implemented recently, is still experimental, and changing defaults can break lots of existing code
2. implementing nullable Int64 was a very non-trivial exercise, because pandas was mostly built on top of numpy, which didn't (and still doesn't) have nullable integer arrays (see the snippet below)
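To make point 2 concrete: numpy's only missing-value marker, NaN, is itself a float, so a numpy integer array can't hold one. pandas' Int64 works around this with a separate extension array (roughly, an integer buffer plus a boolean validity mask) built outside numpy. A quick demonstration:

```python
import numpy as np
import pandas as pd

# numpy: NaN is a float, so mixing it with integers promotes the whole
# array to float64. numpy itself has no nullable integer dtype.
arr = np.array([1, 2, np.nan])
print(arr.dtype)  # float64

# pandas: Int64 is an extension array layered on top of numpy, so it
# can hold real ints and a missing value side by side.
nullable = pd.array([1, 2, None], dtype="Int64")
print(nullable.dtype)  # Int64
print(nullable[2])     # <NA>
```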
I disagree that those things make it not absurd. The current behavior is a surprise when you first discover it and keeps biting you long after. My point isn't that the default should be changed now; it's that the current behavior should never have existed in the first place.
I understand the technical reasons; I've researched them myself. Knowing them does literally nothing to ease the frustration or convince me not to look for an alternative.