Better to deal with the exceptions up-front than to discover months down the line that your database has been silently chopping long strings, casting types, and coercing values into useless junk.
That may be, but when an application developer is in the middle of building something, their primary need is to get data into the database. That may be right or wrong[1], but it has a big impact on the adoption of particular database systems and practices.
And it's somewhat irrelevant to this discussion, because there are many ways to feed the data you want into postgres (declaring columns BYTEA, TEXT, or HSTORE, for instance), no matter how "bad" it is. I would argue that it's much easier to deal with "bad" data in postgres than in mysql, because you have so much flexibility: write a function inside postgres in PL/Python to extract some meaning from the ill-specified data, or write some triggers to do post-processing (rough sketch below).
[1] I personally think that a misalignment of incentives is responsible for the widespread focus on the ease of writing into a database without considering the utility of the data once it's in there.
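To make that concrete, here's a rough, untested sketch of the trigger idea. All table, column, and function names (raw_events, payload, amount_cents, clean_amount) are made up for the example, and it uses plpgsql rather than PL/Python, but the pattern is the same:

    -- Accept whatever the application sends as TEXT, and derive a typed
    -- column from it instead of rejecting the insert.
    CREATE TABLE raw_events (
        id           bigserial PRIMARY KEY,
        payload      text,     -- ill-specified input, stored verbatim
        amount_cents integer   -- derived; NULL when unparseable
    );

    CREATE OR REPLACE FUNCTION clean_amount() RETURNS trigger AS $$
    BEGIN
        -- Pull the first number like "12.34" out of the free-form payload;
        -- leave amount_cents NULL rather than failing the insert.
        NEW.amount_cents :=
            substring(NEW.payload from '(\d+\.\d\d)')::numeric * 100;
        RETURN NEW;
    EXCEPTION WHEN others THEN
        RETURN NEW;  -- never let bad input block the write
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER clean_amount_trg
        BEFORE INSERT OR UPDATE ON raw_events
        FOR EACH ROW EXECUTE PROCEDURE clean_amount();

The point being: the verbatim payload is never lost, so if your extraction logic improves later you can re-run it over everything you've already stored.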