Which reminds me to wonder: does anybody know why the cost model uses constants from a config file? Compared to all the amazing things that Postgres does, "measure the cost of a sequential read and a random read at runtime, then use those measured values" seems like it should be pretty easy...
There are newer/commercial databases that take that sort of microbenchmarking approach. My guess is that no one has implemented it for mainline Postgres; maybe there's a fork that does, or it's achievable through some pluggable hook. I'm not sure whether it's come up on the developer mailing lists, but it's worth a look.
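For what it's worth, the constants in question are planner GUCs like seq_page_cost (default 1.0) and random_page_cost (default 4.0) in postgresql.conf. Below is a minimal sketch of the measure-at-runtime idea, assuming POSIX pread and an arbitrary scratch file (path, file size, and sample count are all illustrative, not anything Postgres does). It also hints at why this is harder than it looks: the OS page cache distorts naive timings, so a serious version would need O_DIRECT or a probe file much larger than RAM.

```python
# Sketch: time 8 kB page reads sequentially and at random offsets,
# then derive a candidate random_page_cost relative to seq_page_cost = 1.0.
# This is an illustration of the idea, not how Postgres calibrates anything.
import os
import random
import time

PAGE = 8192                       # Postgres's default block size
PATH = "/tmp/pagecost_probe.bin"  # illustrative scratch file
PAGES = 16384                     # 128 MB probe file
SAMPLES = 2048

# Build the probe file once.
if not os.path.exists(PATH) or os.path.getsize(PATH) != PAGE * PAGES:
    with open(PATH, "wb") as f:
        f.write(os.urandom(PAGE * PAGES))

def timed_reads(offsets):
    """Return total seconds spent reading one page at each offset, in order."""
    fd = os.open(PATH, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for off in offsets:
            os.pread(fd, PAGE, off)
        return time.perf_counter() - start
    finally:
        os.close(fd)

seq_offsets = [i * PAGE for i in range(SAMPLES)]
rand_offsets = [random.randrange(PAGES) * PAGE for _ in range(SAMPLES)]

seq_t = timed_reads(seq_offsets)
rand_t = timed_reads(rand_offsets)

# Caveat: if the file fits in the page cache, both timings collapse toward
# memory speed and the ratio is meaningless.
ratio = rand_t / seq_t
print(f"sequential: {seq_t / SAMPLES * 1e6:.1f} us/page")
print(f"random:     {rand_t / SAMPLES * 1e6:.1f} us/page")
print(f"suggested:  SET random_page_cost = {ratio:.2f};  -- with seq_page_cost = 1.0")
```

Even this toy version shows the practical wrinkle: the "right" ratio depends on cache state and concurrent load at measurement time, which may be part of why the values live in a config file where a DBA can pin them.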