(Caveat: I know this about SQL Server, not Postgres.)
If you use random UUIDs (as opposed to sequential UUIDs) for your primary key, your database will spend extra time on every insert reorganizing the PK index on disk, because new keys land at random positions and force page splits. This bit us at Stack Overflow. So remember: just because you can do something doesn't mean you should.
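If you still want UUIDs, the usual workaround is a time-prefixed ("COMB"-style) UUID so that new keys always sort after existing ones. A minimal sketch in Ruby, just to show the idea (the helper name is my own, and this is not SQL Server's NEWSEQUENTIALID()):

    require 'securerandom'

    # Rough "COMB"-style sequential UUID: the leading bytes come from a
    # millisecond timestamp so new keys sort after existing ones; the rest
    # is random. Illustration only.
    def sequential_uuid
      ts     = (Time.now.to_f * 1000).to_i.to_s(16).rjust(12, '0') # 12 hex chars
      random = SecureRandom.hex(10)                                # 20 hex chars
      raw    = ts + random                                         # 32 hex chars
      [raw[0, 8], raw[8, 4], raw[12, 4], raw[16, 4], raw[20, 12]].join('-')
    end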
Yes, what you said also applies to MySQL/InnoDB and MongoDB. Someone ran an experiment with MongoDB a while back comparing various id schemes: http://i.imgur.com/clm9D.png (https://groups.google.com/d/topic/mongodb-user/1gPqVmFHExY/d...). I don't know whether Postgres behaves differently, but people should do their own testing instead of blindly following advice from the Internet.
As pallinder said, it can be very handy: the IDs can be generated by the nodes, not the db server. Very useful in disconnected environments. Imagine being able to create data on a smartphone whilst sitting on the plane, and not having to do anything messy with ID replacement when you sync with the server in the office.
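For instance, a Rails-flavoured sketch of client-side id generation (the Document model and its attributes are made up, and assume a uuid primary key column):

    require 'securerandom'

    class Document < ActiveRecord::Base
      # Give the record its final primary key on the client, before it has
      # ever talked to the server; syncing later is just an ordinary insert.
      before_create do
        self.id ||= SecureRandom.uuid
      end
    end

    # Created offline on the phone; the id never has to be rewritten on sync.
    doc = Document.new(title: "written on the plane")
    doc.save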
The cost? The keys are larger, and (unless you use a sequential algorithm) they are poor candidates for clustered keys, because they force page splits. The impact can be rather large (this led to terrible performance in early versions of SharePoint).
It's also useful if you don't want your URLs to expose how many of a certain thing you have, whether that's users, posts, payments, etc. A lot of sites let you derive how much activity they have from how fast their numeric IDs increment. You could use a separate token alongside the pkey to do the same thing, but this feature just makes it simpler.
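The "separate token" alternative looks roughly like this in Rails (the Post model and public_token column are my own invention, just for illustration):

    require 'securerandom'

    class Post < ActiveRecord::Base
      # Keep the integer primary key internally, but expose an opaque token
      # in URLs so the id sequence doesn't reveal how many posts exist.
      before_create { self.public_token ||= SecureRandom.hex(8) }

      def to_param
        public_token
      end
    end

    # In the controller, look the record up by the token instead of the id:
    #   Post.find_by!(public_token: params[:id])

With a UUID primary key you get the same effect without the extra column.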
Imagine a distributed system where you want to preserve uniqueness across the board. Using a UUID more or less guarantees this, by the sheer number of possible values.
UUID v1 uses the MAC address of the computer doing the generation as a part of the UUID, which ensures uniqueness so long as you aren't cloning MAC addresses in your infrastructure.
The downside of this is that it can leak information about the machine that generated the UUID, but if you require deterministic uniqueness, there you go.
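You can see the difference yourself with the third-party uuidtools gem (not part of Rails; mentioned here only as an example):

    require 'uuidtools'

    # Version 1: timestamp + clock sequence + node (normally the MAC address).
    puts UUIDTools::UUID.timestamp_create
    # The last 12 hex digits are the node field, i.e. the generating machine's MAC.

    # Version 4: entirely random; nothing machine-identifying leaks.
    puts UUIDTools::UUID.random_create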
At the DB level, it facilitates master-master replication set-up, eliminating the auto_increment collision problems.
Master-master replication, in turn, allows building distributed applications that can handle net splits reasonably well.
For this to work, the PG role that the Rails app is using has to be a superuser, as AFAIK only superuser roles can execute CREATE EXTENSION.
Does anyone consider this an issue? I have been using non-superuser roles within my Rails apps, and using an outside superuser role to add extensions with an external tool (like pgAdmin).
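For reference, the two approaches look like this (the migration name is mine; enable_extension is the Rails migration helper that wraps CREATE EXTENSION):

    class EnableUuidOssp < ActiveRecord::Migration
      def change
        # Issues CREATE EXTENSION IF NOT EXISTS "uuid-ossp"; the role running
        # the migration needs the privilege to create extensions.
        enable_extension 'uuid-ossp'
      end
    end

    # Alternative: have a superuser run it once outside the migrations
    # (e.g. from psql or pgAdmin):
    #   CREATE EXTENSION IF NOT EXISTS "uuid-ossp";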
After reading the documentation more closely (http://edgeapi.rubyonrails.org/classes/ActiveRecord/Batches.....), it looks like this feature may need a patch. The batching process forces the database to query the primary keys in ascending order, so as long as you generate new UUIDs in ascending order you should be fine. Problems look likely, though, if you create a new record whose UUID falls randomly between two existing keys. I will see if I can bang something out and issue a pull request.
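To make the concern concrete, this is roughly what the batching API does under the hood (User is just a placeholder model):

    # find_each / find_in_batches walk the table by primary key, roughly:
    #
    #   SELECT * FROM users WHERE id > $last_seen_id ORDER BY id ASC LIMIT 1000
    #
    # With an integer or time-ordered key that cursor is stable; with random
    # UUIDs, a row whose key falls before the cursor will be missed by a batch
    # run that is already past that point.
    User.find_each(batch_size: 1000) do |user|
      # ... process user ...
    end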