Postgresql 9.3 Released (postgresql.org)
437 points by ergo14 on Sept 9, 2013 | 138 comments



I'm pretty sure I've posted this in every pg 9.3 thread to date, but for application developers: don't miss all the new diagnostic fields on PGresult[0]. You can now get (when applicable to the error) the relevant schema, table, column, column type and constraint[1], in order to translate error information back into your application domain and generate logging and error messages which make sense to developers & end users.

It's already available in psycopg2 (since 2.5)[2] and ruby-pg (in 0.16)[3], it may be available in other adapters (outside of libpq obviously).

[0] http://www.postgresql.org/docs/9.3/static/libpq-exec.html#LI...

[1] without having to perform string munging on localized error messages, which probably don't include half of that information anyway

[2] http://psycopg.lighthouseapp.com/projects/62710/tickets/149

[3] https://bitbucket.org/ged/ruby-pg/issue/161/add-support-for-...
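
To make that concrete (the schema, table and constraint names below are made up for illustration), this is the kind of failure the new fields describe:

  -- Hypothetical schema, purely illustrative.
  CREATE TABLE users (
      id    serial PRIMARY KEY,
      email text NOT NULL,
      CONSTRAINT users_email_key UNIQUE (email)
  );

  INSERT INTO users (email) VALUES ('a@example.com');  -- ok
  INSERT INTO users (email) VALUES ('a@example.com');  -- fails: unique_violation

  -- With 9.3 the error carries structured fields alongside the message,
  -- e.g. schema_name = 'public', table_name = 'users' and
  -- constraint_name = 'users_email_key', which adapters can expose
  -- (psycopg2 2.5 puts them on the exception's diag attribute), so the
  -- application can map the failure to a form field without parsing text.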


diag stuff is also supported by psycopg2cffi (the postgresql adapter for use with PyPy)


Any idea if it is feasible/plausible/possible for postgresql to return multiple errors instead of just one? The only reason I can see to have to do validation in the app now is so you can report 5 validation errors at once, rather than make the user submit the form and correct one field five times.


Not that I know of; you get a single PGresult which may be in an error state and have a bunch of error fields set. I don't think there is a way to get multiple errors out of it.

But I'd suggest asking on the mailing list about the possibility of getting multiple errors in the future. Not sure that makes sense from the database's point of view though: they're not "validation" issues as far as it's concerned, they're integrity errors; a query is putting the db in an invalid state and is thus rejected. Because the state is invalid, I'm not sure how the db would progress to get more errors.


Then don't call them errors. Call them "speculative problems" or something.


Lying about what they are doesn't change the underlying issue:

> they're integrity errors; a query is putting the db in an invalid state and is thus rejected. Because the state is invalid, I'm not sure how the db would progress to get more errors.


I agree with you that it's logically insensible to consider them "errors" because the resolution of the current constraint may change the downstream interpretation.

However it's a tremendously useful development tool.


> error messages which make sense to developers & end users.

Giving this sort of detail directly to users is generally a bad idea for two reasons:

1. It can scare the living bejesus out of some of them.

2. You are handing out knowledge of your application's inner workings. If you have an undiscovered injection route, a malicious entity might be able to use your error messages to find the injection flaw and/or find things to do with it once found.

This sort of detail should be logged of course, but give the user a simple code to quote when reporting the problem ("please quote issue XY0009 when contacting support about this issue") and record the detail against that for your reference.


I think you missed the whole point, which is to not give these details directly to the user: if you were a crazy pervert you could already do that by sending them Postgres's generated error messages.

By having error details in a programmatic format, you can now generate messages which are actually useful and sensible to developers and/or end users if you wish to. Something which you could not do before.


I'm not questioning the enhancement generally: it sounds invaluable for diagnostic purposes for developers (and potentially 3rd line support), far easier than manually plucking details out of an arbitrary error string (or not getting any specific information at all).

Using the information to give a user more indication of whether they should try again or report the issue and wait would be useful and safe, but if you are trying to report to the user "that code already exists, they must be unique" or "name missing, it must be provided" then you have failed input validation as that sort of thing should be picked up on much earlier in a request lifecycle IMO.

I'm surprised how often I still see a full exception report echoed all the way from the data layer of an application up to my browser.


> if you are trying to report to the user "that code already exists, they must be unique" or "name missing, it must be provided" then you have failed input validation as that sort of thing should be picked up on much earlier in a request lifecycle IMO.

1. Not necessarily: most applications could use a well-designed schema to let the database do these validations and send improved results back to the user; it avoids duplicating (or triplicating) validation information. Especially for things like UNIQUE constraints: are you going to hit the database in-application when the DB will do it anyway? Unless it skips a significant amount of application work, it's a complete waste of developer time (and likely application time as well).

2. A good schema can do significantly more complex, interesting and expensive (especially in-application if they need database data) validations than a mere NOT NULL check.

3. In-transaction issues can't be caught by the application server, they'll only blow up in the database.

> I'm surprised how often I still see a full exception report echoed all the way from the data layer of an application up to my browser.

Thus confirming that you're completely missing the point. Again, if one wanted to send database error messages to the end user one could already easily do so.


"[if you get a unique constraint violation] ... then you have failed input validation"

Due to concurrency, it's almost impossible to enforce unique constraints properly without actually executing the insert/update. Sometimes you can catch it ahead of time, but it's not guaranteed.

Knowing precisely which constraint failed helps improve the error message you give to the user, which may allow them to correct the problem. For instance, if you have two unique constraints involved in a transaction, your application can figure out which one was violated, and you can use that to give the user directions to correct it (e.g. "choose a different username" versus "that email address already has an account here, click here to send you the username").


You can't catch all errors in the application server. Data in transactions that haven't committed yet isn't available (usually) to the application servers.


> if you are trying to report to the user "that code already exists, they must be unique" or "name missing, it must be provided" then you have failed input validation as that sort of thing should be picked up on much earlier in a request lifecycle IMO.

"Name missing, it must be provided" is an internal validation of the entity which could, in principle, be done reliably in the application; "that code already exists, they must be unique" is a validation against other entities in the DB and therefore could not reliably be done anywhere except in the DB server itself if you have any concurrency.


I am trying to see how input validation addresses cases where the constraint is based on what else is in the database. It is conceptually much simpler and less error prone to let that class of issues be enforced only by the database rather than asking the db first "is this ok" and if so then doing another round trip to insert it (to be safe you'd have to lock the table).

Now, PostgreSQL can do any data validation you would want to do in the application, and it can do so at least partially declaratively. You can then pass back messages that the application can use in error handling.
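
A rough sketch of what that can look like (all names made up): a simple rule goes in as a CHECK constraint, and a rule that needs to look at other rows can live in a trigger that raises a structured, catchable error:

  -- Hypothetical schema, illustration only.
  CREATE TABLE bookings (
      id        serial PRIMARY KEY,
      starts_at timestamptz NOT NULL,
      ends_at   timestamptz NOT NULL,
      CONSTRAINT bookings_dates_check CHECK (starts_at < ends_at)
  );

  -- A validation that needs database data; the error code and hint come back
  -- to the application just like any other constraint violation.
  CREATE FUNCTION reject_overlapping_booking() RETURNS trigger AS $$
  BEGIN
      IF EXISTS (SELECT 1 FROM bookings b
                 WHERE b.id <> NEW.id
                   AND (NEW.starts_at, NEW.ends_at) OVERLAPS (b.starts_at, b.ends_at)) THEN
          RAISE EXCEPTION 'booking overlaps an existing one'
                USING ERRCODE = 'exclusion_violation',
                      HINT = 'pick a different time slot';
      END IF;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER bookings_no_overlap
      BEFORE INSERT OR UPDATE ON bookings
      FOR EACH ROW EXECUTE PROCEDURE reject_overlapping_booking();
  -- (A production overlap guard would use an exclusion constraint or locking;
  -- this is just a sketch of the approach.)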

I am actually working on frameworks which essentially delegate most of this to PostgreSQL. It's a very powerful approach but it means the db just doesn't trust the application.


> It's a very powerful approach but it means the db just doesn't trust the application.

Which has its advantages. The DB essentially becomes an API/service used by the application rather than an integral part thereof, and thus multiple applications can be cleanly plugged into the database instead of an "owner" application providing a service backed by the owned DB.


> The DB essentially becomes an API/service used by the application....

Exactly, which is what Martin Fowler is pushing with NoSQL... The idea that many people have been using RDBMS's as encapsulated databases for decades is usually lost on the NoSQL marketing though.


You can translate an error into a user-friendly message much more easily when the source error is structured. You cannot do that as easily with an arbitrary error string.


>"name missing, it must be provided" then you have failed input validation as that sort of thing should be picked up on much earlier in a request lifecycle IMO.

Why? What is wrong with letting it get picked up where you already have the validation anyway, instead of repeating the validation in two different languages?


What if there's two problems in the form? You generally just get one error from the database at a time.


That's really the only reason I can see to do it. But I can see plenty of cases where I don't care about that, so I don't think "you failed input validation" is at all accurate. The consequences are a more annoying UX, assuming it is something a user is interacting with. That doesn't sound like outright failure.


Sure, but it's got to be better than the error you get from JDBC, which will be 200 lines long and not bother to include the SQL being executed, nor the username nor the database, making it utterly useless to anyone.


I'm not questioning the feature as a diagnostics tool, but the post I replied to seemed to be suggesting giving the error details out fairly directly to the user, which is not good security practice (IMO). That 200-line JDBC error should not be going out to the end user at all either, especially not if it did include the SQL being executed or, worse still, the db/authentication details (db name, user name).


> the post I replied to seemed to be suggesting giving the error details out fairly directly to the user

I can't fathom how you'd come to such a conclusion from:

> translate error information back into your application domain and generate logging and error messages which make sense to developers & end users.

which describes more or less the opposite of "giving the error details out fairly directly".


Depends on who you're targeting (developers know how to make sense of types etc) and how well you translate the error info to human-readable error messages.

More information should allow you to produce better reporting. It's also not necessary to present all the information to your users, you can also make better error logs that users can send to the developers.


> Depends on who you're targeting (developers

By all means have a developer mode which does hand out the information more readily (though I tend to be wary of that in case it gets left on in production, and because it is an extra code path that needs to be properly tested), or report the detail directly if it is an internal tool. But the post I was referring to implied (to my mind) handing the detail out to end users (and by "end users" I mean the untrustworthy mob that is the general public).

> It's also not necessary to present all the information to your users, you can also make better error logs that users can send to the developers.

Hence my last para, where I suggest giving the user a reference to the detail without giving them the actual details (the code to give to you for looking up the stored exception report). Giving all the info to the user as an encrypted package would work too (in that case "please quote this code in any issue reports" becomes "please copy+paste this block of text into any problem report").


One option is to log everything but display something simple to the user.


This one is my favorite:

"Prevent non-key-field row updates from blocking foreign key checks (Álvaro Herrera, Noah Misch, Andres Freund, Alexander Shulgin, Marti Raudsepp, Alexander Shulgin)

This change improves concurrency and reduces the probability of deadlocks when updating tables involved in a foreign-key constraint. UPDATEs that do not change any columns referenced in a foreign key now take the new NO KEY UPDATE lock mode on the row, while foreign key checks use the new KEY SHARE lock mode, which does not conflict with NO KEY UPDATE. So there is no blocking unless a foreign-key column is changed."
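
Roughly what that means in practice, with made-up tables and two concurrent sessions:

  -- Hypothetical schema.
  CREATE TABLE customers (id int PRIMARY KEY, balance numeric);
  CREATE TABLE orders    (id int PRIMARY KEY,
                          customer_id int REFERENCES customers (id));
  INSERT INTO customers VALUES (1, 0);

  -- Session A: updates a non-key column of the referenced row.
  -- In 9.3 this takes the new, weaker FOR NO KEY UPDATE row lock.
  BEGIN;
  UPDATE customers SET balance = balance + 10 WHERE id = 1;

  -- Session B: the foreign-key check behind this insert takes FOR KEY SHARE,
  -- which does not conflict with FOR NO KEY UPDATE, so it proceeds at once.
  -- Before 9.3 the update held a full row lock and this insert blocked on it.
  BEGIN;
  INSERT INTO orders VALUES (1, 1);
  COMMIT;

  -- Session A, later:
  COMMIT;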


For those unfamiliar with the issue, you can see it in action on my blog post written from the perspective of an app developer: http://mina.naguib.ca/blog/2010/11/22/postgresql-foreign-key...



(FirstName, LastName) is probably not a very good UNIQUE KEY.



Thanks Joel (I assume you are Joel at Trustly) for helping to get work started on fixing this source of deadlocks. I believe your company also helped fund some of the work on the fix. I have been bitten by a race condition caused by this myself.

EDIT: And even more thanks to Álvaro and the other developers working on it.


Another year, another awesome release.

Thank you so much, PostgreSQL team!

As always, there's one feature that makes me want to upgrade immediately. This time it's the NO KEY UPDATE lock mode which will greatly improve the performance of our main application during its lengthy importer runs.

Using pg_upgrade, switching major releases has become comparatively simple over the last few years, though after last year's issue in 9.2.0 (some IN() queries didn't return correct results), I'm inclined to wait for 9.3.1 this time around.


Fully agreed, a very awesome release, and there is no shortage of new features in this version. Of note:

  * The improvements to foreign data wrappers, which can now be writable, along with the new Postgres FDW, should raise the visibility of the functionality
  * Lateral joins 
  * Materialized views
  * The checksums for checking against corruption
We've elaborated on some of our favorites over at Heroku Postgres - https://postgres.heroku.com/blog/past/2013/9/9/postgres_93_n... - along with how you can provision a 9.3 instance on us to begin using it right away. And of course you can always see the full what's new on the wiki: http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.3


Some excellent new features in 9.3. My two favorite features are 1) not having to configure SysV shared memory anymore, removing an installation step, and 2) LATERAL JOIN, which makes it easy to join against a set-returning function (the subquery usage seems less useful in practice). See What's New for examples of how LATERAL can be used.
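
A small made-up example of the set-returning-function case: LATERAL lets the function on the right refer to columns from the table on the left.

  -- Hypothetical table: split each row's tag array into one row per tag.
  CREATE TABLE posts (id int PRIMARY KEY, tags text[]);
  INSERT INTO posts VALUES (1, ARRAY['pg','release']), (2, ARRAY['lateral']);

  SELECT p.id, t.tag
  FROM posts p,
       LATERAL unnest(p.tags) AS t(tag);
  -- id | tag   (row order not guaranteed)
  --  1 | pg
  --  1 | release
  --  2 | lateral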


Yes, it's interesting that they haven't mentioned removing the need to configure SysV shared memory. In my experience, a good number of developers initially miss out on this config / kernel setting, and wonder about their performance issues.

This feature will notably increase usability for people who are just getting started on Postgres. Now, I just wish they'd up the default config value from 32MB to something more in line with today's systems.


The default is determined by initdb, but was capped at 32MB due to the above issue. With this change, the cap's been bumped to 128MB.


Love the way they are heading with the JSON stuff http://michael.otacoo.com/postgresql-2/postgres-9-3-feature-...


I expect to get a lot of use out of the ability to convert JSON into row types.
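
For anyone who hasn't seen it, it looks roughly like this (made-up row type and data), via json_populate_recordset:

  CREATE TYPE person AS (name text, age int);

  SELECT *
  FROM json_populate_recordset(
           null::person,
           '[{"name":"Ann","age":34},{"name":"Bob","age":40}]'
       );
  --  name | age
  --  Ann  |  34
  --  Bob  |  40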



Cute: "Incidentally, this also makes it possible to set up 'cycles' where replication is going in a circle. Whether that's a feature or a bug depends on your perspective."

Is there any context in which it would make sense to set up circular replication? This would necessitate that all of your nodes be read-only, so I'm not sure what there would be to replicate.

In any case, a bunch of these new features are hot. Particularly excited about fast failover and custom background workers.


> Is there any context in which it would make sense to set up circular replication?

It depends on whether each node in the loop knows that it has already seen the replication data before, and thus logs and ignores it. That's basically how MySQL (multi-master) replication loops work.


But there's no value in closing the loop. If postgres circular replication works this way, then a -> b -> a works the same way as a -> b, so there's no new capability here.


I was excited to see this:

"Automatically updatable VIEWs"

But then read this:

https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_...

"Simple views can now be updated in the same way as regular tables. The view can only reference one table (or another updatable view) and must not contain more complex operators, join types etc."

As primarily a database developer, this is useless to me. I'm not sure why I'd want a view for only one table.

For permissions there have been column-level perms for some time, which are more efficient than multiple views.


The reason I'm excited by this is that it makes it much easier to have small adapter views that let a table whose definition has changed slightly over time keep supporting older code (which might be using column names that no longer make any sense). Even if you only use these adapters for a few minutes, it lets you make these kinds of modifications safely: in a transaction, rename the old table and replace it with a view; you can then modify the real table, update your code to point to the new table, and finally drop the view. Another interesting use case for this is adding computed columns to a table.
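
A sketch of that rename trick with made-up names; since the view references a single table with plain column references, 9.3 lets old code keep inserting and updating through it:

  BEGIN;
  ALTER TABLE customers RENAME TO customers_v2;
  ALTER TABLE customers_v2 RENAME COLUMN fullname TO display_name;

  -- Old code still reads and writes "customers" under the old column name;
  -- in 9.3 this single-table view accepts INSERT/UPDATE/DELETE directly.
  CREATE VIEW customers AS
      SELECT id, display_name AS fullname
      FROM customers_v2;
  COMMIT;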


This is pretty standard (it is how updatable views work in MSSQL). The advantage they provide is if you wanted to provide a limited view into a table (say 4 of 20 columns) but still allow updates. Granted, that's a pretty small use case, but updating a view that spanned multiple tables could easily become a monster of a problem to solve generally.


>it is how updatable views work in MSSQL

Not exactly. The updated columns may only unambiguously reference one table, but you can execute an update on a view which references multiple tables.


Postgres tends to introduce features iteratively. Today it's the simple case that does very little; in a few versions you wonder why everyone else is so far behind.

This kind of updateable view is pretty much the standard. Oracle doesn't destructure VIEWs to route data to tables. If you think about it, this is not a simple problem to solve.


I am a big fan of PostgreSQL, but I always have issues with the cluster configuration part, e.g. failovers, automatic new master election and all the crazy administration around WAL shipping. Has there been any improvement on this, either in the code base or from a 3rd party? What do people use to handle larger PostgreSQL clusters?

As a contrast, I really like Riak's ability to just work with one node down...


The one big improvement in 9.3 is that synchronizing with a new master has become much easier. In earlier releases you more or less had to either use file-based WAL archiving or rsync your whole data directory over; now the built-in replication protocol can handle this for you. We also got pg_standby now which is all you need to set up a new slave.

But yeah - if you need automatic failover and master election, you still need third-party tools. Some have had success with pgPool as an out-of-the-box solution (I haven't; I had severe reliability issues with pgPool. You might be luckier), others produce their own scripts.

The process isn't complicated, it just requires reading a lot of manpages and thinking ahead, but once you've got the process down, Postgres itself is reliable enough that its (admittedly limited) tools just work (which is a very good thing).

As long as Postgres doesn't do master-master replication, failover will always be a complicated topic to deal with.


In 9.3 they also made failover faster.


Try this: http://www.repmgr.org/

They do support automatic failover too. The docs are in the Github repo somewhere.


I agree - one of the areas where it would be great to see some development work is making replication administration simpler and easier.

One feature which would be really nice to have is the ability to do a manual switchover, i.e. making the existing master into a replica and an existing replica into the new master.

Another poster mentioned repmgr, which looks good but hasn't had a release in some time ( https://github.com/2ndQuadrant/repmgr/blob/master/HISTORY ), with two new Postgres releases since, although there does seem to be some sporadic work on a new beta.


That is something that is listed for 9.3. You can have a master M1 with replicas R1, R2, R3.

Now you promote R1 to master and take M1 "offline"; afterwards you turn M1 back on and have it become a slave of R1.

See http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.3...


I am not sure whether it is exactly what you want, but I have just learned that there is something called postgres-xc.


Postgres-XC is a niche market mostly useful for things like BI setups and highly scalable transactional clusters. It's a very complex solution to very complex problems. If you need it, you know about it.


I'll still go with MangoDB any day. Its autosharting is unparalleled. https://github.com/dcramer/mangodb


Interesting! Never heard of this before; even Google seems to bring up MongoDB answers by default. Have you used this in production? Care to share your experiences?


It's a joke; it just pipes everything to /dev/null.



Sometimes HN just delivers in the comedy stakes.


inhaaaaaaaaaaaale

nice.


This places the decision to move to Mongo or redis in serious doubt - it's an impressive competitive attack.


Mongo - Sure. I'm hard-pressed to find genuine reasons to use MongoDB these days. Postgres provides everything I need in a persistent data store and has a ton of other stuff I didn't know I'd need (till you do!). Mongo was great to use with something like node.js, where the async/schema-less model translates perfectly, but using Postgres with the async driver[1] isn't bad either. Add in hstore, JSON support, plv8 ... and what exactly would somebody use Mongo for? :D

Redis - No clue what you're saying here. If you're using Redis then you're keeping your data in memory and dealing with low-level structures. It's for completely different use cases. A classic example is rate limiting for an API. Doing it in Postgres with a disk I/O per request would cripple any semi-popular API. The data doesn't need to be strictly persistent (if we miss a few updates due to the server crashing we don't really care). In exchange for that, it's blazing fast for individual writes, which we can batch together to back up to something like Postgres (or Redis's built-in persistence like AOF).

[1]: https://github.com/brianc/node-postgres


I'm in the middle of moving from Mongo to Postgres and I really couldn't be happier about it. During the migration I've found various bits of data that were corrupted over the life of the product (due to changes in the application). I already feel the relief of knowing what state my data is in. Oh, and I get to use SQLAlchemy again, which is a total win.


There is a Redis foreign data wrapper (https://github.com/pg-redis-fdw/redis_fdw) so you can access Redis from PostgreSQL. 9.3 adds writable FDWs, so you could even write to Redis from Postgres once the adapter supports it.

Redis is a good complement to Postgres: it provides caching and a fast, non-persistent data store, while Postgres is the durable database.


Postgres has "unlogged tables" for the not-exactly-persistent use case.


Additionally (as unlogged tables cause total data loss on failure, as opposed to just missing some updates), you can tune the durability of individual transactions using the synchronous_commit variable, letting you go entirely asynchronous: you toss data at the database, it returns immediately, and the data is saved to disk during the next ganged commit.
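
Both knobs, sketched with made-up table names:

  -- An unlogged table comes back empty after a crash, but its writes skip
  -- WAL entirely, which makes it cheap for throwaway data.
  CREATE UNLOGGED TABLE rate_limit_hits (
      api_key text,
      hit_at  timestamptz DEFAULT now()
  );

  -- For an ordinary table you can instead relax durability per transaction:
  -- the commit returns before its WAL is flushed, so a crash may lose the
  -- last few commits, but the database is never left corrupted.
  CREATE TABLE audit_log (entry text);
  BEGIN;
  SET LOCAL synchronous_commit TO off;
  INSERT INTO audit_log VALUES ('cheap, slightly less durable write');
  COMMIT;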


Agreed - spoke too fast.


The new background workers in 9.3 enable Postgres to speak the MongoDB protocol.

See for example Mongres, https://github.com/umitanuki/mongres

I hope this allows frameworks like Meteor to trade up to Postgres.


The fact that PGSQL doesn't shit all over your data due to bugs (like Mongo) or lose it if you haven't checkpointed it to disk recently (like Redis) was a pretty big win for PG to start with...


Redis has AOF persistence that does not need any checkpointing to ensure your data will be there. PostgreSQL is awesome and I love it, but it has nothing to do with you not reading the Redis documentation :-)

\o/ for PostgreSQL, anyway.


Fair point. To be honest my redis knowledge is about a year old. The AOF however does have to be loaded on startup which is a big problem especially if it wasn't compacted.


Redis and postgres are very different tools serving different problem domains. If you're able to swap one out for the other, you might want to reëvaluate your architecture.


Not always. Both can be used as backends for persistent message queues since both have built-in pub/sub. If you do not have high throughput or latency requirements, both are perfectly fine for this. If you already use PostgreSQL it is not always worth breaking out key-value storage into Redis either.
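
On the Postgres side the built-in pub/sub is LISTEN/NOTIFY; a minimal sketch:

  -- Subscriber session:
  LISTEN job_queue;

  -- Publisher session (payloads are short strings, under ~8 kB):
  NOTIFY job_queue, 'job 42 ready';
  -- or, where a function form is more convenient:
  SELECT pg_notify('job_queue', 'job 42 ready');

  -- Note that, like Redis pub/sub, notifications only reach currently
  -- connected listeners; persistence needs a table-backed queue.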


Redis Pub/Sub is /not/ persistent. You can go the resque / sidekiq route and use a List and do blocking pops. But you have to make sure you have some persistence in place that you feel comfortable with (maybe a master / slave setup, with master handling all real queue load, and the slave doing AOF for recoverability).


Another diaeresis user! yay!


That's why I'm using both in my project :)


Check out antirez' great article about persistence[0] if you're interested in learning how to use redis in a more sophisticated manner than periodic checkpoints to disk.

0: http://oldblog.antirez.com/post/redis-persistence-demystifie...


We have a collection of semi-structured data that we've been storing with the JSON features in 9.3.

Very cool to be able to build a full text search index on things inside JSON fields with about 10 minutes of work.
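
Something along these lines is enough for that (table and field names made up):

  CREATE TABLE docs (id serial PRIMARY KEY, body json);

  -- Expression index doing full text search over a field inside the JSON.
  CREATE INDEX docs_title_fts_idx
      ON docs
      USING gin (to_tsvector('english', body ->> 'title'));

  SELECT id
  FROM docs
  WHERE to_tsvector('english', body ->> 'title')
        @@ to_tsquery('english', 'postgres');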


Can you do a normal index on JSON though? I guess you can with function-based indexes. 9.3 actually makes JSON meaningful in Postgres since you can actually query it now. But there are no write operations, so MongoDB is still a ton nicer here with its atomic modifiers like $push.


> can you do a normal index on JSON though?

Yes, using the ->> operator you should be able to create an index on a JSON field, e.g.

   CREATE INDEX ON table ((field->>'thing'))
(an index on a JSON column makes no more sense than on e.g. an hstore column)


Indexing on JSON columns is key to storing semi-structured data with JSON. Without it, it's fairly useless for anything but small amounts of data.


I'm not sure what you mean, could you clarify/expand on your comment?


Are there any PostgreSQL administration tools like HeidiSQL (which is for MySQL)[1]? Seriously, this piece of software is so feature-rich and polished and fun to work with, it makes me choose MySQL over PostgreSQL if the MySQL features fit my needs. Indeed, I know pgAdmin, but some features are rather clunky to use and it doesn't really look that actively developed.

[1]: http://www.heidisql.com/


I am building a (free) web interface for Postgres that aims to be fun and easy to use: http://www.teampostgresql.com

It has a lot of user friendly features particularly for data viewing and navigation, such as clicking through foreign key references to referenced rows, and the other way, looking up rows that reference this one, plus a lot of other stuff that I missed from other admin packages.

There's a demo here: http://teampostgresql.herokuapp.com/

Happy to answer any questions.


It looks promising. I wanted to try it out but I got this error when it was reading the metadata on my database:

"Encountered more 0 entries in index columns list than there were expressions in 'indexprs' expressions list for index 'activity_emd5_00_expr_activity_type_expr1_idx'. Current count = '1', parsed expressions = 1, 'indexprs' value='({OPEXPR :opno 3963 :opfuncid 3948 :opresulttype 25 :opretset false :opcollid 100 :inputcollid 100 :args ({VAR :varno 1 :varattno 6 :vartype 114 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 6 :location 43} {CONST :consttype 25 :consttypmod -1 :constcollid 100 :constlen -1 :constbyval false :constisnull false :location 60 :constvalue 12 [ 48 0 0 0 118 101 114 116 105 99 97 108 ]}) :location 57} {OPEXPR :opno 3963 :opfuncid 3948 :opresulttype 25 :opretset false :opcollid 100 :inputcollid 100 :args ({VAR :varno 1 :varattno 5 :vartype 114 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 5 :location 89} {CONST :consttype 25 :consttypmod -1 :constcollid 100 :constlen -1 :constbyval false :constisnull false :location 108 :constvalue 6 [ 24 0 0 0 105 100 ]}) :location 105})'"

For your reference, here's that index:

"activity_emd5_00_expr_activity_type_expr1_idx" UNIQUE, btree ((source_details ->> 'vertical'::text), activity_type, (activity_details ->> 'id'::text))


Thank you, I will look into this. What is your PG version if I may ask?


9.3beta2


Hey, you might not get this but the issue you reported is now fixed (failure to resolve segmented index expressions after some changes to PG system functions). Thanks for reporting this and let me know if I can help with anything.


Is it open-source?


Not currently, but that may change.


It costs monies, but Navicat is an absolute treat.

http://www.navicat.com/products/navicat-for-postgresql


DBVisualizer is an incredible cross-platform and cross-database tool that may fit your requirements. http://www.dbvis.com


+1,000,00


Am I just missing something obvious or does that product not offer any way to actually interact with and/or design a database other than raw SQL? Like, how do I add a new table, or change a column in an existing table?


Some advanced functionality is reserved for the Pro version. http://www.dbvis.com/features/feature-list/


I think that qualifies as me missing something obvious. Thanks.


http://www.heidisql.com/forum.php?t=7025

It looks like the author is entertaining the idea of postgres support (if you read further down the thread). A timely donation might make it happen.

Everyone has their own preferences for administration tools, so it's hard to find a one-to-one replacement. You might find something at one of these places though:

http://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQ... http://www.postgresonline.com/journal/index.php?/archives/13...


pgAdmin III, it does damn near everything you could need. http://www.pgadmin.org


Have you used pgAdmin? It certainly does everything, but its user interface is too horrific for even simple tasks (like viewing rows or filtering by column, etc.).


I wouldn't recommend it if I hadn't used it.

Sure, the UI is ugly and it blocks on DB ops, but it works. I haven't had many crashes, tbh.


This sort of thing is really easy to do from a psql console; I don't think that should be a major point against pgadmin.


pgAdmin also crashes often, and blocks on many database operations (e.g. modifying schema, adding indexes, etc.). How you can write a database GUI and make such a basic mistake as doing DB operations on the UI thread is beyond me.


It's a common mistake. Oracle's SQLDeveloper app does the same thing.


Two open source front ends I use are SQuirreL SQL and SQL Workbench/J. I agree HeidiSQL is one neat piece of software! Let's donate and have the developer add PostgreSQL support. http://squirrel-sql.sourceforge.net/ http://www.sql-workbench.net/


Not sure what HeidiSQL supports but Database.net [1] has what in my mind are the most important features: a tabbed SQL editor and field name completion.

[1]: http://fishcodelib.com/Database.htm


Tora looks similar to this, it's more of a workalike to TOAD for Oracle. I like it a lot because it's less clunky than pgadmin. But it still has a few bugs.


Foreign key deadlock fix finally out to production, after 5 years! Hopefully it is now safe to use foreign keys in PostgreSQL :)

Having tables A, B and C, with FKs in A and B to C_column1: running a function which updates A and another function which updates B, both referring to the same C row via FK but not modifying it in any way, caused a deadlock in pre-9.3.

Now it's fixed, so I can use FKs again after so many years, and it's just crazy :)


9.3 is also laying some of the groundwork necessary for bi-directional replication in the future (hopefully 9.4): http://wiki.postgresql.org/wiki/BDR_User_Guide

That means native multi-master PG replication is on its way. Woot.


The idea of a geographically distributed database is useless from my POV.


Why? Some background on your thoughts would be helpful.

It (probably) won't be ACID compliant, but it'll be extremely useful for most use-cases.


Sorry:)

I'm not planning to have a database distributed all over the globe, and even if I had to distribute the data I would do it at the application level with tools like RabbitMQ.


But once you write to Rabbit, you don't know that it'll make it to _any_ db. At least with MM replication you know it's somewhere, and short of disk corruption, you should be able to retrieve it?


I have 8 years' experience with Postgres and I already know that a lot of cool built-in features just do not work in production under heavy load. Usually I have to add some logic inside the application in order to protect myself from a Postgres-driven fuck-up.

That's why I am sceptical about this new cool and ambitious built-in MM replication.


Examples please? I am aware of hard corners in places, and rough spots, but some clear discussion might be helpful.


You sound an awful lot like a troll. I have more experience with postgresql than that, and it is literally the single least problematic major software project I've ever used.


Fantastic, hopefully this will enable further progress in GPU-based database accelerators like http://wiki.postgresql.org/wiki/PGStrom


Love PG and the community around it. The IRC channel on Freenode has been an invaluable resource, with a lot of very patient folks in it answering very dumb questions (mostly from me).


I'm super-excited that this release includes the FOR KEY SHARE lock granularity (used by foreign keys) - this is a big win for web app developers - I've blogged about it here 3 years ago: http://mina.naguib.ca/blog/2010/11/22/postgresql-foreign-key...


Postgres is the C++ of databases. It is well thought out and designed. It seems overly complex at first until you actually use it a lot to solve hard problems, then you understand why it is the way it is and just how great all those features are.


Considering that I think C++ has many problems precisely because it is overly complex, having grown on top of a very low-level language, I can't agree with your statement. Just because something has features doesn't mean it's great, because some of those features can make everything unstable. And PostgreSQL is great because it is solid and reliable and doesn't have anything like that.


Not quite sure what RDBMSs you are comparing Postgres to here... Firebird? Microsoft SQL/Sybase? Oracle? Because none of those strike me as particularly "simple" products?


Almost certainly MySQL, which is overwhelmingly the competitor in the psql space. A big reason why mysql took off is the same reason that PHP took off -- it is very quick and easy to take the first steps.


I think a large part of that simplicity can be attributed to the lack of Windows port of PostgreSQL at the time that PHP was gaining traction. MySQL was the only reasonable option then. And then the community mindset was solidified.


I think it was the need for manual vacuuming. If you were running a small web host at the time picking MySQL over Postgres meant a huge reduction in potential support tickets related to non-vacuumed databases causing customers to run out of space.


That certainly didn't help, but if your DB was large enough that vacuuming mattered, you probably found a cron solution. The other place I saw MySQL win was in single-access benchmarks that were scattered across the Web. Postgres would win in concurrent scenarios pretty handily, but people were glued to those serial benchmarks.


> A big reason why mysql took off is the same reason that PHP took off -- it is very quick and easy to take the first steps.

I think most people struggle with setting up postgres auth. Personally, I find postgres' auth well designed.


Postgres, however, is written in C. Mysql uses C++. I'll take Postgres any day, though.


Yeah, because C++ is well thought out and designed, especially simple when trying to solve hard problems with it.


Unlike with C++, Postgres's complexity is completely optional. The features you don't know about won't change the way your code works.


Compared to other offerings, Postgres's auth is more complex. But other than that, what about Postgres is complex?


The more I hear about Postgresql the more I'm intrigued. I'm a sysadmin by profession and I manage quite a few MySQL servers. Are there any good resources to learn Postgresql (both admin & usage) outside of the documentation?


The official documentation is really quite excellent.


What is the ORM/Driver that anyone here would suggest to go with Express/Node.js while working with postgres?

Also, any libraries similar to ruby gem "apartment" that would make multitenancy with postgres easier in node.js world?


For what it's worth, Node's ORM packages seemed fairly immature: either lacking support for migrations or relations, being awkward to work with, or lacking some other functionality I had grown accustomed to.

Bookshelf (http://bookshelfjs.org/) seemed quite nice though, but by the time I got around to it I decided my project wasn't very interesting anyways ;)


Just a few days ago I was talking to people about JSON support in 9.2 and how I was excited about the upcoming 9.3 release with more built-in support for JSON (I wasn't aware of the release date though). Here we are.



