Hacker News

I agree. To me this sounds more like bad programming (high coupling and low cohesion) and like it would be a problem even if a separate application acted as the backend.



Well, some systems make good programming really difficult. Stored procedures, especially back then... yeah.


I believe this in the abstract, but would be very interested in some concrete explanation for why sprocs compose worse than other programming languages. How would the OP's situation have improved if instead of sprocs he had a similarly complex webapp and had to deal with the schema changes there? I'm not challenging the OP; I don't have enough experience with sprocs to have an informed opinion, but this information is necessary to make sense of the OP's criticism.

The best argument against sprocs that I've heard is that you really don't want any code running on your database hosts that you don't absolutely need because it steals CPU cycles from those hosts and they don't scale horizontally as well as stateless web servers. This is a completely different argument than the OP's, however.


Procedures don't have arrays/structs/classes/objects. This means they can't share data except through parameters.

Polymorphism is possible, but not really encapsulation. One could argue that arrays/objects would cause consistency problems, and therefore don't belong in a data language.

Someone expecting python-like OO programming will go mad.

The performance argument depends on whether the database is limited by CPU or bandwidth, and on what the load looks like. Benchmarks show that the round trip between database, network, ORM, and app takes many orders of magnitude longer than the procedures themselves.
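As an illustrative sketch of why round trips dominate, the following hypothetical Python script contrasts a per-row update loop, which would cost one network round trip per row against a real server, with the single set-based statement a stored procedure would issue. It uses an in-memory SQLite database purely as a stand-in for a networked RDBMS:

```python
import sqlite3

# Stand-in database: in-memory SQLite (a real network-attached database adds
# latency to every round trip, which is the cost a stored procedure avoids).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(i, 100) for i in range(1000)])

# App-side loop: one statement per row -> 1000 round trips on a real network.
for (row_id,) in conn.execute("SELECT id FROM accounts").fetchall():
    conn.execute("UPDATE accounts SET balance = balance + 1 WHERE id = ?",
                 (row_id,))

# Set-based equivalent: a single statement, one round trip -- the shape of
# work a stored procedure keeps on the server.
conn.execute("UPDATE accounts SET balance = balance + 1")

total = conn.execute("SELECT SUM(balance) FROM accounts").fetchone()[0]
print(total)  # 1000 rows * (100 + 1 + 1) = 102000
```

Both approaches produce the same data; the difference is the number of app-to-database hops, which is where the latency lives.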


Databases are engineered for high reliability - data integrity above all else. That means they develop more slowly and are perpetually behind what you expect from a modern programming language.

If someone created a database with the intention of it being a good development framework, it would probably be more pleasurable to code against, but would you trust it with your data?


Postgres evolves much more quickly than, for example, Go. I’m not sure I buy this argument.


Relative to most RDBMS it evolves quickly, but from a language perspective it's barely different from the SQL I was writing at the start of my career. Meanwhile Go didn't exist then and does now, and has better tooling for any normal development workflow.


My impression of Postgres development is markedly different. It seems like each release since 8 or 9 has brought significant, exciting new features. I don’t pay close attention, but IIRC, a lot of investment has gone into JSON, window functions, and lots of miscellaneous but important things (e.g., upsert). Meanwhile Go’s language changes have all been minor and boring (not a bad thing!).
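For a concrete taste of the features mentioned, upsert (added in Postgres 9.5) and window functions (Postgres 8.4) can be sketched with SQL that recent SQLite (3.25+) also accepts, so the snippet below runs standalone from Python against an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (player TEXT PRIMARY KEY, points INTEGER)")

# Upsert: INSERT ... ON CONFLICT ... DO UPDATE folds insert-or-update
# into one atomic statement instead of a racy SELECT-then-INSERT/UPDATE.
for player, pts in [("alice", 10), ("bob", 7), ("alice", 5)]:
    conn.execute(
        "INSERT INTO scores VALUES (?, ?) "
        "ON CONFLICT(player) DO UPDATE SET points = points + excluded.points",
        (player, pts))

# Window function: RANK() computes a per-row ranking without collapsing
# rows the way GROUP BY would.
rows = conn.execute(
    "SELECT player, points, RANK() OVER (ORDER BY points DESC) AS r "
    "FROM scores ORDER BY r").fetchall()
print(rows)  # [('alice', 15, 1), ('bob', 7, 2)]
```

The syntax shown matches the Postgres forms; SQLite adopted compatible versions later, which is what makes this sketch self-contained.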


> Databases are engineered for high reliability - data integrity above all else.

So should most of your (micro)services be. I have seen more instances of sloppy system design (non-transactional but sold as such) than of coherent eventually consistent ones.


Surely that's situation dependent? Databases are built to handle anything up to and including financial and health data.

On the other hand, does it matter if my social network updoot microservice loses a few transactions? With code running outside of the database, you get to decide how careful you want to be.


Yes, it does. Are you mentioning in the internal services documentation that they can lose data under unknown conditions? And on your Product Hunt page that "your content might be randomly lost"? If you do not, then you are lying. Everybody using the service expects transactional consistency.

Note that what you described is not eventual consistency but rather "certain non-determinism", there is an abyss of difference.


I don't mean transactional consistency specifically (I don't think I even mentioned eventual consistency), but just that the level of engineering required to build a system as well tested as Postgres will inevitably slow down its feature development. This means databases typically don't have the latest features enjoyed within application development environments.

However I believe you CAN tolerate some level of failure and defects in your app code, knowing the more battle-hardened database will, for the most part, ensure your data is safe once committed. Yes, there will probably always be bugs, and yes, some of those bugs may cause data loss in extreme cases. But if you're saying you perform the same level of testing and validation on a Product Hunt-style app as you would on a safety-critical system, or as the Postgres developers do on their database, I find that extraordinary and very unrepresentative of most application development.

I'm not saying defects are good or tolerated when found, but from an economic perspective you have to weigh the additional cost of testing and verification against the likely impact these unknown bugs could have. Obviously everyone expects any given service to work correctly, but when is that ever true outside of medical, automotive, and aerospace, which have notoriously slow development cycles?

Personally I'd pick rapid development over complete reliability in most cases.



