If you write your views, TVFs, and stored procedures to use common elements defined once and only once (this starts with once-and-only-once tables), maintenance is straightforward. If you let every developer create their own view for the same business logic, you are creating an unmanageable creature.
This also applies to "composable" code written in the application layer outside the database.
The issue is when central ideas about how your application works change. Suppose you're a US bank and, after a few decades, you want to open a branch in Europe; suddenly every dollar value needs to be associated with a specific currency.
At its core that's a very basic change, but the issue with this stuff is how you adapt to such changes without introducing massive technical debt. Databases that elegantly model your data are easy to work with. However, it's always tempting to use the lazy solution, which is why systems relying on stored procedures tend to age poorly. In effect they tend to corrupt the purity of your schema.
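To make the currency example concrete, here is a minimal sketch (in TypeScript, with hypothetical names) of why such a "basic" change ripples everywhere: if amounts were bare numbers, nothing forces old code to account for currency, whereas a tagged type makes every untouched call site fail to compile until it is updated.

```typescript
// Before the change, amounts were presumably bare numbers:
//   type Amount = number;

// After the change, every amount must carry a currency.
type Currency = "USD" | "EUR";

interface Money {
  amount: number;
  currency: Currency;
}

// Any helper previously written against bare numbers no longer
// type-checks until it is rewritten against Money, so the compiler
// points out every affected calculation.
function add(a: Money, b: Money): Money {
  if (a.currency !== b.currency) {
    throw new Error("cannot add amounts in different currencies");
  }
  return { amount: a.amount + b.amount, currency: a.currency };
}

const total = add(
  { amount: 100, currency: "USD" },
  { amount: 50, currency: "USD" },
);
console.log(total); // { amount: 150, currency: 'USD' }
```

The names (`Money`, `add`) are illustrative, not from any particular codebase; the point is only that the type change turns "every prior calculation is now wrong" into compile errors rather than silent bugs.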
I don't see how that has anything to do with stored procedures. For such large changes, your schema and application logic both need to be changed anyway. So if you're already changing the schema, why is it easier to change the application logic if it lives in a separate codebase?
If anything, it seems like it would go the other way around: With stored procedures, you're guaranteed that any application logic is at least consistent with the current database schema, whereas with separate services, you need to maintain that consistency yourself. Not that I'm advocating for stored procedures in general because they come with a mountain of other drawbacks, I just don't buy this particular argument.
> why is it easier to change the application logic if it lives in a separate codebase?
Because stored procedures don't give you tools like inheritance to easily add extra layers of abstraction and let the compiler detect when you're messing up. Adding a column is easy enough, but enforcing that it means something, and that every one of your prior calculations is now meaningless without taking it into account, is difficult. Making such changes elegant, and enforcing them to avoid future bugs, is even harder.
Remember, this example is at its core a minor difference. If you're changing something like amounts being measured by volume instead of weight, then basic assumptions no longer hold, making things much worse.
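One way to read "use inheritance and let the compiler detect when you're messing up" is sketched below (hypothetical names, TypeScript): adding a new abstract member to a base class makes every existing subclass a compile error until it accounts for the new requirement.

```typescript
abstract class Report {
  // Newly added requirement: every concrete report must now declare
  // its currency. The compiler flags every subclass that hasn't been
  // updated, rather than letting old reports silently assume dollars.
  abstract currency(): string;

  abstract total(): number;

  summary(): string {
    return `${this.total()} ${this.currency()}`;
  }
}

// An existing subclass: it only compiles once currency() is added.
class UsBranchReport extends Report {
  currency(): string { return "USD"; }
  total(): number { return 1000; }
}

console.log(new UsBranchReport().summary()); // "1000 USD"
```

Stored procedure languages have no comparable mechanism for pushing a new obligation down through every implementation.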
> Adding a column is easy enough, but enforcing that it means something and every one of your prior calculations are now meaningless without taking it into account is difficult.
But again, I don't see how this is any easier to solve in application code than it is at the DB level. I don't know of any programming language or framework that allows you to encode rules such as "this DB column must be used". And even if such a thing existed, it wouldn't prevent you from making mistakes in using that column.
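The gap described here can be sketched (TypeScript, hypothetical names): a type system can force a required field to be *supplied* at every construction site, but nothing forces code that reads the record to actually *use* it.

```typescript
interface Transaction {
  amount: number;
  // Newly required field: old construction sites fail to compile
  // until they supply a currency.
  currency: string;
}

// This compiles fine yet silently ignores currency -- exactly the
// kind of mistake the type system cannot prevent.
function format(t: Transaction): string {
  return `$${t.amount.toFixed(2)}`;
}

console.log(format({ amount: 12.5, currency: "EUR" })); // "$12.50" -- wrong symbol
```

So the type system gets you partway (suppliers must provide the column) but, as the comment says, it can't guarantee consumers use it correctly.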
On the other hand, using stored procedures at least prevents errors in the other direction: referencing columns in a way that no longer makes sense (because the type was changed, it no longer exists, etc).
"This DB column must be used" is kind of a complex topic here. OOP, for example, is about building internal APIs which in theory force users of those APIs to account for these kinds of changes. On the other hand, those APIs are generally used by the people creating them, so it's more about assistance than forced compliance.
Stored procedures have all kinds of other issues, but this is an obvious failure.
Sounds like the issue is a type system then, no? So stored procedures should be no worse than all of the Python, JS, Ruby, Elixir, Clojure, etc. that's running web backends all over, right?
In fact, stored procedures (in relational DBs at least) are statically typed, so they will error if you change the type of a referenced column to something that no longer makes sense. This gives you more safety than a dynamically typed application language can, at least out of the box.
The parent was arguing that stored procedures are worse than conventional programming languages because the compiler won't warn them about any errors. I responded by pointing out that lots of web application backends are written in dynamically typed languages such as Ruby where there is also no compiler to warn of type errors. I'm definitely not arguing that stored procedures should look more like Ruby.