Indeed. Wait until you have to deal with an audit. Apparently, even following Microsoft's own advice on licensing, we still owed them about £50,000 in missing licenses after an audit...
Most of this was down to a SQL Server upgrade where core and CPU terminology was changed.
I'm currently arguing for serious consideration of SQL Server alternatives for selected new development work (and where reasonably achievable), in order to mitigate spiralling license fees. The main issue is that those fees are mostly paid by our customers, but ultimately our customers' cost is our cost. Collectively it's becoming a stupid amount of money.
We're only keeping it around due to a pile of stored procedures and coupling (see an earlier thread on this I was whinging about). PostgreSQL is the next step. We'll pay for support via EnterpriseDB still.
Our main SQL cluster is two 48-core HP machines with 512 GB of RAM each and a big EMC SAN. We want this as lots of much smaller machines, but you can't really scale down SQL Server once everything is coupled into it.
Right. Funny how 'best practice' became using stored procs rather than generated queries, partly because they constrain and define the API exposed by the DB and greatly help avoid SQL injection issues. Those things can also be achieved with a well-written code layer, and as for the 'API', well, we have so many stored procs that that argument has become somewhat tenuous.
There's the performance aspect as well: having the DBA know what queries will be 'thrown' at the server. But again, it's not black and white; it's more that stored procs tend to limit really bad SQL more so than open-ended string queries do, but 'it depends'.
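To make that concrete, here's a rough sketch of both approaches in Python with pyodbc against SQL Server (the Orders table, GetCustomerOrders proc and connection string are made up for illustration). The injection-safety point is that in both cases user input is bound as a parameter rather than concatenated into the SQL text; the difference is just where the query shape lives.

    import pyodbc

    # Hypothetical connection string and schema, purely for illustration.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=dbhost;DATABASE=Shop;Trusted_Connection=yes"
    )
    cur = conn.cursor()

    customer_id = 42  # imagine this arrived from user input

    # Approach 1: generated query via a code layer. The parameter is bound,
    # never spliced into the string, so injection isn't possible here either.
    cur.execute(
        "SELECT OrderId, Total FROM Orders WHERE CustomerId = ?",
        customer_id,
    )
    orders = cur.fetchall()

    # Approach 2: stored procedure. The DB exposes a fixed 'API' and the DBA
    # knows exactly what will be thrown at the server.
    cur.execute("{CALL GetCustomerOrders (?)}", customer_id)
    orders = cur.fetchall()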
Nice reference discussion, thanks. Sounds like we're on the same wavelength.
Our main issue (IMO) is that if you bring in a SQL Server person to solve SQL Server performance issues, you're likely to end up going down the route of one massive, all-powerful SQL Server box, which just compounds the problem. A broader solution of moving away from pure SQL Server and towards distributed work, caching layers, etc. is probably a saner long-term path. But in business, short-term thinking generally takes precedence over long-term.
While I use PostgreSQL whenever I can, it is significantly easier to design a backend around a cluster of SQL Server machines than around several commodity machines running PostgreSQL. The former is just more mature, but you pay for that.
I agree with respect to traditional architectures, but most of our stuff runs from our cache layer (97% of queries are cache hits), so we're moving to lots of smaller and cheaper instances with a cache front end (memcached). Going forward we literally use the database as a storage engine and nothing clever.
Cache hits versus misses. The latter may result in multiple SQL queries whereas the former are returned from the cache. Imagine the cluster we'd need to support that!
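For anyone unfamiliar, the 'database as a dumb storage engine behind memcached' setup is essentially cache-aside. A minimal sketch of a read path, assuming pymemcache and psycopg2, with a made-up key scheme, TTL and products table:

    import json
    import psycopg2
    from pymemcache.client.base import Client

    cache = Client(("127.0.0.1", 11211))
    db = psycopg2.connect("dbname=app user=app")  # hypothetical DSN

    def get_product(product_id):
        # Cache-aside read: a hit is answered straight from memcached,
        # a miss falls through to SQL and repopulates the cache.
        key = "product:%d" % product_id        # hypothetical key scheme
        cached = cache.get(key)
        if cached is not None:                 # cache hit: no SQL at all
            return json.loads(cached)

        # cache miss: one (or more) SQL queries, then write back to the cache
        with db.cursor() as cur:
            cur.execute(
                "SELECT name, price FROM products WHERE id = %s",
                (product_id,),
            )
            name, price = cur.fetchone()
        product = {"id": product_id, "name": name, "price": float(price)}
        cache.set(key, json.dumps(product).encode("utf-8"), expire=300)  # arbitrary 5 min TTL
        return product

At a 97% hit rate only ~3% of reads ever touch the database, which is why the SQL boxes can shrink.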
It is, but the nature of the application demands that a write query is not considered done until it is guaranteed persisted to disk, and the same data is rarely queried often enough to warrant a dedicated cache layer. The data that is queried often is so far handled gracefully by SQL Server's built-in caching.
That isn't to say we couldn't get some benefit, because we could, especially as the userbase scales. But so far we haven't had to scale to the point where the added complexity is worth pursuing. One can only look forward to the day it is.
Yes. We're sticking with memcached. Every node in our cluster has an uptime of over 3 years handling up to 5,000 requests a second, so we're quite happy to leave it. In fact, I think it's the most reliable thing I've ever seen.
Redis looks nice but I suspect that it may be easy to lean on it too much for functionality. We were looking at it for a couple of tasks but haven't found much motivation to move yet.