
Indeed. Wait until you have to deal with an audit. Apparently, even following Microsoft's own advice on licensing, we still owed them about £50,000 in missing licenses after an audit...

Most of this was down to a SQL Server upgrade where the licensing terminology changed from CPUs to cores.




I'm currently arguing for serious consideration of SQL Server alternatives for selected new development work (where reasonably achievable), in order to mitigate spiralling license fees. The main issue is that those fees are mostly paid by our customers, but ultimately our customers' cost is our cost. Collectively it's becoming a stupid amount of money.


We're only keeping it around due to a pile of stored procedures and coupling (see an earlier thread where I was whinging about this). PostgreSQL is the next step. We'll still pay for support, via EnterpriseDB.

Our main SQL cluster is two 48-core HP machines with 512 GB of RAM each and a big EMC SAN. We want this as lots of much smaller machines, but you can't really scale SQL Server down once everything is coupled to it.
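For a sense of scale, per-core licensing on boxes like those adds up fast. A back-of-envelope sketch (SQL Server Enterprise is licensed per core in 2-core packs; the pack price here is an assumed ballpark, not a quote):

```python
# Rough, illustrative licensing arithmetic for a cluster like the one above.
# PRICE_PER_2CORE_PACK is an assumption (list-price ballpark), not a real quote.
CORES_PER_MACHINE = 48
MACHINES = 2
PRICE_PER_2CORE_PACK = 13_000  # assumed USD figure for illustration only

packs = (CORES_PER_MACHINE * MACHINES) // 2   # cores are licensed in pairs
total = packs * PRICE_PER_2CORE_PACK
print(f"{packs} packs -> ${total:,}")          # 48 packs -> $624,000
```

Even with volume discounts the order of magnitude is the point: a handful of big boxes can carry a six-figure license bill.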


> stored procedures and coupling

Right. Funny how 'best practice' became stored procs rather than generated queries. Partly because they constrain and define the API exposed by the DB, and greatly help avoid SQL injection issues. Those things can also be achieved with a well-written code layer, and as for the 'API', well, we have so many stored procs that the argument has become somewhat tenuous.

There's the performance aspect as well: the DBA knows what queries will be 'thrown' at the server. But again, it's not black and white; it's more that stored procs do tend to limit really bad SQL more so than open-ended string queries, but 'it depends'.
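For what it's worth, the injection-safety argument can be met in a code layer with parameterised queries, which keep user input out of the SQL text entirely. A minimal sketch using Python's stdlib sqlite3 as a stand-in (the same pattern applies with any SQL Server driver):

```python
import sqlite3

# In-memory DB as a stand-in for a real server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# Unsafe: string concatenation lets input rewrite the query itself.
user_input = "alice' OR '1'='1"
unsafe_sql = "SELECT id FROM users WHERE name = '" + user_input + "'"
print(len(conn.execute(unsafe_sql).fetchall()))  # injection: returns every row

# Safe: a parameterised query treats input strictly as data.
rows = conn.execute("SELECT id FROM users WHERE name = ?", (user_input,))
print(len(rows.fetchall()))  # 0 — the literal string matches no name
```

The placeholder syntax varies by driver (`?` vs `%s` vs named parameters), but the principle is the same everywhere.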


Rather than write another reply, I'll link you to my thoughts: https://news.ycombinator.com/item?id=9928688


Nice reference discussion, thanks. Sounds like we're on the same wavelength.

Our main issue (IMO) is that if you get a SQL Server person in to solve SQL Server performance issues, then you're likely going down the route of one massive, all-powerful SQL Server box, which just compounds the problem. A broader solution of moving away from pure SQL Server and towards distributed work, caching layers, etc. is probably a saner long-term path to take. But in business, short-term thinking generally takes precedence over the long term.


That's very true which is why we wrangled the architecture off the database folk :)


While I use PostgreSQL whenever I can, it is significantly easier to design a backend using a cluster of SQL Server machines, than it is to design a backend using several commodity machines with PostgreSQL. The former is just more mature, but you pay for that.


I agree with respect to traditional architectures, but most of our stuff runs from our cache layer (97% of queries are cache hits), so we're moving to lots of smaller, cheaper instances with a cache front end (memcached). Going forward we literally use the database as a storage engine, nothing clever.
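The pattern described here is essentially cache-aside. A rough sketch (a dict stands in for a memcached client, and the function names are illustrative, not our actual code):

```python
import time

cache = {}        # stand-in for a memcached client
CACHE_TTL = 300   # seconds; illustrative value

def db_fetch(key):
    # Placeholder for the real (expensive) SQL query.
    return f"row-for-{key}"

def db_write(key, value):
    pass  # placeholder for the real SQL write

def get(key):
    entry = cache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]                      # cache hit: no SQL at all
    value = db_fetch(key)                    # cache miss: one trip to the DB
    cache[key] = (value, time.time() + CACHE_TTL)
    return value

def put(key, value):
    db_write(key, value)                     # the DB stays the source of truth
    cache.pop(key, None)                     # invalidate; next read repopulates
```

With a 97% hit rate, only the `db_fetch` path ever reaches SQL Server, which is what lets the database tier shrink to commodity boxes.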


I really wish I could do something like that for our system. We need to hit the database almost every time a query comes in.


Indeed. This is good motivation for you:

http://i.imgur.com/Q8NtKTk.png

Cache hits versus misses. A miss may result in multiple SQL queries, whereas hits are returned straight from the cache. Imagine the cluster we'd need to support that!

That's over 28 days for reference.


It is, but the nature of the application demands that a write query is not considered done until it is guaranteed persisted on disk, and the same data is rarely queried often enough to warrant a dedicated cache layer. The data that is queried often is so far handled gracefully by SQL Server's built-in caching.

That isn't to say we couldn't get some benefit, because we could, especially as the userbase scales. But so far we haven't had to scale to the point where the added complexity is worth pursuing. One can only look forward to the day it is.


Do you have a view on Redis cache? It seems to be more popular than memcached at the moment.


Yes. We're sticking with memcached. Every node in our cluster has over three years of uptime while handling up to 5,000 requests a second, so we're quite happy to leave it. In fact, I think it's the most reliable thing I've ever seen.

Redis looks nice, but I suspect it may be easy to lean on it too much for functionality. We were looking at it for a couple of tasks but haven't found much motivation to move yet.


With that track record, there has to be a very good reason to switch.


What legally allows them to audit you?


When you sign a volume license contract with them it grants them the right to do it.

The immediate cost savings are all the business sees.



