I would guess that SQL Server has been getting more expensive largely because the people who decide these things see themselves as competing with Oracle more than with free alternatives, and (I believe) SQL Server is still cheaper than Oracle.
Disclaimer: I work at Microsoft in "the SQL org" but I'm just a peon so what do I know?
> and (I believe) SQL Server is still cheaper than Oracle.
That explains the pricing of Enterprise, but not the crippled memory (64GB) or the lack of decent features in Standard Edition. Standard doesn't compete with Oracle - or if it did, it would get laughed out of town for a 64GB memory limit.
Standard used to be a gateway drug that would get people hooked on good-enough performance and easy-enough management, but these days, crippled at what amounts to $500 worth of memory, I don't think that reputation's going to hold up.
Sorry, but 64GB of RAM is a LOT. For that matter, a DB install that needs 64GB of memory is probably not something you'd call anything short of "Enterprise"...
Beyond that, if you need something at that scale, a SQL database often isn't the right choice anyway.
It's $500. When you consider that bringing in a performance tuning consultant can easily cost thousands of dollars per day, $500 worth of memory isn't much at all.
> for that matter a db install that would need 64GB of memory is probably not something you would call short of "Enterprise" ...
Remember that database servers use memory for at least 4 things: caching data, workspace for running queries, workspace for TempDB, and caching execution plans. If you run DMV queries against sys.dm_os_buffer_descriptors, you might be surprised at how little of your memory is being used to cache data. Even a 50GB database can get to a point where it's having to hit the disk in order to satisfy queries.
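For instance, here's a minimal sketch of that kind of DMV query (the DMV and its columns are real; the aliases are just illustrative). It sums the 8KB pages each database is holding in the buffer pool:

    -- Buffer pool usage per database; each DMV row is one cached 8KB page
    SELECT CASE database_id
               WHEN 32767 THEN 'ResourceDb'
               ELSE DB_NAME(database_id)
           END AS database_name,
           COUNT(*) * 8 / 1024 AS cached_mb
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY cached_mb DESC;

Compare those totals to your database sizes and you'll see what fraction of each database is actually sitting in cache.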
This is the age of "big data", as much as I hate to use that term. It's the age of 256GB USB thumb drives, laptops with 32GB of memory, and storing files in the database server. 64GB isn't large for databases anymore - in fact, it's a fairly unusual week when I'm dealing with a database that small.
Sorry, I still don't get it. A database that actually used all 64GB of MEMORY--not disk--would store billions of customer records, and yet this is a small business? What small business stores billions of records of anything?
Yes, I have seen many extremely poorly designed schemas that took up huge amounts of memory, but that's easily corrected before launch. Don't store everything as a string, for one (sketch below). And there are more reasons to fix such a schema than the dollar cost of memory.
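A minimal sketch of what I mean (the table and column names are invented): narrow native types instead of strings keep rows, and therefore memory, small.

    -- Hypothetical example: native types vs. storing everything as text
    CREATE TABLE dbo.CustomerSlim
    (
        CustomerId INT           NOT NULL,  -- 4 bytes, vs. ~36 bytes for a GUID stored as varchar
        SignupDate DATE          NOT NULL,  -- 3 bytes, vs. 10 bytes of 'YYYY-MM-DD' text
        Balance    DECIMAL(10,2) NOT NULL   -- 9 bytes, vs. variable-length numeric text
    );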
These days, it's fairly common to store larger fields or even files in the database. SQL Server's FileTable and FileStream fields are designed for that exact use.
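As a sketch of the FILESTREAM flavor (assuming the instance already has FILESTREAM enabled and the database has a FILESTREAM filegroup; the table and column names are invented):

    -- Document bodies land on the NTFS filesystem but are managed,
    -- queried, and backed up through SQL Server
    CREATE TABLE dbo.Documents
    (
        DocId   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
        DocName NVARCHAR(260)    NOT NULL,
        DocBody VARBINARY(MAX)   FILESTREAM NULL
    );

Once files live in the database like that, data sizes balloon well past what a "customer records" mental model suggests.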
Plus, remember that one server can be used for multiple databases. In SQL Server, it's fairly common to see a single server housing the accounting software, the payroll software, the email auditing back end, VMware Virtual Center's databases, etc.
How do you figure you could store "billions" of customer records in a 64GB memory space? That's 68 billion bytes, and you lose a very significant portion of it to things that aren't base table storage. Never mind cached query plans... how about indexes? If you consider a table containing a customer name, address, telephone number, and a couple of other basic pieces, you could be looking at a few KB for each record; call it 3-4KB with indexes and overhead, and 68 billion bytes divided by ~3.4KB gets you roughly 20 million records. Not billions.
Oh, and I have seen small businesses running SQL Standard with databases exceeding 500GB and individual tables with over 1.5 billion rows -- and the tables were designed efficiently! They couldn't afford Enterprise because of the tight profit margins in their line of work. What I'm saying is, don't discount the data needs of small business.
Telecom. We store tight, tiny columns, and we've got hundreds of GB of data. Many rows are transactions that earned us nothing (the call didn't connect, despite several attempts). We're not a large business by any measure.
Even smaller companies in other fields might want to store tons of rows. User action data, for instance. Living in the past and insisting 64GB of MEMORY is somehow huge is just being silly.
64GB is still a fairly large amount of RAM for a database server, and a machine with that much is still largely considered enterprise class. I know of several multi-billion-dollar companies running their financials and OLTP systems on servers with <64GB of RAM. Also, people don't generally buy over-the-counter RAM for their servers; they're paying extra for error-correcting ECC memory.
Most servers aren't serving a single large database, but multiple databases for multiple applications. Add some ETL via SSIS, some reporting, a few ad-hoc queries, etc., and memory goes like Doritos and beer at a Mississippi Super Bowl party. My smallest non-virtualized development SQL Server has 32GB, and we're considering changing its role to something less taxing...
For $4,000 you can get yourself a shiny new Dell rack server with dual 8-core E5 Xeon CPUs and 192GB of memory.
To license SQL Server Standard on that inexpensive box would cost $28,640 (per-core licensing across 16 cores works out to roughly $1,790 per core), and, as the blog post mentions, it would limit you to a third of that memory per instance.
Databases live and die by memory, and 64GB just isn't a lot these days (nor is it "Enterprise" when it's vastly exceeded by a very small workgroup server). Their point is absolutely valid, and it's very strange that while memory capacities have exploded, the limit is the same as it was in SQL Server 2008 R2 (prior editions didn't handicap memory like this).
I don't know much about MS SQL Server or databases in general. What would you consider free and open-source alternatives to the MS and Oracle Enterprise offerings?
PostgreSQL. MySQL is missing far too many basic features to be considered. One of the big reasons so many fad-driven developers jumped ship from MySQL to NoSQL "dbs" is that MySQL can't even do online schema changes.
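To illustrate what that buys you (a minimal sketch with an invented table name): in PostgreSQL, adding a nullable column is a metadata-only change that doesn't rewrite the table, and DDL is transactional, so a migration can be rolled back atomically.

    -- PostgreSQL: effectively instant even on a huge table (no rewrite),
    -- and the whole change is transactional
    BEGIN;
    ALTER TABLE user_actions ADD COLUMN referrer text;
    COMMIT;  -- or ROLLBACK, and the column never existed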
I think the tension might be that looking back, we see many big (money making) operations that ran on far less than 64GB(1), but looking forward we can all think of fun hacks we can do on cheap and common 64GB machines.
It's another way of saying that yeah, that's old-world pricing.
(1 - I remember when friends told me they were real-time tracking all the telephone calling cards in a moderate sized country, with a Sun and 1G of memory.)