For years Microsoft-sponsored SQL Server consultancies have been telling us to use the database to its fullest extent. Use stored procs and non-ANSI SQL features, they're fine! Logic in the database? We love that.
That's what ISVs and Enterprises are being told to do, and that's exactly what they've done, at least the ones who couldn't see Microsoft's business angle. Now many are left to choose between accepting Microsoft's ever-increasing licensing costs and paying a high cost to switch.
I agree with @BrentOzar, time to find other options.
I see nothing wrong with using stored procs or non-ANSI features. You just need to know what you're getting into. The same goes for any database. Being database agnostic is great, but so is actually finishing what you're working on and having it run efficiently. If DB-specific features get you there and you're willing to accept the terms of being locked in, then it's a sound decision.
For our product we use Postgres as the app's database and happen to use some Postgres-specific features like stored procs and hstore[1]. The latter in particular is not ANSI at all. There is no equivalent at all in other databases, and migrating usage of it to another DB would be a real pain. I know we're tied to Postgres and we're OK with that, as it's a joy to use and lets us spend our time elsewhere.
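For anyone who hasn't seen hstore, here's a minimal sketch of the kind of usage that ties you to Postgres. The table and keys are hypothetical, but the operators and GIN indexing are the real extension API:

    CREATE EXTENSION IF NOT EXISTS hstore;

    CREATE TABLE products (
        id    serial PRIMARY KEY,
        attrs hstore              -- free-form key/value pairs per row
    );

    INSERT INTO products (attrs) VALUES ('color => red, size => L');
    SELECT id FROM products WHERE attrs -> 'color' = 'red';   -- key lookup
    CREATE INDEX ON products USING gin (attrs);                -- and it's indexable

There's simply no portable SQL that maps onto this directly, which is exactly the lock-in trade-off.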
You use database-specific features because they're useful, not because you're forced to. Of course I'll concede the point that it's a bit apples and oranges to compare being "stuck" on Postgres vs. a commercial closed-source stack.
"...so is actually finishing what you're working on and having it run efficiently."
A "finished" product is usually a dead one and efficiency is relative. You've seen that incremental improvements to the entire stack can have far greater impact than fiddling with just the storage end alone after all.
A product never actually stops evolving if it's going to stay competitive and make money. But I'd say staying with Postgres is one of the best decisions you've made. Besides being a joy, as you say, you can be fairly confident that it won't suddenly become broken or proprietary, and, best of all, that it won't become an order of magnitude more expensive for the same capability.
The funny thing is how Microsoft used to make fun of Oracle for being restrictive on the CPU licensing types. How "hardware improvements are for our customers' benefit". Now? You pay differently depending on AMD/Intel and type of chip. Oh hey, just like Oracle. It's hypocritical, but I guess it's also just business.
At any rate, like this post says, whatcha going to do? The ease of use of SQL Server is fantastic. As much as I would like to use Postgres (despite the HA story looking very confusing, and the lack of basic things like materialized views), the tooling is light-years behind Microsoft's. The fact that I'd truly need to become a DBA to properly use Postgres (oh great, an ini file with poorly documented settings!), whereas I can fumble my way through SQL Server... that's worth a lot.
You have to be a DBA to properly use SQL Server too.
Yes you can stumble your way through wizards and SSMS, but you will get tripped up eventually if you don't spend some serious time hitting the books and blogs. (Recovery model, what's a recovery model? Why's my disk space gone?)
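For the curious, the usual fix for the disappearing disk space is one of these two; a minimal sketch, with a hypothetical database name:

    -- option 1: switch to SIMPLE recovery and give up point-in-time restore
    ALTER DATABASE MyAppDb SET RECOVERY SIMPLE;

    -- option 2: stay in FULL recovery, but actually back up the log so the space can be reused
    BACKUP LOG MyAppDb TO DISK = N'D:\backups\MyAppDb.trn';

Under the FULL recovery model the transaction log only gets truncated when it's backed up, which is the part the wizards never tell you.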
Well I'm doing alright with 100s of GBs of SQL DBs, replication, tx log shipping, HA clusters, etc. and I wouldn't consider myself a DBA. Basic stuff like transaction log management (recovery model) hardly qualifies me as a DBA.
We're both right, depending on the definition of DBA. I do think the argument can be made that SQL Server requires vastly less effort to figure out for the majority of cases (and for edge cases there are people explaining things who assume you're not an expert).
(Tempdb, what's tempdb? Clustered index, what's that and why the hell do I need one? Escalated locks - what the hell? Isolation modes, why are they important?!?)
I wonder if those working in corporate IT realize that they're competing with Microsoft for money.
Most corporations see IT as an expense center. It's not a place where the business makes money; it's simply a place where they spend it. You can argue the validity of that idea, but whether it's right or wrong doesn't matter. It's the way most businesses think.
Businesses, of course, try to limit expenses. They especially try to limit any increase in expenses. So basically there is a certain-sized pile of money for IT to work with. If the cost of Microsoft infrastructure goes up, the difference is made up by cutting another part of the pile. The part of the pile that can be most easily managed is salaries.
>"I wonder if those working in corporate IT realize that they're competing with Microsoft for money."
Generally no, but I expect many are quickly realizing it.
Microsoft in particular has been very open about trying to sell things like Office 365 and Azure to Enterprises. They aren't afraid to be blunt and say "Hey, you can get rid of your staff if you go with this".
It's an odd relationship.
In my experience, many in corporate IT are perfectly happy to be simple facilitators - shoppers basically.
I believe the perception is "It won't be me", that being increasingly beholden to a vendor might prevent the creation of new positions or filling vacancies but will never eliminate their own positions.
The entire industry of internal IT support has hard times ahead. Anecdotally, I see XaaS being adopted surprisingly quickly. A few big VARs I've talked to have noticed this and have been realigning themselves to sell managed services instead of stopping at planning and implementation as they have in the past.
they aren't afraid to be blunt and say "Hey, you can get rid of your staff if you go with this".
During the release of, I think, Windows Server 2008, Microsoft put out an ad campaign that showed the IT staff doing the conga and other non-work things. It was a fun, upbeat message about how it made things so much easier. Only the real, unavoidable message was to HR and the executives: give us some cash and you can start sending out the layoff notices.
I'm not judging Microsoft for that; they've forever boasted that if you go with Microsoft stuff you can pay your employees less, and efficiency is good for everyone. It was just such a bizarre ad, and seemed like one of those dark-side Skittles ads, but minus the awareness of what it really was.
Yes, once you switch to ALL XaaS and ditch your IT staff, you are forever at the mercy of your existing XaaS vendor. You don't own the software, the vendor actually has your data somewhere in the cloud, and you have no one who can even make a credible recommendation on an alternative. They've got you by the balls, and BTW, the XaaS pricing is tripling next month. Muahaha!
Actually, Microsoft spends billions on marketing and employs top-notch salespeople. It is perfectly reasonable to assume they will do a better job of arguing to increase the size of the pile of money set aside for IT than you would.
Once the pile of money is increased, it is a much easier job to argue for your share of it.
My company went through BizSpark 3 years ago. We're now running everything on Windows Azure. No complaints here. Sure, I'd like SQL Azure to have a few more features, but there haven't been any deal breakers yet. C# is by far my favorite programming language and is probably what keeps me coming back to .NET & SQL Server.
I must admit though, I'm very intrigued by a lot of cool features found in PostgreSQL. Specifically HStore.
Chances are you are not dealing with banking, insurance or medical data while running on Azure. After the last several weeks, we had to assure clients that none of the data is stored on Azure. Plus, with the standard version of SQL Server or Azure you don't get full auditing or encryption.
That's correct. I actually used to work for a company which built health information systems for hospitals so I'm aware of the hurdles there. They were .NET based too. Hospital network contracts pay very well but they are absolutely the worst customers.
> Which Open Source DB does everything that MS Enterprise SQL does
They shouldn't fear a competitor that does everything they do. They should fear a bunch of competitors, each doing the set of features a given customer needs.
SQL Server is Microsoft's best software product, but why would I even bother with it since both MySQL and PostgreSQL do everything I need from an RDBMS, for a tiny fraction of the TCO and with comparable if not better performance and much better platform support? And why would I even consider it when non-relational datastores are a much better fit for most of my data? It makes sense if you are into Windows, but makes no sense whatsoever if you are not.
> Which Open Source DB does everything that MS Enterprise SQL does and why aren’t you using that instead?
I used to think the replacement wouldn't have to do everything MSSQL does - after all, nobody uses all of the features. But getting a core subset often isn't enough either - for example, Windows Azure SQL Database supports a core subset of features and datatypes, but I still constantly hear people say it's not enough to migrate their apps.
64 GB of RAM is roughly 100,000 times the 640 KB Microsoft supposedly thought people would ever need.
:D
Seriously, the point where Oracle loses the most customers is when it's time to upgrade from Standard Edition to Enterprise Edition ... or else pick an alternative. I imagine something similar is true for Microsoft SQL Server.
We figured out we weren't using most of SQL Server's features and didn't want the mental overhead of MSFT licensing. We switched to Postgresql and haven't looked back.
How big was the application (like how many developers were involved in the migration, data size, etc)?
How long did the migration take? Was the business okay with a feature freeze during that time, or did you take advantage of the switch to rewrite it too?
Did you have multiple connected systems, like ETL processes that also hit the database for other reporting systems?
Always curious about these kinds of issues because they seem to be what's holding businesses back.
It was several smaller apps written with most logic in the app (Hibernate) and very little T-SQL/stored procs. We also weren't using any of the add-ons like SQL reporting, etc. The big commercial DBs have the advantage when you start using their reporting services and BI/OLAP stuff... and likely when you have several different groups as stakeholders in the process.
I went through a similar switch from MSSQL to postgres. We moved 7 applications of various sizes over and the biggest hurdle was office politics. The migration never took more than a day for each app. ETL and reporting were not affected since we had previously integrated messaging (rabbitmq) pretty heavily into every system at the company and ETL gets their data from there.
We also switched the DB servers over to Linux and got a huge performance boost from that.
We did rewrite some stuff, but that was mostly to take advantage of Postgres features like array types (instead of an article-tags table, you'd have a tags array), which we used to speed up some applications. One we sped up by a factor of 10, just by using array types and offloading JSON generation to Postgres: select array_to_json(array_agg(row_to_json(t))) from ( some query ) t
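To make the array trick concrete, a minimal sketch (the table is hypothetical, but text[] and the @> containment operator are stock Postgres):

    CREATE TABLE articles (
        id    serial PRIMARY KEY,
        title text,
        tags  text[]   -- replaces a separate article_tags join table
    );

    -- "articles tagged postgres", no join needed; @> can use a GIN index on tags
    SELECT id, title FROM articles WHERE tags @> ARRAY['postgres'];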
I just took my first plunge into the caustic waters of SQL Server, forced there because my company is deploying a vendor-supplied shipping system that only runs on it. Since the data set would be growing beyond 5GB, SQL Server Express was not an option. Coming from a Postgres, Linux, and Java world, the whole subject makes me sick.
I was disgusted to discover that my 200-strong Mac shop would have to spend $7,000 on SQL Server. I was even more disgusted when I discovered that Microsoft no longer lets me run it on the server I purchased, a dual 6-core machine, because now EVERY core is licensed individually, and you are FORCED to license every core in the machine. You can't just license some of them. That forced me to run it in a VM, which is just asinine. And THAT means that I also had to purchase an additional Windows Server license to run a SEPARATE VM for IIS/ASP.NET.
And THEN (and this is the part that offends me most of all), I have to purchase a CAL for every damn machine that will be using the shipping system, because it's not a public website, but a private web app.
"you have two choices: purchase per core licenses at $1,793 or purchase a server license at $898 and client access licenses at $209 per client."
So, if everyone is going through the web app and the web app is not using Windows Auth (you certainly don't sound like you need to use that) - You should be able to get SQL Server for ~ $1107.
The minimum cores you can license on MSSQL, whether on bare metal, or a VM, is 4. Wherever you run it, you must license all cores available to the OS. Had I chosen to run on bare metal, I would be forced to license 12 cores (6-core Xeon CPU x 2). This app requires SQL Server 2008 R2, so the 4GB limit applies. But I can't buy 2008 R2 anymore directly. I have to license 2012, and use my downgrade rights.
You are wrong on the Windows Authentication requirement. This has nothing to do with anything.
For MSSQL, you must license the database per core if you are using it in a public web application, because you have an arbitrary number of users. And you must license all cores in the machine. The VM is the only way to license only some cores, namely, as many cores as are in your VM.
If you are using MSSQL in a private web application, you may choose CALs. However, if you do so, it doesn't matter how users authenticate. Each user or device needs a license. Microsoft is very clear about this. User count is just that: the number of users using the database, either directly, or through a web application which does everything through a single set of credentials. It's not system accounts. It's users or devices.
The same applies for Windows Server licenses. It doesn't matter how the user authenticates. If they are using a web-app hosted on a Windows Server, and that web-app is not publicly accessible (a login page does not count), then you need a CAL for each user or device.
You said "...because it's not a public website, but a private web app."
So, you do not have to license it per core. You can buy the $898 standard version and a few CALs - one for your web app and the rest for db admins.
Furthermore - even if there is some wording that says "if 100 people are hitting your private web app, you have to buy 100 CALS" - just ignore it like everybody else does.
So, your argument amounts to this: Ignore Microsoft's licensing requirements, be a pirate, and just lie through your teeth.
No, this is not what "everybody else" does, especially those who must answer to shareholders, compliance personnel, and a management team that actually cares about staying within the bounds of the law, and not paying thousands of dollars in civil penalties should we be found in violation of a licensing agreement.
This is why we develop everything we can on an open source application stack. But we can't write everything from scratch. We aren't going to spend $100K+ developing an entire enterprise shipping system from scratch. That means we must purchase proprietary solutions that will cost us much less and have a greater ROI. That also means licensing compliance.
I only said one third of that. I also said that I don't think you're as restricted with intranet web apps as you are with public web apps, but I could be wrong...
Anyway - you can lie if you want to and I certainly won't judge you - but I didn't say you should lie or be a pirate. I said ignore the license. Big difference. (From your original post, it sounded to me like you probably don't have shareholders. Do you?)
Are you aware that vast numbers of small, medium and even larger size businesses are running SQL Server without the proper licenses? Are you also aware that Microsoft knowingly allows this to happen with a wink and a nod? Just like Windows and Office...until XP/2003 when they turned to activation. They didn't do that for SQL Server or many of their other products as far as I can remember. Could happen, but we'll see...
So yeah, advising you to do what millions of other business do too - I have no problem with that. (I also have no problem bribing the locals if that's the normal course of business. Don't think of me as immoral - I'm a realist. Big businesses squeeze everybody in one way or another, so if you can get away with it - it's great advice. The risk goes up the larger your company gets, obviously...)
Also, SQL Server 2012 is 100% backwards compatible with 2008 R2. So Express 2012 would definitely work (Microsoft is legendary for their backwards compatibility. Just sayin'.)
What kind of business are you running 200 Macs for anyway?
I'm aware people lie all the time. I'm aware that sometimes they get caught. I'm aware that when you owe millions of dollars to a bank, the auditors actually check this stuff.
"I didn't say you should lie or be a pirate. I said ignore the license."
And with that you lose the argument, and all credibility.
Enjoy the lame Mac-based "business" infrastructure that you built. Maybe if you'd gone with Windows to begin with, like every other business on the planet - you'd have saved enough money so you wouldn't be complaining about Microsoft's server licensing costs right now.
I bow to your manifestly superior entrepreneurial prowess. Obviously, I am too stupid to know how lame Macs are. That explains why my company has maintained a paltry year-to-year growth rate of not less than 50% for the last 15 years.
Oh, and for the record: "Am I bovvered? Look at my face. Am I bovvered though?"
Please, excuse me if I don't believe you in the slightest. I'm sure though, if you tried really hard, you could even attribute your imaginary profits to your use of Macs. I'd love to see some of your hipster logic in detail.
Anyway, say you're making good money and the only product that can apparently fill your needs properly requires SQL Server... and yet you're still complaining? With all that money you have? One would think you'd be happy to have even found a product that does exactly what your business needs.
Did you wonder though why there are no products for you that run on OS X? (It's a real mind-boggler for you, I'm sure.)
I'm still laughing at the fact that you've (allegedly) spent well over $100K on overpriced Apple hardware and yet you have the audacity to complain about spending a fraction of that on some software that you actually need. Really...thanks for the entertainment :)
You're a funny guy. Ready to call me Hitler yet?
I never said our infrastructure was Mac-based. Our user machines are Macs. Our infrastructure is almost entirely Linux/Java/Postgres. We are in the printing and graphics arts business. Hey, imagine that, a Mac-dominated field - but Macs are so lame that no one could possibly have a good reason for using them, right?
There's a reason we don't use Mac servers (aside from a file server). They suck. Apple has sucked at servers ever since they abandoned the enterprise, cancelled the Xserve (which we never used), and refocused their server product on small business and home use.
We also run Windows terminal servers, press controllers and RIPs, legacy shipping systems, etc. Our CAD team uses Windows. I've been in this industry for 20 years, and have worked with pretty much everything out there in common use.
As for attributing our quite real profits to the use of Macs - don't be a moron. Our profits are the result of a world-class management team with whom I am privileged to work.
You are excellent at stereotyping, and obviously have a vendetta against Apple, and by extension, anyone using Apple products. I'm sorry for you. We are not mind-numbed robots. We have reasons for the business decisions that we make. And we, despite your consternation, have been just as successful as I have asserted.
I certainly don't need to know you or even have a "vendetta" against Apple to have a good laugh at someone complaining about SQL Server pricing when they've happily paid Apple for the privilege of running OS X.
I'm going to let you have just one more "last word" here though because that seems to be important to you. Good night, my fellow comedian :)
Well, that's your assessment of it. Plenty of small and medium sized businesses have been doing it without any problems for over a decade. Same thing with MSDN subscriptions - you're supposed to buy one sub per developer and many places simply buy one and then proceed to install Visual Studio on 20 computers.
I'm pretty sure I wrote about the history of the Enterprise vs. Standard decision making on my blog (http://hal2020.com) but I can't find the post right now. I'm one of the people responsible for the original philosophy, and I don't think it's changed much. I'll go back and look for it again.
Basically you have three dynamics going on. The reality check of course is that the competition is Oracle and IBM DB2, and to a lesser extent open source databases and various analytics products. Check out Oracle's price list and SQL Server Enterprise remains inexpensive. And Microsoft has introduced cheaper editions to keep "free" open source options somewhat at bay, though the truth is that without multiplatform support there is nothing they can do to really capture that segment of the market.
Standard Edition exists because I couldn't convince my then-boss that we should bifurcate it into a couple of sensible products, one slightly lower in capability and one slightly higher. The slightly higher one would have been a "Small Business Enterprise" edition that included many of the features of Enterprise but somehow retained differentiation from full Enterprise, and would have been dramatically less expensive. I had a differentiation, but I don't recall what it was. The reason the bifurcation was rejected was that Standard was the edition that matched earlier versions, and we didn't want to piss off customers by forcing them into a more expensive or less functional edition. And we didn't want to complicate the world with yet another two editions. So the status quo was maintained. BTW, this is a late-90s discussion.
The next dynamic is that there are a lot of features which cost a crapload to engineer but don't increase product volumes substantially. This is the primary driver of what becomes a candidate for Enterprise rather than Standard. When you are investing tens of millions of dollars in a particular feature's engineering, you want some way to get a return on that investment. It really is that simple. Almost.
There is (or was) a re-analysis each version of what goes into each edition. My philosophy was that you introduce new enterprise features in the Enterprise Edition, then examine moving them into Standard Edition in subsequent releases. So there is a constant stream of new high-end features flowing into Enterprise, then as they become part of the mainstream thinking you push them (or appropriate subsets) into Standard. But that philosophy was never adopted and so the effort seems far more haphazard than I'd wanted it to be. Customer and competitive pressure will result in capabilities being pushed into Standard, but it doesn't seem to happen in a rational way.
Max memory size and high-availability features were the original differentiators when Enterprise Edition was introduced as a mid-life kicker for SQL Server 6.5. In the case of memory it was an actual technology differentiator back in the mid-90s on 32-bit machines. That it has survived through the 64-bit transition is shocking. But the reality check is likely that very few servers actually have more than 64GB, despite today's hardware prices, and thus Microsoft sees it as an acceptable differentiator. High availability should remain a differentiator, though a simple subset does need to be in Standard. That the current subset is actually deprecated is, ummm, loony.
Customer demand for capabilities in editions other than Enterprise, or competitor moves, will lead to Microsoft changing the balance between Standard and Enterprise. But it isn't a few sophisticated DBAs/developers/etc. calling for the change, or a niche or flash-in-the-pan competitor. It is an actual shift in market dynamics.
> Brent, to which db flavor do you recommend people migrate? Thoughts on Postgres?
I'm not the right guy to ask on that one - I just don't have enough experience with alternative platforms. You're in the right forum though - as long as you follow the news on HN, you'll do a good job of choosing the right data storage platform for your needs.
I'm thinking your post would benefit from a recommendation as far as the "next closest thing" to MSSQL, since you're exhorting DBAs to make a transition, and minimizing friction would be an important aspect of it.
> I'm thinking your post would benefit from a recommendation as far as the "next closest thing" to MSSQL
Then by all means, add it in the comments and I'd be glad to point to it. I'm just not qualified to make that recommendation myself. My experience with other platforms amounts to importing data into MySQL for WordPress, heh.
That's bullshit. A more accurate statement would be that large companies have a lot of momentum and often have the turning radius of an aircraft carrier, but that has absolutely nothing to do with technology choices. The age of "no one got fired for choosing IBM" is over.
At my company, the team using Java+PostgreSQL is twice the size (~120 people) of the team using C#/MSSQL (~60), and the only ones really "stuck" right now re: tech are the ERP folks, because that has such a huge switching cost in the change management/training side of things, not to mention the whole "what do I do with all my legacy data when it doesn't have a natural fit in my new ERP?" problem.
IME, large firms (including government agencies) don't like not having a support contract in place for key infrastructure.
They don't seem to mind open source as such, and will use open source without support contracts for tools that aren't key infrastructure (or for libraries for internally-developed software), and probably wouldn't mind open source with a support contract for key infrastructure (but for a lot of things, proprietary s/w vendors are just better at selling support than the people selling support for open source software.)
Large (tech) company employee here. For the project I work on, most of our production stack is made out of FOSS software that's wrapped up in support contracts.
I don't think this is a useful mindset. Grids are very handy comparison tools. Also, licensing costs are only part of the overall cost of running a service. 'Serious businesses' will not run on a piece of software that is not supported by either a vendor or their own staff. Not everyone is comfortable fixing bugs in their DB platform, or relying on the community to do it.
Before AGs came out, they used a custom log shipping script for DR, and mirroring for the critical DBs. I don't think they would have gone Enterprise just for the AGs, but since they were on EE anyway, AGs were an easy decision.
SQL Server has been getting much more expensive because their current userbase is captive (meaning the cost, complexity and risk of migrating to another database system is enormous because of a heavy integration with SQL Server specific features). I don't imagine a large number of new systems are being built around SQL Server, apart from those at shops that are already captive.
It is an excellent database, but I do get a chuckle that by far the greatest benefit being pushed for SQL Server 2014 is in memory tables (which is something that SQL had -- at least for temporary tables -- back in the 6.5/7.0 era, but then had removed). While it is hardly identical, an approach we did on one team is to have SQL Server take 64GB (note that it is per instance, and most server deploys see many instances on a single server, so that isn't quite as prohibitive as it might sound) and then have an enormous RAMDISK on which tempdb would be created. Made use of the RAM, and saved enormous amounts of IO (tempdb is the weak link of almost all SQL Server platforms, as everything hits it, especially if you make use of snapshot isolation / row versioning).
> greatest benefit being pushed for SQL Server 2014 is in memory tables (which is something that SQL had back in the 6.5/7.0 era, but then had removed)
Sorta kinda - back then, we had DBCC PINTABLE, and it would cache the data, but it would still require that the transaction logs and data pages be written to disk. Now with Hekaton, you can specify that a table never hits disk, ever. (Don't get me wrong, I don't think people are going to adopt that particular feature due to other limitations around datatypes and supported syntax.)
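For anyone curious what that looks like, here's a rough SQL Server 2014 sketch; the table is hypothetical, and it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup:

    CREATE TABLE dbo.SessionCache (
        SessionId   int NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Payload     nvarchar(1000),
        LastTouched datetime2 NOT NULL
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
    -- SCHEMA_ONLY: the table definition survives a restart, but the data never touches disk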
Sorry, I was not at all clear: I was speaking of temporary tables, or basically anything intended to be ephemeral. Back in the 6.5 days you could flag tempdb as being RAM-resident automatically - http://support.microsoft.com/kb/115050. Load up some global temp tables on db startup and rock and roll.
It made tremendous sense because that's often an area of enormous churn. (I saw a ServerFault post or something where someone did this and saw 30 IOPS to it, wondering why it didn't improve their system. That is nothing. One product I worked on saw literally hundreds of thousands of IOPS, and everything that sped up tempdb sped up the world.)
Ah, temporary tables. Tell me, does SQL Server still have a single shared tempdb that is used by ALL databases on the server?
If so, I can't understand why. Hash joins? Often use tempdb. Snapshot isolation (otherwise known as multi-version concurrency by every other major database vendor)? Uses tempdb. Reindexing? Uses tempdb. Temp tables, cursors or user objects used across your databases? Use tempdb.
I can't for the life of me understand why Microsoft built in this bottleneck into their database server!
Don't mean to shoot the messenger here, but if SGAM contention happens due to high tempdb usage, surely that's an architectural problem? From my reading this only happens when a lot of user objects are allocated - again in the tempdb!
I'd love to know why Microsoft do so many things in a shared resource. In Oracle you can set up multiple temporary tablespaces; I'm curious to see what Postgres does.
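For what it's worth, Postgres lets you list several tablespaces for temporary objects and spreads temp tables and spill files across them. A minimal sketch; the paths are hypothetical, but temp_tablespaces is the real setting:

    CREATE TABLESPACE fasttemp1 LOCATION '/mnt/ssd1/pgtemp';
    CREATE TABLESPACE fasttemp2 LOCATION '/mnt/ssd2/pgtemp';
    SET temp_tablespaces = 'fasttemp1, fasttemp2';  -- temp objects are distributed across the list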
I would guess that SQL Server has been getting more expensive largely because the people who decide these things see themselves as competing with Oracle more than with free alternatives, and (I believe) SQL Server is still cheaper than Oracle.
Disclaimer: I work at Microsoft in "the SQL org" but I'm just a peon so what do I know?
> and (I believe) SQL Server is still cheaper than Oracle.
That explains the pricing of Enterprise, but not the crippled memory (64GB) or the lack of decent features in Standard Edition. Standard doesn't compete with Oracle - or if it did, it would get laughed out of town for a 64GB memory limit.
Standard used to be a gateway drug that would get people hooked on good-enough performance and easy-enough management, but these days, crippled with $500 worth of memory, I don't think that reputation's going to hold out.
Sorry, but 64GB of RAM is a LOT... for that matter, a DB install that would need 64GB of memory is probably not something you'd call anything short of "Enterprise"...
Beyond that, if you need something at that scale, often SQL isn't necessarily the right choice.
It's $500. When you consider that bringing in a performance tuning consultant can easily cost thousands of dollars per day, $500 worth of memory isn't much at all.
> for that matter a db install that would need 64GB of memory is probably not something you would call short of "Enterprise" ...
Remember that database servers use memory for at least 4 things: caching data, workspace for running queries, workspace for TempDB, and caching execution plans. If you run DMV queries against sys.dm_os_buffer_descriptors, you might be surprised at how little of your memory is being used to cache data. Even a 50GB database can get to a point where it's having to hit the disk in order to satisfy queries.
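A rough way to see this for yourself; this is the standard per-database breakdown of the buffer pool, counting 8KB pages:

    SELECT d.name AS database_name,
           COUNT(*) * 8 / 1024 AS cached_mb
    FROM sys.dm_os_buffer_descriptors AS b
    JOIN sys.databases AS d ON d.database_id = b.database_id
    GROUP BY d.name
    ORDER BY cached_mb DESC;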
This is the age of "big data", as much as I hate to use that term. It's the age of 256GB USB thumb drives, laptops with 32GB of memory, and storing files in the database server. 64GB isn't large for databases anymore - in fact, it's a fairly unusual week when I'm dealing with a database that small.
Sorry I still don't get it. A database that actually used all 64GB of MEMORY--not disk--would store billions of customer records, and yet this is a small business? What small business stores billions of records of anything?
Yes I have seen many extremely poorly designed schemas that did take up huge amounts of memory, but that is easily corrected before launch. Don't store everything as a string, that's one way. But there are more reasons such a schema needs to be fixed, other than the $ cost of memory.
These days, it's fairly common to store larger fields or even files in the database. SQL Server's FileTable and FileStream fields are designed for that exact use.
Plus, remember that one server can be used for multiple databases. In SQL Server, it's fairly common to see a single server housing the accounting software, the payroll software, the email auditing back end, VMware Virtual Center's databases, etc.
How do you figure you could store "billions" of customer records in a 64GB memory space? That's 68 billion bytes, and you lose a very significant portion of it to things that aren't base table storage. Never mind cached query plans.... how about indexes? If you consider a table containing a customer name, address, telephone number, and a couple of other basic pieces, you could be looking at a few kb for each record. That'll get you closer to a total potential storage of 20m records. Not billions.
Oh, and I have seen small businesses running SQL Standard with databases exceeding 500GB and individual tables with over 1.5 billion rows -- and the tables were designed efficiently! They couldn't afford Enterprise because of the tight profit margin nature of their line of work. What I'm saying is, don't discount the data needs of small business.
Telecom. We store tight, tiny columns. We've got 100s of GB of data. Many records are transactions that earned us nothing (call didn't connect, yet tried several attempts). We're not a large business by any measure.
Even smaller companies in other fields might want to store tons of rows. User action data, for instance. Living in the past and insisting 64GB of MEMORY is somehow huge is just being silly.
64GB is still a fairly large amount of RAM for database systems, and a server with that much is still largely considered enterprise-class. I know of several multi-billion-dollar companies running their financials and OLTP systems on servers with <64GB of RAM. Also, people don't generally buy over-the-counter RAM for their servers; they're spending extra on error-correcting ECC RAM.
Most servers aren't serving a single large database, but multiple databases with multiple applications. Plus you add some ETL via SSIS, some reporting, a few ad-hoc queries, etc., and memory goes like Doritos and beer at a Mississippi Super Bowl party. My smallest test SQL Server for development that isn't virtualized has 32GB, and we are considering changing its role to something less taxing...
For $4,000 you can get yourself a shiny new Dell rack server with dual 8-core E5 Xeon CPUs and 192GB of memory.
To license SQL Server Standard on that inexpensive device would cost $28,640, and would, as the blog post mentions, limit you to 1/3rd of that memory per instance.
Databases live on memory, and 64GB just isn't a lot these days (nor is it "Enterprise" when it is vastly exceeded by a very small workgroup server). Their point is absolutely valid, and it is very strange that while memory capacities have exploded, the memory limit is the same as it was in SQL Server 2008 R2 (prior editions didn't handicap memory like this).
I don't know much about MS SQL Server or databases in general. What would you guys consider free and open source alternatives to the MS and Oracle Enterprise offerings?
PostgreSQL. MySQL is missing far too many basic features to be considered. One of the big reasons so many fad-driven developers jumped ship from MySQL to NoSQL "DBs" is that MySQL can't even do online schema changes.
I think the tension might be that looking back, we see many big (money making) operations that ran on far less than 64GB(1), but looking forward we can all think of fun hacks we can do on cheap and common 64GB machines.
It's another way of saying that yeah, that's old-world pricing.
(1 - I remember when friends told me they were real-time tracking all the telephone calling cards in a moderate sized country, with a Sun and 1G of memory.)
Microsoft's BizSpark (free software like SQL Server for startups and free credits for Azure) is chugging along. They're at least making an effort to convince new startups to build on their stack.
Absolutely, and I'm an alumnus of it (yuk yuk, love that saying). The program is an incredible, ridiculous value.
While you're in it.
Then you leave it and start to try to scale out that Windows/.NET/SQL Server/App Fabric deploy you've built, and the expenses start to become very real.
We're a Microsoft shop ( "gold partner" ). We have to throw nearly $1m at them every year and they reward us with expensive license audits and absolutely fuck all support.
It still takes us weeks to get a hotfix for something yet if they think we owe them something, it's instant audit.
It's bloody depressing really: products are going to shit, prices rising, support declining and we're treated like criminals.
That's where you BizSpark customers are going to be the moment it expires. You have been warned.
That's funny, I see it differently. Windows 2012 is finally coming of age with R2, Hyper-V is starting to shine competitively, PowerShell is kicking ass and taking names (Desired State Config FTW), Visual Studio is finally getting better C++ support, and Windows 8 has Hyper-V built in plus much better performance overall. Exchange has improved leaps and bounds when deployed correctly (a hardware load balancer instead of its crappy MAC address cloning hack), and so on and so forth. The fact that you essentially got $1m of free software and support over the, what, 3-4 years in the program should have allowed you to understand your known expenses and plan accordingly. Also, Microsoft isn't nearly this evil unless you're doing something wrong; just license Datacenter copies or, hell, move to Azure and save even more money.
Hyper-V is years too late. We've had VMware since 1998, and Xen, KVM, etc. for years as well.
PowerShell - a shit crock from end to end. It's a rotten mess of an over-complicated, inconsistent environment that barely works and performs abysmally at the best of times. They can't even get the story straight on dates: half of it is UTC and half local time, and when you mix them, boom!
Desired State Config - We had this 20 years ago on Sun machines, plus it doesn't work very well and is not holistic.
VS2012 - yeah, a little better, but it still crashes 5 times a day with HRESULT errors and throws your workflow out the window.
Intrigued by your statement, I googled around and haven't been able to find any articles that cast doubt on Azure's suitability for financial, insurance or healthcare data, nor any that claim those industries have a poor attitude toward Azure in general. Actually, most of what I have encountered are fairly enthusiastic articles, case studies and white papers claiming just the opposite. Would you care to provide some information to substantiate your claims? Thanks.
License compliance with Microsoft software is a lot of trouble and risk, and it, more than the paper cost, is often the worst thing about using Microsoft stuff.
"Simple" scenarios like maintaining a warm replicated data center becomes a ridiculous exercise of license guessing, where every Microsoft rep tells you something different. Licensing newer instances of Sharepoint is simply a riot.
I'm not saying there aren't advantages, but it's nice as platform and system architects to plan out hardware and software and be done with it, and not get distracted for months discussing exactly which stack of licenses you need from Microsoft.
Or you could just ignore it and hope they don't come knocking. BizSpark is actually a great example where Microsoft encourages that -- the graduation benefits are in many ways incredibly vague (not to mention that the program duration and benefits are changing constantly). Right now when you graduate you get:
up to 4 Windows Servers (Standard Edition) and 2 SQL Servers (Standard Edition)
What does that mean? Per-user/device copies of SQL Server? That would be completely useless to any company serving the internet in any way. Two-core copies? Again, laughably anemic. What does it mean?
Yet almost no one actually talks about what it means because who cares, right? Until the day that the BSA comes knocking because a disgruntled former employee left a tip.
Totally agree. The licensing is ridiculously opaque and complex.
As a simple example: If I understand correctly, BizSpark lets you install a couple of copies of Office and use it for regular word processing tasks within your startup. However, you cannot install the BizSpark Windows 8 desktop license -- those are only for testing and development. I think. There's no indication of any restrictions on the license key download page.
THIS! This is one of the two largest arguments in favor of OSS. With a PG server all I have to do is throw it up, and not worry about the licensing. That's extra time back in my day for real work. No stress, no fuss, no muss.
They think we're captive. We're mostly not these days.
We have our platform going through testing against Postgres and SQL Server at the same time thanks to the use of NHibernate. We have already done a successful trial migration which moved an entire vertical subsystem over.
There are a few issues to iron out but we can switch out.
We're not paying for SQL 2014, having been totally fucked over on SQL 2012 licenses due to the CPU-to-core license change, and our hefty DB cluster has lots of cores.
Because they are another form of coupling, which is hellish to get out of if you want stuff to be portable -- and as we found out when SQL pricing went through the roof, portability is definitely a desirable feature in your application...
Also we use an ORM (NHibernate) which abstracts the entire query and schema away from us. We load/perf test that and get on with life. If there are any blockers, it's 99.9% architectural or loading related which we cover with test cases.
For us, the database is the hole we put our shit in when we don't want it in memory any more. Nothing more.
In your other comment, you mention that "our hefty DB cluster has lots of cores", and that this caused licensing to be much more expensive. Have you considered that maybe the abstractions and attitudes you're using have inflated the amount of hardware you need?
NHibernate, for example, can make it extremely easy to generate extremely poorly-performing queries. It often takes much more care and effort to have it generate mediocre queries than it takes to write good queries and any binding code by hand.
The "abstracts the entire query and schema away from us" and "database is the hole we put our shit in when we don't want it in memory any more" attitudes don't help reinforce the idea that you guys know how to use relational databases properly.
When teams go out of their way to remain as ignorant as possible about relational databases, while also using abstractions that are often inefficient, they shouldn't be surprised if their hardware needs (and their licensing costs, in some cases) balloon due to this inefficiency.
The question is, what's cheaper- labor for optimizing the queries + paying (possibly reduced) SQL server licensing, or keeping the "mediocre" queries + throwing in more hardware?
99% of the hinting I've seen people "need" was not needed at all; it was just the lazy way to temporarily work around a bad query plan. Other than actual bugs in the query planner, I can't even invent a hypothetical scenario where the tools PostgreSQL already gives you to influence the query planner couldn't solve the problem.
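For the curious, those tools are mostly cost parameters and session-level enable_* switches rather than per-query hints; a sketch, with a hypothetical query:

    SET random_page_cost = 1.1;   -- tell the planner random IO is cheap, e.g. on SSDs
    SET enable_nestloop = off;    -- blunt session-level switch to steer away from a bad join plan
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;  -- then verify the plan actually changed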
> I thought (although I could easily be mistaken) that one of the big features about SQL Server 2014 was its lock free tables [1].
Same feature, just a different benefit. Hekaton's tables are pinned to memory, use a different storage structure, can have their T-SQL statements compiled to native code, and avoid locking.
All tables in SQL Server will be memory-resident if there is available RAM. But it happens dynamically, or is non-deterministic if you like. I assume the in-memory feature allows you to force a table into RAM and force it to stay there.
> All tables in SQL Server will be memory-resident if there is available RAM.
Right, but Hekaton introduces the option of never requiring your tables to hit the disk if you don't want them to. Think data warehouse staging tables, web session state, or data marts that are easily recreated from source data.
For sure, and a common strategy when you have large amounts of RAM is to pre-emptively prime the cache by doing unnecessary selects of all of the principal data and index sets.
However the issue I'm talking about is primarily churn -- the writing to disk that is often the weak link in larger data systems.
Many completely banal queries generate potentially enormous loads on tempdb without the user ever knowing. And if you're using temporary working tables for large-scale processes (common in financial data, where based on the requested data you need to build up from the base data, climbing the mountain until you yield the final rendered result set), there may be hundreds of GBs of churn to tempdb in between.
And then there's just generally ephemeral data that never, ever needs to live through a server restart, and that you never want fighting for the probably very limited IO.