I concur. It's not the most powerful in all aspects or for all applications but, most of the time, if you want an enterprise grade RDBMS (and you're happy to pay) it's the one to go for simply because by comparison with the others it's so easy to work with.
The ease of use makes it suitable for smaller shops as well. At the lowest level you install it and it just works, and it's backed by a great tooling ecosystem and excellent documentation. It really does stand in absolutely stark contrast to, for example, Oracle.
I wouldn't call it idiot-proof, but it's certainly friendlier than most RDBMS. (That's not to say there aren't issues and annoyances with it, mind, but that's true of any product.)
Beyond this, if you need a relational store and you really, really don't want to have to give a monkey's about it, I'd say SQL Azure is probably the way to go.
(Saying all of this, I recognise that lots of people will be able to tell war stories about the torrid time they had with SQL Server, or with any other data store. My experience has largely been that it's a winner, though.)
Anyone care to elaborate on how well it has aged? When I last used SQL Server in 2010, it was indeed pretty cool, especially the awesome ecosystem (SQL Studio, Reporting Services, graphical Query Planner, etc.).
Then I went to MySQL because my startup couldn't afford a database and it was a pretty large step down.
Then at the next startup, I got used to PostgreSQL, and by now I'd have a pretty hard time not being able to use JSONB, window functions, or CTEs - it made me realize how far behind most other DB systems are. Aside from wanting a really nice GUI for PostgreSQL, I'm a pretty happy camper by now.
Yes, SQL Server 2016 has native support for JSON; CTEs have been in since 2005, and window functions, I believe, were also added in 2005.
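For a flavour of all three features in one query (table and column names here are made up for illustration), something along these lines should work on SQL Server 2016+:

```sql
-- dbo.Orders is a hypothetical table (CustomerId, OrderTotal).
-- CTE + window function (both available since SQL Server 2005),
-- then FOR JSON (new in SQL Server 2016) to shape the output.
WITH ranked AS (
    SELECT CustomerId,
           OrderTotal,
           ROW_NUMBER() OVER (PARTITION BY CustomerId
                              ORDER BY OrderTotal DESC) AS rn
    FROM dbo.Orders
)
SELECT CustomerId, OrderTotal
FROM ranked
WHERE rn = 1          -- biggest order per customer
FOR JSON PATH;        -- emit the result set as a JSON array
```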
Some other massive improvements are columnstore indexes for absolutely ridiculous OLAP read performance and compression (we are seeing 90% compression and sub-second lookup times on multi-billion-row fact tables).
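For anyone curious, converting a fact table to columnstore is essentially a one-liner (the table name below is invented); the column-by-column compression is where savings like the 90% figure above come from:

```sql
-- Hypothetical fact table. A clustered columnstore index
-- (SQL Server 2014+) replaces the rowstore layout entirely,
-- storing and compressing each column separately, which suits
-- repetitive OLAP data and large scans.
CREATE CLUSTERED COLUMNSTORE INDEX ccx_FactSales
    ON dbo.FactSales;
```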
No master/master - that's the only thing I would consider missing compared to Oracle. It scales up, but not out. Peer to peer replication technically is master/master, but calling that clunky would be a kindness.
The short of it is that it's pretty good. They call it "Always On", and you can do both asynchronous and synchronous, with the latter offering automated failover.
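As a sketch (the availability group and replica names are made up), switching a replica between the two modes looks like this:

```sql
-- Hypothetical AG "MyAG" and replica "SQLNODE2".
-- SYNCHRONOUS_COMMIT + AUTOMATIC failover gives the automated
-- failover mentioned above; ASYNCHRONOUS_COMMIT trades that
-- away for lower commit latency on the primary.
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON 'SQLNODE2'
WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
      FAILOVER_MODE = AUTOMATIC);
```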
IMO, it's matured very well. I'm always surprised by how good performance is in a virtualized environment compared with Oracle using the same front-end enterprise applications.
The biggest advantage IMO is that you can get strong performance out of SQL Server with relatively little DBA knowledge compared with Oracle. Installation is easy, integrated authentication is easy, and backups are easy. Running a cluster and doing replication is an area where all RDBMSs need serious improvement, but initial setup and working with a relatively stagnant schema is straightforward.
The one area where SQL Server has been lacking is a native Linux client (which is hopefully alleviated now). Not having one is the only real drawback that I run into.
Absolutely agree on the performance front. If you're starting out, generally SQL Server will run well out of the box, and if you upgrade hardware and reinstall on the new hardware it'll generally work well without having to do anything else. At some point you might have to start thinking about I/O and how best to arrange files but maybe not for a while, depending on what you're doing. (Just to be clear, let me restate: I'm talking starting out here - this is obviously NOT the approach you'd take when building or upgrading a large, complex application - I'm talking basic CRUDdy type apps here.)
Contrast this with some of the NoSQL platforms, where you're immediately forced to think about topics such as sharding and clustering (I'm looking at you, Couchbase), and SQL Server's pain-in-the-ass factor is much lower.
Key point: you don't need much of a clue about SQL Server administration to start using it, but you perhaps do with Couchbase, Elasticsearch, etc. From an ease of use point of view with NoSQL, I'd probably point to MongoDB as having the most approachable learning curve.
There's also no BS about having to keep everything in memory to keep performance acceptable (I'm looking at you, again, Couchbase - that's two strikes). The whole point of SQL Server and other RDBMSs is that they continue to work well when the amount of data you need to store vastly outweighs the amount of physical memory you have. Of course, if you can get everything into physical memory it will perform better but, even if you can't, it will still perform well.
Of course, if you're using SQL Server you do need to learn SQL (true for any RDBMS), which can be off-putting. It's absolutely not a beautiful language - in fact it's bloody aggravating when you write a lot of it - but it is extremely powerful and excellent for working with sets of data arranged in rows (i.e., tables). Nothing else really comes close.
Here are a couple of pain points from a dev who doesn't know any other database:
1) Lack of deferred constraints. To me it's natural to think that database constraints shouldn't apply while you're shuffling things around during a transaction. Other databases support this, apparently.
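For reference, this is roughly what deferred constraints look like in PostgreSQL (table and constraint names invented); SQL Server has no equivalent:

```sql
-- PostgreSQL syntax; SQL Server does not support DEFERRABLE.
ALTER TABLE child
    ADD CONSTRAINT fk_child_parent
    FOREIGN KEY (parent_id) REFERENCES parent (id)
    DEFERRABLE INITIALLY DEFERRED;

BEGIN;
UPDATE parent SET id = id + 1000;               -- child rows dangle here...
UPDATE child  SET parent_id = parent_id + 1000;
COMMIT;  -- ...but the FK is only checked now, so this succeeds
```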
2) Lack of multiple cascade paths. Why should I have to implement this functionality in triggers?
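To illustrate with a hypothetical schema: SQL Server rejects the second cascading FK below at CREATE TABLE time, because two cascade paths would reach the same table:

```sql
CREATE TABLE dbo.Users (Id INT PRIMARY KEY);

CREATE TABLE dbo.Teams (
    Id      INT PRIMARY KEY,
    OwnerId INT REFERENCES dbo.Users (Id) ON DELETE CASCADE
);

-- Two cascade paths reach dbo.Posts (Users -> Posts and
-- Users -> Teams -> Posts), so SQL Server raises error 1785:
-- "may cause cycles or multiple cascade paths".
CREATE TABLE dbo.Posts (
    Id       INT PRIMARY KEY,
    AuthorId INT REFERENCES dbo.Users (Id) ON DELETE CASCADE,
    TeamId   INT REFERENCES dbo.Teams (Id) ON DELETE CASCADE
);
```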
3) Why can't you create a column in the middle of a table? I know it has something to do with the column order being tied to the actual physical storage order, but why are they connected in the first place? Just show me the columns in the order I specify, I don't give a damn what order they're actually stored in (or should I?)...
Re 3) The typical answer is "why would you ever want to insert a column in the middle of a table?" The logical order of columns is not connected with physical storage: MSSQL will physically arrange columns in an order that ensures the most effective performance (generally smallest data types at the beginning, largest at the end). Microsoft could make it so that a new column is inserted at any position in the table definition; there just isn't any real reason to do it. You have to specify the column order in your SELECT statement anyway, so... (SELECT * is bad practice for anything serious.)
EF will deal with much of this for you because it uses "empty" rows so FK references can always be satisfied during a transaction, but this obviously slows things down.
With (2) I'd question what you're doing. I'm not saying absolutely don't use them, or that what you're doing is "wrong", but triggers often result in side effects that cause problems further down the line. They can be OK for auditing (but consider other options such as event sourcing if you have this kind of requirement), but the moment you start putting business logic in them, or hanging it off data modified by a trigger, you've started down a potentially dark path. The reason is that suddenly DML operations have side effects that may not be anticipated by people working with the database in future. In fact, depending upon how permissions are set for developers, they may not even be aware that triggers exist, leading to unexpected results, "weird" performance problems, etc. I've run across this recently in a moderately sized client database: ~500 tables, ~2500 stored procedures, and a few dozen magic triggers that most people are unaware of, which manifested themselves when we collected execution plans for some query tuning ("Where's all this stuff coming from? ... Oh.").
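One cheap defence against magic triggers is simply to enumerate them; a query along these lines lists every DML trigger and the table it hangs off:

```sql
-- Lists DML triggers defined on tables/views in the current
-- database, so nobody is surprised by them during query tuning.
SELECT t.name                    AS trigger_name,
       OBJECT_NAME(t.parent_id)  AS table_name,
       t.is_disabled,
       t.is_instead_of_trigger
FROM sys.triggers AS t
WHERE t.parent_class_desc = 'OBJECT_OR_COLUMN'
ORDER BY table_name, trigger_name;
```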
With (3) I'd just be interested in hearing your use case.
> Windows may be worth getting rid of, SQL Server isn't (except perhaps on grounds of licensing cost)
I second that. I am not a big fan of Windows, but SQL Server has not given me the slightest trouble so far, except in cases where I was kind of asking for it.
> SQL Server isn't (except perhaps on grounds of licensing cost)
What about supported applications? Most frameworks I've worked with (e.g., Django) support MSSQL as a second-class citizen, while others (e.g., Postgres) are much better supported.
Speaking to some folks from Microsoft at RubyConf last year, I know they are actively bringing in developers for the various adapters to try to improve this story. I believe this is a matter of the frameworks choosing first-class support rather than Microsoft. (I've been running a Rails app backed by a large "legacy" SQL Server for 3-4 years.)
> I believe this is a matter of the frameworks choosing first-class support rather than Microsoft.
The problem has generally been the license and the environment.
Any framework wanting to support MSSQL needed to have MS Windows and MSSQL licences for CI (plus developers), as well as the infrastructure to run Windows on the CI (I've no idea how you'd do that, so there's also a knowledge barrier).
There are few things which can be more embedded in your app's architecture than the data store, especially if the initial (and reasonable) assumption is that you won't ever switch away from it.