If you subscribe to the idea that the bit of your app that is coupled to the database should be very thin, then abstractions that live between that layer and the database tend to be useless extra complication. And that's the best case.
I've found that those two extra steps in the middle typically add negative net value.
They don't really make the code more readable; LINQ is so close to basic SQL that I can't fathom how a developer who can learn to write LINQ queries wouldn't be able to grok SQL.
They cut you off from the full power of the database by limiting you to a sort of least common denominator version of the things you can do with real SQL (e.g., no PIVOT or MERGE if you're using SQL Server).
They make the code harder to properly understand, because it's not readily apparent what will execute on the database. Some expressions will compile to SQL, some will execute in code, and some will execute in code and also force a bunch of other stuff that might have been compilable to SQL to execute in code instead.
They will probably remove some boilerplate, but you can just as easily remove that boilerplate with a tool like Dapper that doesn't come with all those other downsides.
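To make the earlier point about "what executes where" concrete, here's a minimal C# sketch (using LINQ-to-Objects as a stand-in provider, so it's self-contained; against EF the first query would become SQL) showing how one `AsEnumerable()` call silently moves all subsequent work into application memory:

```csharp
using System;
using System.Linq;

class Demo
{
    static void Main()
    {
        var data = new[] { 1, 2, 3, 4, 5 };

        // IQueryable<T>: operators build an expression tree that a real
        // provider (e.g. EF) would translate to SQL and run on the server.
        IQueryable<int> q = data.AsQueryable().Where(x => x > 2);

        // After AsEnumerable(), every further operator compiles to a delegate
        // and runs in process; against a database, all matching rows would
        // already have been pulled across the wire at this point.
        var inMemory = q.AsEnumerable().Where(x => x < 5);

        Console.WriteLine(string.Join(",", inMemory)); // 3,4
    }
}
```

The dangerous part is that both versions compile and produce the same results; only the placement of one call decides whether the filter runs on the server or the client.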
A benefit is strong typing. A change in the model breaks the queries and the code doesn't compile. If you change the database while using stored procedures or direct queries, your code still compiles fine.
> They make the code harder to properly understand, because it's not readily apparent what will execute on the database. Some expressions will compile to SQL, some will execute in code, and some will execute in code and also force a bunch of other stuff that might have been compilable to SQL to execute in code instead.
I find statements like this are the crux of whether or not you find IQueryable<T> to be a useful abstraction. If you think LINQ is some unfathomable black box that magically, and with little observable difference, spits out SQL and/or executes code, then of course you are going to have a bad time.
In my time doing LINQ performance work, I've had some ideas on how to make the tooling better (ToList() considered harmful; there could be stronger highlighting of "monad transitions", especially IQueryable<T> to (accidental) IEnumerable<T> and/or (intentional but mistaken) IList<T>), but for the most part all the information is already there in the tooling. It's not a black box; you can get a lot of information from tooltips alone.
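The static types the tooltips show already mark those transitions. A small self-contained sketch (again using LINQ-to-Objects in place of a database provider) of the three states a query can be in:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Demo
{
    static void Main()
    {
        IQueryable<int> source = new[] { 1, 2, 3, 4 }.AsQueryable();

        // Still symbolic: nothing has executed yet; a database provider
        // could translate this whole expression to SQL.
        IQueryable<int> q = source.Where(x => x > 2);

        // Still deferred, but the monad has changed: any further operators
        // now run in process instead of on the server.
        IEnumerable<int> e = q.AsEnumerable();

        // ToList() forces execution immediately and materializes everything.
        IList<int> l = q.ToList();

        Console.WriteLine(l.Count); // 2
    }
}
```

Hovering over `q`, `e`, and `l` in an IDE shows exactly which of these three states you're in; the "ToList() considered harmful" point is that the last form runs the query right now, whether you were ready or not.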
LINQ really shines in places where you've got one "query" that runs on multiple databases and multiple types of databases, with no code change but a connection string or three.
Those transitions between monads can be a headache for an inexperienced developer and a delightful tool in experienced hands. There is a huge benefit in that the "same query" can run on the database with IQueryable<T> and also in memory with IEnumerable<T>, and there are interesting ways to take advantage of that: reusing code that works both with fresh data directly from the database and with cached data already in memory, as if it were the same data. There are amazing strengths to that abstraction.
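A minimal sketch of that reuse (the `Product` type and `Cheap` predicate are made up for illustration): the predicate is written once as an expression tree, which a database provider can translate to SQL, while `.Compile()` turns the same tree into a delegate for cached in-memory data:

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

class Product { public string Name = ""; public decimal Price; }

class Demo
{
    // One predicate, defined once in symbolic form.
    static readonly Expression<Func<Product, bool>> Cheap = p => p.Price < 10m;

    static void Main()
    {
        var products = new[]
        {
            new Product { Name = "pen",  Price = 2m },
            new Product { Name = "desk", Price = 150m },
        };

        // Against an IQueryable<T> (stand-in for a database table):
        // the expression tree is handed to the provider untouched.
        var fromDb = products.AsQueryable().Where(Cheap).ToList();

        // Against cached, in-memory data: the same tree, compiled to a delegate.
        var fromCache = products.Where(Cheap.Compile()).ToList();

        Console.WriteLine(fromDb.Count == fromCache.Count); // True
    }
}
```

`Queryable.Where` accepts the `Expression<Func<...>>` directly, while `Enumerable.Where` needs the compiled `Func<...>`; that overload split is exactly the monad boundary being discussed.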
There is also no doubt that it all has a steep learning curve.
LINQ is not just for SQL. If your LINQ queries are as simple as `where obj.prop == val`, you can hot-swap in a Redis LINQ provider.
Even without other backends it may still be beneficial. The "abstraction layer" need not be as thin as you describe. You have the power to reinterpret and optimize your query: visitor patterns (and maybe Roslyn-style analyzers) can kick in on the symbolic form, not just on strings. (Yes, SQL parsers exist, but again, why? Why limit it to SQL?)
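That "symbolic form" point can be shown in a few lines: `ExpressionVisitor` in the BCL walks a query's expression tree, so you can inspect or rewrite it before any backend sees it. A hedged sketch (the visitor here just reports constants, but the same pattern underlies query rewriting):

```csharp
using System;
using System.Linq.Expressions;

// A tiny ExpressionVisitor that inspects a predicate in symbolic form,
// e.g. to flag or rewrite constants before any SQL (or Redis command,
// or anything else) is generated from it.
class ConstantLogger : ExpressionVisitor
{
    protected override Expression VisitConstant(ConstantExpression node)
    {
        Console.WriteLine($"constant: {node.Value}");
        return base.VisitConstant(node);
    }
}

class Demo
{
    static void Main()
    {
        Expression<Func<int, bool>> pred = x => x > 42;
        new ConstantLogger().Visit(pred); // prints "constant: 42"
    }
}
```

String-based SQL would force you to parse the text back into a tree to do this; with LINQ the tree is what you start with.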