.NET 5.0 (microsoft.com)
791 points by benaadams on Nov 10, 2020 | 455 comments



So, on the plus side with the new .net, I recently made a .net core web app on Linux, and generally it's been pretty easy.

I'm also impressed at just how fast asp.net core is compared to asp.net, the time it takes to open your site in debug mode has dropped dramatically, from what used to be 1/2 minute in asp.net to a few seconds in .net core.

On the bad side? Mainly the asp.net core team's push for Dependency Injection and the really poor async story, which has got to be the most frustrating part of the whole thing.

My biggest bugbear is the way you access your config, which is an absolute and utter kafka-esque mess. Because someone at MS was drinking the DI kool-aid you have to add a minimum of 5 lines of code to any class you want to access config values in, and you're in for an even bigger nightmare if you want to separate business logic and web code into two projects (which is a pretty common design). I've given up with the config DI and just assign them all to static variables. I still don't get why you'd even want your config injected!

The entire authentication stack uses async code too, which still has all the normal problems of being hard to debug, silently failing if you invoke it wrong, and completely overwhelming the call stack with a bunch of redundant calls.

Huge disadvantages with absolutely zero performance gains for 95% of programmers.

Having been using express again recently, I'm seriously thinking of ditching the asp.net core stack despite my general preference for statically typed languages. C# is great, but the asp.net core team are so obsessed with shoving DI and async down your throat, making you write really unpleasant boilerplate code. Feels like you're back in 1990 with all the FactoryFactoryFactories.


> My biggest bugbear is the way you access your config, which is an absolute and utter kafka-esque mess. Because someone at MS was drinking the DI kool-aid you have to add a minimum of 5 lines of code to any class you want to access config values in

I'm not sure if maybe you're referring to the Options<T> stuff?

The way I do it is pretty simple.

1. Create a POCO that represents your config. You can have properties for nested configs if you want, e.g. SecurityConfig, MessageBusConfig.

2. When the app starts, use a ConfigurationBuilder to build an IConfiguration from whatever config sources you want - typically JSON files and env vars.

3. Bind the IConfiguration to your POCO.

4. Register both the IConfiguration and the POCO as singletons.

Now you can inject the POCO config object wherever you want it, or the IConfiguration should you need it. I feel like this is simple, and works well for dev, test and prod.
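The four steps above can be sketched roughly like this (names such as AppConfig and "appsettings.json" are illustrative; Get<T> and the builder extensions come from the Microsoft.Extensions.Configuration packages):

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

// 1. POCOs representing the config, with nested sections
public class SecurityConfig
{
    public int PasswordRequiredLength { get; set; }
}

public class AppConfig
{
    public SecurityConfig Security { get; set; } = new();
    public string ConnectionString { get; set; } = "";
}

public static class ConfigSetup
{
    public static void Register(IServiceCollection services)
    {
        // 2. Build IConfiguration from JSON files and env vars
        IConfiguration config = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: true)
            .AddEnvironmentVariables()
            .Build();

        // 3. Bind the IConfiguration to the POCO
        var appConfig = config.Get<AppConfig>();

        // 4. Register both as singletons
        services.AddSingleton(config);
        services.AddSingleton(appConfig);
    }
}
```

After this, a constructor can take AppConfig (or IConfiguration) directly, with no IOptions<T> wrapper.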


For other people who are curious, POCO stands for "Plain Old CLR Object". So I think just a basic uninherited object.


Typically also implies that the object contains no real logic/functionality and is a simple container for storing some values. Something that could be a record type.


I think of "plain old" as meaning "not having dependencies on any framework" as opposed to "not inheriting from anything" or "not having logic".


Apologies, I usually remember to expand acronyms!


POCO is the POJO of .Net.


+1 This is the approach I use, to register a config interface in the DI, just grab IConfiguration and bind it to a config class.

The options stuff is a car crash.


I think the OP is referring to accessing the config 5 layers down from the controller. I've run into it myself. To be able to do that, you have to add Options<T> to the constructor of every class in the chain and then configure DI for it. It just has bad code smell.

On my last project, I just assigned the configs to a static class that's available everywhere and the code was just simply much cleaner.
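A hypothetical sketch of that static approach - bind once at startup, read anywhere (the property and key names are made up; GetValue<T> comes from the configuration binder package). You trade away per-test swapping and live reload, which is the tradeoff debated in this thread:

```csharp
using Microsoft.Extensions.Configuration;

public static class AppSettings
{
    public static string SmtpHost { get; private set; } = "";
    public static int PageSize { get; private set; }

    // Call once from Program/Startup after building the IConfiguration.
    public static void Load(IConfiguration config)
    {
        SmtpHost = config["Email:SmtpHost"] ?? "";
        PageSize = config.GetValue<int>("Paging:PageSize");
    }
}

// Anywhere in the codebase, no constructor plumbing:
// var host = AppSettings.SmtpHost;
```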


> To be able to do that, you have to add Options<T> to the constructor of every class in the chain and then configure DI for it.

You really don't - just inject your populated config object where you need it.


Why does something 5 layers down need all the config values?


Not necessarily all options, but maybe just settings for connecting to some 3rd party server. You will need the configs.


Why wouldn't you implement a service for the 3rd party that itself requires the config, then? If a caller of the service needs to know how it's configured for some reason, you should probably be exposing it as a property of the service, not sharing the config directly.


Smells like you're not doing DI properly :)


Is it a bad design, if it could be held wrong?


Well from the sound of it he's instantiating objects manually 5 levels deep and he has control over all those constructors since he passed a new parameter through all of them.

So it sounds like basically he's not using DI (is just using it to pass stuff to controllers) and is surprised that using a library designed around DI is awkward. The correct solution is use DI to construct that hierarchy and then you can just request IOptions at the bottom/where you need it.


You can hold pretty much anything wrong. Sometimes the onus is on the user to understand the tools. Sometimes the onus is on the tools to not be unnecessarily dangerous. DI is a well understood pattern, it just takes some time to wrap your head around it.


“5 layers down”

This is precisely why I jumped ship 5 years ago to Go.


Code smell OR a pattern, I believe they call it the "Options Pattern". Obviously not a real pattern like other design patterns.


Exactly. [Removed due to bad wording].


Well, work on your reading comprehension, I clearly said I ended up doing exactly that.

Plenty of other people are piping up saying they also find this bad abstraction obnoxious; sorry you can't empathise and understand we all have different preferences.


My apologies for the wording. Was not intended like that.


I agree. The first thing I always do is make a custom Settings class which just loads settings via good old `Environment.GetEnvironmentVariable`. Super simple and never failed me.

Bonus Points, by loading all settings at once into strongly typed properties at startup, it will fail fast if a setting is missing, which to me is a benefit.

But I still pass this class via DI to the controllers for testability reasons. If you are using ASP Net, just do things the asp-net way for the sake of other people who have to inherit the code.

Outside of AspNet I never use any automatic DI.
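A sketch of the approach described above (the variable names are hypothetical): every setting is read eagerly in the constructor, so a missing value throws at startup rather than at first use:

```csharp
using System;

public class Settings
{
    public string DbConnection { get; }
    public int SummaryIntervalDays { get; }

    public Settings()
    {
        // Fail fast: app won't start with a missing setting
        DbConnection = Require("DB_CONNECTION");
        SummaryIntervalDays = int.Parse(Require("SUMMARY_INTERVAL_DAYS"));
    }

    private static string Require(string name) =>
        Environment.GetEnvironmentVariable(name)
        ?? throw new InvalidOperationException($"Missing env var: {name}");
}

// Registered once so controllers can still take it via DI for tests:
// services.AddSingleton(new Settings());
```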


ASP.NET supports environment variables and uses them by default (that's how the dev/prod environment context is set).

This feature is added in the `config.AddEnvironmentVariables()` line, which is already included in the default host builder. The latest releases use very good conventions and include a lot of the typical functionality. I recommend reading the docs to avoid redoing what's already there: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/co...


>I agree. The first thing I always do is make a custom Settings class which just loads settings via good old `Environment.GetEnvironmentVariable`. Super simple and never failed me.

The .net core configuration mechanism has a hierarchy of providers it reads from, one of which is environment variables, and strongly typed configuration is supported out of the box, so your exact use case is supported natively by it.


I am aware of this, but like many others, I find the IOptions<T> a bad abstraction.

This blog has similar thoughts: https://rimdev.io/strongly-typed-configuration-settings-in-a...

or this blog: https://adamstorr.azurewebsites.net/blog/beyond-basics-aspne...

Or simply google 'asp net strongly typed configuration' and notice everyone seems to be reaching similar conclusions and building their own things.

Just load the setting yourself and register it as a singleton.
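One way that "load it yourself and register a singleton" commonly looks inside ConfigureServices (MySettings and the "MySettings" section name are assumptions for illustration; GetSection/Get<T> are the standard configuration-binder calls):

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class MySettings
{
    public string ApiKey { get; set; } = "";
}

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;
    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Bind the section to a plain object and register it directly -
        // consumers inject MySettings, not IOptions<MySettings>.
        var settings = Configuration.GetSection("MySettings").Get<MySettings>();
        services.AddSingleton(settings);
    }
}
```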


Options are a very misunderstood abstraction that we have done a poor job explaining. Those blog posts are simplistic and don't explain what you gain and lose by using the options abstraction.


> that we have done a poor job explaining

Yes, indeed you have. I've tried to both skim and read the "introductory" documentation for `Options<T>` and the only gain I could figure out is monitoring + refresh. I still have no idea how to use it though, so I don't.


I don't like it either, but one advantage is that the settings can be live updated when the config files change


The interface just doesn't bother me. I make reasonably good use of the hierarchical config providers including some custom ones. Plus it supports actually being able to live update the config values.

Meh, seems like just an interface to me.


Do not use Options. Use regular poco bound against Configuration sections like the rest of us :)


> make a custom Settings class which just loads settings via good old `Environment.GetEnvironmentVariable`

Why would you prefer that to the built-in classes that load settings via good old environment variables, among other sources? Which do you think new people coming to your team would prefer to see?


Because the morass that is HostSetting/HostedServices/etc and friends is a confusing tangle of dependencies (some unspecified), bizarre calls that you've missed, and general confusion.

I seriously don't understand how MS managed to make loading some configs so outrageously confusing.

Every time I have to write in .net and I'm forced to go down this path, I've got to go back to the one project where it worked, and copy code across and install dependencies until the whole thing decides it's finally happy.


You'd want your config injected so that you can swap out configs for test harnesses for continuous integration and deployment. It's one of the mainstays of making your software deployable by automation.


You know a better way of doing this?

appsettings.Test.json

Voila, different settings!

I've said elsewhere, the DEFAULT should be super simple, really, really, really easy to use. No thought, no effort, just use it.

If you want to go all crazy and start injecting values into your config in your unit tests, great to have that option, you should have that option. But you're the one who should be scrabbling around writing tons of extra boilerplate code, not me.

But I just want to set a filepath, that's probably never going to change, but might one day. Or an email address to send a weekly summary email to. Or some settings on paging that the client might change their mind about once and I don't want to have to rebuild the project.

The vast majority of config settings are just cover your ass in case you need to one day change this value. They don't need to be tested.

So it's not the "normal" path to need to test config values. It's not the path the vast majority of programmers need.


I'm going to have to disagree here. Having a settings object is much easier than appsettings.Test.json.

The .json is for running the application locally. A unit test should be able to cover if a setting has value 'a, b or c', which is much easier with a regular object.


Making software harder to read, write, and maintain in the name of making unit tests easier to read, write, and maintain is putting the cart before the horse.


Indeed it would be but making unit tests easier to read, write, and maintain should force you to make the application easier to write and maintain. I agree that readability can suffer somewhat if you’re not careful though.


Writing quality tests is often much harder than writing quality software, and I find quality unit tests to be significantly harder than e2e or integration tests, so if I can only write software for which I can also write quality unit tests, I am greatly reducing the space of quality software I can write.

Yes, one solution is to make the software more complicated in order to make the tests less complicated, but the other solution is to just use fewer unit tests. Note that I'm not advocating not testing (my motto with software is "If nobody tested it, then you can be certain it doesn't work."), but one good e2e test can often replace dozens of unit tests.

I find that E2E -> Integration -> Unit form a scale from "easy to write, hard to run" on the e2e side and "easy to run, hard to write" on the unit side.

There are exceptions; testing that you can sort N objects in M microseconds is a trivial test to write, and potentially a challenging requirement to implement, and a unit test might be completely appropriate there. However, for the general case, I find unit tests to be too often trying to make a square-peg fit a round-hole, and then saying "okay we'll shave the corners off a bit so that it's easier to fit in this hole" rather than saying "maybe we should use a square hole"


> Writing quality tests is often much harder than writing quality software

I personally believe that tests are how you prove you understand the code you’re writing, and the changes you’re making. If they’re harder to write than the software, you probably need to further clarify the behavior being tested, but they’re going to be hard sometimes, just as some software is hard.

> if I can only write software for which I can also write quality unit tests, I am greatly reducing the space of quality software I can write.

I follow TDD, so indeed I only ever write code that I know how to write unit tests for. I haven’t found that TDD restricts the kind of software that I can write, but sometimes it does mean I’m slower to get started in a particular domain. I do find that once over the initial hump my unit tests are a constant companion that make me more comfortable and productive in developing new software.

Realistically we both agree that testing software is important, we just disagree in the details. The problems I see with end to end testing are that good patterns for building extensible and loosely coupled end to end testing suites are not as widespread in the industry as they are for unit and even integration tests.

My theory as to why is that we assume that end to end tests are going to be slow to run and “heavy” in some sense, and that as such we aren’t running them often enough to force the improvements we need in those suites.

I always find it weird when discussions about unit tests become discussions about other kinds of testing, though. I think high (though not necessarily 100%, as that’s arbitrary) and increasing unit test coverage are an empirical sign of code that is being properly maintained. They are not the last word in testing and software QA, they’re closer to being an indicator like using CI and a proper build system instead of a thousand line shell script, and having quality well-maintained documentation. As I said in another comment, I am not going to say that unit tests are the only way to write good software. They’re a powerful tool that I have found helps me and my teams a lot. That’s as far as I will go.


This is simply a dogma, if you want to follow that dogma, fine, but there's no reason to be forcing everyone else to.


I'm not going to force anyone else to, I am expressing an opinion. I would be pushing this position if I were on a team with you because it's not just a dogma, it reflects the best knowledge I have about how to build software well. You are free to build software as you see fit and have always been.


The age old TDD dilemma, that treats the test suite as first class, rather than the end user's experience.


The end user being the programmer?

I find it humorous that so many programmers still can't see past the end of their nose when it comes to designing code for application testing.

An application is deliverable. That means testable and deployable with a consistent, repeatable process. Anything short of that is not an application, it's a prototype.

Yes, it takes longer to deliver up front, but it removes the long tail of maintenance. Getting a low quality product out of the door makes for a bad user experience and garners you a bad reputation.

The test suite _should_ be a first class citizen.


What utter nonsense.


I’m confused as to what you want instead of what Asp.Net core offers. Nothing is forcing you to inject configuration or to use IOptions. In most cases you don’t even need to do anything with the configuration builder because that is part of the default of how the host gets set up. Your configuration should be automatically built from environment vars and appsettings.json.

And of course you can always just access environment variables directly if you want to.


I think they should have done what every other framework does and make it super easy to access, like this:

    Env.Config("Settings:MyEasyValue");
Or:

    Env.Config<Settings>().MySuperEasyTypedValue;

And the Dependency Injection? Sure, add some version you can DI with. But not the default.

I now know how to navigate the mess they've made, but it's not time that I feel where I gained anything in my life, it was just frustrating.

Here's one (of many) questions asking simple questions on how to access config values in .Net core:

https://stackoverflow.com/questions/46940710/getting-value-f...

280k views!

And look at the sheer length of that answer.

If you think they succeeded in making a good configuration library, we have very different definitions of success.


The sheer length of the answer? That's because it shows optional, more advanced ways to get config stuff.

Is this so hard?

    public class AccountController : Controller
    {
        private readonly IConfiguration _config;
    
        public AccountController(IConfiguration config)
        {
            _config = config;
        }
    
        public IActionResult ResetPassword(int userId, string code)
        {
            var vm = new ResetPasswordViewModel
            {
                PasswordRequiredLength = _config.GetValue<int>(
                    "AppIdentitySettings:Password:RequiredLength"),
                RequireUppercase = _config.GetValue<bool>(
                    "AppIdentitySettings:Password:RequireUppercase")
            };
    
            return View(vm);
        }
    }


Makes perfect sense to me, even as a mostly-Python developer with very limited experience in .NET/C# or other ecosystems.


Is this supposed to be ironic? Or just unintentionally so?

"What's hard about [mass of code] compared to [one liner]?"


The "mass" of code is an API controller, not just a way to get settings. It's an example of config use in real code. There are roughly 4 lines of code to get two variables out of the settings.json. This scales quite nicely - if we needed 8 different variables from settings then it would be 10 lines of code for 8 variables.

Sure, it could be done in one line. My point wasn't to say that this is the most terse environmental variable code possible. My point was to say that it's disingenuous to say "look at the sheer length of that answer" as a way to state that the way to get env variables is hugely bloated. It's literally one ctor param and one private variable. If you don't like that for stylistic reasons, that's fine. I understand the argument saying "there should just be a static class with a readonly prop per variable", I just don't think that this particular code is in real terms actually any worse.


It would be pretty trivial to implement the Env class you desire abstracting away the DI stuff.


They have that? You can call Configuration manually if you prefer that way. I feel like everyone in this thread is working overtime to justify complaining when there are multiple options to do what you want.


I was thinking about this last night, that's actually not at all true.

A 'new' C# programmer doesn't. You have to be advanced enough at C# to know you can do that (i.e. not a junior or outsider)

It's also not clear how the config works at first glance or the implications of using a POCO on the app lifecycle.

An experienced coder will have to ask themselves questions like, if I store the IOptions object, will it automatically update? If I assign the POCO in startup, will it be a single object, or does startup get called for every new request? Or maybe it will get called for every thread? Or maybe if there are no requests for 10 minutes it automatically shuts down?


You can do that just with different config files or env vars too. No need to complicate code for stuff that can be done easily differently as well.


And then you realize that it's 2020 and you don't want configuration files, you want environment variables. Or, now that you've hardstuck yourself on environment variables and built all this stuff around them to set and re-set them properly between tests (now running either in multi-process, which good luck in most environments, or are just running in serial), you're using k8s and the voluming of secrets rather than exposing them as environment variables requires refactoring any code, both operational and testing, that touches them.

"But wait," you say. "I'll just pass in the thing that provides that data"--and you just reinvented DI, albeit likely poorly.

DI is the removal of complication when it is done correctly. (I have no opinion on whether ASP.NET Core does it correctly.)


> DI is the removal of complication when it is done correctly. (I have no opinion on whether ASP.NET Core does it correctly.)

I do, it was done in a really weird way and I don't care for the provided DI Abstractions nor the 'Microsoft.Extensions.Configuration' namespace.

To take the 'common' object used for configuration, the nuget package for IOptions<T> requires pulling in Microsoft's DI Abstraction..

That's the first sign of a smell. Config and DI can go hand in hand, but they should still be orthogonal.

The further you go down the DI stack, the more you can see that it's an abstraction that has a lot of tradeoffs for front-line devs, in the name of using the same abstraction for the underlying framework.


A better take on DI wireups in .NET: https://nblumhardt.com/2010/01/the-relationship-zoo/

The gist:

    Relationship                                Adapter Type     Meaning
    A needs a B                                 None             Dependency
    A needs a B at some point in the future     Lazy<B>          Delayed instantiation
    A needs to create instances of B            Func<B>          Dynamic instantiation
    A provides parameters of types X and Y to B Func<X,Y,B>      Parameterisation
    A needs all the kinds of B                  IEnumerable<B>   Enumeration
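The adapter rows above can be sketched as constructor parameters like this (IMailSender is a hypothetical service; note that containers like Autofac resolve Lazy<B> and Func<B> automatically, while the built-in Microsoft container only handles IEnumerable<B> out of the box):

```csharp
using System;
using System.Collections.Generic;

public interface IMailSender { void Send(string to, string body); }

public class ReportService
{
    private readonly Lazy<IMailSender> _lazySender;        // delayed instantiation
    private readonly Func<IMailSender> _senderFactory;     // new instance per call
    private readonly IEnumerable<IMailSender> _allSenders; // every registered IMailSender

    public ReportService(
        Lazy<IMailSender> lazySender,
        Func<IMailSender> senderFactory,
        IEnumerable<IMailSender> allSenders)
    {
        _lazySender = lazySender;
        _senderFactory = senderFactory;
        _allSenders = allSenders;
    }
}
```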


For sure--that sounds real smelly. (Unless it's being used to pull in attributes that are shared between and used for wire-up, but those should then be in a separate assembly.)


> (Unless it's being used to pull in attributes that are shared between and used for wire-up, but those should then be in a separate assembly.)

Right. It's primarily interfaces, but they are tangled.

This was especially painful between Net Core 2.0 and 3.1, because moving fast and breaking things is ugly when you have tangled dependencies and everyone is trying to catch up to the breaking API changes and related nuget versioning dance.


> DI is the removal of complication when it is done correctly.

Personally, I would rephrase that as, "DI is a pattern that is designed to mitigate certain kinds of complexity when done correctly."

That leaves room for two ways in which it can backfire. Doing it wrong, like you say, but also doing it in situations where you don't actually have one of the problems it's trying to solve. Cost/benefit ratios always get out of whack when there's no benefit to offset the cost.


That is a fair edit, for sure. Many systems don't need a formal DI mechanism, though should they scale to a certain human-size they'll probably invent enough of one anyway just through composition (if they don't collapse into a ball of mud).


You wouldn't be using the config at all if you're not using the config files. They're part of the default project.


How is it that the big enterprise languages haven't gotten the ability to just mock imports yet?

There are certain benefits to DI, but this one seems more like a lack of tooling.


I agree with the DI for config and async push. Too much ceremony for a very simple task. Async being used by auth can certainly be frustrating if you have sync code that needs to use it at some point. Then suddenly you have to redo all sync code that calls the async method.

I’ve been experiencing the same issue with some Azure SDKs that only expose async methods rather than both async and sync. Frustrating to have to redo 8 caller functions to make a single sdk call.


You can actually just use `.Wait()` or `.Result` most of the time if you just want to make it synchronous, if it's just scrappy code.

Though you'll probably hit the `async void` return runtime bug at some point, where you accidentally return void from a method you tried to make async but then got bored of rewriting everything, so you just whack in a .Wait() or .Result but haven't returned Task, and the damn thing fails at runtime with a mysterious error, making you curse async even more.


>You can actually just use `.Wait()` or `.Result` most of the time if you just want to make it synchronous if it's just scrappy code.

You call .GetAwaiter().GetResult();

and avoid mixing sync and async where at all possible.
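A sketch of bridging at a single, deliberate point (IProfileClient and UserProfile are made-up names); the GetAwaiter().GetResult() form rethrows the original exception rather than wrapping it in an AggregateException the way .Result/.Wait() do, but it still blocks the calling thread:

```csharp
using System.Threading.Tasks;

public record UserProfile(string Id, string Name);

public interface IProfileClient
{
    Task<UserProfile> GetProfileAsync(string id);
}

public static class ProfileBridge
{
    // One deliberate sync-over-async choke point, not scattered .Result calls.
    public static UserProfile GetProfileSync(IProfileClient client, string id) =>
        client.GetProfileAsync(id).GetAwaiter().GetResult();
}
```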


Never, ever, ever do this in production. This will block at least two threads, which will cause premature thread pool starvation at spike loads.

This pattern, 'sync-over-async', is to be avoided at all costs, at all times.

You're not going to like it, but the MS recommended way of dealing with this is to make your callstack async.

In all honesty - if you're making an ASP net core web app, go all in on async from the start. If you have to mix and match, never go sync-over-async inside an action.


Authentication is in the request pipeline, so it makes sense that it's async if you're talking to some sort of storage engine. We're using the async APIs to pull user/client app information from our database. Why would we want that to be synchronous?


> Why would we want that to be synchronous?

Authentication is blocked by accessing storage. You cannot proceed until the storage read operation is completed. There is no parallel work to be accomplished here. This isn't UI code. There's no event loop. Why would you want this to be asynchronous? This seems like async as a dogma, rather than as a tool to accomplish something specific.


Async means non-blocking, not parallel. It helps keep the app responsive while using less resources and avoiding lockups and crashes.

.NET is used by millions of developers. Strong and scalable foundations matter, and performance is an explicit goal with asp.net being one of the fastest web frameworks. Async is a core part of this.

There's very little - if any - overhead that you need to worry about with async code from the language to Visual Studio.


> It helps keep the app responsive while using less resources and avoiding lockups and crashes.

it actually uses more resources. async has a cost, but it is really low.


There is a resource tradeoff. async/await has a minor CPU/memory impact, but it can also free up threads to do other work during an I/O bound operation.

Threads have their own resource cost.


>You cannot proceed until the storage read operation is completed.

Exactly. So when you await it, the thread that would otherwise have blocked waiting for the I/O can instead be used to service other requests until the I/O has finished.
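A minimal sketch of what that looks like in a controller (IUserStore and the action names are illustrative): at the await, the request thread goes back to the pool instead of blocking on the database call.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public interface IUserStore
{
    Task<object?> FindByNameAsync(string name); // hypothetical storage abstraction
}

public class LoginController : Controller
{
    private readonly IUserStore _users;
    public LoginController(IUserStore users) => _users = users;

    public async Task<IActionResult> Login(string name)
    {
        // Thread is released here while the I/O is in flight
        var user = await _users.FindByNameAsync(name);
        return user is null ? Unauthorized() : Ok(user);
    }
}
```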


Async is not used just to enable parallel execution. Using 1 thread per every long running request is a good way to bottleneck your web server.


> There is no parallel work to be accomplished here.

Concurrency is not parallelism [1].

[1]: https://blog.golang.org/waza-talk


Each HTTP request in a Go webserver spawns a goroutine, and there is no need to think about "freeing up threads" since coroutines are not OS threads. There's also no need to leak the word `async` all over your codebase. The problem being solved here with async/await is also being caused by async/await.


Goroutines, like .NET awaitables/Tasks, abstract the thread away. No one thinks about threads when programming async/await code, despite the fact that at any await keyword a thread change can happen.

Both solve the same problem with the same technique (under the hood it is all the same). One language had the benefit of late birth (go), the other suffers with its friends through a library/language migration (C#, C++, Java, JavaScript, Python, ...). async/await is effectively the modernization of structural programming to benefit from modern concurrency.


Worth noting that "late birth" is not the only thing at play here - Rust _had_ green threads at one point and removed them - only to add async/await later.


async/await has underpinnings in monadic computation expressions (via F#/Haskell). There are some benefits to the model, despite a lot of its detractors in the thread here. Such as it is possible to work with multiple monads (many languages that support async/await are not strict about which monad they rewrite for; C# supports any Monad that has a GetAwaiter() method of the right shape, JS supports any object with a then() and/or catch() of the right shape, etc).

While async/await is nowhere structurally as capable/composable as for instance Haskell's do notation, it's still more composable than a lot of alternatives and a lot of manual thread management techniques.

The biggest thing though is that these concerns are orthogonal. You can have green threads backing an async/await monad (with some caveats), but you can't as easily swap in anything that follows monad rules into code written specifically just for green threads. (Python makes you explicitly define your threading model before using async/await; the others provide methods to configure it.)

Which is to say "late birth" isn't really a consideration in async/await, it's as much a consideration of flexibility/composition of abstractions. JS, for example, needed that flexibility/composition in the wild west of multiple disparate Promise implementations early on, and may need it again if/when Browsers ever decide to support proper multi-threading whether it is green threads or something else. In such a future you should still be able to compose existing async/await code without modifying it, even as you take advantage of newer threading options.


The issue is that Go simply does not allow large classes of system which require direct and unfettered access to the operating system primitives to be built.

I'm a huge fan of Go, but pretending the two systems are equivalent is either disingenuous or belies misunderstanding.


There is sort of an event loop: the .NET threadpool. It is used by async by default. It will by default quickly create a number of threads equal to the number of CPU cores, but then slowly increase it beyond that. If you are blocking in a threadpool thread, the work item queue will back up. This is called threadpool starvation.

Some articles:

https://docs.microsoft.com/en-us/archive/blogs/vancem/diagno...

https://github.com/Microsoft/vs-threading/blob/master/doc/th...


Awaiting a storage operation (ie. look up user) in C# will allow other calls to proceed until the I/O is available to read.

Synchronous code locks you into the model of handling one request per thread.

Given the minor overhead (both cognitive and runtime) of async/await, I can’t see why you wouldn’t want to do it here.


Because async/await infects everything it touches and forces me to spread irrelevant junk words all over the code which have nothing to do with the task I'm trying to accomplish.


What about the far worse stack traces and debugging?

Whats a way to get the utility of normal debugging while using async?


Stack traces were massively improved in 2.1.

Using a good IDE (Rider or Visual Studio), debugging async/await code works as expected.

Are you doing interop or calling unmanaged code?


Thanks, I will make sure to check this out. Does anyone have any articles or documentation on this?


Stacktrace improvements in .NET Core 2.1: Intelligible stack traces for async, iterators and Dictionary (key not found)

https://www.ageofascent.com/2018/01/26/stack-trace-for-excep...


Awesome! Thank you. Wonderful to see these open source contributions to .NET.


For simplicity? Just like how JS got async/await eventually.


async/await is a first-class citizen in .NET and C#. You just mark your function as async and return Task<OriginalReturnValue>, and then you can call any async function with await, just like in JS.
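As a concrete illustration of that claim (file-reading example and names are my own, not from the comment): the synchronous version returns a string; the async version is marked async, returns Task<string>, and can await other async calls.

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

class AwaitSketch
{
    // Synchronous version: returns string.
    public static string ReadGreeting(string path) =>
        File.ReadAllText(path).Trim();

    // Async version: marked async, returns Task<string>, and can
    // await any other async call inside it.
    public static async Task<string> ReadGreetingAsync(string path) =>
        (await File.ReadAllTextAsync(path)).Trim();

    static async Task Main()
    {
        var path = Path.GetTempFileName();
        await File.WriteAllTextAsync(path, "hello\n");
        Console.WriteLine(await ReadGreetingAsync(path));
    }
}
```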


That's a factual definition of how to use the first step of async, however it's missing the point.

Async makes your code do significantly more complex things, and while the compiler makes the easy stuff easy, the underlying complexity hasn't gone away; it's just been abstracted, and somewhat leakily.

You can't do 'just' what you said: you also have to deal with boundary conditions that don't exist in non-async code, and if you want to understand what your code is doing you now need to keep two mental models of program execution in your mind.

For an introduction to the complexity of async, and the practical considerations you need to manage to use it well, this official documentation is a place to start - and note the article is pretty long; async does have a lot of additional complexity.

https://docs.microsoft.com/en-us/archive/msdn-magazine/2011/...

Btw I'm not saying don't use Async. Async is an incredibly powerful and valuable tool. But it has costs and those are weighted toward downstream in the software lifecycle and you should be aware of them when deciding which situations that tool is appropriate in.

Yes, async is a first-class citizen in .NET, and that's emphasised by the framework libraries using it everywhere. But they are writing a framework to be consumed widely by users with differing requirements, so they have real reasons to support the widest possible use with the highest possible performance. Most application code doesn't have those needs.

Using async should be considered as a tool to reach for (when appropriate) not a default for every situation because it has significant costs through the lifecycle of the system.


I fail to see how this is sensible in the slightest: you then have to rewrite the code if you want the application to scale. Otherwise you are wasting memory, thrashing the GC, and spawning all these extra threads. All awaitable operations (network requests, disk reads, etc.) should be async. Why would you write potato code?


>Why would you write potato code?

Side note: this gave me a good laugh. I just pictured it being used as a sign off on PR reviews.

Is it a technical term? If it's not I think it should be, and if it is I want to know what the exact nature of potato code is, so I can call it out when I see it in the wild.


you then have to re-write the code if you want the application to scale.

Scaling machines (VMs, containers) is a solved problem and very easy to do today.


So you want to pause the thread and keep it from processing other requests for simplicity?


Unless you have a lot of concurrent connections (the C10k problem), having one thread per active request isn't a big deal. For my web applications, requests spend most of their time waiting on the database, and an open transaction is at least as expensive as an app-server thread for many databases. So I agree that for most applications async isn't worth the complexity it introduces. Many implementations also mess up your stack traces, which makes debugging production issues harder.


.NET has millions of developers. They optimize for the greatest impact and async is necessary for performance. In fact high-performance is an explicit goal of the framework and it ranks at the top of the TechEmpower benchmarks.

Even if you don't need the performance, async code helps keep your app responsive and uses fewer resources on your server. It's far better to have async and not need it than to try to refactor your entire app when it becomes necessary.


The web server uses a threadpool anyway. You're not creating threads that don't already exist. You're simply able to use the threads you do have for other tasks while waiting for I/O.


That is literally how PHP does it: fire up a thread for every request and block on DB reads or any action that requires I/O. And it seems to work (at least for PHP).


.NET is meant for stuff like microservices; you don't see lots of those written in PHP, right?

Also, for high-traffic websites, that's really not a great idea. You might say "but Facebook, PHP" - I'm really certain that their current codebase doesn't have a thread blocking on the DB for each user request.


> .NET is meant for stuff like microservices;

What does this mean? .NET has been around before SOA was a term and microservices is an evolution of that. The concepts are orthogonal and you could write microservices in PHP, if you really wanted. Like Java, the .NET runtime and platform is highly efficient. That's what .NET has historically had over PHP/Python/Ruby, but I don't keep up with the PHP and Ruby platforms these days.

I feel like this is another one of those discussions like the "Python GIL problem." Whether it's really a problem depends on the circumstances, and the circumstances where it is a problem affect fewer people than the HN threads suggest.


I'm not sure where SOA as a specific term showed up, but the very first release of .NET very much emphasized web services in the docs, presentations etc. Back then, this meant SOAP.


I don't get why the auth being async is a hang-up, but at the same time, most projects can handle not scaling to Facebook scale, and the sooner developers accept that, the sooner we can escape a great gnashing of teeth.

I'd be willing to bet more projects have been killed by trying to architect for performance up front than have been killed by being too successful for their own good and keeling over.

If Twitter could make it past the Fail Whale days, I think most of us can too.


I'd agree if you talk about "most projects". But here we're talking about a platform - you can't fault Microsoft for caring about performance! .NET is absolutely not meant only for small sites, my current employer is heavily invested in .NET and we'd probably be dead (performance-wise) without async. Definitely couldn't run our services with PHP (not without Facebook-like investments), we already outgrew that kind of traffic.


Until you look at who is faster and has more throughput. The only way PHP can do it (via Swoole and friends) is by adopting non-blocking frameworks. And remember: throughput * duration = serverless costs.


That's just standard threading, not async/await.


The async stuff may require some attention, but it's part of the performance story when dealing with many requests, not when evaluating just one.

As regards configuration, all the means you could want for configuration are there. You can access items directly [1], or via strongly typed objects. These all support multiple sources of configuration data, and many means of overriding them.

I generally really don't like environment variables, as they expose configured secrets to everything on the same machine/container. We do make use of the standard configuration system's environment-variable overrides, but via akv2k8s.io, so secrets are only exposed to the container's entry-point application.

[1]

    public Startup(IConfiguration config) // injects config store
    {
        var setting = config["yourConfigKey"];
        // ... but you now have to convert this from a string for any other datatype
    }


> [Async has] Huge disadvantages with absolutely zero performance gains for 95% of programmers.

One curious thing is that Java has elected to use lightweight threads instead of async methods: https://www.javacodegeeks.com/2019/12/project-loom.html

Their arguments are compelling:

- Works with existing debuggers and debugging techniques.

- Stack traces aren't polluted with irrelevant garbage.

- Doesn't create method apartheid, where async and non-async can't "mix".

- Doesn't force library authors to "buy into" the async model.


- Cannot interop with other languages/VMs that do async differently.

The nice thing about promise-based async is that it's very easy to map to straight C ABI callbacks, which means that it can be fully cross-language. Green threads are runtime-specific.


"Cannot" seems a bit harsh. CompletableFuture isn't going away, and is made for callback situations.

With green threads being "free" (you can have millions), it might be reasonable to just block your green thread until the callback arrives.


Completely do away with dependency injection by using one of the many available F# options, for instance SAFE stack.

It should likewise give you a much simpler and more understandable config model.

https://safe-stack.github.io/


> and you're in for an even bigger nightmare if you want to separate business logic and web code into two projects (which is a pretty common design)

I do this with a lot (most?) web apps, and don't have any issues with it at all - what kind of problems are you seeing?


Yeah, the DI stuff is a lot, but eventually bringing in the dependencies becomes total autopilot.

> I've given up with the config DI and just assign them all to static variables.

Not to be a fanboy, but it's designed in a way where you CAN use all the fancy features. If you want something simpler, you can just do exactly what you did.

This syntax is probably wrong because I'm doing this from memory, you should probably also use the Lazy class or something, and if I were to actually write this code I'd think twice about thread safety -- but if you want simple static access to config values, you can just do this:

public static MyConfigPoco MyConfig { get; set; } = JsonConvert.DeserializeObject<MyConfigPoco>(File.ReadAllText("appsettings.json"));


Can't you just ignore the built-in configuration and roll your own (there's probably a dotenv library)? That said, I've also personally moved away from C# despite really liking the language, due to a lacking ecosystem.

I'd recommend both Node with TypeScript and Rust for backend web development in a statically typed language.


We started to use F# with Elm and it works very well. Statically typed all the way.


This year I converted a standard F#+Giraffe backend and React+Typescript frontend to use Feliz [0] with Fable.Remoting [1] and it has been a pleasant experience.

[0] https://github.com/Zaid-Ajaj/Feliz

[1] https://github.com/Zaid-Ajaj/Fable.Remoting


why not go all the way with fable + elmish for the Frontend?


Good question, in my experience using Feliz is easier to integrate with existing react components. For state management I use `UseElmish` [0] so I get an experience that's pretty close to pure Fable + elmish.

[0] https://zaid-ajaj.github.io/Feliz/#/Hooks/UseElmish


C# lacks an ecosystem... I understand that.

But C# lacks an ecosystem compared to Rust? I don't understand that.


Which libraries did you miss from the ecosystem?


Is Rust mature enough for web development already?



The last time i looked into it, all the big cloud providers lacked officially supported Rust SDKs.


Rusoto for AWS is unofficial but excellent (and auto-generated off of the official botocore API definitions).


I found Go to be a really nice replacement for C#. It has static typing and garbage collection, but you also get a self-contained binary that executes without requiring an external runtime environment. I also discovered that I don't miss classes at all.


>but you also get a self-contained binary executing without the requirement of an external runtime environment.

I'm not sure whether we're talking about the same thing, but you can publish a Self-Contained App (basically your app and the framework bundled together) and you don't have to install anything.


This is technically true, but it's the difference between a 5 MB executable and a 50 MB one.


Is it, actually? I mean, yes, of course it's a 45 MB difference, but does it matter? We are not talking about applications downloaded to a browser (web app) or to a mobile phone. In a time where I download 50 GB+ games on Steam and deploy from CI/CD servers or Docker registries to my servers, is a size of 50 MB or 100 MB really a showstopper?

Don't get me wrong, I totally love 4k and 64k demos and am always fascinated by what can be packed into such small binaries, but in my professional life I think developer productivity, code quality and tooling are way more important than file size. This is of course different if you are a webdev shipping .js, or an app developer publishing to app stores.


The only environments where self-contained deployment size matters are the ones you mentioned: browser, mobile phone, and also desktop. So yes, 45 MB is a big difference (actually my app is 250 MB).


In the .NET context, browser-targeted deployments can have a minimal download overhead of around 2 MB, which is a far cry from 50 MB. How? Blazor. For details, see https://blog.ndepend.com/blazor-internals-you-need-to-know.


That's 2 MB of front end, which is quite heavy.


How can 2 MB be considered heavy, when visiting most websites results in much larger downloads?


Tell that to websites auto-playing videos (in a muted state), or even better, using videos as page backgrounds that play without any interaction.


.NET has trimming now, so it is 5 MB? (Possibly less!)


Nope.

Running this command:

    dotnet publish `
    -p:Configuration=Release `
    -p:PublishSingleFile=true `
    -p:PublishTrimmed=true `
    -p:RuntimeIdentifier=win-x64
gives an 11 MB file that still requires 4 DLLs as well.


Go seems nice, but don’t you think writing business logic feels very tedious compared to C#? Generics and LINQ speed up my development time significantly.


I can't think of anything other than Java that's as tedious to write as C#/ASP.NET. With Java and C# code you spend a significant amount of your time messing with the complexities of the language itself instead of dealing with your business problem. The mixture of generics and static typing is particularly pernicious. I really hope Go 2.0 doesn't end up going down the generics rabbit hole.


Can you give some concrete examples of how C# is tedious compared to go?

Things like

  var filtered = purchases.Where(x => x.Name.Contains("Sean"));
and

  var grouped = purchases.GroupBy(x => x.Buyer).Where(g => g.Count() > 10).ToDictionary(g => g.Key, g => g.ToList());
etc. come up a lot in day to day programming and always feel tedious to me in go. The first example in go looks something like

    filtered := []purchase{}
    for i := range purchases {
        if purchases[i].buyer == "Sean" {
            filtered = append(filtered, purchases[i])
        }
    }
and it only gets worse as the domain gets more complicated. I don't really know what your background is where you feel that static typing and generics are complicated. I write C# every day and spend 0% of my time fighting the language.


Same experience here with Java. I always have a hard time understanding the arguments against it. The platform and tooling far exceed those of Go, and I've never found the writing of actual code to be anywhere near the bottleneck in the entire development process. And while I've never worked professionally with C#/.NET, I imagine it's a very similar experience.


I have to agree with Xeronate on this one; a lack of generics makes writing business logic in Go quite annoying. Generics and the monad-esque values they enable make it much easier to model your logic properly.

I like Go for services that just need to push bytes around to various places but otherwise it does get quite tedious for complex logic.


What types of services don't have complex business logic? Just trying to expand my horizons. I guess something like a video transcoding service, maybe.


Something like a WebSocket server, a load balancer, message queue, that sort of thing.

It's not that those aren't also complex, but there's less "business" in the logic, if that makes sense. Something that doesn't handle arbitrary user input or deal with the messiness of humans, like first/last names, dates and timezones, etc.

As an example, I had once written a WebSocket server in go which took a program as input, ran it in a heavily stripped down docker container, and streamed the stdout back to the client. Go was perfect for that use case. (though to be honest I'd pick Rust now because I'm past the learning curve on it)


You don't need the .NET runtime installed on the target host.


C# doesn't require classes with the latest release.

In what capacity are you using Go that made it a good replacement for C#? Microservices? If so, which framework are you using?


I have migrated from C# to Node.js; my code is now 20x shorter (from 800 files to 40 files) and simpler to read and maintain, and there are 100x more packages available on npm compared to NuGet. Performance is stellar, even better than C# for my use case (megatons of concurrent queries executed in 1-5 ms avg).

And also I don't have to fight the language to do what I want to do... no classes, no types, no constraints.


Every rewrite yields better results, even if you'd have rewritten your code in (modern) C# again.


> my code is now 20x times shorter (from 800 files to 40 files)

Well, do you have all of the bells and whistles that you had with .NET? I mean, you can shrink a .NET API down to pretty much just one file if you really want to. The question is whether you should.

    WebHost.CreateDefaultBuilder()
        .Configure(app => app.Run(c => c.Response.WriteAsync("Hello world!")))
        .Build()
        .Run();


>20x times shorter

that's giant, but I don't think "just" the language/environment difference accounts for a gap that huge


I would say 2x-5x shorter is pretty realistic.


I would say that C# code that gets a 2x-5x reduction in size by being rewritten in JS was terribly bloated to begin with (not counting cases where you simply remove code because a library is now available for what you previously had to do yourself).


Yeah, JavaScript is pretty short. Unfortunately, I do not know how people love it; npm/yarn and JavaScript dependency management is the worst thing I've ever seen, and it is slow and pulls in millions of files, even for a small project.


Huh, that's quite a dramatic code size reduction. What percentage of this codebase was tests and boilerplate code for things like helper data classes?


Also, I find DI to be very useful. I don't know about small projects, but in any reasonably sized project, having a DI container is a godsend. Easy to set up and use.


Yeah, I'm pretty shocked how much hate it seems to get. Maybe it just makes things easier for simpletons like me?

The one issue I can see with DI in ASP.NET Core is that it doesn't really do DI correctly. The point of DI is that ultimately, you have complete control over how you build your dependency graph and you shouldn't be forced to use a container at all if you don't want to. With a DI abstraction the developers are basically saying "you can do what you want, as long as it's pretty much exactly what we expect you to do".

This explains this pretty well: https://blog.simpleinjector.org/2016/06/whats-wrong-with-the...


That is 4 years old, is it still relevant?


Yeah, that whole approach hasn't really changed at all. I'm not sure how MS could even realistically change it without breaking a lot of stuff, so I think it's more of an example of what to avoid.


>and you're in for an even bigger nightmare if you want to separate business logic and web code into two projects (which is a pretty common design). I've given up with the config DI and just assign them all to static variables.

So you have a static variable in the web project and then add a reference to the web project to obtain that config? Do I get that right?

I never felt like config DI was a problem once you'd done all the setup at the beginning of the project.


To me it's just so pointless, here's an example:

    using Microsoft.Extensions.Configuration;  //extra line
    using Microsoft.Extensions.Options;    //extra line

    public class AdminController : Controller
    {
        IOptions<Settings> settings; //extra line

        public AdminController(IOptions<Settings> settings) //extra code
        {
            this.settings = settings; //extra line
        }
        

        public void RandomMethod()
        {
            var a  = settings.Value.FinallyMySetting; // 5 extra lines to access your config, genius...
        }
    }
A complete and utter waste of my time to put all those lines in; it's just pointless boilerplate trash, which C# has otherwise been getting rid of excellently, all undone by whoever wrote Microsoft.Extensions.Configuration. And you have to do it anywhere you want to get a config value.

On top of that, if you want to use it in a console app, you have to add like 5 packages. Yes, FIVE.

    <PackageReference Include="Microsoft.Extensions.Configuration" Version="3.1.9" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Binder" Version="3.1.9" />
    <PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="3.1.9" />
    <PackageReference Include="Microsoft.Extensions.Configuration.FileExtensions" Version="3.1.9" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="3.1.9" />

Then add this delightful mess just to initialize it:

    var environmentName = Environment.GetEnvironmentVariable("ENVIRONMENT");

    var builder = new ConfigurationBuilder()
                .AddJsonFile($"appsettings.json", true, true)
                .AddJsonFile($"appsettings.{environmentName}.json", true, true)
                .AddEnvironmentVariables();
 
    var configuration = builder.Build();
When you compare that to what any other framework does, it just beggars belief that they thought this was a good way of handling config.

When I was setting this all up, I obviously hit up SO, and it's clear from many of the (incorrect) answers on SO that people just don't get it. It's too complicated for something that should be super simple.

I get that it's handy to be able to configure all that yourself, but all of that should be default and a single line/package.


I agree that there's a bit of boilerplate, but in this case it's boilerplate that's normally already handled when you're using GenericHost or WebHost builders - which will setup all this Configuration, Logging and Dependency Injection for you (take a look here for an example https://dfederm.com/building-a-console-app-with-.net-generic...). I don't know what kind of console apps you're writing, but if they require DI and environment-specific configuration files then you'd likely be better off using GenericHost. Setting all this up by hand is a bit cumbersome, but I don't think they were really intended to be used that way.


Apart from needing two namespaces, this does look reasonable to me. I'm not sure how you could make constructor-based dependency injection easier without making it less obvious what is happening. And most of the additional lines are stuff your editor can do for you, even in VS Code you can automatically generate the field and assignment and the imports after just typing the constructor parameter.

ASP.NET Core does push you strongly towards dependency injection, but it does not force you at all to use it.


> ASP.NET Core does push you strongly towards dependency injection, but it does not force you at all to use it.

Their own documentation for making queries against SQL Server doesn't provide examples of how to do it that aren't using DI. "Just inject it" it just says. Thanks .net, super helpful. Maybe the docs are there, but I couldn't find them at all when I went looking.


    using (var conn = new SqlConnection("<conn-string>"))
    using (var cmd = new SqlCommand(@"Query here", conn))
    {
        conn.Open();
        cmd.ExecuteNonQuery();
    }


Hmmmm, maybe I am just dense hahaha.


In their defence, there is a default builder that would save you maybe 3-4 lines there, but I do generally agree with your point.

It is something that annoys me about .NET in general: there are decades of over-engineering cruft, and once these practices emerge it's difficult to get people to stop doing them. The problem only gets worse as time passes.


The last time I used .NET Core and had to handle some config, it was a nightmare like you describe. The documentation wasn't great either. The way everything is complicated and messy made me want to just quit my job.

It's such convoluted nonsense.

I like C#, but sometimes it feels like MS's main aim is to make developers miserable.


> it's just pointless boilerplate trash,

I don't regard the "ConfigurationBuilder" and following lines as "boilerplate" exactly, since by not using the default HostBuilder and instead including that code, you are making choices.

Specifically, you are choosing to pull in settings from "appsettings.json", "appsettings.{env}.json" and then environment vars, in that order, and specifying that both files are optional and should reload on change.

You can make other choices, with other code. The code is there to "configure the config", i.e. to make those choices.


I'm mainly interested in one statement:

> it's just pointless boilerplate trash, which C# has been getting rid of excellently

Could you elaborate on how you feel C# avoided this?

From my point of view the only alternative (to DI with constructor injection) is static members somewhere, which does not scale. Maybe property injection, but ugh.


I think they want to use something more like an environment contextual config fetcher instead.

For example:

test.appsettings.json

{"db-url":"test.url.here"}

prod.appsettings.json

{"db-url":"prod.url.here"}

Now in the code:

Settings.fetch("db-url");

// Will return the test one if environment variable says we are running under test environment, else will return the prod one.
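A hypothetical sketch of such a fetcher (every name here is invented for illustration, not part of any framework): pick the config file from an environment variable once, cache it, and expose a one-line lookup.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;

static class Settings
{
    // Loaded on first access; the ENVIRONMENT variable selects
    // test.appsettings.json vs prod.appsettings.json.
    static readonly Dictionary<string, string> values = Load();

    static Dictionary<string, string> Load()
    {
        var env = Environment.GetEnvironmentVariable("ENVIRONMENT") ?? "prod";
        var json = File.ReadAllText($"{env}.appsettings.json");
        return JsonSerializer.Deserialize<Dictionary<string, string>>(json)!;
    }

    public static string Fetch(string key) => values[key];
}
```

Usage is then the one-liner from the comment above: `var dbUrl = Settings.Fetch("db-url");`.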


The DI-based configuration system can be configured to do exactly that. Though yes, the configuration is maybe harder than it should be simply because it isn't on by default.

A lot of the complexity in the configuration system exists to enable a lot of configuration options. I have some ASP.NET applications that get some configuration from Environment Variables on the host machine, some from environment-specific config files, some from a configuration service (specifically Azure Key Vault), all without any of my DI-injected downstream components specifically needing to know which source provided specifically which configuration key. For the DI injected things it is just `config["db-url"]` or maybe type-safe class if I DI-inject an IOptions<T> (or add my type-safe classes directly to the DI without the IOptions<T> wrapper, which is another configurable option).

It's absolutely a very complex system, but with great complexity comes great flexibility (to badly paraphrase the Spider-Man mantra).


Do you type every line typically? If you use Rider or VS pretty much every one of those lines can be auto generated with autocomplete. Also, this can easily be worked around. Just create a Singleton which has the settings injected into it. I prefer the testability of the injected version but if you'd rather access a static global it is very easy to do.


I do agree.

I just type IOptions<T> config in the ctor, use Ctrl+., and the private readonly property and its initialization are generated.


That doesn't work unless you already have the using statements, which you won't have, since you haven't added them yet.

I just tried it.


I'm using Roslynator:

https://i.imgur.com/cWs1M77.png

and I have some naming rule

https://i.imgur.com/iPwNj5b.png

So I basically have to type IOptions<T> Name and use Ctrl+. twice in order to generate the private readonly property, initialize it, and add the using for IOptions.


You can use property injection when you use something like Autofac as the DI container.


Cool, so I'll add another dependency and figure out how to use that, to save myself time having to write out MS's convoluted dependency-injected configuration code.


Meanwhile, one could have had this solved in their project in less time than it took to have this HN discussion and moved on with their life. Seriously, some things are worth complaining about; others are not.


Well, you don't need to use dependency injection at all; there is no need to use IConfiguration.

Also, as far as I've heard, Microsoft is working on a micro web framework that does not use DI (something like https://github.com/featherhttp/framework, just more official, I guess).


Talk about Stockholm Syndrome. Classic enterprise: public SomethingBuilder builder = new SomethingBuilder(); builder.AddFanfoldConfig(); builder.Build(). Sun, Microsoft and Oracle have hoodwinked generations into believing this kind of crap is the Rolls Royce of software development, and we let them get away with it.


> When you compare that to what any other framework does

Can you show me what that looks like?


You don't need to inject IOptions; you can inject the Settings class directly. I don't see an issue with those extra lines; VS generates them with one hotkey for me.
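For reference, a sketch of how that registration might look. This is a non-runnable fragment with assumed names (the Settings POCO, the "Settings" section key); details vary by project setup.

```csharp
// In ConfigureServices: bind the section once and register the resulting
// object, so constructors can take Settings directly.
var settings = Configuration.GetSection("Settings").Get<Settings>();
services.AddSingleton(settings);

// Consumer: no IOptions<T> wrapper needed.
public class AdminController : Controller
{
    private readonly Settings settings;
    public AdminController(Settings settings) => this.settings = settings;
}
```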


You also have to understand that shoving async down our throats is part of the philosophy of how ASP.NET Core and Kestrel (built on top of libuv) are implemented: the whole stack is built to run async and with DI in mind. ASP.NET Core itself uses the same DI system to manage itself.

No part of ASP.NET Core is built to run synchronously. It's all async all the way down.


This is a pretty interesting comment section to read. I went from Node.js to .NET Core development for about a year, and got the impression that Microsoft had already figured out all the best practices, and shoving it down my throat was for my own good, so I didn’t question whether or not DI was good, as if it was dogmatic. My team thinks the same. I thought DI was pretty much the final conclusion to how software should be architected due to the collective trial-and-error of decades of OOP programmers.

In usual HN fashion, even the most seemingly obviously true points to me are being argued and refuted.


Agree with your complaint on async. In ASP.NET, async doesn't disrupt the workflow that much (though it creates the possibility of deadlocks). But if you are dealing with a UI and make a method async, then suddenly you have a ton more problems to deal with, like what happens if the user changes the state while you are waiting on an async call. That increases the complexity of the code a lot, which is why I am lukewarm about their approach of mandating async for new features.


How does async/await introduce complexity where threading wouldn’t?

In my experience, the complexity is vastly reduced by not having to wrangle tricky asynchronous APIs (the Task Parallel Library, thread pools, BackgroundWorker, etc.).


No, vs. blocking the UI thread, which is perfectly fine in the vast majority of cases, unless you are running something long-running.


You are trying to do something slightly complex. I don't see how one could make it any easier from a language-design perspective while staying idiomatic to the language.

Go: goroutines make you wire up the "stop" scenario on your own (channel close). Rust: hold on to your future and drop() it. C#: pass a CancellationToken and call Cancel().

All seem reasonable and stay within what they think their developers will be able to pick up and run with.
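A minimal, self-contained sketch of the C# flavour (names and timings invented): a worker accepts a CancellationToken and stops cooperatively when cancellation is requested.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class CancelSketch
{
    // Cooperative cancellation: the worker keeps awaiting until the
    // token is cancelled, at which point Task.Delay throws.
    public static async Task PumpAsync(CancellationToken token)
    {
        while (true)
        {
            await Task.Delay(50, token); // throws OperationCanceledException on cancel
        }
    }

    static async Task Main()
    {
        using var cts = new CancellationTokenSource();
        var pump = PumpAsync(cts.Token);
        cts.CancelAfter(200); // request a stop after ~200 ms
        try { await pump; }
        catch (OperationCanceledException) { Console.WriteLine("stopped"); }
    }
}
```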


Agreed. Async is nice on the server, but on the desktop it’s a real problem. For a while they offered both sync and async APIs, but now it’s often only async.

I just finished an app that had to shut down while some async operations were still unfinished. Really hard to manage compared to using threads.


How is Task.WaitAll different from Thread.Joining a bunch of threads? Or if you were cancelling tasks, using cancellation tokens?


The other comment alluded to this, but the crux of the problem is that you can schedule a task to run and wait on the completion of that task, but the only available thread for it to run is the one currently waiting for it to complete. The writeup here isn't bad: https://devblogs.microsoft.com/pfxteam/await-and-ui-and-dead...


If you are running Task.WaitAll on a UI thread you are likely to create a deadlock.
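For anyone unfamiliar with the mechanism, a minimal sketch (`GetDataAsync` is a made-up stand-in for any awaited call). In a console app there is no UI SynchronizationContext, so this runs to completion; the comments note where the same code deadlocks on a UI thread:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static async Task<string> GetDataAsync()
    {
        // On a UI thread, the continuation after this await is posted back
        // to the captured SynchronizationContext (i.e. the UI thread).
        await Task.Delay(50);
        return "done";
    }

    static void Main()
    {
        // Console app: no SynchronizationContext, continuation runs on the
        // thread pool, so blocking here is merely wasteful, not fatal.
        string result = GetDataAsync().Result;
        Console.WriteLine(result);

        // WinForms/WPF: .Result (or Task.WaitAll) blocks the only thread the
        // continuation can run on, so neither side can proceed -> deadlock.
        // Usual fixes: await all the way up, or ConfigureAwait(false) in
        // library code so the continuation doesn't need the UI thread.
    }
}
```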


The move to async trades one set of problems for another. Yes, implementing cooperative cancellation is tricky. On the other hand, recovering from corrupted state caused by Thread.Abort is also tricky.


C# is great, but the asp.net core are so obsessed with shoving DI and async down your throat and making you write really unpleasant, boilerplate code. Feels like you're back in 1990 with all the FactoryFactoryFactories.

It's been said that C# is "Microsoft's Java". That applies not only to the language, but the culture of "enterprise-ism" (insanely excessive abstraction and complexity) that seems to permeate throughout the code.

I worked briefly with Enterprise Java a long time ago, and it brings back much of those memories whenever I have to look for something in a C# codebase.


C# tilts in that direction for sure - but it doesn't have the layers of XML BS to reach the good old Java (the only XML conf I remember touching in .NET Core was when I had to configure IIS for Azure deploys).

I left the C# ecosystem 3 years ago because I was done with that as well, but then I landed in a mature Ruby on Rails project and I was crying for the .NET "enterprisey" abstractions. Once you see the code duplication that comes from the fat models and controllers, have to grep the codebase and guess what happens in a function because it's in a mixin that assumes a property exists in context, and watch tests take forever to rerun because they bootstrap an entire environment and have no mocking, the picture changes. If I had to choose between a bad C# codebase and a bad RoR codebase for anything beyond something very simple, I would choose the C# one any day of the week. That is what I think the C# approach gets you: a guarantee that if you follow the conventions, your project will scale reasonably well in maintainability and codebase size. You pay for that in initial development speed and pointless verbosity.


The forced DI pattern makes sense here because:

Most DI containers are not just injectors, but they also manage life cycles/scopes of objects: singleton/transient/scoped.

Since in web development a request hitting a controller endpoint is typically short-lived and most objects and services are tied to the main request scope, it makes sense that all your objects should live and die as the request is made and completed. What is the cleanest way to manage these life cycle scopes? Dependency Injection. Most of your objects/services will be "scoped", with fewer being transient (a fresh instance created on every resolution) and the fewest being singletons that survive individual requests.
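For those who haven't seen the three lifetimes in action, here is a rough sketch using the built-in container (the `IClock`/`RequestContext` services are hypothetical; ASP.NET Core creates one scope per HTTP request, which this simulates manually):

```csharp
using System;
using Microsoft.Extensions.DependencyInjection; // NuGet: Microsoft.Extensions.DependencyInjection

interface IClock { DateTime Now { get; } }
class SystemClock : IClock { public DateTime Now => DateTime.UtcNow; }
class RequestContext { public Guid Id { get; } = Guid.NewGuid(); }

class Program
{
    static void Main()
    {
        var services = new ServiceCollection()
            .AddSingleton<IClock, SystemClock>()   // one instance for the whole app
            .AddScoped<RequestContext>()           // one instance per scope ("request")
            .BuildServiceProvider();

        using var scope1 = services.CreateScope();
        using var scope2 = services.CreateScope();

        var a = scope1.ServiceProvider.GetRequiredService<RequestContext>();
        var b = scope1.ServiceProvider.GetRequiredService<RequestContext>();
        var c = scope2.ServiceProvider.GetRequiredService<RequestContext>();

        Console.WriteLine(a.Id == b.Id);  // True: same scope, same instance
        Console.WriteLine(a.Id == c.Id);  // False: new scope, new instance
    }
}
```

Scoped services are disposed with their scope, which is how the container cleans up per-request state for you.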

A second side effect of DI: it forces you to code by convention. Conventions make maintaining the code 6 months from now a breeze. Want to add a new service? Just have it implement an IService and consume it in 10 places; the container will auto-find/inject your new code. No need to new() up and dispose your service in 10 places.

Do all of this without DI, using only constructors + Dispose(). Good luck! Mercy on the poor soul who has to maintain all that mess.

If you want to go a step further, instead of having 100's of IService and each constructor declaring they each need 10 of them (ILoggerService, IAccountService, IEmailService, IStockService etc etc)... that also becomes very gross in large projects: so to go a step further, use the Mediator pattern. You can either implement your own (super easy) or use the Mediatr nuget package. Then each service becomes much cleaner. Each service then only has 1 method that does work. It also ties in nicely with the Unit of Work pattern.

Why should my IStockService inject the IEmailService when I just want to check stock levels with GetLatestStock() that return an int and does no other work?

Let's say the IStockService has 15 methods to do with stock and 10 dependencies. On each request where you depend on IStockService, you would also pull in the 10 dependencies and their inner dependencies, which may add 1 second to your response time and use 10mb of RAM. But all you wanted to do was check stock, which takes 10ms and 1mb of RAM.

This is exactly the scenario the mediator pattern can help solve. Your 15 methods become 15 individual classes, each pulling in ONLY the dependencies it needs and nothing more. Your files and git commits stay small too, which makes everything more digestible (less than 100 lines of code if you're lucky).

So my favourite setup is DI + Mediatr. Each file/class has only ONE purpose to exist and only one reason to be pulled into a dependency tree, all while DI will manage their life cycles.
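A hand-rolled sketch of the idea, in the spirit described above but deliberately not MediatR's actual API. The request/handler types here are hypothetical; the point is that "check stock" is its own class with only the one dependency it needs:

```csharp
using System;
using System.Collections.Generic;

// Minimal mediator-style contracts: one request type, one handler per use case.
interface IRequest<TResponse> { }
interface IHandler<TRequest, TResponse> where TRequest : IRequest<TResponse>
{
    TResponse Handle(TRequest request);
}

// Hypothetical query: checking stock pulls in only a stock lookup,
// not the email/invoice/pdf dependencies of a fat IStockService.
record GetLatestStock(string Sku) : IRequest<int>;

class GetLatestStockHandler : IHandler<GetLatestStock, int>
{
    private readonly Dictionary<string, int> _stock; // stands in for a repository
    public GetLatestStockHandler(Dictionary<string, int> stock) => _stock = stock;
    public int Handle(GetLatestStock request) => _stock.GetValueOrDefault(request.Sku);
}

class Program
{
    static void Main()
    {
        // In a real app the DI container would locate and construct the handler.
        var handler = new GetLatestStockHandler(new() { ["widget"] = 42 });
        Console.WriteLine(handler.Handle(new GetLatestStock("widget"))); // 42
    }
}
```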

Edit: For those downvoting, please go read "Dependency Injection by Mark Seemann" and also read https://docs.microsoft.com/en-us/aspnet/core/performance/per... . Then try to build your own framework from scratch. Do it. Then come crawl back and upvote this comment. I have a ton of experience with this stuff and have built 2 or 3 custom frameworks. The core trait that keeps everything sane and fast is to keep things simple, keep dependencies low, keep hot paths lean. The easiest way to get to that place is DI + MediatR, and to break some services into their own APIs if they get deployed too often. It's that simple.


I totally agree. It took me a while to wrap my head around it, but my code has improved so much once I did. Very small classes that each do one easy to understand thing. All the structure stuff is in one part of the code, ready to be understood and modified, instead of scattered across tens of modules. Most of the time all I need for a new feature is a class that implements a specific interface, announces to the world what it needs to do its job by declaring its dependencies in the constructor (as intended by the language and OOP in general) and then only implements its core logic instead of meddling with creation and disposal that has nothing to do with its actual job.


> Most DI containers are not just injectors, but they also manage life cycles/scopes of objects: singleton/transient/scoped.

That imperative lifecycle stuff is a big problem. Is that idiomatic in .net? Ideally, the program creates per-request data and then the garbage collector disposes of it after the end of the request (or earlier if it doesn't need to be retained until the end of the request). If you need to actually call Dispose, short of `using ...`, you have a problem.

> Lets say the IStockService has 15 methods to do with stock and 10 dependencies

Is this idiomatic in .net? It's obvious that that is a problem and that smaller classes/functions are better. Normally, your programming language and environment would let you create smaller classes/functions and call them as necessary. Why is something more complicated required in this context?


I'd say the IStockService with 15 methods and 10 dependencies are not a .net thing but rather <enterprizey-domain-driven-design-all-the-things-coolaid> thing. I've seen it in more than one shop and it totally makes sense why it comes into being.

DI just hides the problem because such a class is now only 500 lines long instead of 2000 lines, so for some people that is acceptable (not for me though). You don't see any lifecycle code in your classes anymore since the DI container manages it for them.

Remember it is typical to see a business project with 300 classes/tables if the project is about 5 years old and never seen a major refactor/rewrite, and each of those classes gets manipulated in some form from an orchestration point (xManager class or xService class), which in turn might depend on each other or have 10 inner dependencies. To keep things "simple" most guys would just add another method to an existing service where it seems to make the most sense.

I argue against that because the services become heavier and heavier over time and can do allll sorts of things after 5 years (aka IStockService can now check stock, send email notifications, generate invoices, save PDFs to disk etc). So I've come to the conclusion it's better to have more folders/files to contain handlers, with each handler having only one Execute/Process method and its constructor declaring only the minimum amount of dependencies to do its work. It also encourages ALL your services to do only one thing: you know and can trust that IPdfDiskPersistor only does one thing and doesn't have other weird side effects (like trying to send an email).

If all this is really not clean enough, then maybe an ActorModel framework like Orleans Framework might be better suited, but I haven't been able to convince anyone at work to use it instead of DDD (they are slightly at odds with each other).

Hope that answers some of your questions. If others are reading this, please think twice before you go down the DDD path - it is expensive both in code and time and can easily spiral into a monster (but also well worth it if a good developer has stewardship of it in the long term - keyword being stewardship). Unfortunately, teams with high turnover are basically doomed to have a messy/failed project.


Deno + express gives you the statically typed language you’ll enjoy with a Node-like runtime, for what it’s worth.


I can get behind the DI dislike, but how are callbacks and promises in nodejs better than async in .NET?


What’s not to like about DI? Makes it so much easier to wire up dependent services and refactor those wirings at a later time. If you don’t use DI you have to manually manage everything yourself.


If you go too far with DI, the saying is "everything happens somewhere else."

Some feel it can be needlessly complex to troubleshoot such a system or gain understanding of it if you weren't one of the original authors.


Don't need DI to make your program so abstract no one can follow it. Wouldn't even say DI makes it easier to do because you can accomplish the same thing by just newing up objects.


That's related to my rule of when DI is being abused: if you can't just new up the objects by hand (perhaps in a unit test somewhere), then you should rethink your use of DI.

I've seen projects where the dependency object graph exceeded anything you could possibly new up yourself, due to circular dependencies and impossibly complex scope rules. That's definitely a sign of a program so abstract no one can follow it.

(Microsoft's very simple DI container that's now standard out-of-the-box in .NET Core has somewhat strict limits on DI scopes and disallows circular dependencies, so it's one of the better ones.)


If you're writing modern nodeJS you're probably not using callbacks or promises (directly) anymore (or using promises a lot less than you used to - Promise.all and Promise.allSettled is about it for me, and that's not that often).

The async/await syntax is where it's at. Honestly it's a massive improvement over promises, which I hated reasoning about almost as much as callbacks :)


And ... C# _invented_ the async/await syntax. Node, Rust, etc., were all inspired by what C# did.


> If you're writing modern nodeJS you're probably not using callbacks or promises (directly) anymore

I get that some people prefer async/await (I actually find explicit Promises clear enough that I don't have any strong preference, myself), but why wouldn't you be using explicit callbacks, which in my JS experience (mostly frontend, but also fairly modern) are super common outside of promises? Promises are the only place where I see modern JS replacing explicit callbacks with a different structure.


Yeah, callbacks are fine until you get a big stack of them (so-called “callback hell”). Promises were the touted solution to callback hell, async/await the touted solution to endless promises chaining. While async/await is (tasty) syntactic sugar for promises, I think it’s a real improvement over callbacks - it’s one less argument to every function (because of promises) and I think it’s easier to read: there tends to be less nesting.


The weird thing about async/await is that it gets you out of deeply nested callbacks into, pretty much by definition, exactly equally deeply nested async function calls. The only thing that seems different is that it seems to be more common (but not any more or less supported, since you can obviously have named non-async functions and you can also have anonymous async functions) to call named functions in async/await and anonymous functions defined inline as callbacks with promises.


Having been using express again recently, I'm seriously thinking of ditching the asp.net core stack despite my general preference for statically typed languages.

Why not typescript?


I'm so glad I'm not alone. I adore C# and the .NET runtime is so convenient.

But the ASP.NET part feels bloated.

Is there a simpler .NET framework?


Feather HTTP https://github.com/featherhttp/framework. From the description: A lightweight low ceremony API for web services.


Carter


I didn't use it recently but Nancy was quite good: https://nancyfx.org/


Nancy is dead https://github.com/NancyFx/Nancy. Archived and stated that it is no longer maintained.


> the really poor async code

Very curious. Why do you think it is poor ? Are you comparing it against other languages or frameworks?


alternatives?


F# with Giraffe, a functional ASP.NET core framework


Express with Typescript feels pretty good in my opinion.


I can understand the preference for Node.js if you want to free yourself from the shackles of static typing but to then bolt-on Typescript doesn't make sense given that C# is a genuine statically-typed language. Last I looked Kestrel also ran rings round Node.


Congratulations to the team.

Microsoft has made exceptional strides in terms of performance and compatibility.

The whole vision is what is really impressive, not any individual part.

It is cross-platform, open source, and has almost complete coverage of the old .NET APIs. The new csproj format is vastly superior. The new dotnet CLI is a pleasure to use.

Many of the surrounding libraries like ASP.NET, Kestrel, LINQ, Entity Framework Core, System.Text.Json, etc are best in class.

I look forward to the Linker becoming stable, as this will really help in contexts such as Azure Functions and AWS Lambda.


The future is very bright for .NET. It has the right raison d'etre - .NET is part of the growth story for Nadella-Microsoft.

They already have you programming in their editors (VS/Code) and pushing to their VCS (Github). They're even teaching you the C# type system with TypeScript :) If they can just convince you to use their stack, Azure will win you over from AWS every time.

As a result, I really wanted to adopt .NET earlier this year. But they have some work to do to make the ecosystem approachable to newcomers.

One of the core issues feels so... trite. But experiencing it, I realized how real it really was:

The Names. Microsoft has created a Ninja Warrior obstacle course of proper nouns.

It is comical. It is worthy of parody.

.NET, the CLR, and their relation to C# and F#. ASP.NET, ASP.NET MVC, ASP.NET Web Api, Blazor vs Razor, .NET Core vs .NET Framework vs .NET Standard.

I think I could draw the lines between all those with about 50% accuracy. And I read like 3 books on .NET before diving in.

It's bad off the top when you're trying to figure out which way is up in the ecosystem. It goes from bad to worse when you're looking through docs and Stack Overflow answers.

It was almost never immediately clear if a document I was looking at on Microsoft's own website applied to me or not. That is a cardinal sin of technical documentation.

Ultimately, this meant that .NET Core Web API (or whatever I was using) felt poorly documented.

I'd find myself looking at docs that mention .NET Web API - but I can't remember if that rolls up to .NET Core or Framework -- or both? Am I using MVC? Is Web API a subset of MVC? No clue.

It's definitely a hard problem. Here's to hoping that now that everything is unified, they can work on paving clearer roads into their ecosystem.


> Azure will win you over from AWS every time.

Going to have to disagree here.

That might be the case if Azure was not literally a tire-fire. I have been using Azure for work and it's the most frustrating, inconsistent, often-broken, confusing and stress-inducing cloud service that my teammates and I have ever been subjected to. It left such a bad taste that I am quite certain I would flat-out refuse to use it again in the future, or work somewhere that uses it.

Take my advice: just don't use it. It's not worth it. Use AWS, use GCP, use Digital Ocean. Use some random cloud provider. Just don't use Azure.


I've been using Azure for all of my projects for 5 years now, with zero problems.


If you do it the Microsoft way. But I find their batteries-included approach to be broken more often than not. Tried deploying a docker image with CORS on Azure. Literally could not do it because of bugs and the gsc intercepting CORS packets. Opened up 3 bugs; it took months to fix one. It's a tire fire.


Wow! Hot take. What Azure services were you using?


Front door, firewalls, LB’s, AKS, Azure Functions, Azure Disk (in AKS), scale sets and file-share storage accounts.

All of it was painful. Their docs were lacking or sent you in circles. The documentation for their firewall product is three-quarters known issues and errors. Azure Functions were painful to run locally, god forbid you're not using Windows. Azure would take ages to attach a node on K8s. Like, over an hour. It consistently had issues moving Azure disks between nodes in K8s: "can't mount disk, attached to another host". In comparison, AWS will speedily and happily re-attach an EBS disk to a new machine.

Permissions were opaque and distributed across the whole interface.

It silently deprecated keys underneath us, broke a number of services (couldn’t write to attached disks in K8s, couldn’t move them), didn’t inform us that this happened, we only figured it out by trawling through GitHub issues.

Storage Account explorer application breaks/stops consistently.

“Alert but permit” mode on firewalls doesn’t do what it’s supposed to: it will permit, but totally fails to alert you.

Scale sets operate weirdly. I didn't personally deal with this too much, but my teammates had consistent issues with strange caching behaviour and with more or fewer machines being spun up than should have been.

Until we fixed it, every Azure PoP was health-checking our web app 2-3 times a minute: our logs and servers were being flooded with literally thousands of pointless requests.

If you have an AKS cluster with n machines currently in it, with a minimum and maximum of (n, m) machines, and you want to say, increase the minimum number, you cannot: it will refuse and tell you “the minimum number of nodes must include the current number”, so rather than just automatically adding a new node (a la AWS, and I presume GCP), you have to force the cluster to scale up to the new number of machines by throwing workload at it, then make the change.

AWS has a single Python package called “Boto3”, from which you can do pretty much everything. Microsoft in their infinite wisdom has a separate Python package for every service, and sometimes subset of service. Do you know whether you need the package for Storage Accounts, File Share, Share Accounts, Object Store or whatever else they had? Also, authenticating against these was a pain: sometimes you need a key provided by the service (let’s hope your permissions let you see that), sometimes you need to generate a service principal for your app (unless there is already one? In which case it’s listed in the UI, but nowhere you’ll find it, and certainly not under “service principals”, and you probably won’t have the permissions to see the information you need anyways) and then sometimes you need both!

Azure let us spin up a K8s cluster on a version of 1.18, but then didn’t let us scale the cluster a few weeks later, because apparently that version just didn’t exist, so we should either use 1.17, or update to a newer version of 1.18, but you can’t skip point-releases, so you’re going to have to update everything in your cluster before you can have another node.


Gee.. Sounds like you have had an awful time of it. Any chance you could speak more of some of the issues you encountered? We are looking at cloud providers and was kinda leaning towards Azure.


For what it is worth, I have been migrating from AWS to Azure for a few projects over the past year and have had nothing but good things to say about Azure. Documentation is superb, as is support and experience consistency.

AWS is still great, I just happen to like Azure more.


My advice, stay the f*ck away from Azure as far as you can. It’s the biggest clusterfuck I have ever had to deal with. Unless you are a Microsoft Gold Partner and you simply buy every shit which your Microsoft Account Manager tells you then you have no reason to use Azure. Unlike the Google Cloud or AWS, Azure has not a single service which is unique or good in any particular way. On the other side, they have many services which are uniquely so bad that they are unusable in Azure.

Example of unique services in the GCloud: Spanner, Firestore, BigQuery,...

AWS: Face recognition service and other neat things

Uniquely fucked up in Azure: Functions, Web Apps, Storage, Application Insights (FML!), ... many many more


Your post is not going to help him because you're just ranting without providing details on issues you encountered.


> Uniquely fucked up in Azure: Functions, Web Apps, Storage, Application Insights

I’m using all of these and it’s... fine? It does what is says on the tin basically.


Oh my god Azure functions. What a pain they were, nevermind getting them running locally.

AWS Lambda: run the code, because it’s easy to write code to be fully independent of lambda specifics.

Azure functions: oh no you need to run this thing to simulate stuff. Oh you’re not on Windows? Uhhhh too bad, that doesn’t work. You’ll have to use this other, random beta software. Oh it failed to run now, because you didn’t provide it with some arcane set of azure credentials which are inexplicably required to run it locally.

Storage-especially on Kubernetes is Alpha quality at best. The amount of pain we had was wild considering it is a fairly basic requirement.


I had no trouble developing Azure Functions with VS Code on my ThinkPad running Ubuntu.

Getting them to run locally [0] or in your Kubernetes Cluster [1] isn't too difficult. With AWS Lambdas that isn't possible at all.

Regarding the storage for Kubernetes, as long as you don't provide details it looks like just another rant. I get that you like AWS more than Azure but your opinion may not reflect the actual experience someone else will have.

[0] https://docs.microsoft.com/en-us/azure/azure-functions/funct... [1] https://docs.microsoft.com/en-us/azure/azure-functions/funct...


> Functions, Web Apps, Storage, Application Insights

What are the issue with these? What's missing/could be better?


What kind of workloads are you looking to run in the cloud? Chances are you won't have major issues despite the rants on here making Azure sound like a dumpster fire.


Honestly our stuff is pretty pedestrian. We have a front end website and API service, a couple of application servers for our telemetry IoT outstations and of course a backend database server.


I think you'll be fine with Azure then. You should use the pricing calculator beforehand to not be surprised by the costs and figure out how to reduce them but this applies to any cloud provider.


Thanks for the reply MaKey. Thats good advice.


Microsoft has always, always been awful at naming things, and all the intermediate steps in this unification have all been terribly named. However, this unification removes all the intermediates, which means they can simplify the naming as well. Let's hope they can stick to it...


The type system in typescript is a lot more advanced than the one available in C#.


This is due to the fact that typescript needs to interop with javascript, and existing js ecosystem has some very weird "typing" decisions. In TypeScript you can express types that you would never ever do in a "sane" language like C# which has proper typing design in place.


I have wanted union types plenty of times in C#.

Nullability analysis was recently added to C# (nullable reference types) but it was in tyepscript first.


Interested in what you mean by 'lot more advanced'?

The optional nature of the typing can be very useful for the web for sure. From my experience that same flexibility means you can't do as much reflection at runtime as in .NET but I have been for the most part away from typescript for last couple of years, so be interested in any resources around this subject.


* Structural types rather than nominal
* Type unions and intersections
* Type guards
* `keyof` operator
* String and number literal types

See https://www.typescriptlang.org/docs/handbook/advanced-types.... for more. There are a lot of utility types in the standard library that could not be expressed in C# as well.

Reflection is very different. It kind of still exists, but in a very different way.


It isn't really more advanced so much as it is different. What typescript achieves with structural typing it loses in terms of good error messages and the encapsulation benefits of nominal typing.

I’ve gotten used to typescript, but I still enjoy using C#. Some things I really miss in C# (limited operator overloading) that will never come to typescript (not because of the type system, but because of javascript source compatibility).


Some things like conditional types and string template literal types start to get way out into abstract type land that is certainly more advanced than C#'s type system. Typescript is inching closer and closer to the Haskell-ish land of turing complete type systems where the type system itself is nearly it's own meta-programming language with each version.

(A recent example was the "SQL engine" written using string template literal types to type safe "query" an in-memory database. That's quite more advanced than anything C# is capable of, albeit a strange hack and unlikely to be specifically something useful in production anytime soon, though still built out of things that make Typescript useful to some JS projects.)


Typescript lacks the goal of soundness, which makes it much easier to introduce powerful features than in Haskell.


Part of it is that TypeScript doesn't need to worry about memory layout - everything is basically a hashtable.


Yeah, this is a big problem. I wanted to understand the differences between just C#, F#, .NET Standard, .NET Framework, and .NET Core, and it took a somewhat long article[1] just to figure that out!

[1]: .NET on Non-Windows Platforms, a Brief History. https://two-wrongs.com/dotnet-on-non-windows-platforms-brief...


...and if you’re creating a new class library today, you also need to understand how UWP and .NET 5 fit into the picture!

I’m not complaining too much, they are intentionally addressing this fragmentation, but it’s a real mess for newcomers.


As someone who uses .NET daily, but loves Clojure and other functional programming paradigms - Records seem like a very compelling feature. Immutable data structures without any hassle to set up. Simply write "record" where you would normally write "class" and boom, you are working with immutable objects. This is one step closer to one of the best features of Clojure IMO - everything is immutable.

Additionally, there seems to be another new way to accomplish this. Normally class properties are accessed via get; and set;

Now we have the ability to use get; and init; which effectively makes the object immutable/read only after initialization. The syntax is just a little cleaner and more obvious to the reader how the property should be used.

C# has really been adding some great features the past few years and is certainly getting more expressive and concise which I appreciate.
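Both features in one quick sketch (the `Point`/`Temperature` types are made up for illustration; requires C# 9 / .NET 5):

```csharp
using System;

// C# 9 record: value equality, `with` cloning and a readable ToString for free.
record Point(int X, int Y);

// init-only property on an ordinary class: settable during initialization only.
class Temperature
{
    public double Celsius { get; init; }
}

class Program
{
    static void Main()
    {
        var a = new Point(1, 2);
        var b = new Point(1, 2);
        Console.WriteLine(a == b);        // True: records compare by value

        var c = a with { Y = 5 };         // non-destructive mutation: copy with one change
        Console.WriteLine(c);             // Point { X = 1, Y = 5 }

        var t = new Temperature { Celsius = 21.5 };
        // t.Celsius = 0;                 // compile error: init-only after construction
        Console.WriteLine(t.Celsius >= 21);
    }
}
```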


You probably know, but F# exists if you want a functional language in the .NET ecosystem.

Granted, you end up dealing with a bunch of OOP libraries still since .NET is very C#-centered, but the language is pretty good and the integration is painless.


My experience with F# over 7 years of production coding is for most needs there is already an F# .NET library or wrapper over an OO library. It's mostly only very specialized needs that drive me to an OO .NET library.


Yes, I’d love to use F#, but not a lot of opportunities out there.

And .NET, like you mentioned, is very C# focused which would become frustrating at times.


If I wrote a F# codebase at a lot of companies I would find myself swiftly booted out the door.


Oh yes you'd definitely get booted for using F# in a C# shop. I love F# but would never use it at my current job because nobody else on my team knows anything about FP, even though the language itself can be learned in about a week since it's so simple.


My comment was a glib comment and wasn't supposed to be taken too seriously.


Fair enough, but it also outlines a bit of truth that you shouldn't necessarily go against the grain of the programming culture of a workplace just due to personal preference.


Aren’t records pretty trivial to create as a custom class in C#?


If you want to see how much code you get 'for free' by defining a record that you used to have to do manually, check out the generated C# here (structural equality, GetHashCode, Deconstruct, cloning for 'with', pretty printing, etc)

https://sharplab.io/#v2:EYLgtghgzgLgpgJwDQBMQGoA+BYAUANwgQAI...


Aren't records meant to be immutable?

I don't understand why properties have setters in the generated definition of the record. I'm missing something.


Constructor-defined properties of records are implemented using auto-implemented properties with the new init keyword, so:

    public int MyProp { get; init; }
instead of

    public int MyProp { get; set; }
Apparently those are implemented using readonly backing fields while still retaining property setters.


That’s neat!

It's also generating code for the Main method (from top-level statements), which is also a new C# 9 feature


Absolutely not.

Records generate a lot of code when you compile them, including methods to compare by value.

The boilerplate required to do that in C# without records is extensive and hard to maintain. Even structs, which are supposed to be value objects, are not easy to compare by value.

I'll put it this way: records are important enough that I stopped using C# because it lacked something like records, and now I'm going to pick it up again.


Thing is, you can do it, but with ceremony.

You have to override Equals() and GetHashCode() for starters. Then you also need to make sure you only have getters and no setters (or private setters). In some cases you need to make your empty constructor private and force a specific constructor to be used. If you have multiple constructors it means you will have different invariants, which puts you in class land; you are better off with classes then anyway. Also, if your immutable objects have many methods, that too puts you in class land.

Just that alone makes it cumbersome.

So a new record type that behaves somewhat like a struct would be perfect. Or the F# way, which is basically perfection.
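To make the ceremony concrete, here is the contrast with a hypothetical Money type. The hand-written class is only a subset of what records generate (records also give you ToString, Deconstruct and `with` support):

```csharp
#nullable enable
using System;

// The pre-record ceremony: all of this for ONE immutable, value-compared class.
sealed class Money : IEquatable<Money>
{
    public string Currency { get; }
    public decimal Amount { get; }

    public Money(string currency, decimal amount)
    {
        Currency = currency;
        Amount = amount;
    }

    public bool Equals(Money? other) =>
        other is not null && Currency == other.Currency && Amount == other.Amount;
    public override bool Equals(object? obj) => Equals(obj as Money);
    public override int GetHashCode() => HashCode.Combine(Currency, Amount);
}

// The same semantics (and more) in one line with a C# 9 record.
record MoneyRecord(string Currency, decimal Amount);

class Program
{
    static void Main()
    {
        Console.WriteLine(new Money("EUR", 5m).Equals(new Money("EUR", 5m)));        // True
        Console.WriteLine(new MoneyRecord("EUR", 5m) == new MoneyRecord("EUR", 5m)); // True
    }
}
```

Now imagine maintaining the Money-style boilerplate across 50 such classes, which is exactly the pain described below.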


I also want to stress that this is per class. Which in reality is madness.

If you have just 50 immutable classes you have to replicate those 50 lines of boilerplate for each class and make sure it's bug free. It clutters your classes up and it's just plain gross.

It's going to be way better that the platform supports this kind of data structure natively instead of us building out the monstrosity ourselves.

By the way, plenty of C#'s features are basically syntactic sugar and get compiled down to more verbose code anyway. We think we write one or two lines of happy code, but underneath it might get turned into 20 lines. Luckily we mostly don't have to see or deal with it unless you're trying to squeeze your app's performance to the max (or have a nasty bug that can't be caught by normal means), but at that point it might make more sense to use a lower-level language.


There is no correlation between lines of code and performance. Longer code is often faster, while short, succinct code is often slower.


they compile to classes in the background. the benefit is in the succinct syntax, value-type style equality checks and built in immutability (via the new init-only properties).


Records look like Scala case classes. I miss them; they save a lot of mucking about (like, loads). Immutability is an extra; it's the brevity they bring.


F# gains string interpolation and typed string interpolation:

https://devblogs.microsoft.com/dotnet/announcing-f-5/#string...

Knew it was coming but still really happy to see this.


These changes are huge! I'm interested to see what I can build with computation expression overloads


if this can be used to make a type checker for SQL strings like https://github.com/MedFlyt/mfsqlchecker or https://github.com/codemix/ts-sql but for C# I will be stoked!


The closest thing is https://github.com/rspeele/Rezoom.SQL which once compiled in your F# project, can be used in C# by referencing the generated assembly.

Although this has nothing to do with string interpolation; the "typed string interpolation" refers to the F# printf format specifiers.

You could also build such tool separately using FSharp.Compiler.Service (possibly using the analyzer infrastructure for ionide: https://github.com/ionide/FSharp.Analyzers.SDK), AFAIU there will be consolidation of this type of tooling relying on FSharp.Compiler.Service in the future to make this integrated to all F# tooling.


this is a nice looking project, I actually starred it in the past and had been meaning to check it out - thanks for reminding me!


Congratulations to the team, but more than a year after the Surface Pro X shipped, there is still no way to build desktop applications for aarch64 using dotnet/VS2019 on the local machine. Windows on ARM has no native support for WPF, WinForms, or WinUI.

Visual Studio 2019, running as a 32-bit x86 application, can't see the local machine as a target for aarch64 applications. When I asked Microsoft, the response was to use an x86-64 machine on the same LAN as the aarch64 machine and use remote debugging. So you need two machines to ship a desktop application.

This is bad, and part of the reason there are so few apps running natively on the Surface Pro X.


Meanwhile, this week Apple launched three products that come with a free toolchain and IDE tailored perfectly to the new architecture, supporting both old and new UI APIs.

On the Windows side, Visual Studio is still a 32bit application, and as you have pointed out, on ARM, it supports basically nothing but the ancient Win32 C API.


Yes but on Apple then you are at the mercy of the XCode environment. It's not all it's cracked up to be, and it's pretty cracked up..


Yep, they are just quietly slogging away over there on their singular vision. They are not all over the map like Microsoft has been in the past, with its throw-everything-against-a-wall-and-see-what-sticks approach!


Similarly, porting from HoloLens 1 (x86) to HoloLens 2 (aarch64) has been... challenging in many cases. For instance, there is no Fortran compiler easily available for Windows aarch64, but Apple has promised to ship one for the new Apple Silicon on day 1.

EDIT: Here it is, right on time -- https://www.nag.com/news/first-fortran-compiler-apple-silico...


It's because Microsoft is a small company with no history of implementing programming languages.


> From what we’ve seen and heard so far, .NET 5.0 delivers significant value without much effort to upgrade.

From the previous version of .Net Core only. They've done absolutely nothing to make migrating from MVC 5 any easier, while proclaiming that this somehow merges .Net Framework and .Net Core. Going from MVC 5 to .Net 5.0 MVC is a complete re-write of the web layer from an empty project on up (even according to their own article [0]).

I think the poor messaging on .Net 5.0 bugs me more than the actual work required for the migration (which hasn't changed significantly). .Net 5.0 is just .NET Core 4 with a fancy new name.

[0] https://docs.microsoft.com/en-us/aspnet/core/migration/mvc?v...


We did the migration off ASP.NET a couple years ago. Our site wasn't that big, but we had some major anti-patterns that we decided to address during the migration. (We had multiple CSPROJs for "Business Logic" and "Data Access Layer" that didn't do anything of value; the end effect was simpler code and ~70% less LOC.)

There were a few major design changes that we had to embrace in the migration, mainly the middleware and DI. But we still managed to greatly increase performance, reduce our code footprint and get inline with the future Microsoft's vision for a C# webserver. All in it probably took 2 out of the 10 devs a month to complete the migration, while also addressing critical bugs.


Out of curiosity, why is your site using .NET instead of a modern framework in NodeJs or Python?


.NET is a modern web framework


I understand the naming. They want people to switch to the other track (Core). With this naming it's like the other track makes a turn and continues in front of the .NET 4.x track.

This makes it easier for me to make a case to managers that 4.8->5.0 is an obvious migration to do. Selling 4.8 to core4 would sound scary.


> From the previous version of .Net Core only.

Which is already three major versions ahead of classic .NET.

You’re literally complaining about making upgrades past four major releases not being drop-in compatible.

Can you show any other stack which has made such strides to modernise and maintain better compatibility?


No, I'm literally complaining about the misleading messaging surrounding .Net 5.0 "merging" .Net Framework and .Net Core. My post was very clear on that. Like this quote from the linked article:

> .NET 5.0 is the first release in our .NET unification journey. We built .NET 5.0 to enable a much larger group of developers to migrate their .NET Framework code and apps to .NET 5.0.


> No, I'm literally complaining about the misleading messaging surrounding .Net 5.0 "merging" .Net Framework and .Net Core.

Yes, it’s kinda cheating. On the other hand it has been obvious since .Net Core 1.0 that this is where the future is going to be.

All announcements before .Net 5 have been clear on the compatibility issues and preparations required, for a couple of years now.

Whoever hasn't been planning this migration can thank themselves.

Calling what is effectively .Net Core 4.0 for simply “.Net 5.0” is just a way to make it completely obvious, even to those not paying attention, that .Net Framework 4.8 is now superseded, and that the upgrade path is .Net (Core). There’s no other option around the corner. Get with the times. Etc.


I believe that verbiage is mostly referring to options for VB.NET, WinForms, and WPF being available in Core, meaning projects that use those can be ported.


As pointed out earlier, Microsoft has really butchered the proper nouns of their offerings.

If you read a dozen blogs posts about the background for the decisions and are able to hold all that information in your head while you need to make a decision, it is possible to attain clarity for a few moments. But it is not fun.

On the plus side, going back to versioning .Net should help for the next few years to come.


Note that .NET 5.0 is not an LTS release:

> .NET 5.0 is a current release. That means that it will be supported for three months after .NET 6.0 is released. As a result, we expect to support .NET 5.0 through the middle of February 2022. .NET 6.0 will be an LTS release and will be supported for three years, just like .NET Core 3.1.


I believe Microsoft are still recommending people move to .NET 5 though. Unless you're extremely risk averse it seems like a very simple upgrade from 3.1 to 5 and hopefully 5 to 6 next November (which will be the LTS release) is the same.


I just did a straight-up find-replace for `netcoreapp3.1` -> `net5.0` across 66 projects, built and passed tests without a hitch. Zero issues so far outside of the warnings resulting from enhanced nullable reference type smarts.
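For anyone wanting to try the same, the change in question is the TargetFramework property in each .csproj:

```xml
<PropertyGroup>
  <!-- was: <TargetFramework>netcoreapp3.1</TargetFramework> -->
  <TargetFramework>net5.0</TargetFramework>
</PropertyGroup>
```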


I'm excited about this! I've used .NET Core to build some CLI tools for Linux in F#. It's great to have an alternative to Go in terms of mainstream, high level languages (read: has tons of good libraries) that can compile to an easily distributed Linux binary.


Congrats to everyone involved. This is a huge accomplishment and arguably many steps in the right direction.

We are very excited to get our hands dirty with .NET 5 sometime in Q1 next year. We currently run on .NET Core 3.1. I expect our migration from 3.1=>5.0 will be a total non-event, but we don't want to risk any regressions during our current crunch phase. Our migration from 4.7=>2.0 was the most difficult, but once we got everything off Framework and onto Core things have been going really smoothly with the incremental upgrades. Really hoping this trend continues indefinitely.

The only part of .NET 5.0 that has left me disappointed is the continued lack of polymorphic deserialization support in System.Text.Json - a la Newtonsoft.Json's TypeNameHandling enumeration. This is the only thing holding us back from migrating to this first-party dependency. I have already left feedback regarding our use case in the appropriate dotnet/runtime GH issue.

The biggest new features I am looking forward to are the performance enhancements surrounding startup time, GC, etc. Improved P95 latency is something that will be very welcome in some of our busier environments.

In terms of the overall package that .NET brings to the table, we couldn't be happier. Sure, VS could run a little smoother and some of the configuration/DI stuff can be very aggravating (at first), but overall it's a fantastic experience.


>I expect our migration from 3.1=>5.0 will be a total non-event, but we don't want to risk any regressions during our current crunch phase

FWIW, I just did a find/replace across 66 projects for `netcoreapp3.1` => `net5.0` and it built and passed tests on the first try. There were a fair few new nullable reference type warnings though!


"Our migration from 4.7=>2.0 was the most difficult"

Would love to see a write up of your challenges / approach. Seems to be a big lack of write ups on this process that I'm sure lots of devs would appreciate!


I can give you a 500 word abstract here.

The core challenge was dealing with 3rd party dependencies (Nugets) relative to each project. We ultimately found that attempting to mix Framework/Core/Standard projects together was an excellent substitute for nightmare fuel. So, the happy path for us turned out to be to do it all at once and only use .NET Core project types throughout (DLL/EXE). Trying to convert your overall solution 1 project at a time hoping for some sort of incremental outcome is probably going to be more frustration than it's worth.

Assuming you bite the bullet on all or nothing, the next challenge will be: is your code even supported anymore? This is obviously going to vary wildly depending on your use cases. For us, the Microsoft.Windows.Compatibility shim was enough to restore 100% of the functionality (we rely on System.Drawing and DirectoryServices). But there was also a lot of other rewriting to support the new AspNetCore primitives, and we also moved over to Blazor for web UI (which is more of a rewrite than a migration).


Thanks!


My team did this when we moved Bing.com over to .NET Core, but it's internal. I will see if we can make it public. The problem is there are some skeletons in the closet that are irrelevant now (some since NS2.0, more since netcoreapp3.1), so I wonder how informative it will be.


Would love to read about that, even if some parts are now already outdated or no longer as big an obstacle.

A site as large / complex as I assume bing is would possibly allay lots of our concerns and give us some concrete steps to move forward with.


> .NET 6.0 will use the same approach, with net6.0, and will add net6.0-ios and net6.0-android

Huge endeavor and launch! Looks like mobile app development will be the same with Xamarin


Well, they are working on something called ".NET MAUI" that should be released with .NET 6: https://github.com/dotnet/maui


Mobile development in C# is just dead. Flutter got all the momentum.


Dead is perhaps overstating things, but I think Xamarin is problematic for new development because it hasn't kept up with the latest movement in the iOS ecosystem.

Looking at their issue tracker, it seems watchOS development is a nightmare, there's no support for catalyst, and there's no support for WidgetKit. Widgets are very hot right now, so that was disappointing.

It used to be that if you could make an app for iOS with Swift, you could make it with Xamarin. That guarantee is no longer there.


Google will move on and then all of the developers who hinged their platforms on it will need somewhere to go to.


This bugbear seems as outdated as "M$FT is evil." Every company cancels projects. Flutter/Dart appears to have staying power.


Huh? The saying "Microsoft is evil" is as valid today as it was 20 years ago.


You seem to be correct (in so far as Firefox is also dead): https://trends.google.com/trends/explore?q=Xamarin,%2Fg%2F11...


I don't know about that. Pretty much all cross-platform solutions are a rounding error compared to the first-party Swift/Java platforms. I can believe there's potential for another yet.


Xamarin is huge in the .NET world and Microsoft is investing heavily into it. I know plenty of LOB apps, huge apps, built in Xamarin. I don't even know of a single app built in Flutter.


It's ok but not huge. I'm starting a second mobile app in native Xamarin and there are some pretty active communities around.


With Linux support, AOT compilation, and other goodies, I wonder how many people will prefer writing backends in .NET over Java.


I think the Java ecosystem is more diverse. There are multiple JDK implementations. There are a lot of OpenJDK builds from different vendors, including paid support options. There is an extremely mature library and tooling ecosystem. While I don't think the JVM is superior to the .NET VM, it's not inferior either. .NET supports value types, while the JVM supports some very advanced garbage collectors. And, of course, JVM development is not over. Hopefully, in a few years there will be value types in the JVM as well. Java is inferior to C# as a language, but there's Scala or Kotlin for those who want a better language, and they are more on par with C#. And for many people the Java language is enough to write good code and they don't really need any more features. Maybe a simple language is a feature in itself.

I don't see any strong reasons to switch to .NET from Java for those who heavily invested in Java already. But I think that .NET is pretty strong, so the same argument could be made for the other direction, for folks who are fluent in MS infrastructure and want to tackle Linux, .NET probably is good enough to keep using it.


Agreed about the Java ecosystem. Years ago I worked on a text indexing and searching application. All the interesting stuff was in Java and, if you were lucky, there was some outdated and buggy .NET port. C# is a very good language, but if I had to start something from scratch I would probably choose the Java world just because of the availability of libraries.

.NET could get a huge boost if there was a way to use Java libraries. There was IKVM.NET, but that looks pretty dead.


Kotlin is getting a pretty solid push as a server-side language too. Everyone I have talked to that uses it is a proponent.


Java has GraalVM whose AOT technology[1] is being adopted by pretty much every framework out there.

[1] https://www.graalvm.org/reference-manual/native-image/


No thanks. I've been writing .Net since before day one and quite frankly I've been buggered around enough now to run as far away from this as possible.


Interesting, I always felt C# was a step ahead of Java, while the JVM was some steps ahead of the CLR.


The language is absolutely brilliant and I love it. It's the vendor's schizophrenia that's the problem. I've run out of fingers to count the things that have been painfully deprecated after being promoted as the next greatest thing and sold hard. This has cost me, my clients and my employers ridiculous amounts of money to unfuck.


Totally fair; anyone in the Microsoft development ecosystem has to learn to be incredibly skeptical of anything new. Any time I spent on adopting Silverlight was essentially wasted. I had one client that had adopted a random WYSIWYG tool released by Microsoft to design WCF services, when anyone experienced in the ecosystem knew it had abandonware written all over it.

Of course, other ecosystems have their own versions of this. But Microsoft might be the only one that essentially controls an entire ecosystem and still manages to give devs whiplash.


Exactly. Yes I interviewed at a company where Silverlight and WCF was the future and they'd just rewritten everything in it. I didn't see it with the way everything was going. I dread to think of what happened to their business when the rug was pulled out overnight.

"Microsoft says you need to start funding your entire product to be rewritten from scratch"


I don't know how long ago you're speaking about, but Silverlight and Flash were at their end of days even back in ~2006. Apple just nailed the coffin shut. That's not Microsoft's fault, there was a sea change in the industry away from browser plugins and these types of platforms.

Apropos WCF, well, we got JSON, no one outside of "enterprises" was doing SOAP/WSDL et al any more, and it was pretty much done. That's not really a fault of the vendor; the world moved on. At the end of the day, as a software development house you have to make strategic choices, regardless of vendor. And you can still build WCF apps today, sure maybe not in .NET Core, but they'll still be supported for donkey's years by MS - but who'd want to?


Silverlight wasn't even born until 2007, so it's hard to see how it was dead in 2006. :)

Or maybe you're saying it was DOA?


> Or maybe you're saying it was DOA?

Pretty much. The only time I've ever seen Silverlight apps in the wild were for streaming use (Sky/Now TV for example).


Very true. I don’t trust anything new MS is putting out. The risk of being abandoned soon is just too high.


Not that my opinion means much, but I'm a UI/UX guy and I've always had a much better experience learning and using C# as opposed to Java.


What would you choose for server code?


Probably Go at this point in time. They care about API stability, keeping scope focused and about technical improvement instead of marketing. And the language is nice to use! There are warts but they take a measured structural approach to resolving them and caring about the change.


Go might care about their own API stability, but it took them years to stop doing direct syscalls on macOS (where it is not a stable ABI); and last I checked, they were still doing that on BSDs.

As for technical improvements, well... it's a language that took, what, almost a decade to add generics? And that's because the designers were claiming that everybody else is doing them wrong, and they want to figure out how to do it right. Now that they're finally adding them, turns out that they look exactly the same as everybody else's.


You have to do direct syscalls if your lowest level is not C. I don’t see your point. Perhaps vendors should have stable ABIs?


Vendors do have stable ABIs: libc. You don't have to be writing in C to make calls into C libraries, and it would be rather strange for a language to be unable to use the C ABI, yet claim to be a systems programming language. You might notice that no other language has this problem.

Besides, Go does use libc on macOS (now), and always used Win32 API calls on Windows, so it clearly can be done. It's just that for a while, they've decided that being fast on macOS was more important than respecting the ABI stability guarantees from the OS.


Are you saying that .NET API isn't stable? I have code that works from .NET 1.1 back in 2004.

I wouldn't use any of their flavor of the month tech they release from the conferences. But the .NET framework has been pretty stable over the last 15 years.


No it doesn’t. Try it on .Net 5.0.


It works with .NET Standard, .NET Core and .NET 4+ and I am pretty sure when I do the upgrade it will just work.


I was going to say the same, but you said it much nicer than I would have done.


.NET/C# are great platforms, it's the median .NET/C# developer and the general culture of working at one of these shops that ruins it.

Just decades of over-engineering OOP "best practices" baggage that is impossible to shake off.

Why do people "move" to new languages? Mostly to leave the baggage behind.


Good points there. I have to agree.


Wherever you run to, you will get buggered around a bunch and then run somewhere else.


It is a bit sad that so-called "native AOT" was not included in the release. When I heard last year that .NET 5 would support single-file binaries, I expected them to be the same as something like Rust or Go would offer. But no, Microsoft changed the meaning of AOT and moved the goalposts, introducing the real AOT as "native AOT". Thankfully they are aware of the issue, so I hope to finally see it when .NET 6, which will also be an LTS release, ships.

https://visualstudiomagazine.com/articles/2020/08/31/aot-sur...


I find CoreRT covers a lot of my current AOT needs.

For me, one of the blockers was Windows Forms not working with AOT, which appears to have been resolved very recently:

https://github.com/dotnet/winforms/pull/4177


There is also going to be a crossplatform (including mobile) UI library for .Net 6: https://devblogs.microsoft.com/dotnet/introducing-net-multi-...


Does anybody know if .NET 5.0 will be included in Windows via Windows Update, like .NET Framework?


It won't be. .NET Framework is an embedded component of Windows. .NET Core is something apps have to package along with them. (Due to this, .NET Core apps are larger than .NET Framework apps, but tend to have fewer compatibility issues in theory, since they bring everything they need with them.)


.NET Core apps don't have to package the framework. They can also require it to be installed separately, although it's still not an OS component in that case.

https://docs.microsoft.com/en-us/dotnet/core/deploying/#publ...


Are you sure it will not be distributed with Windows? Is 4.8 the last version being distributed with Windows, requiring all future .NET framework versions to be packaged with the apps?


The idea is that it's installed separately from Windows, in the same way one might install Java.

So because you can have multiple versions installed, or you can package it with an app, the updates can be more rapid.

The problem with .NET Framework is that everything, including system services, relies on it - that means it needs years of testing and can't have any breaking changes. Basically they decided that packaging .NET as part of Windows was a mistake.


I feel their mistake was not allowing apps to call a specific version if need be, not the general decision to include it in Windows.

While .NET Framework updates annoyingly force every app to use it, I expect .NET Core to make it super difficult to secure apps that decided to hardcode some specific ancient release of it and allow no way to override it.


.NET Framework was built to support multiple versions installed side-by-side, with apps able to target a specific Framework version. But that turned out to be more difficult in practice than in theory. Multiple side-by-side Frameworks turned out to be more hassle than useful, and even with just the 3 versions of the Framework that were built to live side-by-side (1.x, 2.x, sort of 4.x, long story), that was often pointed to as a source of bloat in Windows.

The mistake to include it in Windows was partly why more versions weren't made (because then they'd have to be serviced for the length of a Windows version's service lifetime; because then they'd contribute way more to Windows bloat).


There will be no future .NET framework versions so I believe he's correct. .NET 5 is a new version of .net core


4.8 is a version of .NET Framework. What we have now is essentially .NET Core 4, with the misleading name of .NET 5, which confuses people. .NET Core is not provided via Windows Update; like someone already said, you need to package it yourself with your app.


Yeah, .NET Framework will see no new features, so they'll continue to patch bugs in 4.8 probably for over a decade to come, and a lot of new apps will just not use it, and include .NET Core runtime instead.


It also means that the first major vulnerability in .NET Core will be an unmitigated clusterfuck, as admins will have no automated tooling in place to update .NET Core with security patches.

"Just recompile with the new version!" I hear the cries already, said by people that have never had to support literally a thousand applications in a government data centre, half of which were built years ago by a vendor that is long since bankrupt.


Publish your apps without .net core (--no-self-contained) and install/manage runtimes as you usually would.

If you have a thousand apps, you probably also have a CI/CD system, and you can gain fine-grained control over your runtime management needs with .NET build/publish.

MS does quite a few things poorly but they have done a solid job of operating in large enterprises.
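For anyone unfamiliar, the two publishing modes being discussed look like this (the runtime identifier here is just an example):

```shell
# Framework-dependent publish: the app does NOT carry the runtime,
# so admins can patch .NET centrally (flag available in the .NET 5 SDK)
dotnet publish -c Release --no-self-contained

# Self-contained publish: the runtime ships inside the app's output folder
# (the hard-to-patch case being discussed)
dotnet publish -c Release -r linux-x64 --self-contained true
```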


"Publish your apps" isn't the issue: I support a lot of applications I don't build or have the source code for. And they may be on my network for over ten years.

The problem is if developers are publishing self-contained apps with .NET Core, IT staff will be up a creek on vulnerability mitigation. While being able to pin specific .NET Core versions is nice for developers, being able to require the most current .NET Core version be used is important for IT staff who have to support these applications.


I'm expecting as this issue plays out, there will be a way to inject an updated framework to an existing app with a utility tool. Probably first party but definitely third party.


> you probably also have a CI/CD system

Or more realistically, dozens of CI/CD systems, covering less than a third of the applications.


Nope, since this .NET 5 is NOT a replacement for .NET Framework 4.8.


The highlights are great. Two lowlights:

- HttpClient/WebRequest ReadWriteTimeout is silently ignored, which can result in infinitely hung sockets when doing synchronous network I/O

- System.Speech is unsupported
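Since there is no per-read timeout on the new stack, a common workaround (just a sketch, not an official API) is to give each read its own CancellationToken deadline. The `HungStream` below is a hypothetical stand-in for a socket that stalls mid-body:

```csharp
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

// Workaround: impose a per-read deadline via CancellationToken,
// since ReadWriteTimeout is ignored on the new HttpClient stack.
static async Task<int> ReadWithTimeoutAsync(Stream s, byte[] buf, TimeSpan limit)
{
    using var cts = new CancellationTokenSource(limit);
    try
    {
        return await s.ReadAsync(buf, 0, buf.Length, cts.Token);
    }
    catch (OperationCanceledException)
    {
        throw new TimeoutException("read stalled past the deadline");
    }
}

try
{
    await ReadWithTimeoutAsync(new HungStream(), new byte[16],
        TimeSpan.FromMilliseconds(200));
    Console.WriteLine("read completed");
}
catch (TimeoutException)
{
    Console.WriteLine("stalled read detected instead of hanging forever");
}

// Simulates a socket that stops responding: reads never complete.
class HungStream : Stream
{
    public override bool CanRead => true;
    public override bool CanSeek => false;
    public override bool CanWrite => false;
    public override long Length => 0;
    public override long Position { get => 0; set { } }
    public override void Flush() { }
    public override int Read(byte[] b, int o, int c) => throw new NotSupportedException();
    public override async Task<int> ReadAsync(byte[] b, int o, int c, CancellationToken token)
    {
        await Task.Delay(Timeout.Infinite, token); // hangs until cancelled
        return 0;
    }
    public override long Seek(long o, SeekOrigin so) => throw new NotSupportedException();
    public override void SetLength(long v) => throw new NotSupportedException();
    public override void Write(byte[] b, int o, int c) => throw new NotSupportedException();
}
```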


Refusing to support System.Speech is why my project will remain locked to .NET Framework. Apparently the Microsoft Speech team is part of Azure now, and has decided nobody needs local speech synthesis that doesn't require a subscription to a cloud service.


IIUC, System.Speech was simply a .NET wrapper for the Microsoft Speech API (SAPI), which has been part of Windows for a long time. If you want to move to .NET Core, you can use SAPI via COM interop.

Disclosure: I currently work at Microsoft on the Windows accessibility team. We don't own SAPI or System.Speech, but we consume SAPI in Narrator (via COM in C++).


I recognize this. But so is WinForms and plenty of other Windows-specific APIs. System.Speech, being as you recognize, a crucial accessibility feature, and it should be considered extremely high priority, even if the usage percentage is low.

Microsoft should prioritize fixing this over other tasks with .NET, if they value accessibility users.

It'd be nice for a cross-platform local speech library to be available on .NET, and it'd be nice if the Speech team wasn't likely incentivized to push Azure, but at minimum, existing accessibility functionality should be understood to be crucial.


> System.Speech, being as you recognize, a crucial accessibility feature

I'm the first to push for prioritizing accessibility when needed. But there's a difference between an accessibility gap that prevents a person with a disability from completing some task, and a missing convenience wrapper for an API that a developer could pretty easily use through generic COM interop. So I don't think it's appropriate to play the accessibility card in this case.


I mean, nearly all outstanding documentation and tutorials on how to implement speech in Windows is conveyed via those APIs. I know I don't have the skills to replace my System.Speech calls with generic COM interop, I'm willing to bet that would ring true for a lot of .NET developers relying on the .NET Framework today.

At minimum, you're adding a "rewrite your accessibility code" cost to anyone moving from .NET Framework to .NET Core. How many businesses are going to do that, versus say, drop that feature, presumably due to "low usage"? Is Microsoft really serving the accessibility community here? Making it harder to add accessibility to software is going to negatively impact accessibility being available in software.

And if it's just a convenience wrapper, it should be trivial for Microsoft to reimplement it: And it'd be far less wasteful for Microsoft to do it for everyone than expect everyone who uses it for accessibility features to reimplement it themselves.


> At minimum, you're adding a "rewrite your accessibility code" cost to anyone moving from .NET Framework to .NET Core.

That would be true if it were common for applications to directly support alternative UI modalities, but that's not how accessibility generally works. An application implements a GUI, including support for the platform accessibility API (hopefully with the help of the GUI framework), and it's up to a separate assistive technology, such as Narrator (for screen reading) or Dragon NaturallySpeaking (for speech input), to use that generic accessibility support to adapt the UI for a specific need.

So if any assistive technologies are using .NET Framework, they might have a bit of difficulty converting to .NET Core. But that's a small number of applications, and in my experience, most Windows ATs use native code anyway. And using SAPI via COM interop isn't hard; you don't have to get down and dirty with P/Invoke or anything similarly low-level.


This little discussion got me curious about how hard it actually is to use COM interop with .NET. Using Visual Studio, it's just a right-click on your dependencies, selecting 'Add COM Reference...', and choosing the library you want to use. I was then able to call SAPI very easily like this:

  var voice = new SpeechLib.SpVoiceClass();
  voice.Speak("Hello World!");

The documentation for SAPI could use some love though. The sample code that is supposed to be there is nowhere to be found.


why doesn't MS just release the SAPI wrapper as a windows only nuget package since it already exists?


One thing I've learned in my time on the accessibility team at MS is that even at a company as large as Microsoft, any given team has limited resources and only so many person-hours in a day. So every task has to have a business justification. There probably hasn't been enough demand for releasing System.Speech as a NuGet package to justify it. That's just my guess though; I haven't talked to the speech or .NET teams about this.


https://www.microsoft.com/en-us/accessibility

"Microsoft is committed to revolutionizing access to technology for people living with disabilities—impacting employment and quality of life for more than a billion people in the world."

"To enable transformative change accessibility needs to be a priority. That’s why we have begun to manage it like a business and developed our Accessibility Evolution Model to track our progress."

I feel if these statements are true, Microsoft executives should require good maintenance of APIs heavily depended on by accessibility features, and should direct the appropriate teams to prioritize this issue. Because today, in pushing developers to move to a platform that doesn't support System.Speech, Microsoft is moving backwards.


i understand that, but i'll bet if they would dump the source to dotnet/unsupported/system.speech the community would do the work for them to get squared up with dotnet core. it doesn't take a lot of demand, just 1 or 2 developers who view it as a blocking requirement for netcore migration.


That's really dumb, because outside of VC-backed companies with an obsession with spying on people and selling their data, no one wants the risk of uploading audio to the cloud.


The ReadWriteTimeout issue was insanely aggravating; MS was adamant it was not a bug but rather an internal implementation detail for the longest time. I'm really glad they changed their minds because it was a very unexpectedly petty stance they were taking in the GitHub issue.


what makes you say they changed their mind? afaik it's same old story. it's really incomprehensible given that all they need to do is make a call on the underlying socket.


System.Speech has sadly not been supported since the beginning of .NET (as in dotnet, not the framework)

That pretty much means MS is deprecating it in favor of the Azure speech services...yeah, bummer.


Ouch! HttpClient/WebRequest ReadWriteTimeout never timing out is going to affect a lot of systems and result in production issues.


I wish the post was more direct that .NET 5 is the successor to .NET Core 4. I gathered this was the case but it didn't feel clear that I don't need '.NET Core' anymore to run .NET on mac/linux.


There is no .NET Core 4? There was a .NET Framework 4. 5 is how the streams are being crossed.


I really like where things are heading with .NET.

On the other hand, I'm still missing an easier interop with other ecosystems. C# is an easy to use language and I would love to write some common code / business logic / whatever you name it in C# and expose it as webassembly and C/C++, this would cover mobile, web and desktop workloads. You can do something like that with Rust (or at least they claim it, I never tried), but with C# I don't see any _easy way_ doing it. I would be happy if someone could correct me If I'm wrong.


Maybe in a year or eighteen months, I'll be able to move to this, from Framework code that still has dependencies that nobody has ever updated to .NET Core.

This version will be nearly out of support by then...


When your code depends on stuff that hasn't been migrated yet, it's never going to happen. From a .NET Framework position, .NET 5 is a huge breaking change, losing capabilities and using different base libraries (which will not be recovered later). From a .NET Core 3.1 perspective, it is a minor update.

.NET 5 reality is: They ported 90% of their app models (e.g. wpf, winforms, ...) and dropped some other (e.g. WCF, WWF, AppDomains, ...). That is not going to change.

.NET ecosystem reality is: what has not been ported by now will not be ported. .NET Core is now 6 years old (2014?) and everything new is currently .NET Core. Anyone who wants to stay in business ported already 2-3 years ago.


Well, I've still got to support integrations with other Microsoft products that are in support for the next decade, and only have Fx SDKs.

Tooling in Visual Studio has sucked out loud for most of the Core stuff thus far. There's not a lot of value chasing the churn versus letting it settle out.


I am also there. Ugly.


I know .NET Core applications can run on Linux. But do they natively (without any wrappers) support Linux, OR is there a wrapper to simulate Windows within Linux?


It supports linux natively. Anything windows-specific requires libraries that you can optionally add.


The C# libraries either invoke the native Linux libraries directly or go through a thin wrapper[1] that does the normal things you have to do to support multiple flavors of Unix (epoll vs kqueue, etc). Parts of the C++ runtime are written in terms of the Win32 API, and on Unix these APIs are implemented in the PAL[2]. The PAL is not that big, and I don't think the steady-state, performance-critical code paths go through it.

1: https://github.com/dotnet/runtime/tree/master/src/libraries/...

2: https://github.com/dotnet/runtime/tree/master/src/coreclr/sr...


Someone already answered this, but yes, it's native. I've been building with C# since it was first released, and dotnet core has really opened up development on Linux for me - I build apps for Linux and Docker all the time, and they "just work" the same as they do on Windows.


It would never achieve its performance if it wrapped something. .NET on Linux is as native as Java, Python or Go. Also, priorities at Microsoft have completely changed. Most .NET runs on Linux nowadays (dockerized or not).

It actually does the opposite: it is able to expose memory-like resources (e.g. from the network interfaces) deep into its consuming programs by using abstractions like Span<T>. And while that sounds like a wrapper, it is actually (depending on the resource) just a typed and safe pointer.
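To make the Span<T> point concrete, here's a toy sketch (not taken from the runtime) of a typed, bounds-checked view over existing memory, with no copying:

```csharp
using System;

class SpanDemo
{
    static void Main()
    {
        int[] data = { 1, 2, 3, 4, 5 };

        // A typed view over the middle of the array: no allocation, no copy.
        Span<int> middle = data.AsSpan(1, 3);
        middle[0] = 42;             // writes through to the underlying array
        Console.WriteLine(data[1]); // 42

        // The same API works over stack memory.
        Span<byte> scratch = stackalloc byte[8];
        scratch.Fill(0xFF);
        Console.WriteLine(scratch[7]); // 255
    }
}
```

Out-of-range access on the span throws, so you get pointer-like performance without pointer-like safety holes.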


native support, no emulation etc.


Does Asp.NET identity still use a Guid for primary key? I remember it being hell trying to convert to int a few years ago which left a sour taste.


I think the default is a string now (nvarchar 450). Here's [1] a guide on changing the PK type, but the example uses Guid instead of string. I assume int could be used too, but I wonder if changing to db generated value would cause headaches/show-stoppers.

[1] https://docs.microsoft.com/en-us/aspnet/core/security/authen...
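For reference, this is roughly what swapping the key type looks like (a sketch assuming ASP.NET Core Identity with EF Core; AppUser/AppRole/AppDbContext are made-up names, and it won't compile without the Identity packages):

```csharp
// Sketch only — requires Microsoft.AspNetCore.Identity.EntityFrameworkCore;
// not self-contained.
public class AppUser : IdentityUser<int> { }
public class AppRole : IdentityRole<int> { }

public class AppDbContext : IdentityDbContext<AppUser, AppRole, int>
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
}

// In Startup.ConfigureServices:
// services.AddIdentity<AppUser, AppRole>()
//     .AddEntityFrameworkStores<AppDbContext>();
```

Existing rows would still need a migration, which is where the headaches come in.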


I believe it uses a guid (or a string that stores a guid :-/) but it's fairly easy to change it to int. That's what I have in my boilerplate-kickstart template, anyway. Still, I remember I was "surprised" too.


Why would you want to?


One day I would really like to see the ability to have C#, F# and other languages all in the same project.


That will be a stretch, since they require different compilers. C# and F# projects in the same solution not good enough?


Would most people here agree that .NET is a much better platform for developing applications with a plugin-based architecture (which, I think, more or less, implies following Hexagonal aka Clean Architecture / Ports and Adapters Pattern [1]) than popular alternatives (e.g., Python, TypeScript/Node.js) due to a diverse set of comprehensive dependency injection (DI) implementations [2]? While it is possible to use a manual / non-DI approach [3], the trend seems to be in either extending this approach [4-5], or - an arguably more elegant and flexible solution - using DI [6]. Would love to hear opinions on how you would approach developing a heavily plugin-based application (including an ability to use virtualization containers, such as Docker, as functional plugins).

[1] https://www.infoq.com/articles/advanced-architecture-aspnet-...

[2] https://www.claudiobernasconi.ch/2019/01/24/the-ultimate-lis...

[3] https://docs.microsoft.com/en-us/dotnet/core/tutorials/creat...

[4] https://github.com/natemcmaster/DotNetCorePlugins

[5] https://maartenmerken.medium.com/announcing-prise-a-plugin-f...

[6] http://ewer.com.br/plugin-architecture-with-di-containers
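For what it's worth, the ports-and-adapters idea in miniature (made-up names, plain constructor injection, no DI container required):

```csharp
using System;

// The "port": core logic depends only on this interface.
interface IGreetingPort
{
    string Greet(string name);
}

// An "adapter" plugged in from the outside; swapping it (console,
// HTTP, test double, Docker-backed plugin...) never touches the core.
class EnglishAdapter : IGreetingPort
{
    public string Greet(string name) => $"Hello, {name}!";
}

// The core: receives its port via constructor injection.
class App
{
    private readonly IGreetingPort _port;
    public App(IGreetingPort port) => _port = port;
    public string Run() => _port.Greet("world");
}

class HexDemo
{
    static void Main() => Console.WriteLine(new App(new EnglishAdapter()).Run());
    // prints: Hello, world!
}
```

A DI container just automates the `new App(new EnglishAdapter())` wiring; the architecture itself is only interfaces plus injection.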


The DI in .NET is really powerful and feels intuitive once you play around with it in a few different projects.

We have almost 100 services injected into .NET Core DI and it always feels very stable and manageable. For us, we cheated a little bit on many services and just take a dependency on IServiceProvider to get at other services at runtime. This allows for any service to talk to any other service without worrying about some monster CTOR hierarchy. We used to try to keep things "correct" in terms of dependency chain, but we have found that the real world is a lot easier to work with when you permit circular dependencies between services and maintain 1 big flat collection of them. This is basically microservices but without the RPC ceremony and associated nightmares.


Thank you very much for sharing your DI experience. What is wrong with using IServiceProvider dependency? What is CTOR (hierarchy)?


CTOR hierarchy in this case refers to a rigid structure in which you are not allowed to have any circular dependencies. E.g.: If UserService and AccountService eventually evolve into needing to talk to each other, you can wind up in this situation. Most would argue you should refactor both services, create a 3rd service or slam the 2 together into 1. I argue that both have independent persistence layers and it makes a lot of sense from a business perspective to have these modeled separately.

So, the implication is that IServiceProvider is a way to "cheat" the system by not requiring you perform an impossible circular CTOR setup where UserService requires AccountService and vice versa. DI cannot resolve dependencies which require each other. In terms of actual harms, I do not really think there are any substantial ones to note. This is technically reflection but only slightly and I haven't been able to notice any performance impact, but we don't do IServiceProvider lookups in tight loops either.
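You can see why this works with a toy container (made-up types, standing in for the real Microsoft.Extensions.DependencyInjection container): resolving through IServiceProvider at call time, instead of in the constructor, sidesteps the cycle.

```csharp
using System;
using System.Collections.Generic;

// Toy stand-in for the real container, just to show the mechanics.
class ToyProvider : IServiceProvider
{
    private readonly Dictionary<Type, object> _services = new Dictionary<Type, object>();
    public void Add(Type t, object instance) => _services[t] = instance;
    public object GetService(Type t) => _services.TryGetValue(t, out var s) ? s : null;
}

// UserService and AccountService need each other. Constructor-injecting
// them into each other is unresolvable; injecting IServiceProvider and
// resolving lazily at call time is not.
class UserService
{
    private readonly IServiceProvider _sp;
    public UserService(IServiceProvider sp) => _sp = sp;
    public string Name => "users";
    public string Describe() =>
        Name + " + " + ((AccountService)_sp.GetService(typeof(AccountService))).Name;
}

class AccountService
{
    private readonly IServiceProvider _sp;
    public AccountService(IServiceProvider sp) => _sp = sp;
    public string Name => "accounts";
    public string Describe() =>
        Name + " + " + ((UserService)_sp.GetService(typeof(UserService))).Name;
}

class DiDemo
{
    static void Main()
    {
        var sp = new ToyProvider();
        sp.Add(typeof(UserService), new UserService(sp));
        sp.Add(typeof(AccountService), new AccountService(sp));
        Console.WriteLine(((UserService)sp.GetService(typeof(UserService))).Describe());
        // prints: users + accounts
    }
}
```

With the real container you'd register both services and take IServiceProvider in the constructor the same way; this just shows why the cycle stops mattering.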


Got it. I appreciate your detailed and clear reply.


CTOR heirarchy - Constructor Hierarchy.

You have to structure your constructors so that the ones needing a lot of services (and, in turn, those services needing other services) are all added to the service container and guaranteed to be instantiable.

Essentially, you build a tree of your services and have to ensure that all your constructors have objects that can be instantiated or managed by the DI system.


Understood. Thank you for clarifying.


I think Python, TS/JS and other popular languages do not block onion architectures / hexagonal architecture. Or most other patterns. It is a matter of will and the right execution.

Also, DI is not the same as DI containers. If I have three components and inject two into the third, that's DI, but I'm certainly not going to instantiate a DI container for it ;)


I didn't mean that non-.NET stacks are somehow antagonistic to hexagonal architecture. It's just that, based on what I have been reading on the subject, I came to a conclusion that .NET has the most comprehensive support of said architecture thanks to a diverse and feature-rich ecosystem of relevant frameworks (including DI ones), whereas alternative stacks are significantly behind in this regard.


is this an in-place update to the netcore3.1 runtime or does it install side by side for net5.0? just want to know what kind of risk it is to install. i got bit by the netfx48 in place update.


Side-by-side.

It’s effectively .Net Core 4.0, but they removed the “core” and versioned it 5.0 to make it more obvious that it supersedes and obsoletes .Net Framework 4.8 which lots of organizations are still clinging to.


to be clear- you are saying it is side by side with netcore31?


Yeah. I guess it depends on how it gets installed, but there’s absolutely nothing preventing it from running side-by-side.

I’ve been hot-swapping 3.1 and 5.0-rc2 for projects, preparing for the final 5.0 release.


so, i see 4 different runtime installs (base/aspnet/aspnet-hosting/desktop) here https://github.com/dotnet/core/blob/master/release-notes/5.0... .

where can i find instructions for doing a side by side install? thanks


This link has clearer instructions on what to install: https://dotnet.microsoft.com/download/dotnet/5.0

Just download the versions you want and install them all. Also keep in mind if you're doing actual development you need the SDK, not just the runtime.


Every version of .NET Core has always installed side-by-side with previous versions. There's no way to avoid it.


perfect, ty.


How did net48 bite?


extensive changes to the gc exposed an application issue that was complex to address. and no way to rollback the machine to net472.


Where did Java interop go?


Educated guess: shifted with net6.0-android. That is the core driver. Same as swift integration with net6.0-ios.


Even more important, what's been happening with .NET/VB?



People who think this is actually cross-platform need to consider these points:

* Debian can't package F# because it is built using MSBuild, and MSBuild is built using MSBuild. How can it be Microsoft doesn't have the resources to get it into a major distribution?

* Microsoft won't commit to maintaining any cross-platform GUI libraries. You will be relying on some random community project. Compare this with e.g. Python with PySide, which is officially supported by The Qt Company. The Qt Company is 460 times smaller than Microsoft, and they still manage.


There may well be some niggles that affect a small number of people doing some niche things under specific circumstances, but I don't think that's enough to claim it's not cross-platform.

Since dotnet core became a thing, I've been building cross-platform apps with great success - mostly Windows and Linux, occasionally MacOS too, and mostly x64, but also ARM too.


[flagged]


OK, we agree that Microsoft doesn't have their own cross-platform GUI library, but that's a very different thing from dotnet core/5 not being cross platform. Also, there are OSS GUI libraries not from Microsoft.

Frankly, at this point native, cross-platform GUIs are a niche. I used to write a lot of Windows-only GUIs (mostly WinForms, some WPF), but haven't written a desktop GUI for around 10 years now. If I was to now, I'd try one of those aforementioned OSS libs, or maybe use Electron. I know Electron gets a lot of hate on HN, but VS Code shows it's possible to do well.

Not sure if you meant it that way, but accusations of shilling are forbidden here on HN. I've no affiliation with Microsoft, I'm just a long-time, dotnet developer that loves C# and the ecosystem.


>Microsoft won't commit to maintaining any cross-platform GUI libraries.

Isn't that MAUI?


MAUI is relying on the community to provide Linux support, just like with Xamarin.Forms. I worked with Xamarin.Forms, it doesn't support hardware acceleration in its GTK backend, making it unusable on HiDPI displays (consumes too much CPU). Why would MAUI be any different?


> Why would MAUI be any different?

Speaking cynically, if it's tied enough into the main Microsoft Ecosystem it will have more community buy-in.

You see this happen with other parts of .NET as well, often to the chagrin of the OSS Devs who filled Microsoft's gaps only to be replaced by an (often inferior) solution.


How does bootstrapping work for other products in debian? GCC is built using GCC and cmucl is built using cmucl.


I’m guessing this is the issue being referred to https://github.com/mono/linux-packaging-msbuild/issues/1


Since Apple is announcing the new Macs with M1 SoC today, I'm wondering if it will support it from day 1 or we will have to wait?!

edit:

I guess I can't read. `We expect that Apple will announce new Apple Silicon-based Mac computers any day now. We already have early builds of .NET 6.0 for Apple Silicon and have been working with Apple engineers to help optimize .NET for that platform. We’ve also had some early community engagement on Apple Silicon (Credit @snickler).`

Still wondering for .net 5 though


I think this will be a patch release soonish. ARM is well supported already and the toolchain on macOS is also present already.

And they need it to support the development of .NET 6.


I’m just excited to see .NET 5.0 show up when I check for Windows OS upgrades.

I have no idea what it means or the significance of it.


You can now write very short functions in C#:

int Factorial(int n) => n is <= 1 ? 1 : n * Factorial(n - 1);
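For contrast, the same function using another C# 9 addition, relational patterns in a switch expression (wrapped in a class here just so it's self-contained):

```csharp
using System;

class FactorialDemo
{
    // Same function, written with C# 9's new relational patterns
    // in a switch expression.
    public static int Factorial(int n) => n switch
    {
        <= 1 => 1,
        _ => n * Factorial(n - 1),
    };

    static void Main() => Console.WriteLine(Factorial(5)); // 120
}
```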


Seems like .Net 6 will have cross-platform UX forms! That sounds very exciting.


I used to be an asp.net developer before. I appreciated the idea behind asp.net during its early years - making websites fast and with templating (or scaffolding). Later realized that it is bloat.

Then asp.net mvc happened, a whole new paradigm shift. But then I disliked the idea of stuff getting inherited from somewhere. *.cshtml files and all. Soon I realized what the IDE is actually doing behind the scenes, e.g. the cshtml files are compiled and there is an intermediate binary format, I guess. (Don't remember well.)

This is why I came to like something like express. Guaranteed there is no type safety. (JavaScript is a dynamic language). But I enjoyed the simplicity of it. Ever since I have moved away from the .net platform.

I know this code generation and these intermediate binary forms are necessary. But at times I was surprised by it. You find similar stuff for other things you write, e.g. winforms, Windows Presentation apps, etc. I realize there is no other alternative way for such platforms. But I generally disliked the idea that my editor does a lot of stuff behind the scenes. This makes me too dependent on the editor. E.g. try making a winforms app with just Notepad. It's not impossible; just very cumbersome.

I have respect for .net. In fact I do write console programs time to time. But I wish stuff was as simple as having a plain editor and getting started.


> But I wish stuff was as simple as having a plain editor and getting started.

This is the wrong thing to wish for.

It's like... wishing for "simpler times" of the pre-modern era. You know, the simpler times without dentistry and plagues that make COVID look downright pleasant. Simpler times of backbreaking manual labour, shovelling ditches by hand.

A typical computer can now process 10 billion general-purpose instructions per second. It is a power tool for the mind. Its entire purpose is to eliminate manual labour.

The fundamental concept of computerised information technology is to automate away the processing of information as much as possible.

If you don't get this, then you don't get IT at all.

There is nothing at all good, or somehow more "pure" about programming with Notepad, or VI, or whatever text editor you fancy.

This is like turning off the million dollar backhoe, stepping out of the cabin, and digging the ditch with your hands because you feel that it brings you "closer to the dirt" or something.


when it comes to text editors vs IDEs, that's not the point. it's not about wanting a rudimentary experience, it's more like wanting to build a PC vs buy a macbook.

when the very _syntax_ of a language is designed such that it's really only usable within a specific IDE, then all of your tooling is limited by the capabilities and decisions of that IDE.


You have fixated on a single point that the parent comment mentioned, hyperbolised it, and gone off on a complete tangent. I think you should take a step back and attempt to understand the overarching point the parent comment was making, because I don't think you get it.


You can use Notepad and run "dotnet run" to run your .net core programs, including winforms. .cshtml files can also be compiled at run-time or at build time with .net core.


ASP.NET Core is built like Express. It's all just middlewares. You don't have to use the Razor View Engine. It's really worth another look.
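The middleware model is easy to show in miniature. This is a hand-rolled toy, not the actual ASP.NET Core types (the real delegates take an HttpContext), but each middleware gets a "next" delegate and decides whether and when to call it, exactly like Express:

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;

class PipelineDemo
{
    static async Task Main()
    {
        var log = new StringBuilder();

        // Each middleware receives a "next" delegate and decides whether
        // and when to call it, just like Express or app.Use(...).
        var middlewares = new List<Func<Func<Task>, Task>>
        {
            async next => { log.Append("auth>"); await next(); log.Append("<auth"); },
            async next => { log.Append("log>");  await next(); log.Append("<log"); },
        };

        // Terminal handler; then compose the chain from the end backwards.
        Func<Task> app = () => { log.Append("handler"); return Task.CompletedTask; };
        for (int i = middlewares.Count - 1; i >= 0; i--)
        {
            var mw = middlewares[i];
            var next = app;
            app = () => mw(next);
        }

        await app();
        Console.WriteLine(log); // auth>log>handler<log<auth
    }
}
```

The real `app.Use(...)` builds exactly this kind of chain; routing, auth, MVC itself are all just middlewares in it.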


You could combine Node.js with TypeScript to regain type safety.



> It’s already in active use by teams at Microsoft and other companies...

This means nothing anymore. MSFT says that about everything they release and users are still often left with the feeling that they are guinea pigs testing a very unfinished product.

> For Visual Studio users, you need Visual Studio 16.8 or later to use .NET 5.0 on Windows and the latest version of Visual Studio for Mac) on macOS. The C# extension for Visual Studio Code already supports .NET 5.0 and C# 9.

Reading this makes me wonder who is still using Visual Studio. That IDE is so bad it's beyond belief. .NET 5 is a standalone runtime which anyone can install. You can write C# in a text editor and build and run it from the command line. Why the hell do Windows developers need to install an entire new IDE in order to use the latest version of the .NET runtime? It's ridiculous beyond belief.

Really shows that Visual Studio Code is the future. All you have to do is install the latest runtime and update the plugin so it shows you suggestions for all the latest language features. No need to install a new version of Visual Studio Code itself.

> .NET 5.0 is the first release in our .NET unification journey.

This might cause confusion. .NET 5 is basically .NET Core 3.2 but has been renamed to .NET 5 so that .NET Framework 4.x users can finally get convinced to move to .NET Core. They just dropped the Core to make it look more appealing to them, but it's still .NET Core.


> Reading this makes me wonder who is still using Visual Studio. That IDE is so bad it's beyond belief

Eh? Although I prefer Rider, I don't see how you can make that claim. Yes, it has perf issues, and at least in the past has had some stability issues too, but it's absolutely crammed with features and offers a great dev UX for everything from source control to debugging.

> Why the hell do Windows developers need to install an entire new IDE in order to use the latest version of the .NET runtime? It's ridiculous beyond belief.

To be clear, this is only for existing Visual Studio users. Microsoft has done this for major versions of dotnet core too, and while it's a minor inconvenience, I believe there is a good reason for it. "ridiculous beyond belief" is a bit strong.

> Really shows that Visual Studio Code is the future.

VS Code and VS are like chalk and cheese - I'm a big fan of VS Code, but for developing with C# I much prefer the features of a full IDE, Rider or VS. Like a debugger, and applying changes during a debug session.


Visual Studio is, as objectively as possible, not a bad IDE. If you have an unlimited budget and are only doing C# and .NET, the only benefit of VS Code is startup performance. Of course, for just about anything else, VS Code is as good as Visual Studio.

Can't speak to how prod-ready this is; I would never use a major upgrade like this in production within the first 6 months of its release. Bless those who pave the way, but that's madness to me.


F# programmer here. I still find VS superior to anything on VS Code, and a lot of people prefer the VS Code Ionide F# IDE.

Most importantly I could do everything in F# I need to do (or can do in VS), debugging, refactoring, interactive tests, on the free Community Edition of VS. It just happens I use Professional because my company pays for the license.


I use Rider now, VS got to the point of being so slow that frequently I'd have to wait 0.5 seconds for each character I was typing to appear. The performance is utterly, utterly terrible. Yet, all I see from the VS team on Twitter is "look at this new useless feature we've added", when all I needed was for the code I'm typing to appear.

God help you if you ever wanted to rename a symbol, it was Schrodinger's rename, it would either complete immediately, or never complete, lock up VS, and I'd have to restart. So I had to stop using it, and eventually I stopped using VS altogether.

Rider has its own quirks and annoyances, but it's quick, and lets me type even when it's doing stuff.

There seems to be something very wrong with the architecture of VS.


I've been using Visual Studio 2010, and then 2017 and 2019, with a pretty big solution loaded, and never ran into issues like you mentioned. For me it was always pretty snappy, including refactoring actions.

OTOH I've hardly ever used or installed Resharper (or Rider). When I did try Resharper, I found Visual Studio was dog slow, very much like what you described, and it provided me with very little benefit compared to what I already had in VS.


Actually Visual Studio is quite fast again. It's just that Resharper does its very best to slow it down. I don't believe this will ever change. Well except maybe Microsoft will fully deprecate COM-based in-process extensions.



Of course. However, I’m not sure this will achieve the desired speedup, because part of Resharper still needs to run in-process to integrate with Visual Studio. And it integrates a lot. And I’m sure at least some of the APIs it uses are actually not asynchronous. So, only limited room for improvement.


I'm using the newest VS with Roslynator (instead of Resharper) and it doesn't feel slow once it "warms up"

I do have NVMe M2 Disk if that matters


I never used Resharper, just plain old VS. It was dog slow, I’ll never go back, it’s not just the performance, but lack of stability, crashes, lots of exceptions, etc. It just feels like a project that’s not been given the love, and one that’s just having more and more features built on top of shaky foundations.

The other day one of my team (who still uses VS) complained of cut n paste not working (it would paste half the selection).

The basics are broken in my humble opinion


I switched from VS to Rider a few years ago and every time I have to go back for some old crusty project I get grumpy.

Rider does more of what I want in an IDE for dotnet core. The console is integrated rather than being another window. better vim plugin. better md/json/yaml/helm/k8s file editing.

VS Code is the future of the msft-led dotnet IDEs, but Rider is so good


Disagree, I've used VS since it was called VC++ and it's gotten steadily better. If you can afford to run a large-footprint IDE I think VS is pretty great. VSCode is an entirely different thing and I like it as well.

But then, I also like Vi and Eclipse so maybe I'm just too forgiving...


It sounds like you just don't like IDEs.


> This means nothing anymore. MSFT says that about everything they release and users are still often left with the feeling that they are guinea pigs testing a very unfinished product.

There is some truth to this, but when it comes to the core .NET stuff it is pretty solid. However, .NET Core before version 2.0 was a mess.

> Reading this makes me wonder who is still using Visual Studio. That IDE is so bad it's beyond belief.

That is your opinion man. Personally I really like VS.

> Why the hell do Windows developers need to install an entire new IDE in order to use the latest version of the .NET runtime? It's ridiculous beyond belief.

They aren't installing a whole IDE. It is an update to the existing VS 2019 version. The newest version of the IDE (which is 16.8) has support for .NET 5.0. The vast majority of people using .NET Core are on the latest Visual Studio 2019 already, and this is a minor update (it takes maybe 10-20 minutes to install, so run the updater, make a coffee, and by the time you come back you should be good to go).

> Really shows that Visual Studio Code is the future. All you have to do is install the latest runtime and update the plugin so it shows you suggestions for all the latest language features. No need to install a new version of Visual Studio Code itself.

VS Code and other alternative .NET IDEs don't support many of the things that are typically used around the .NET world, e.g. SQL database projects don't work in Rider or VS Code (I just tried it with Rider), and I suspect that a lot of the tooling around that doesn't work. There's probably a load of other stuff (that I don't use) that doesn't work either.

It really depends what you are doing.

> This might cause confusion. .NET 5 is basically .NET Core 3.2 but has been renamed to .NET 5 so that .NET Framework 4.x users can finally get convinced to move to .NET Core. They just dropped the Core to make it look more appealing to them, but it's still .NET Core.

I know a lot of people that don't work with Microsoft tech think we are all dumb, but we aren't all that dumb. Moving projects from full-fat .NET Framework to .NET Core may be a large undertaking, depending on how the project is structured and what tech it uses.


Can vscode generate code behind files?



