I spent some time learning dotnet core this year, and with the slow-grind progress Microsoft has made, it really does look like the technology stack might start to replace Java, Go, Rails, and NodeJS over the next decade. You can really feel Microsoft's experience in language development and enterprise software development coming together to provide better ecosystem ergonomics than other frameworks and toolchains.
Specifically I'm talking about tools like LINQ, dotnet core libraries, VS and VS Code integration, and the standard library and common library packages.
I still think it has a long way to go, but it's still a huge potential upside, which is to say nothing of how it's coming to dominate the games industry as well.
Is it as good as Rails? When I run a rails new command, I can set up my postgres connection, nodejs and webpack libraries, and the Rails core library in one command. Rails has database migrations with conventions I like - the timestamp and a file name that represents the action to be performed against the database (e.g. AddNameToBlogs) - which can also be rolled back.
I also really like the Rails restful router which is strongly tied to its controllers.
I could go on. I really like that Rails gives me strong, opinionated conventions but also allows me to configure things to my liking if needed.
I want to love .net core - I really like F#. Any other rails dev out there make the transition to .net core?
> Rails is optimized for day 1, .net core is optimized for day 1000.
Sure, that’s a standard argument for statically-typed kitchen-sink enterprise-marketed languages and platforms like Java/JVM and C#/.NET. But having come in to maintain things on day 1000 (or, in some cases 5000) for projects in such languages and, also, languages that are far less strict and have less features designed to require/support tooling, it's not been my experience that there's really that much of an advantage in practice.
This 1000%. I'm both an ex-Ruby dev and a current C# / F# guy. Ruby is great for throwing things together and installing a load of functionality through Gems. It's 3 years later, when you're trying to maintain and refactor, that it all comes back to bite you. RubyMine does help, so you can at least do rename refactorings most of the time.
I wouldn't rely on "setup everything in 1 command" as indicative of platform quality but .NET does come with the CLI and the unmatched Visual Studio, both of which have plenty of templates and tooling to get you started quickly.
Everything you mentioned can be done with some packages and a few lines of code, and the entire framework does have some opinionated conventions but with much more freedom than Rails. The fact that you can have MVC, Razor Pages, raw controllers, and Blazor all in the same app shows just how much freedom you have.
That freedom is/was overwhelming for me. With Rails, you can choose the relational database and the frontend libraries you want to use - that's really about it. There's a default path I can take if I want to get up and running quickly - it comes with strong opinions and a sane set of defaults.
I think Microsoft is moving in the right direction though. Scott Hanselman has some .net core 101 videos out on Youtube that are good to get up and running.
I’m an ex rails dev (from about 2010, so a while back, but still)
ASP.NET MVC (which is now ASP.NET Core) is a fairly faithful carbon copy of what Rails was back then. Initially I was annoyed about this; there were efforts to run Ruby on .NET via IronRuby and thus get a nice rails-on-windows environment, and ASP.NET MVC came along and sucked all the wind out of that like a tent collapsing. Controllers, Views, and Routing are all the same, and like Rails they can be reconfigured if you prefer.
Later on though, I’m happy with it. A new blank project doesn’t quite have all that stuff, but you can add the postgres driver, migrations, etc with a few lines of code.
You do lose out on all the more advanced Ruby meta-magic things that Rails does, like with_scope on ActiveRecord, but in exchange you get a 1000x performance boost, and static typing - especially now with null safety in C# 8 - is really helpful as your codebase grows. The productivity hit you take compared to Rails is actually pretty minimal, I think, and those other benefits outweigh it and then some. I would choose ASP.NET Core over Rails for any web project at any time these days.
I had about a month of experience with Rails many years back (hobbyist), and have used ASP.Net Core extensively (professionally).
The default web template is far more spartan than what Rails provides out of the box. The workflow you're talking about can be done by installing packages and configuration (5-10min maximum for experienced .Net developers), and I'd wager good money on such a workflow being available as a "dotnet new" template.
Yes, I recently transitioned to .NET Core. First off, I'd say that trying to find a 1:1 feature/convention parity between the frameworks won't work. E.g. in ASP.NET Core you add an attribute to a model, and after generating a new migration the column change will already be in place, as sketched below. Most of the time it works out of the box, and you can always edit the migration if needed.
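For illustration, a minimal sketch of that workflow, assuming EF Core and a hypothetical Blog entity (the CLI commands are the stock dotnet-ef ones):

    using System.ComponentModel.DataAnnotations;

    // Hypothetical entity: adding the Name property (and its attribute) is
    // all that's needed in code before generating a migration.
    public class Blog
    {
        public int Id { get; set; }

        [MaxLength(200)] // picked up by the migration generator as a column constraint
        public string Name { get; set; }
    }

    // From the CLI:
    //   dotnet ef migrations add AddNameToBlogs    (generates a timestamped migration)
    //   dotnet ef database update                  (applies it)
    //   dotnet ef database update <PriorMigration> (rolls back to an earlier migration)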
What I found lacking (at the time) was specifically a better Webpack integration, similar to rails.
Also, I got used to having rspec; in the .NET Core world you have xUnit, which is something like Ruby's Test::Unit.
At the start I also found DependencyInjection a bit confusing, as well as the fact that in ASP.NET Core the models/entities aren't like AR models, where you have a bunch of validation/instance methods inside them. In .NET you try to separate everything into repositories/validators/viewModels.
But yes, unlike Rails, .NET doesn't have an opinionated way of doing things.
All-in-all I'm quite satisfied with the experience. Though I will say that Rails is still a tad more "batteries included" type of thing.
I guess I'm being selfish as a developer and thinking about my own ergonomics and dev experience. Is Rails as fast as .NET? Probably not, but if it can serve web pages in under 200ms, it's fast enough.
Agree, and I wouldn’t trade rails for a perf boost if I had to go to a horrible Java web stack (or a horrible C# web stack prior to .NET core)
Right now though, the dev ergonomics and productivity of .NET core are not actually very far behind rails at all, so you can serve your pages in 2ms instead of 200ms without suffering for it.
In some regards actually, perf helps a lot. Running a large bank of tests for example - those are going to complete a LOT more quickly in .NET which means you can iterate faster and have a better dev experience
It already has replaced those (or prevented them from ever catching on) in thousands of companies. .NET was always fast and highly productive, and the last 5 years of .NET Core have taken it to the next level.
The language stack is unmatched in terms of how quickly you can make a high-quality product across many different platforms.
I built an adtech platform doing millions of requests in 2010 in dotnet. Then I did it again doing billions of requests in 2012. Nothing else at the time other than java could come close with the same amount of effort.
To be fair: .NET (Framework) was not fast in the past. It was not. It was really slow. It was exclusively on Windows, IIS, some worker module, piped through a WebForms-optimized System.Web dll, and most of that code was not optimized for memory usage, etc.
But that is over with modern .NET Core. None of the above applies anymore, and the result is wicked fast, as your link shows as evidence.
The WebForms stuff was slow, but MVC was quicker and has been around for a long time, along with basic HTTP handlers. If you stepped down 1-2 levels of abstraction, you could get very good performance.
But yes, those tricks are now obsolete and the platform is incredibly fast.
I know. But System.Web carries a lot of technical debt it cannot shed because WebForms is still around in the .NET Framework.
I would recommend some early talks from Damian Edwards or David Fowler about ASP.NET Core. They are very explicit about that (in the end they built Katana to replace System.Web on .NET Framework and allow better ASP.NET MVC performance, which led to Project "K", which led to (ASP).NET Core).
Interesting, are there significant performance fixes in the mix?
I always got the impression .net was never particularly performance focused: there were some bits you could use if you wanted to go a bit faster, but it doesn't seem designed from the ground up to be fast, and AoT compilation seems to be perpetually on the back-burner.
Compared to other languages like C++ or Rust, which have been built for speed...
Edit: changed my comment because it was a bit too argumentative.
Saying "built for speed" doesn't really mean much. .NET created the async Task model that other languages adopted and has all kinds of performance related features from Span/Memory APIs to SIMD vector operations.
AOT doesn't have anything to do with performance but helps with startup time and packaging. The .NET JIT now has tiered compilation with secondary passes that optimize hot methods with much more input about the environment, including knowing that it's a hot path. AOT can't do this.
C++ and Rust are faster because they don't have a managed runtime, and Rust's major innovation was moving as much of the memory management as possible into strong build-time analysis to ensure it's properly accessed. You can build low-allocation or more manual memory management with .NET and get very similar performance. RavenDB is an example of a fast document database built in .NET Core: https://ravendb.net/
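For illustration, a minimal sketch of the low-allocation style those APIs enable, using nothing beyond the standard library:

    using System;

    class SpanDemo
    {
        // Sums ASCII digits from a stack-allocated buffer: no heap allocation,
        // no copies - the Span is just a view over stack memory.
        static int SumDigits(ReadOnlySpan<char> input)
        {
            int sum = 0;
            foreach (char c in input)
                if (char.IsDigit(c)) sum += c - '0';
            return sum;
        }

        static void Main()
        {
            Span<char> buffer = stackalloc char[8];
            "12345".AsSpan().CopyTo(buffer);
            Console.WriteLine(SumDigits(buffer.Slice(0, 5))); // prints 15
        }
    }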
Your comments are surprising since you seem to have experience with F# and other languages, and yet it sounds like you haven't used any modern .NET version.
> Your comments are surprising since you seem to have experience with F# and other languages, and yet it sounds like you haven't used any modern .NET version.
Most of the stuff I've written in .NET (Core) has been in F# and C# (v8) on runtime v3.1. So much pain playing 20 questions with libs whose runtime requirements were thoroughly and widely distributed across all the runtime versions, in a quest to get them to co-operate. I guess I'm just not a huge fan of the language in general; it's all a bit too enterprise-design-pattern-clunky-objects and implicit-mutation all the way down, not to mention verbose. I know it's finally, finally getting a half-decent implementation of pattern matching and immutable records, but there's nothing that stands out to me about it that makes me want to use it over literally any other language I know, which I guess is fine as its primary target is being a "boring" enterprise language, which it excels at.
> AOT doesn't have anything to do with performance but helps with startup time and packaging.
Not sure I'd agree with this: putting code through an optimising compiler like LLVM ahead of time means you can apply more performance optimisations before runtime.
> The .NET JIT now has tiered compilation with secondary passes that optimize hot methods with much more input about the environment, including knowing that it's a hot path. AOT can't do this.
That's fair, but if your type system and language design already tell you everything you need to know, you don't need to wait until code gets hot enough to swap it in; you just pay the compile-time cost and have it go at peak speed the whole time. My argument here might be veering dangerously close to the 'sufficiently-advanced-compiler' argument hahaha.
> Saying "built for speed" doesn't really mean much. .NET created the async Task model that other languages adopted and has all kinds of performance related features from Span/Memory APIs to SIMD vector operations.
Sure, it's got some fast bits but in my experience very few libraries make any use of them and by default most things are heap allocated, with plenty of pointer-indirection right? Compare that to Rust where far more is stack-allocated and numerous other optimisations and shortcuts get applied so the things you're accessing on the heap still aren't too slow. The async-task model is just a model and has nothing to do with actual run-time performance though right? I had a go playing around with Span when I last wrote stuff in C#, but I found it difficult to do much with it as the compiler either wanted way more things to operate on Span-types (some of which was out of my control) or I needed to do things with iterators that required me to turn it right back into a 'heavyweight' IEnumerable class, which kind of defeated the purpose. Entirely plausible I was using it wrong though.
> RavenDB is an example of an fast document database built in .NET Core: https://ravendb.net/
Ok that's cool, I hadn't heard of this, I'll check it out.
I have worked with .NET and Java since their inception, jumping between stacks either when moving between jobs or on consulting projects; nowadays I tend to be more focused on .NET.
Yet I don't see .NET getting onto Java's turf as much as much of the Internet thinks it can.
It remains mostly a Windows stack (so many enterprise third parties have yet to release Core stuff that is actually cross platform), and there are plenty of platforms without any kind of .NET support, yet you will find a Java vendor on them.
Also, ironically, Microsoft is now an OpenJDK member and they are the ones doing the Apple Silicon support.
"Mostly Windows" is a limited view. Most modern .NET developments end up in a docker/kubernetes Linux container. In the first year of .NET Core some shops did Windows deployments but now, that is long gone.
But I agree. .NET will not replace Java or any other serious language.
> But I agree. .NET will not replace Java or any other serious language.
I think you have to specify a domain to make that statement reasonable.
I think .NET is used more than Go for example. Maybe more than NodeJs also for server systems. Less than Java. If we compare to C++ it is very much down to domains. I don't hear about many enterprises using C++ for server backend code but of course they exist. And so on. .NET (or C#) may already be "leading" against a lot of other languages and then the question is if the others will replace C# or not. And if it will take some of the market from languages like Java which I think it will but not all.
We occasionally use C++, but instead of the holistic approaches that tend to be discussed with everything in language X, we just write a native library that is then safely used from Java/.NET.
Basically what everyone usually advocates as "Python" in performance-critical stuff, but we get the productivity and JIT/AOT tooling from the Java/.NET ecosystem as well.
SharePoint and Sitecore are .NET Framework only. Only CSOM is Core, Sitecore just released their first Core support last month, and most enterprise projects end up plugging into them.
WinForms and WPF still have issues running on top of Core, UWP is stuck on .NET Core 2.1 compatibility, and .NET 5 is focused on desktop deployments with UWP's future uncertain.
The whole “serious language” thing is ridiculous. You are reading this on a web service originally developed using Paul Graham’s custom made-up dialect of LISP called ARC, which runs on top of Common Lisp.
It’s pretty much as far away from “serious language” as it’s possible to get, but here you are happily using it anyway.
To back this up: so many open source tools are being actively written and maintained on the JVM (Kafka being the biggest one that springs to mind). Apart from Unity, I don't really know of anything that anyone (that isn't an enterprise SaaS) uses that's written in .NET.
.NET is starting to look nice on Linux and macOS. I'm very disappointed that AOT compilation has apparently fallen by the wayside, though.
CoreRT was supposed to bring full AOT to C#, and that project is still in alpha with a disclaimer saying there are no plans to make it production-ready. LLILC, the compiler that targets LLVM, is also not production ready.
Officially it's been pushed back. The priority has been the unification of all the different runtimes and platforms first, which is coming with the big .NET 5 release.
Is there a document describing this plan? Will this replace .NET Core, CoreRT, etc.?
As someone who isn't a .NET developer, the sheer proliferation of runtimes and platforms is super confusing, and I can never remember what's what between .NET, .NET Native, .NET Framework, .NET Core, Mono, CoreRT, etc. If this is getting cleaned up, that's great news.
.NET windows/desktop framework stopped at version 4.8 and .NET Core has spent 4 years incrementing to version 3, so .NET 5 is a merger of everything into a single framework again. Mono will still exist since it's used by Xamarin for mobile and Unity for games.
AOT is mentioned in the article and comments, and there's also this big thread in the CoreRT repo if you want to see more discussions: https://github.com/dotnet/corert/issues/7200
It's technically a merger. It's the next version of .NET Core after v3.0, but jumping ahead to v5.0 and dropping the "Core" designation, and then merging in parts of Mono, the AOT toolchain, and even more of the upgradeable bits of the classic framework.
You might be able to get pretty far with just a framework switch. The RC is already out so try it. You'll probably have to update the csproj/sln files at a minimum and fix some known API differences but it's unlikely you need an entire rewrite.
Runtimes -> These run the IL code; they include Mono, the CLR, and CoreCLR. Mono was written by Miguel de Icaza to support cross-platform mobile development. CoreCLR is a new version of the CLR that supports .NET Core code.
Libraries/Frameworks -> .NET Core, .NET Framework. Two different versions of the libraries: .NET Core is newer and cross platform, but older technologies such as WebForms can't run on it; .NET Framework supports older technologies but isn't cross platform. Plenty of libraries are usable inside either framework.
Ahead of Time Compilation Technologies -> .Net Native, CoreRT, AOT
Most new devs don't have to know any of this to be productive. Just pick .NET core unless you need to use .NET Framework for backwards compatibility reasons.
As far as .NET goes, MS has a long history of confusing messaging. As soon as something gets traction it either gets obsoleted or renamed. They would do themselves a big favor if they made it easier for people to understand the different versions.
Not only .NET, I am really pissed how C++/CX got replaced with C++/WinRT, without any effort to provide tooling with similar capabilities.
So instead of C++ with C#-like tooling for doing UWP stuff, now one gets to edit IDL files without any kind of support (you could be using Notepad for all they care), and then you have to manually copy/merge the C++ files after they are generated.
And on the .NET side .NET Native was looking to be how version 1.0 should have been all along, and now has an uncertain future.
Finally, with Reunion, going regularly through their issues, it seems to be turning into a major reboot: porting what they can into a Windows 7-like stack, leaving UWP as an improved COM runtime, and pretending that everything else post-Windows 8 outside Win32 never happened.
And in this case "pushed to .NET 6" doesn't mean "not working" or anything, just not production-ready. The AOT implementation in Mono is robust and is being adapted for .NET Core, which requires changes and improvements. Not to mention the introduction of new platforms - supporting targets like Apple Silicon and WASM takes additional work.
Not sure this will happen given Node/Typescript/VSCode (also Microsoft, fun how they’ve taken over Node right?) but the dotnet platform sure is rich. I had the opportunity to work with it this year and I found it to be very nice! Not surprised to see it gaining more and more momentum with libraries like this.
The coup de grâce is functioning and sane server-side rendering of ReactJS apps by C# applications. If Microsoft can make that easy and simple, it will lead to a huge churn as many software companies replace less efficient NodeJS applications for that use case. TypeScript will always have its place client-side, but there will be huge user benefits if we can have more and better server-side rendering.
You can already do that by running Node with SSR (or any npm/js code) in .NET using Node Services [1] or use ReactJS.NET that packages everything for you [2].
But Microsoft won't officially do any more of that, since they have Blazor, which lets you run C#/.NET in the browser with both client-side WASM and server-side SignalR/websocket running modes. It's much more advanced and functional than React already. [3]
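For a feel of the model, here's the canonical counter component - rendering and event handling in C#, no JavaScript (a sketch of a .razor file):

    @* Counter.razor: the click handler is a plain C# method *@
    <button @onclick="IncrementCount">Clicked @currentCount times</button>

    @code {
        private int currentCount = 0;
        private void IncrementCount() => currentCount++;
    }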
The server-side model doesn't have this issue and the client-side WASM model is brand new with the first production release a month ago so it will get better soon.
The browser runtime is trimmed down to the APIs actually used, has dynamic async loading now, and it's around the 1.5MB mark which is competitive with the big SPA payloads.
But those "big SPA payloads" contain the whole app. With Blazor you incur a 1.5Mb overhead before you've written any app code. That's simply unacceptable on many devices so Blazor's current implementation is only relevant for internal apps.
Same with Blazor. The mono.wasm runtime is around 400kb compressed and the biggest DLL is mscorlib at 600kb. The actual DLLs for the app are just a few kb.
Yes you can be smaller with JS but not by much these days, and given the amount of functionality you get with Blazor it's hard to compare on file size alone. Also if you're just making internal apps with controlled usage then server-side is better anyway.
> It's much more advanced and functional than React already.
I’m sure this is true, but I can’t imagine there’s much of a community around it? The thing is with React you have a huuuuuuge community and ecosystem. You can share code with React Native. It’s so deep...
Well, Blazor is new, so React will have a bigger community; however, there are millions of C# devs around the world who can be instantly productive with Blazor, so it shouldn't be that far behind, if at all.
While React's ecosystem might be huge, the quality trails off quickly and it's pretty messy even with the popular stuff, much of which is to add functionality that every app needs but isn't included in React itself. Blazor already comes with everything included, but can also use the full power of C#, the massive standard library, plenty of 3rd-party packages, and the tight integration with the backend.
I like Blazor but, to be fair, coming from VueJS it doesn't feel finished. Some things are just more complicated than they should be. But things are changing, so we will see what happens in the next releases.
Yea I'm in that thread. It's no longer maintained by Microsoft but the code still exists and runs just fine, and there's not much to maintain other than keeping up with the Node releases and API changes.
.NET Core 3 can still use the existing NodeServices package, it'll just show `obsolete` warnings. Haven't tried with .NET 5 yet but will probably switch to that first linked package.
I really enjoy how TS is the same on server and client. The C# runtime is way richer, but I like the TS type system more. Not sure I’d want to use different platforms on backend/front end even if C# did server rendering like React.
Not sure what you mean by different platforms. Did you mean languages as React and Node are separate entities? Personally, if I had to hire a developer the minimum requirement would be competence in JS/React and a statically-compiled server-side language. JS-only developers are to be avoided.
Different platforms meaning JavaScript vs C# vs Java and their respective runtimes. Yes server and client JS are not the same runtime but they are easy enough to smooth over that they are practically “the same”
Yes, a JS-only developer probably lacks certain experience, but I am well versed in Java and have had solid exposure to C#; I like them both a lot, but today I would prefer to use TS on backend and frontend for any small project. The fact that I can share types and validation code (Joi) on both server and client is really powerful, in my opinion. There's only so much you have time to focus on when you're working on a small project. Context-switching between two platforms is a big impediment in such cases.
Out of curiosity: what if I'm good, say above average, with C# backend, but have never used React commercially, though I have used Vue on personal projects for the last several years, and consider my JS a bit of a weak spot?
Very fast, statically typed, clean language design with features like LINQ for data operations and fantastic async support, cross-platform, comprehensive standard library, advanced ORMs like EntityFramework and NHibernate, easy deployment including docker/self-contained/single-file, build anything from web/mobile/desktop apps, best-in-class IDE and tooling, Blazor UI component framework running .NET in the browser, solid gaming support with Unity, etc.
There's nothing quite like it, and I find myself coming back to it and doing everything faster. The only downside would be a smaller open-source ecosystem (for many various reasons), so you don't get all the "cool" libraries, although there are more than enough to solve any problem.
I think the web front end could be a huge multiplier for them.
It's going to take a few years for Blazor to catch up & have a chance to surpass the Node ecosystem but they have a good shot.
They need more/better front-end specific tooling & a quicker feedback loop with hot reloading. There is some hope that hot reloading will make it in for next November's .NET 6 release.
As Wasm & Blazor improve, I think it has the potential to be what people like about TypeScript but with less setup required or worries about missing dependency support.
We're already building with Blazor using the server-side model and it's pretty incredible. We can get massive amounts of functionality built in minutes because it's just .NET, especially with all the existing backend logic easily shared and used by those components (via SignalR).
The WASM mode is still a little rough but it's rapidly improving. Hot reloading will definitely make a big difference there when it arrives.
LINQ has nothing to do with ORMs; it's a generic querying/transform syntax that can operate on any kind of data structure, and its expressiveness allows it to be seamlessly translated into SQL by EF and others.
Also, it's hard to beat the statically-typed nature of C#/EF and the fact that entities are just plain classes with no special needs or inheritance. They can be as "active" as you need them while the DbContext wraps everything nicely.
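A minimal sketch of that, with hypothetical Order/ShopContext names:

    using System.Linq;
    using Microsoft.EntityFrameworkCore;

    // Plain class: no base class, no interfaces, no attributes required.
    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Order> Orders { get; set; }
    }

    // A LINQ query over the DbSet is statically typed end to end and is
    // translated to SQL by EF rather than run in memory, e.g.:
    //   var big = ctx.Orders.Where(o => o.Total > 100m)
    //                       .OrderBy(o => o.Total)
    //                       .ToList();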
I'm dealing with some rather fun EF code inside triple nested for loops that runs several thousand queries before throwing a validation error. Now, I know that's the devs fault and not EF, but EF made it all just seem like harmless object oriented programming and not set operations.
I almost feel like I'd like EF for write operations, and Dapper for read operations. Strike a balance.
If its memory usage and execution speed are deal-breaking bottlenecks for you, then you're working at a scale few people actually work at. Here in the line-of-business web app electron mines, I get to optimize for developer productivity and simplicity.
I suspect you haven't yet seen the power of LINQ; it's not the chaining that is its point - it's the lazy evaluation. It's not at all limited to DB operations.
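A quick sketch with LINQ-to-Objects - nothing runs until the foreach starts pulling values:

    using System;
    using System.Linq;

    class LazyDemo
    {
        static void Main()
        {
            var numbers = Enumerable.Range(1, 1_000_000);

            // Nothing is computed here: Where/Select just build a pipeline.
            var query = numbers.Where(n => n % 2 == 0).Select(n => n * n);

            // Evaluation happens here, and only the first three elements
            // ever flow through the pipeline.
            foreach (var n in query.Take(3))
                Console.WriteLine(n); // 4, 16, 36
        }
    }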
Sounds like you've worked with poor code/developers. LINQ with EF translates into DB queries, including aggregations and transformations. That's the whole point of compiling the query - unless someone either didn't understand the query they were writing or explicitly chose client-side evaluation.
Hello. LINQ isn't at all limited to LINQ-to-SQL. That is simply one of the numerous IQueryable providers, which is also possible to implement yourself by doing some expression parsing.
The ecosystem lets it down a little, I reckon. I came over from the Java world, have dipped my toes into Rails, and found the Java ecosystem far richer.
Though I do like it and think it's pretty good, it would really benefit from a bigger open source community. It's picking up, but it is on the back foot due to starting out with a closed-source philosophy.
> By which you mean C# gets new features every year, and F# gets thrown enough tidbits to keep it looking alive?
Every new C# version makes interoperability more of a pain. The C# team have no interest in compatibility with F# and will NIH stuff that could've been taken from F# libraries. One of the biggest pain points for interop is tasks, for which there are half a dozen libraries for F# and everyone uses a different one. There's been an open issue for F# to have native task support for years, but progress is slow.
And it's definitely not as if anybody can see the current state of their underlying technologies to know about their transitions: https://github.com/StackExchange
So ... this all just sounds weird for a startup with a weird name?
F# is nice, but always felt like the ignored child whenever I've used it.
If you like F#, why not just reap all the functional benefits and write it in Haskell, or Rust which has (arguably) just as strong a functional influence (minus the syntax) as F# does, with the benefit of a stronger type system, and better performance, and these days, probably a bigger community than F# as well.
Mostly because of user experience. F#, as a .NET language, has an entire ecosystem of high-quality libraries to pick from, good IDE support (not as good as Java/C#, but definitely better than Haskell), and a smaller learning curve. It's also way more robust than Rust - meaning that you can make a working project with decent performance much quicker. I say that as both an F# and Rust developer. IMO the language that covers a similar area and may be more tempting to learn is Scala. But if you already know how to utilize the .NET platform, then reusing that knowledge in F# is just easier.
I'm a big fan of C# and the .NET platform as well. Being able to mix C# code in a project is compelling. If C# had a strong native SSH implementation (I'm aware of netssh, but something a bit more official/active), and a WinRM/Remoting implementation that wasn't hidden inside the PowerShell project, I think there would be a .NET OSS tools explosion.
Same here: our latest greenfield project was done in .NET Framework 4.7.2, and from the looks of it, the next one might be .NET Framework 4.8.
There is plenty of enterprise stuff - SharePoint, Sitecore, GUI components from commercial partners like Telerik and ComponentOne, among many others - that is still in transition to Core.
And for those that already moved there, their stability is still at v1.0 level, better leave the "fun" to others.
Because .NET Core isn't fully compatible with .NET Framework, and there is a lot of stuff that people aren't willing to spend money porting.
In the consulting business "I rewrote X in Y" blog posts only happen when someone takes the time to budget the project, because there is someone doing the math of developer time x cost per hour.
Then there's the fact that .NET Framework has been Windows-specific for 20 years; there are lots of .NET libraries that are thin wrappers over Windows APIs or that interact with COM/UWP.
Porting them to Core means rewriting everything from scratch, and if they are to remain Windows-specific anyway, there is no advantage other than staying on the reboot treadmill that Microsoft started with UWP/.NET Core (now backing off with Reunion), so they just keep doing .NET Framework as usual.
That stuff is pretty optional in 2020. I'd argue a lot of it isn't even relevant anymore. Sure, some people will still want/need it, but if you just want to spin up an API .NET Core is good to go.
True. I guess a lot of RFPs requiring software of that nature have probably been largely dominated by the Java and .NET ecosystems for decades and your Nodejs, Python, Ruby, Go ecosystems don't have any compelling offerings or much interest from their community to work on that sort of software?
I feel like a lot of what I see online fits into either SaaS web apps, or well known open source projects / core infrastructure used in building large-scale, distributed systems.
I'm sure there's a lot more out there, but it doesn't seem to get talked about much. Hence the comment. Meaning: if you're interested in coming over to the .NET Core world from a different background, then the things that are missing from full .NET Framework probably aren't of any interest to you.
Thaxll meant that there are no large-scale enterprise .NET Core applications out there. All the examples here are not really large scale and are not used by millions of concurrent users daily. Bing is not fully .NET Core. Why aren't LinkedIn and GitHub written on .NET?
Going out of beta in a few days (and already in production): https://github.com/SolutionsDesign/HnD - a customer support system built on .NET Core and ASP.NET Core MVC.
> its coming to dominate the games industry as well
Really? I was of the impression that Unity C# is the old, established player in this space, while Godot (which supports C# but doesn’t force it at all) is growing among hobbyists and indies and may eventually pass the current popularity of Blender in VFX and go on to dominate the games industry in a few years or a decade.
That seems like very wishful thinking. Why would you assume Godot would ever be able to truly compete with Unity (or Unreal)? It's got 1/10th the features and probably 1/100th the engineers. Also less support, fewer people using it, fewer people teaching it, fewer people creating tutorials for it, etc... etc...
Is that very different from Blender 2½ decades ago? It seems to be gaining more features, support, users, teachers, tutorials, etc. over time. Anecdotally, my YouTube filter bubble has lots of small indie creators who've switched from Unity to Godot and are creating tutorials for it. Why would you assume Unity (and Unreal, which IIUC has little to do with .NET) will keep their mindshare and Godot not catch up?
Why should a technology replace other technologies when it brings exactly the same things to the table that are possible with the technologies that it is meant to replace? Just because it gives you the feeling that you are under the dotNET umbrella?
LINQ? Clojure and Scala can offer you much more than LINQ. Java is on par.
dotNET Core libraries? I don't know, I have never heard people complaining much of the core libraries in the JVM world.
VS and VS Code integration? JavaScript and TypeScript has the best VSCode integration, other languages are just supported, including C#, F#, Java, Scala etc. JetBrains IDEs are much more powerful than what VSCode will ever offer. VS is not even cross-platform.
The C# used in game industry is a far cry from the one used in enterprise and I don't think C# is the longterm answer for scripting in game dev.
I wish Java had LINQ. It has only an unfriendly, non-extensible LINQ-to-Objects equivalent.
Examples of the bad parts: handling checked exceptions in lambdas, no extension method support, calling ".collect(Collectors.toList())" every time, no real closure support, and complex lambda types due to primitive types.
I should also mention async/await as a C# feature, compared to Java.
IMO Scala/Clojure is too much for me; C# is a balanced option.
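For contrast, a C# sketch of the kind of pipeline meant above (Person is a hypothetical type):

    using System.Collections.Generic;
    using System.Linq;

    public class Person
    {
        public string Name { get; set; }
        public int Age { get; set; }
    }

    public static class LinqExample
    {
        // Extension methods chain directly on any IEnumerable, lambdas close
        // over locals without ceremony, and ToList() replaces the
        // .collect(Collectors.toList()) boilerplate.
        public static List<string> AdultNames(IEnumerable<Person> people) =>
            people.Where(p => p.Age >= 18)
                  .Select(p => p.Name)
                  .ToList();
    }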
I spent 10 years doing C# on a big system and love it. However it never got the momentum Java did so I've switched. I prefer C# over Java, I hate a lot of Spring attribute/factory/builder craziness, but Java has so much more wider support I would always choose it first now.
Same here. The Java ecosystem has almost everything you could ever imagine needing so it’s a safer bet. Maybe it’s not as shiny but it works. I don’t understand why MS doesn’t provide the ability to call Java code from .NET. It would open up a lot of libraries to the platform. Right now there are a lot of libraries that are first class in Java but have either no or only half baked .NET ports.
There was some plan to beef up their Java Interop library (currently used with Xamarin on Android) for more general usage for .NET 5, but I think it fell by the wayside. It'll probably happen eventually though.
Not to mention projects like Apache Spark, Cassandra, Elasticsearch, Druid - Java projects. Many projects (Foundation DB, Scylla DB, I could find more) will have in-house support for Java and not .NET.
The .NET equivalents to these libraries tend to be playing catch-up. Xamarin Android will always be playing catch-up to Android's Java/Kotlin APIs. Big machine learning projects like PyTorch and TensorFlow tend to give first-class support of some fashion to a Java API (mostly due to Android). Is there a .NET equivalent to Deeplearning4j? Hadoop? Hive? Kafka?
In Java there is a proliferation of web projects: Spring, Vert.x, Quarkus, Micronaut, Play 2, Spark Java; Netty, Jetty, Apache. In .NET all you ever hear about is Microsoft projects ASP.NET core; Kestrel, IIS.
Working as a .NET dev: when there is an SDK that's not from Microsoft, it's clear the .NET SDK is a lower priority for bug fixes and new features compared to, e.g., the Java/Python/etc. versions.
TensorFlow can run on any JVM for building, training, and running machine learning models. They recently created https://github.com/tensorflow/java
Programming languages are like patterns that may or may not fit in a developer's head, so they will not replace each other. I'm sure every developer can read all programming languages, but it's about mastering the language and its ecosystem.
I gave a talk recently on how we use it at Microsoft: https://www.youtube.com/watch?v=KhgYlvGLv9c - the talk is very short and so it does not go into many details, but it gives an overview of some internal use cases.
Wow, amazing to see this after a whole decade! I worked on this as an intern way back when it was a nascent project in MSR.
Going through the documentation, this is a complete trip. I see some of the original concepts are still intact, but the ergonomics are completely different now. Awesome!
I spent a fair amount of time doing a POC on Orleans but ultimately went with SF Reliable Actors due to some issues I couldn't resolve with a custom Streams implementation (Kafka subscriber/publisher).
I really enjoyed the simpler development model of Orleans compared to SF, and I want to give streams a second chance on a personal project, but I'm concerned that both Orleans and SF RA will be superseded by SF Mesh - do you have any thoughts on this?
SF/RA is pretty much in maintenance mode. SF Mesh is dead. But Orleans 4.0 is going to make virtual streams custom data adapters easy to write (they say.)
That's always been my concern when looking at Orleans. Why can't Microsoft just be up front about the status of these things, so that people don't pick them for greenfield projects?
What are the actual "live" platforms for running Orleans? I don't want to deal with running clusters and nodes, the closer to 'serverless Orleans' the better for me (Functions Durable Entities etc is not the same thing)
I think if you look at the consistent development history, you can use that as an indicator. Internal teams host it on Kubernetes (Linux), Service Fabric (Windows), and other places. The "Orleans at Microsoft" talk from August covers where teams are hosting it and how they're using it: https://youtu.be/KhgYlvGLv9c
Maybe I'm looking in all the wrong places, but I'm having a surprisingly hard time finding hardware requirements.
I love that with Erlang I can throw an experimental app on Digital Ocean for just $5/mo and see if it gets traction. How much more resource intensive is Orleans and what kind of resource usage should I expect as I scale up the number of actors?
You can run Orleans on a single host or many. In fact I believe we do have a host in production running Orleans by itself that is the equivalent of a $5/mo DO box. Resource usage scales with network activity in my experience. I think there are benchmarks around, but generally the scaling factor seems similar to conventional networking models.
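For reference, a single-node setup is just a few lines (a sketch, assuming Orleans 3.x and localhost clustering):

    using System;
    using System.Threading.Tasks;
    using Orleans.Hosting;

    class Program
    {
        static async Task Main()
        {
            // One silo, one box: a "cluster" of a single node.
            var silo = new SiloHostBuilder()
                .UseLocalhostClustering()
                .Build();

            await silo.StartAsync();
            Console.WriteLine("Silo running. Press Enter to exit.");
            Console.ReadLine();
            await silo.StopAsync();
        }
    }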
We have a pair of machines for resiliency, but on test and staging environments we run many times the services and experiments on one box alongside many other services without a problem. It may need a bit more memory for the .NET runtime vs the Erlang one, but it's totally doable with a basic machine.
I mostly found out about Orleans through Joe Hegarty, who worked on Orbit in Java (EA / BioWare). It's been a while since I looked at both - how closely would you say they are related to one another in terms of design today? What would you say are some of the key differences between the two? My understanding at the time was that they were closely related.
Microsoft Research has a tradition of naming projects after cities. Eg, if you type "microsoft research project" into Google (Bing appears to work better for this) and let it autocomplete, you can see some other projects which appear to be named after places: malmo, athens, tokyo, and others.
That's a good question. I wasn't around at the time the project was formed. At that time I was working on Microsoft's internal metrics system. Years later, I became involved as an external contributor (I had left MS) and eventually rejoined MS to work on Orleans full-time.
It's very interesting to see a solution that tackles the problem of identity and state while continuing to embrace OOP. This is in contrast to the philosophy of many functional programming languages, like Clojure, that avoid[0] this by design.
I used Orleans for my last startup and really enjoyed it. The dev team is active and responsive to the community, as evidenced by Reuben Bond and perhaps others appearing in this thread.
Orleans is awesome, and the virtual aspect of the actor system is something that I have found to be missing from many allegedly comparable libraries (Akka, Actix, etc.). It is really the key selling point: run an Orleans cluster and then you can forget about networking altogether (not entirely true, but truer than you may believe.)
Just don't make any bugs in your code or you're in for a headache.
The ergonomics of Orleans are so nice, in fact, that it inspired me to try to achieve a similar effect (any POCO can be invoked across the network) using IL weaving. It worked, but it's SCARY.
Durable Functions are not in the same category as actors. Each durable function exists to serve a single request; the model is request partitioned instead of data partitioned.
Durable Functions are great for deterministic synchronization across calls to other Azure Functions. Fan-out, fan-in, function chaining, and similar tasks are best done using Durable Functions[0].
Could you build an actor model on top of Durable Functions? Possibly. I'm not sure why you would do that, though. Azure Functions are already pretty expensive.
Haters can continue hating (or decide to grow up). The truth is I've found no ecosystem more productive than .NET in all of my coding. The only problem with .NET is the sheer number of haters, and we fare very well even so.
All of you screaming about performance and TechEmpower stats don't even realise that C# (ASP.NET Core) is placed 6th on TechEmpower's list when you check the composite of all the various benchmarks.
All of these without sacrificing readability, compile times, verbosity etc.
Or do you want to talk about the agility of the framework, speedy bug fixes, endless innovations and all. Don't even get me started on the tooling. It's unmatched in all of programming. VS2019, VS Code, Rider, dotnet cli etc. Ever heard about Roslyn?
Drop your hatred and check out dotnet's current state before jumping to conclusions.
C'mon, let's be real. .NET is supreme in the streets of code.
I wonder what it would be worth to the world if there were an actor framework for multiple languages that offered interoperability between languages. IIRC Akka.NET and Akka aren't compatible with each other.
It's a powerful tool with some serious computer science work behind it; for example, Dr Phil Bernstein's research added its distributed transactions, which are a fascinating solution to the problem of running ACID across arbitrary networks where latency is unknown and strong consistency is not guaranteed.
All communication in Orleans is asynchronous, similar to communication in gRPC, so in .NET world, you're using async/await with methods that return Task/Task<T>/etc. The boundary there is hopefully apparent: async calls can incur IO and have a cost and the 'async' keyword can hopefully make that cost apparent to the developer.
References to grains are represented by interfaces and if the machine you're communicating with fails, the grain will be re-activated on a surviving host the next time you need to call it. In other words, they are location transparent and the application won't get stuck in some failure state when a machine crashes.
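A sketch of what that looks like in code (IPlayerGrain and its methods are hypothetical; IGrainWithStringKey and GetGrain are the standard Orleans APIs):

    using System.Threading.Tasks;
    using Orleans;

    // Callers only ever see the interface; the runtime decides where the
    // grain lives and re-activates it elsewhere if its host dies.
    public interface IPlayerGrain : IGrainWithStringKey
    {
        Task<int> GetScore();
        Task AddScore(int points);
    }

    // Usage from a connected IClusterClient looks like a local async call:
    //   var player = client.GetGrain<IPlayerGrain>("player-42");
    //   await player.AddScore(10);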
I guess there's a lot to be said in regards to the rest of the .NET changes since 2001. There's no more blocking and spinning forever on a sync call from A to B, and perhaps failover is thought of better. Also, it seems the architecture basically forces the "good path" of every service being instance-based and throwaway, whereas .NET Remoting offered dark, unscaling singleton paths of doom.
I'm involved in a project that uses Orleans. Conceptually and technically it's very interesting. Nodes automatically cluster together to form a big distributed system, and the concept of "virtual actors" make it possible to represent individual entities in the data model directly as actors, without worrying whether everything fits in memory at the same time.
However, Orleans doesn't seem to play very well with Kubernetes. Kubernetes pods can be uncleanly terminated at any time for any reason. Orleans really doesn't like unclean shutdowns: it will leave behind dead entries in the membership tables, which then cause problems in the future because new nodes would contact dead members and run into errors.
The clustering protocol also seems to assume that all members have stable identities and IP addresses, but that's obviously not the case in Kubernetes, where each new pod has a new identity and new IP address. This can cause members to fail uncleanly, which in turn pushes the clustering protocol into an infinite loop.
Orleans was introduced by one person who has a background in distributed computing. Unfortunately, he's the only one in the organization with such a background. After he left, nobody could debug clustering problems.
> The clustering protocol also seems to assume that all members have stable identities and IP addresses, but that's obviously not the case in Kubernetes, where each new pod has a new identity and new IP address.
Both are untrue. An Orleans cluster is managed via a dynamic membership table, which can be implemented by various tools (Azure Table, ZooKeeper, Kubernetes CRDs (etcd internally), etc.), so nodes don't need static IP addresses.
Also, in Kubernetes you can deploy the application as a StatefulSet; that way, pods will have stable identities.
> Orleans was introduced by one person who has a background in distributed computing. Unfortunately, he's the only one in the organization with such a background. After he left, nobody could debug clustering problems.
I need a source and proof for this one.
Please don't post ridiculous opinions about things you are not familiar with, both Kubernetes and Orleans.
I have built successful products with Orleans & Kubernetes since two years ago, and Orleans now has better Kubernetes integration than it did back then.
Thanks for this insight. Skimming through the documentation [0], I find no 'issues' section that mentions this. IMHO Microsoft behaves untrustworthily by hiding an issue like this.
> Early in the start-up process and at runtime, the silo will probe Kubernetes to find which silos do not have corresponding pods and mark those silos as dead [1]
This is an experimental fix in an MR [1] that comes 6 years after Microsoft announced support for Kubernetes [2].
> Orleans was created at Microsoft Research and designed for use in the cloud. [0]
I read: MS Research developed the concepts of .NET Orleans 10 years ago without having orchestrators like Kubernetes in mind. Now, as an afterthought, they have to come up with some patches so it doesn't fall over on short-lived Kubernetes nodes.
A proper Operator & Helm chart was requested in November 2017 [3] and the issue is still open. I get the impression that either it is not possible to write a proper Kubernetes Operator without a major rewrite of .NET Orleans, or Microsoft has a commercial interest in .NET Orleans not being a first-class citizen on Kubernetes.
Whichever it is, I’m not going near a Kubernetes cluster hosting .NET Orleans.
Interesting take. We run Orleans on Kubernetes in production at Microsoft on multiple services. Other services run Orleans on Service Fabric in production - SF is similar to Kubernetes in terms of lifecycle. The project is open source, so things like Helm charts and k8s operators would be welcome contributions.
> why it took up to 6 years for MS to release an experimental fix [1] to run on Kubernetes?
> where I can find the outstanding issue in the release notes that [1] tries to fix?
> if .NET Orleans runs fine on Kubernetes, why is there a need for an experimental fix?
There is no experimental fix, just improvements. Things which can make life easier for developers running on Kubernetes by automating some things (setting addresses), and taking advantage of information that's available in a Kubernetes cluster (whether or not a pod has been deleted) and feeding that into the cluster membership system.
The reason the latter is useful is that it addresses something which can occur during initial dev/test, but which does not come up in production cases: when an entire cluster is deleted and redeployed with the same identity, the new instances try to contact defunct instances for a few minutes as a safety measure. The enhancement is to query Kubernetes to determine if it's worth trying to contact those nodes, or whether they're almost certainly dead.
> why doesn't Microsoft write and release an Operator & Helm chart, instead of asking the community?
Helm charts aren't something we see requested often. Perhaps because Orleans is a framework which is embedded into the developer's application and not a service which gets deployed and stands alone (compared to, for example, a database). There are no separate Orleans pods, just the user's application pods. Internal users have been building applications on Kubernetes with their own Helm charts. Microsoft is not asking anybody to create those things, or anything, unless they want them for themselves.
There is a new package which adds improved Kubernetes integration and fixes the issues you mentioned specifically. It's in beta right now, but we expect a stable release soon. Feel free to contact me and I can help you with running Orleans on Kubernetes
I only ran into the Orleans Virtual Actor model a few days ago. I have an application I've long wanted to move to a distributed model, and I've been struggling with seemingly having to reinvent everything due to an unusual execution model.
So it was quite an eye-opener to read about how Orleans introduced virtual actors, activations, and placements of actors, and how these are exactly the things I've thought about over the years. I've read about e.g. the classic Erlang actors, but the concept of virtual actors in Orleans really fits my model.
> Orleans builds on the developer productivity of .NET
As someone who has developed with .NET (using VB, C#, F#) for almost 15 years, I am not sure I would still advertise .NET as something focused on developer productivity, at least in the context of introducing bleeding-edge tech. It is a great framework, but I've found much more productivity with Elixir after just 1 year of using it. C# still suffers from statefulness, null references, and boilerplate code, even after newer versions of the language provided some tools to reduce those.
F# is a good alternative to C# if you want to stay in the .NET ecosystem and reap the benefits of functional programming, but I found the compiler to be very slow, 3rd-party libraries poorly documented, the language excessively complex, and 3rd-party IDE support limited (as of version 2.0, anyway).
I think maybe you stepped out of .NET at the wrong time. Pattern matching, records, discriminated unions (C# 10), etc. are here or on the way. Lots of functional features are being integrated into C# with every update.
I'm using new C#/.NET Core professionally and I still think it's far less productive than Elixir/Phoenix in general. Also, C#'s pattern matching is nothing compared to Elixir's. They are very different languages though, and that's fine; most people are familiar with OO, not with FP, so for them it might be more productive (at least in certain types of applications).
> It was created by Microsoft Research and introduced the Virtual Actor Model as a novel approach to building a new generation of distributed systems for the Cloud era.
Okay, sounds like an academic, Microsofty take on Erlang's Actors? But from docs, sounds like it's at least used in practice at MSFT:
> Since 2011, it has been used extensively in the cloud and on premises by several Microsoft product groups, most notably by game studios, such as 343 Industries and The Coalition as a platform for cloud services behind Halo 4 and 5, and Gears of War 4, as well as by a number of other companies.
> Orleans was open-sourced in January 2015, and attracted many developers that formed one of the most vibrant open source communities in the .NET ecosystem.
Academically (And, to some extent, practically speaking) Orleans Virtual actors aren't the same as Erlang (or Akka) actors.
Major differences:
- Normal actors are pretty flexible and can work in local or remote contexts. Orleans is probably best described as Sharded Actors; it's a 'guided' implementation of Actors compared to Erlang or Akka, where you are given a toolkit and have to build out what you intend to do.
- Orleans doesn't have any ordering guarantees. While this is not a firm requirement of the Actor model itself, both Erlang and Akka guarantee ordering between a given Sender-receiver pair (i.e. messages sent from A to C will be sent in order)
- Akka and Erlang have the concept of Supervision. i.e. An actor may have a child, and if that child crashes, the (parent) is notified and may choose how to react (restart child, don't restart child, crash itself)
- Orleans may allow more than one activation of the same grain at the same time. A stable and properly configured Akka Shard cluster will never have more than one of the same entity actor alive at a time.
- Orleans can magically scale out with the cloud (if you're running it on Azure). With Erlang/Akka you'll have to deploy your new nodes yourself.
> Orleans doesn't have any ordering guarantees. While this is not a firm requirement of the Actor model itself, both Erlang and Akka guarantee ordering between a given Sender-receiver pair (i.e. messages sent from A to C will be sent in order)
Ordering has a performance cost. It needs to either be maintained at all levels, or reconstructed from unordered messages at a later point. Even something simple like an m:n thread pool scheduler can ruin ordering guarantees. My view (Orleans core developer) is this: if you want ordering, await your calls. That way, you are guaranteed ordering regardless of any message reordering that can occur in the scheduling or networking layers, or due to failure and recovery of a host. So you can choose when to pay that cost and when to reap the performance benefits of not paying it (by firing off multiple calls in parallel).
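A sketch of the two choices (IWorkerGrain and DoWork are hypothetical stand-ins for any grain interface):

    using System.Threading.Tasks;

    public interface IWorkerGrain { Task DoWork(int step); }

    public static class OrderingDemo
    {
        public static async Task Run(IWorkerGrain grain)
        {
            // Ordered: each call completes before the next is issued,
            // regardless of any reordering in lower layers.
            await grain.DoWork(1);
            await grain.DoWork(2);

            // Parallel: no ordering guarantee, and no ordering cost.
            await Task.WhenAll(grain.DoWork(3), grain.DoWork(4));
        }
    }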
> A stable and properly configured Akka Shard cluster will never have more than one of the same entity actor alive at a time.
Likewise for Orleans, but the caveat of a "stable cluster" doesn't do much for users. "Stable cluster" falls apart frequently in real scenarios, which can be as simple as a single machine being abruptly restarted. Developers must account for the error scenarios.
> Akka and Erlang have the concept of Supervision.
Orleans does not have supervision, since each grain stands on its own (no hierarchy) and has an eternal nature (managed lifecycle). Grains are not destroyed when a method throws an exception: the exception is propagated back to the caller, and the caller can use try/catch to handle it. This is similar to what .NET developers are used to, since regular objects are also not destroyed when a method throws an exception, and the caller is able to handle it. My belief is that this kind of exception handling is usually appropriate, since the caller has context which can be useful in handling the error. The developer can also write per-grain or global call filters which can operate on all calls and handle any exception, so if that's preferred, then it's available.
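As a sketch (IAccountGrain and Withdraw are hypothetical):

    using System;
    using System.Threading.Tasks;
    using Orleans;

    public interface IAccountGrain : IGrainWithGuidKey
    {
        Task Withdraw(decimal amount);
    }

    public static class ErrorDemo
    {
        public static async Task Run(IAccountGrain account)
        {
            try
            {
                await account.Withdraw(-5m); // suppose the grain throws on invalid input
            }
            catch (ArgumentException ex)
            {
                // The exception propagated back from the grain; the grain
                // itself is still alive and can be called again.
                Console.WriteLine(ex.Message);
            }
        }
    }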
> Ordering has a performance cost. It needs to either be maintained at all levels, or reconstructed from unordered messages at a later point.
Yep, although it's not typically as bad as it sounds even in Akka/Erlang; TCP gets you most of the way there. It does, however, become a problem when you want to scale 'out' (i.e. use multiple links for message prioritization, etc.).
> Likewise for Orleans, but the caveat of a "stable cluster" doesn't do much for users. "Stable cluster" falls apart frequently in real scenarios, which can be as simple as a single machine being abruptly restarted. Developers must account for the error scenarios.
For sure, Akka doesn't do it for you. I think one of the biggest yak shaves in setting it up would be picking your partition strategy and making sure it works the way you intended. But once you do, it's fun to watch the metrics graphs move when you cut nodes. :)
> This is similar to what .NET developers are used to, since regular objects are also not destroyed when a method throws an exception, and the caller is able to handle the exception.
Yeah supervision can be a bit weird to explain properly.
It all goes back to that first point though; Orleans is a very guided implementation and has very nice, C#-like bindings. Akka is, in my view (as an Akka.NET project contributor who has also written some frameworks and in-production business apps with it), more of a 'toolkit'. You can see this in the various modules that result from it, such as Akka Streams, Spray/Play, Lagom, etc. Use Orleans if you want to write distributed code in C# quickly and within its constraints. Use Akka if you want to write distributed code or just a quick and dirty message-passing scheduler/kernel.
> - Orleans doesn't have any ordering guarantees. While this is not a firm requirement of the actor model itself, both Erlang and Akka guarantee ordering between a given sender-receiver pair (i.e. messages sent from A to C will be received in order).
I thought Erlang only guaranteed that if the processes are on the same node. Is that not correct?
Huh. Never actually looked into HOW it worked in Erlang, but basically it looks like it works the same way as Akka/Akka.NET: the VM skates on top of the underlying transport's ordering guarantees.
For a long time I was curious how Akka Artery preserved message ordering for its 'dedicated large message' lane. In short, it really just means you are dedicating a specific TCP or Aeron connection to those actors; there's no magic ordering sauce underneath. (Just a little disappointed to find that out, but I appreciate the simplicity upon consideration.)
Yep. Erlang uses a single TCP connection between each pair of nodes. Older versions had problems with large messages blocking heartbeats (and other messages) for a long time, but that was resolved last year in https://github.com/erlang/otp/pull/2133.
> A stable and properly configured Akka Shard cluster will never have more than one of the same entity actor alive at a time.
So you're telling me that during a network partition, Akka will somehow know that the entity actor is alive and well in (one of) the other partition(s) and won't start a new one?
As long as you configure things properly, yes. By 'configure things properly', I basically mean making sure that your timeouts for the shard regions are longer than the time it takes to shut down the entities living on the node.
Also worth noting that depending on the pattern used to send to the entity, there is a possibility of dropped messages during such a case.
The actor model isn't Erlang's; it's actually a surprisingly old idea (1973). The virtual actor model is an interesting spin on the idea that simplifies a few things.
Orleans itself is quite nice, I've used it a few times on some smallish things. Looking at using it in anger for a large distributed backend system in the nearish future.
I use it too much and it has caused consternation with both American and Egyptian colleagues. Not sure what to reprogram my brain to use because it’s pretty ingrained at this point.
`Orleans builds on the developer productivity of .NET` - could someone familiar with .NET development specify what those things might be? I've never used .NET and would love to find out what it does better/worse/differently than other frameworks. Thanks!
- best in class IDE
- best in class debugger
- excellent cross platform support
- very fast compilers
- a robust library ecosystem
- no arguments over how to publish or import those libraries
- a build system that just works
- and all without having to install tons of third-party, fly-by-night projects.
I've worked in lots of different environments. .NET is the only one I've been able to leave, come back to after some time, and get back running in minutes. JavaScript, Python, and Ruby were all rickety houses of cards in comparison. C++ is just inscrutable. And Java might get the closest, but it still feels like being perpetually 5 years behind .NET.
As a developer mainly in the .Net ecosystem for about 20 years now, I frequently see "best in class IDE" as a headline feature. I've spent the past year with Rider as my main IDE, and surprisingly there have been very few things that I miss from Visual Studio (mostly related to profiling tools). On the other hand, there are several irritations in VS that make it feel decidedly not best in class, for me at least (extreme slowness on startup, but also on a variety of frequent tasks such as switching between files, compiler getting out of sync with reality and listing errors that have been fixed minutes ago, the WinForms editor refusing to open a screen for whatever reason, etc). While it's a decent IDE for sure, I'm not sure I'd rate it all that high, and would argue that at least the JetBrains family are on par in most cases.
So an honest question: what exactly do people see in VS as being so outstanding, that other IDEs lack?
- Mixed mode debugging between .NET and C++ (in the Java world only Eclipse and NetBeans support this; for Android, Google created their own plugins for Studio)
- The architecture modeling tools.
- The GPGPU debuggers
- Data structures visualization
- REPL and immediate window
- Graphical visualization for tasks, processes and their dependencies
- being able to visualize binary libraries in a way similar to .NET ones
- the GUI designers, with live changes support and introspection
- being from the same company that actually produces the OS and dev tools.
Interesting, thanks for the info. I guess our use cases differ enough that the experience is quite different, but I can easily see how if you needed strong C++ support alongside .Net, it'd be a good choice. In my own case, most of the really unique ones on your list are not present or needed, and some of the others (REPL, immediate window, graphical visualisation of tasks, etc) have equivalents which I feel are similar.
I suppose it's always going to come down to each developer's specific use cases and needs, and the average rating of the IDE will be a mix. I've just been consistently surprised by the positivity towards VS in _some_ areas where my experience was the polar opposite. As an example, working with any sort of "relatively up to date" client-side web UI tech (Angular, React) between 2014 and 2018 in VS was an exercise in pure frustration, whereas e.g. VS Code was not. At that time I would not have put VS in the top 3 for that environment, and I frequently wonder whether people just loved it out of seeing the world with MS glasses on, or what.
Looks like a healthy project judging by the stats [1], and there's also a big list of users [2].
How big of a task would learning this framework be? Is the documentation good enough? Sounds like it could be a good option for a game's network backend.
I am learning Orleans at the moment. I've created about 12 sample projects in Orleans as I am learning the concepts. Most of the samples are very small and can be run in one program thanks to the ability to host Orleans and ASP.NET Core Web server together.
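For reference, co-hosting looks roughly like this with the .NET generic host (a sketch against the Orleans 3.x-era hosting APIs; the web endpoint is just a placeholder):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Hosting;
using Orleans.Hosting;

public static class Program
{
    public static Task Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            // Run an Orleans silo inside this process...
            .UseOrleans(silo => silo.UseLocalhostClustering())
            // ...alongside an ASP.NET Core web server.
            .ConfigureWebHostDefaults(web =>
                web.Configure(app =>
                    app.Run(ctx => ctx.Response.WriteAsync("silo is up"))))
            .Build()
            .RunAsync();
}
```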
It was created by Microsoft Research and introduced the Virtual Actor Model as a novel approach to building a new generation of distributed systems for the Cloud era.
Hmmm, getting a very strong Microsoft Enterprise Library vibe.
The Microsoft Enterprise Library is a set of tools and programming libraries for the Microsoft .NET Framework. It provides APIs to facilitate proven practices in core areas of programming.
Packed full of Best Practices and more Enterprisey than Kirk, it was inscrutable to me.
Based on the novel Virtual Actor Design model, I assume Orleans is Design Patterns all the way down.
My opinion is biased, as a core developer, but I do not think it's packed full of design patterns. Orleans and the core team are relatively unopinionated on design patterns that developers employ. I'm much more concerned with developers writing code which cannot perform well at scale, or which becomes unreliable at scale, where machine failures are common, than I am with whether or not something fits some set of prescribed design patterns.
> I'm much more concerned with developers writing code which cannot perform well at scale, or which becomes unreliable at scale, where machine failures are common
Would you mind expanding (or linking to documentation) on how Orleans enables reliable systems?
I watched this talk (https://www.youtube.com/watch?v=9OMXw0CslKE) that uses an example web application called Smilr to demonstrate some of the features of Orleans. However, that talk doesn't really go into detail on how failures are handled.
For example, in the Smilr app, each 'Event' grain is responsible for notifying an 'Aggregator' grain whenever it comes into existence (or an existing one is updated). What happens when a call from the Event grain to the Aggregator grain fails? Who is responsible for retrying?
Microsoft? Do something to meaningfully and significantly improve F# instead of leaving it in the corner and occasionally tossing it things? You must be joking.
I'm mostly convinced F# is there so MS can say "you can do functional things, yeah!" without needing to make a meaningful effort.
There are developers using Orleans with F#, so it does work. It is not as nice of an experience as it ought to be, though. For example, it requires a dummy C# project to hold the generated RPC/serialization code. PRs to improve the experience for F# developers are greatly appreciated.
There are a couple of types of distributed applications though, right?
There's the traditional client/server scenario where you distribute the load horizontally across many servers. If most of the server computation is stateless, and all the state is stored in the database, then you can use either the FaaS or the actor model.
Then there's the client/server scenario where the server computation is stateful, in which case you shouldn't use FaaS. You could still use actors though. Isn't this where Orleans sits?
Then there's the peer to peer architecture where all computers run the same code, whether they're in the cloud or on an end user's laptop or phone. Does Orleans make sense in this case?
Case 2 is the most popular one.
Sometimes people migrate from Case 1 when they start hitting the performance bottleneck of querying the database on every request and/or experience congestion from uncoordinated concurrent writes.
Orleans doesn't make a lot of sense for Case 3, IMO. Peer-to-peer is not a good environment for forming stable clusters and making resource management decisions the way they are done in the traditional server-side case.
That document is outdated and was written with an Akka-centric view. The author from Akka's side of things, Roland Kuhn, has since moved on to Actyx, from what I can tell.
The doc predates ACID transactions support in Orleans, but it talks about the Virtual Actor model vs the Akka model in general.
Having spent a lot of time with Orleans but very little on Akka, I believe the key difference is the "virtual" part. When you write Orleans code, the presumption is it will be executed on a remote machine, but you don't have to address that machine. The Orleans host does the work of actually assigning ownership of grains and distributing the load between silo nodes. So in your .NET code you simply have to instantiate a grain like you would any other plain old C# class (actually there's usually a factory IIRC), and then you work with its async Task-based methods (a standard and boring concept in modern .NET) which may or may not communicate over the network.
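Concretely, the calling side looks something like this (IUserGrain is an invented example; the factory hands back a reference without the caller knowing where the grain runs):

```csharp
using System.Threading.Tasks;
using Orleans;

public interface IUserGrain : IGrainWithStringKey
{
    Task<string> GetDisplayName();
}

public static class ClientExample
{
    public static async Task<string> Lookup(IClusterClient client)
    {
        // No explicit placement: the runtime resolves (or activates)
        // the grain for this key somewhere in the cluster.
        var user = client.GetGrain<IUserGrain>("user-42");
        return await user.GetDisplayName(); // may or may not cross the network
    }
}
```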
That sounds a bit like Cadence/Temporal, the workflow programming platform. It doesn't really advertise itself as an actor framework, but it shares the concept of being able to run ordinary-looking functions as though they have a lifespan that isn't constrained to a single system or OS process.
Yeah, same with Cadence/Temporal. The one caveat is that you have to move all observable effects into activities so that they can be cached. Under the hood, the engine replays functions to rehydrate them, if necessary.
I'm also one of the core developers of Orleans. Ironically, I recently joined Temporal. There are definitely some similarities, but also major differences between the models, especially when it comes to the execution model and fault tolerance.
After more than a decade of working on Orleans and only three weeks on Temporal it's foolish of me to talk about what I "like better" :-). I'm working on a couple of conference talks to compare the two approaches.
In short, Orleans is biased towards quick low latency operations. Longer running workflow style operations are totally doable, but require extra application logic and thinking.
Temporal's main abstraction is a workflow. So, it's biased towards reliable execution (with retries if needed) of business processes that may take seconds or days/months.
Orleans executes application code within the runtime process. Temporal orchestrates execution of external application workers (processes); I started referring to this as "inversion of execution".
Orleans is .NET. Temporal currently provides Go and Java SDKs.
These are just the top-level differences that come to mind. There are many others. But there are also major similarities.
At the end of the day, I think Orleans and Akka are conceptually different types/notions of what an actor framework is.
> Orleans uses a different notion of identity than other actor systems. In other systems an “actor” might refer to a behavior and instances of that actor might refer to identities that the actor represents like individual users. In Orleans, an actor represents that persistent identity, and the actual instantiations are in fact reconcilable copies of that identity.
Hi there (ex Akka core team here). Yeah, there are slight differences in what the main concept is in those libraries. It kind of boils down to Akka's concept of an actor being more low-level, with the capabilities that Orleans offers added as "the way" by means of extensions.
A virtual actor is very similar to an entity in Akka that is running in cluster sharding and uses Akka Persistence. It's just that it's not packaged up into the virtual actor concept by default. I would love Akka to provide a more hand-holding, Orleans-style module, to be honest; it's a great concept and way to think about systems :-)
It's the distributed networking equivalent of programming with managed versus unmanaged memory.
With Akka you manage objects actively, much like memory in C; with Orleans your objects behave much like normal .NET/Java objects, they are just distributed and must use async methods. Orleans adds a layer of management, where it will tear down and re-instantiate objects as required, distribute them in the cluster, etc. As a programmer you don't have to worry about it; it's all managed by the runtime.
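The "managed" part shows up in the lifecycle hooks the runtime calls as it activates and deactivates grains (sketched against the Orleans 3.x-era API; ISessionGrain and its state logic are made up):

```csharp
using System.Threading.Tasks;
using Orleans;

public interface ISessionGrain : IGrainWithGuidKey
{
    Task<string> GetState();
}

public class SessionGrain : Grain, ISessionGrain
{
    private string _state = "empty";

    // Called by the runtime when it instantiates this grain,
    // e.g. on the first message after a deactivation or a node loss.
    public override Task OnActivateAsync()
    {
        _state = "loaded"; // e.g. rehydrate from storage here
        return base.OnActivateAsync();
    }

    // Called by the runtime before it tears the activation down,
    // e.g. when the grain has been idle for a while.
    public override Task OnDeactivateAsync()
    {
        // e.g. flush state to storage here
        return base.OnDeactivateAsync();
    }

    public Task<string> GetState() => Task.FromResult(_state);
}
```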
I’ll give it three hours before the emperor is found naked, like COM, DCOM, DNA, windows.net, Patterns and Practices, Enterprise Library, Remoting, AppFabric, WCF.
It's a "Microsoft Framework", it's going to be enterprise-y Design-Patterns, convoluted abstractions, Dependency-Injection-on-Dependency-Injection and MS specific code/libs all the way down.
Lack of higher-level language features requiring code duplication/generation, a lean standard library usually necessitating 3rd-party packages, and an utterly terrible module system and fragile tooling setup.
Go is still far better than JS, but both are behind in modules, packaging and overall structure as compared to .NET. gofmt is nice though, more languages should have that.
For all its other woes, I've had like 80% fewer issues with Go's tooling than .Net's.
- Getting packages installed: trivial
- Getting clean builds: trivial
- Build and run works near perfectly
In comparison .net seems designed to be used _only_ via Visual Studio:
- Why does it re-download the packages each time I ask it to build?
- Why does VS freak out sometimes when there's a terminal window left over from a previous run?
- Why does VS freak out so thoroughly when something changes under it?
- Sometimes it just caches your previous build and thinks it's perpetually broken no matter how many changes you make; restarting seems to be the only option.
- Getting dotnet tools to work from the command line (the way Cargo/Yarn/Julia/Pipenv/etc do) was a confusing affair fraught with instructions like "oh you've got to go into this XML file and set these random configs and tell it where x/y/z dll is".
That's before we get to the whole disaster that is "add this as a reference to your project". No programming language has caused me quite as much frustration as C#/.net manages to.
We use it all the time, no VS required. You don't have to edit any XML files today, and can use a simple editor like Visual Studio Code or notepad if you want.
Do you have some big old solution that you're working with?