It’s nice to see the .NET Core team arriving at the same solution we’re heading toward internally for our multi-repo builds. The “dependency flow” with Maestro sounds very similar to what we intend to do: propagate the latest NuGet package versions to consumers via PRs, then auto-merge those PRs if all checks pass (and the repo owner has opted into this, of course).
Will the .NET Core team open source this tooling I wonder?
I think they would very much like you to buy into Azure. We’re a C# shop, have been for decades, and .NET Core really shines when you hook it up to the Azure DevOps package. Which is perfectly fine for us; we’re in bed with Microsoft anyway, and with Office 365 licenses for 4,500 employees, Windows Enterprise for 5,000, and a farm of local Windows servers, that’s what we would likely have done anyway.
Is having the project split over various git repos a good thing?
Personally I struggle with it. When I look for a bug, I have to do several cross-repo searches on GitHub (and GitHub search is bad), then check out the code. If I actually want to fix it, I now have a new sub-project: “find out how to compile the mammoth”. Then, if I want to send a patch, I have to find the local rulers and their rules and send a PR.
The execution of the .NET Core project has been, with all due respect, a huge mess. The naming and versioning schemes are confusing. The breaking changes are disruptive. Making ConfigurationManager a pain in the ass to use is disruptive.
.NET Core has been around for years now, but it doesn't inspire confidence and still doesn't seem production ready, given the lack of adoption and Microsoft's incessant habit of changing and breaking things for no good apparent reason.
It's such a pity because I absolutely love the dotnet cli tooling.
I'm not sure what you mean. We've been running .NET Core apps in prod for a while now. What do you mean about naming and versioning schemes? Are you talking about the difference between the legacy .NET Framework and the .NET Core stuff? Sure, that could be confusing if you're expecting them to be the same thing.
Also, what do you mean, not production ready? A lot of people are running .NET Core apps in production. We have servers serving up a bunch of APIs, running Linux, and have no regrets, and I'm personally glad I've moved off of Java frameworks for this.
The only problem I see is if you're trying to migrate existing apps from Framework to Core. But I'm not sure that's a path any sensible person would take with any significant amount of legacy code.
Anyway, I'm not seeing the huge, insurmountable mess you're alluding to.
I don’t think this is true at all. My last two jobs have migrated from .NET Framework to .NET Core. At my current job all new services are .NET Core and there are many running in production now with no problems at all. Every piece of spam I get from recruiters has .NET Core in the description these days.
I caught up with a friend of mine recently and his company has migrated to .NET Core as well. Anecdotal I know, but it's a thing some people are doing.
Definitely exciting to observe from afar, but it can be jarring to try to keep up with as someone who only occasionally dabbles in it for web stuff.
Slight hyperbole, but waking up one morning to find the whole of Identity rewritten as razor pages was a bit of a curve ball.
Edit to add: I'm not so sure about "lack of adoption", though. I'd temper that as lack of adoption in the US/valley. Europe, the UK and Australia seem to have quite a bit of .net core stuff going on.
It's been production ready (as in fully supported) since v1. It's the first new cross-platform runtime in over a decade, so breaking changes are expected, but they were very clear about where things stood and always recommended v2+ if you wanted the fewest missing APIs. With .NET Core 2.2 now, a full 5 releases in, there should be no major issues migrating apps or building new ones.
The naming is definitely a known problem but as far as "doesn't inspire confidence" and "breaking things for no good apparent reason" - what specifically are you talking about? There's tremendous adoption with the next version of the entire .NET platform converging around Core and all the development is done out in the open with public discussions and code if you want to learn the reason why anything was done.
Is it more that things are just different from what you're used to?
We just moved to .NET Framework 4.7.1 for greenfield projects. All regular .NET libraries and Visual Studio designers just work out of the box.
And yes, it was a huge mess. I see it as a result of internal politics over how Windows development was supposed to move forward after the whole Longhorn vs. Vista saga, which eventually led to WinRT.
Eventually we will be coming back to where we started with .NET 5.
And the migration would have been easier if they had just made the improvements incrementally.
Rule of thumb: always wait, and don't jump into new tech just because.
I'll echo your point about confusion. I once tried to figure out how to build a simple .exe with C# and .NET Core, something I could do easily with C++ and GCC.
I gave up after a few days because the dotnet documentation was too confusing. There are 40 different projects with easily confused names like corefx, coreclr, etc., and the docs are convoluted and split between developer.microsoft.com and MSDN.
That was a while ago, so maybe the docs are better now, but I can't shake the feeling that when your best source of info on official stuff like "how to build a self-contained application" is a two-year-old, obsolete Stack Overflow post, .NET Core is just a mess.
If you mean creating a single EXE with no external dependencies whatsoever, that's something that they are discussing in .NET Core 3 but isn't straightforward right now.
It's basically the same story as Python: you either bundle the interpreter with the application or rely on the interpreter being pre-installed. With dotnet, you either rely on the runtime being installed or bundle it in as self-contained.
You don't need to know about any of those projects to build an app in C# so you may have started from a bad source. Documentation is much better now and all located at https://docs.microsoft.com/en-us/dotnet/ with github-integrated editing and comments.
Self-contained apps package the runtime with your app so there are no dependencies, but producing an executable is a separate option and requires a runtime identifier (RID) for the platform you're targeting. The 3.0 SDK makes it much easier. https://docs.microsoft.com/en-us/dotnet/core/deploying/
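For what it's worth, this is roughly the shape of the project settings involved (the RID value here is just an example, and `PublishSingleFile` is a 3.0-era SDK property, per the docs linked above):

```xml
<!-- In the .csproj: pin a runtime identifier (RID) so `dotnet publish`
     emits a platform-specific executable. SelfContained bundles the
     runtime with the app; PublishSingleFile (3.0 SDK) merges everything
     into a single file. -->
<PropertyGroup>
  <RuntimeIdentifier>linux-x64</RuntimeIdentifier>
  <SelfContained>true</SelfContained>
  <PublishSingleFile>true</PublishSingleFile>
</PropertyGroup>
```

Alternatively, the same can be passed on the command line, e.g. `dotnet publish -c Release -r linux-x64 --self-contained`, without touching the project file.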
Same experience here. I wrote some C# Windows apps a few years ago, then jumped in to write a REST API, and the experience was painful: the documentation is confusing, and the REST API projects are even more confusing. I got what I wanted, but it felt like a big accomplishment to get through the complexity. By comparison, I wrote that API in PHP in a couple of hours. It had more code, but my teammates were able to understand easily how it worked. I'm not saying PHP is better; I'm saying it is much easier to use, and there are probably more examples of that kind of simplicity out there.
They like to make noise about how cross-platform and open source .NET Core is, but their debugging infrastructure is actually proprietary. See: https://github.com/dotnet/core/issues/505
Actually, this is not 100% correct.
It's possible to make your own debugger for .NET Core, since the interfaces for creating one are public and open source; it's just that the debugger Microsoft has written is not open source.
The "infrastructure for debugging" is actually open source.
> Which make me sad, considering that FreeBSD was the first actual port the coreclr actually made. However, no one seems to care afterwards.
The FreeBSD port was a community effort, and not done by Microsoft.
Unfortunately the small team doing the port ran out of time before life caught up with them (jobs, kids, university, etc) at around 95% completion.
After that .NET Core kept moving fast, and catching up (new libraries, llvm versions, mono versions required for bootstrapping, etc) also required FreeBSD updates, and all in all was somewhat hard.
It makes less than zero sense to me why a BSD programmer would even consider wanting Microsoft .NET frameworks, libraries or software on BSD. You have as good or better tools and languages there already. Perhaps on the off chance of porting a program written with .NET, but those are rare, and the only people who seem to want this are Microsoft and Windows/.NET programmers who also happen to use BSD.
I have been programming FreeBSD systems for 15 years--actually longer--and it's never crossed my mind to even once think about using .NET or any Microsoft product. Makes no sense at all.
I've run into similar challenges in a number of (mostly microservice) systems that also avoided monorepos (for various practical and philosophical reasons). Parts of the solution space here are very familiar, especially relating to dependency management and "goldilocks" framework infrastructure.
I've especially found that most cloud CI tools (e.g. CircleCI) fall flat here without a lot of additional work. Those ecosystems seem designed for a single repo; there's a big opportunity for someone to get CI right for non-trivial, multi-repo projects.
I haven't tried it, but GitLab's CI seems to support multi-project flows as an official feature[0]. I believe Jenkins also allows builds to trigger other builds.
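For reference, a minimal sketch of what GitLab's multi-project trigger syntax looks like (the project path, branch, and build script here are hypothetical):

```yaml
# .gitlab-ci.yml in the upstream repo: after a successful build,
# kick off the pipeline of a downstream consumer project.
stages:
  - build
  - downstream

build:
  stage: build
  script:
    - ./build.sh   # hypothetical build step

trigger-consumers:
  stage: downstream
  trigger:
    project: my-group/consumer-service   # hypothetical downstream repo
    branch: master
```

The downstream project then runs its own `.gitlab-ci.yml` as a separate pipeline, linked to the upstream one in the UI.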
My team is in the process of moving our Jenkins CI jobs to GitLab CI. We have moved most of them and our feeling is that GitLab's solution is more flexible and easier to manage than Jenkins'. For the really weird or tricky ones, we have a couple custom Docker images.
Right now we build Java, Kotlin, .NET, .NET Core and Node projects with GitLab.
We set up Nexus to manage the code we share between projects, and that has really made the process much easier to manage.
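In case it helps anyone doing the same, the NuGet side of that setup looks roughly like this (the feed URL and repository name are hypothetical; Nexus exposes a NuGet-compatible endpoint per hosted repository):

```xml
<!-- NuGet.Config at the repo root: resolve packages from an internal
     Nexus feed first, falling back to the public nuget.org feed.
     The nexus.example.com URL is a placeholder, not a real feed. -->
<configuration>
  <packageSources>
    <add key="nexus-internal"
         value="https://nexus.example.com/repository/nuget-internal/" />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>
```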
Does Arcade solve the problem of automatically updating NuGet references in a csproj from the command line in cases where the new version of a dependent package has a different set of dependencies than the previous one? I mean the stuff mentioned in
I see they mentioned Azure DevOps. Has anyone tried it? I found it to be unusable: it's way too slow. I couldn't imagine having to do any sort of admin in there.
Most Microsoft shops I have worked with use it, and I find it extremely confusing. The UI is not very intuitive, and it takes me on average 5-10 clicks to get where I want to go. Most things are not documented at all, documented poorly, or documented wrongly because the docs are out of date, and things often require hours of trial and error before they work the way the team wants. I'm sure that once someone has spent enough hours "learning" Azure DevOps it might pay off, if it all actually works, but I find it too frustrating to commit my precious time to figuring out things that should be dead simple, as other CI systems have proven they can be.
Oh god, Azure DevOps is terrible. It's improving over time, adding new features... but letting existing features which actually don't work continue to be broken.
It's slow, and the builds don't refresh properly, so you always have to hit F5 while waiting for a build.
Their hosted solution is also impossibly slow; we ended up running it on a $20/month Vultr machine, and it's fine for us (we run 3 concurrent self-hosted agents). Builds are much faster.
We're in the process of migration, first from Jira to Boards. I have some serious concerns but I'm a marginal Jira user and DevOps newbie, so they all seem confusing to me.
The build pipelines & release management stuff is even more confusing but could be quite powerful. I'm pretty opposed to replacing (even ancient) Jenkins et al CI tooling that works though - seems like a lot of work for little net value.
If you're doing greenfield development in the MS ecosystem, DevOps should be strongly considered; but aside from the cost savings (it's usually bundled with lots of programs/memberships), I don't see a huge benefit over, say, the Atlassian solutions.
It would be really nice to have documentation like this for similarly sized builds of other web systems. When I read this post the first time, I was really interested (surprised, astonished or impressed are not the right words, considering this is engineering).
.NET Core was/is touted as a cross-platform solution, but it seems only command-line apps are cross-platform? I don't see the point of this...
I think they should put more energy into developing a cross-platform GUI system. There's [1] Avalonia, which is a third-party system, but nothing developed by Microsoft...
You don't see the point in being able to develop microservices, databases, web apps, development tools, mobile apps, messaging services, embedded systems, middleware and tons more types of non-desktop GUI apps?
I agree that there's a point to console applications that can be compiled for different platforms. But that's a basic feature of countless languages: C++, Python, Ruby, JavaScript, Rust, Go... they can all run console applications on different platforms.
What I don't understand is why they didn't focus on a cross-platform version of Windows Forms or XAML. The only cross-platform competitors in this market are Qt and Electron. This would also encourage people to develop for Windows from other systems like Mac and Linux...