How .NET container images are maintained (microsoft.com)
141 points by alexis2b on Feb 12, 2021 | 41 comments



container-diff[0], mentioned in the article, is a really useful image release tool that I don't think gets enough love.

0: https://github.com/GoogleContainerTools/container-diff
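
For the curious, typical usage looks something like this (a rough sketch from memory; check the README for the current flags, and treat the image names as placeholders):

    # compare two images by file system contents and size
    container-diff diff remote://myrepo/app:v1 remote://myrepo/app:v2 --type=file --type=size

    # inspect a single image from the local daemon by Docker history
    container-diff analyze daemon://myrepo/app:latest --type=history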


Also Dive is a good tool to analise docker containers.
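
For example (a minimal sketch; the image name is a placeholder, and the CI flag is from memory):

    # interactively explore an image's layers and wasted space
    dive myrepo/app:latest

    # or run non-interactively in CI to fail builds on inefficiency
    CI=true dive myrepo/app:latest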


I wouldn’t normally nitpick spelling, but “analyse” (or “analyze” if you’re in the US I guess) is the word you’re looking for.

“Analise” has a very different meaning, one that you probably don’t want to Google on a corporate computer.


You are right. I never knew analise was a word, but I know analyse is the right spelling. I'm gonna chalk this up to an autocomplete error (really, I don't know why I spelled it as analise).


I will tell corporate I am Portuguese.


If only it didn't do its thing by extracting the containers into dot-directories...


I am really curious, who is actually using Windows containers in a way that's critical to their business? (i.e. your main web app, not giving some janky legacy Windows thing a place to live while you migrate away from it)

I have to imagine at some point, probably very soon, all of the time spent maintaining and hacking on the NT kernel to make it work in a container world is just throwing good money at a vanishingly small segment of the market.

Kudos to this team for being on the ball and supporting ARM and ARM64 images, though. Now, when can we actually run ARM64 stuff on Azure....


Hacker News represents only a pretty small fraction of the IT world. There are many, many companies out there doing things Hacker News has never heard of because it's not related to web or startups.

What I do at my day job would be considered legacy tech here on HN, but my company is a multi-million-dollar profitable business working in the B2B sector. Our customers don't want flashy websites; they want software that works over many years with minimal maintenance burden running on their own hardware.

Microsoft Windows is still dominant in the business world because the (often external) IT staff knows how to work with it, and most of the time it just works.


There are large and successful companies with very modern enterprise apps communicating with old stuff like AS/400 mainframe systems.

These and many other companies also have generations of (not quite as old) Windows applications that aren't going anywhere anytime soon either.

These companies want to manage it all in a modern way. Sometimes, it's part of the migration strategy.


> I am really curious, who is actually using Windows containers in a way that's critical to their business?

Valid question, but not exactly what the article is about. .NET Containers != Windows Containers

From the article:

> We publish updated .NET images for Alpine, Debian, and Ubuntu
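
For example, at the time of writing you could pull any of these variants (tag names may have changed since; treat them as illustrative):

    docker pull mcr.microsoft.com/dotnet/aspnet:5.0-alpine       # Alpine
    docker pull mcr.microsoft.com/dotnet/aspnet:5.0-buster-slim  # Debian
    docker pull mcr.microsoft.com/dotnet/aspnet:5.0-focal        # Ubuntu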


I am. There is no reason for .NET developers to deploy on Linux containers and castrate ourselves by sticking to libraries only available in .NET Core.

Also, outside the HN universe, Windows servers are a thing, so it is not only .NET; there are also Java and C++ applications.


Barring GUI libraries that depend on WinForms/WPF, surely there are very few libraries now only available for dotnet framework?

I work at a consultancy for enterprise-size orgs, and Windows servers are indeed still frequently used - but almost exclusively on-prem, and I've never even once come across Windows containers being used. Seems the shift to the cloud, and Microsoft's shift to OSS and Linux love, has tamed even the most fervent of Microsoft shops. 10 years ago you'd be laughed at if you suggested anything other than Windows, but now Linux is the default for anything in the cloud.


Sitecore, SharePoint, Dynamics, EF 6 and WCF in-house frameworks, and plenty of third-party solutions on the Windows ecosystem.

All the cloud-based deployments I have done, other than a couple of serverless microservices on AWS Lambda, have been done on Windows, and that won't change for the foreseeable future, at least for the kind of customers I work with.

In fact, it is a bit ironic that instead of getting advice from cloud experts, I, with my lower skill set in the domain, am the one who has to explain to them what is to be expected for production delivery.


Is there a compelling reason to use EF6 instead of EF Core? I personally haven't used EF, or even seen other teams using it, for years now.

Similarly with WCF. It's something I used to use a decade ago (though I admit I was never a fan), but haven't seen it used in eons.


99% of the enterprise projects I deal with are brownfield projects. Unless there is business value in a code rewrite, there is no compelling reason to spend money rewriting code from scratch for newer stacks, especially if that requires buying third-party tools to replace existing workflows, as happens with the broken EF 6 designer on Core (and the non-existent one for EF Core).



Partially; not everything that is part of EF 6, especially the VS designers.

Don't mistake marketing for what's fully usable in production environments.

Also, we can keep playing this game; there are plenty of enterprise tools I can list.

.NET Core is the Python 3 of the .NET world.


I've used EF Core in production for years, and seen it used in production by numerous teams, and it's been at least as stable as EF6 was, while being a lot more performant.

You are correct that you don't get a VS designer, but I've personally always favoured a code-first approach anyway, so this never affected me.

> .NET Core is the Python 3 of the .NET world.

I see where you are coming from, but I don't think I agree. There is still a large segment of Python 2 devs, and for many, there isn't a particularly compelling reason to move to Python 3. Dotnet Core is different, in that there are very compelling reasons to move: cross-platform (massively important in our new cloudy world), performance, and of course ongoing support. There is, for sure, a small hardcore of sad dotnet devs stuck on Framework because of some legacy libraries, but I reckon the majority made the move to Core a long time ago.


The WinForms designer for .NET Core in Visual Studio is still broken and doesn't work with third-party controls. The whole WinForms team at Microsoft is, afaik, fewer than 5 developers, who are also working on other things on the side. If you look into the bug tracker you will see that those <5 devs are not experienced enough for the monumental task of porting WinForms. There are a few volunteers that keep submitting PRs and giving insights into .NET Framework's WinForms to keep the devs from accidentally knocking things over.

The .NET Framework version is working fine and has been for years. That's why my company won't switch in the near future. Marketing and reality are just too far apart.


> The WinForms designer for .NET Core in Visual Studio is still broken

That's tooling, not Dotnet Core itself. As an aside, I think there is even a WinForms GUI editor in Jetbrains Rider now.

> Marketing and reality are just too far apart

I mean, that's your opinion, but there are millions of Dotnet Core devs for whom the reality is as wonderful as promised. If you still have to work with legacy libraries or maybe WinForms, then of course I understand where you are coming from, but I don't think it's fair to tar the whole of Dotnet Core based on some limited failings that affect only a minority.


What Scott Hanselman calls dark-matter developers don't care one second about where .NET Core stands today. The large majority doesn't come to sites like HN and Reddit, and is busy porting applications to 4.7.2, or, if they are lucky, is already using .NET Framework 4.8, regardless of how Microsoft would like to pimp the .NET Core story.

Just look at the GitHub issues from everyone on the UWP/WinRT side disgruntled with the lack of a roadmap for .NET Native, or how WPF has taken a year to ramp up a new team after they sent everyone away. Worse, it seems that WPF bug fixes are being done on the other side of the globe in some offshoring deal.

Real life in the trenches is not like on Channel 9 TV.


I think we'll need to disagree, as we have incompatible views on what the reality is.

I absolutely do not dispute that anyone involved in building Windows-only desktop apps is going to be dissatisfied - but they are absolutely in the minority.

As a consultant, I don't just see the workings of a single org - I work across several orgs every year. When Core first became a thing, some were slow to take it on, which was understandable considering the confusing messaging from Microsoft. But it's been around for a long time now, and Microsoft's messaging and intentions became clear. What I now see is huge uptake of Dotnet Core - way, way more is now on Core than Framework, and I haven't seen anything new being built on Framework for at least a couple of years.


I also do consultancy, on enterprise projects where using Oracle and SQL Server is a rounding error on project expenses, in a mix of Java, .NET and C++ based solutions.

On my side, I have yet to see a new project done on Core besides some tiny micro-services at the department level.

As mentioned in another thread's answers, I'm still waiting on SharePoint and Dynamics running on Core.

So yeah, we'll need to disagree.


Microsoft is selling it as a package deal. .NET and VS go hand in hand.

The current LTS release of .NET Core 3.1 has a broken socket implementation that can lead to deadlocks on Linux and macOS.

Microsoft will not fix it in 3.1 LTS and recommends using .NET 5, which is not an LTS. If you are a company that is deploying multiple times a day, that's probably fine. If you are a company that keeps the lights on with about 5 devs for a few hundred customers with on-prem setups, this is just unacceptable and doesn't even have anything to do with legacy libs.

https://github.com/dotnet/runtime/issues/31570


Wow, so much for LTS I guess. I really admired the Microsoft of before (yes, antitrust; I'm not talking about that here.)

Admittedly it's just from what Raymond Chen's blogged about, but it seems to me that Microsoft had a different "corporate attitude" back then, so to speak.


Running your code on Linux is cheaper. Of course, legacy code/dependencies might prevent you from running your code on Linux. .NET libraries that are still being developed are either already available on .NET Core or are going to be.

I don't think Java applications have dependencies to Windows that often, and it's a pretty bad idea to use C++ in server applications.


Looking forward to seeing SharePoint and WCF on Linux.

Sure, Java runs on Linux; that doesn't mean many shops want to build the human skill set to manage those boxes.

As for C++ and server applications, what languages do you think database servers and similar high-performance servers are written in, most of the time?


I've seen a few people answer this question in the affirmative in previous threads, but I've never seen Windows containers in the wild. I work for a consultancy for enterprises, so if it were commonplace I would have expected to see them at least once. Obviously not doubting anyone else; just saying I think it must be pretty rare.


I provided an answer in the blog post comments. We see it at most big corps and in government. We also see Linux in those same places. If there are big Windows Server apps (typically .NET Framework) that people want to host in the cloud, then they often choose Windows Containers as the lowest-friction solution, at least for step one. I work closely with the Windows Container team, and this is what I see from their customer engagements. Inside Microsoft, it is the same. Many big services at Microsoft have hard dependencies on Windows (both .NET Framework and Win32), and at the same time, many have also embraced Linux.


I don’t know anyone using Windows Containers in production.

Even Microsoft is using Linux containers for production dotnet deployments.


I would pay to see SharePoint and Dynamics running on Linux containers.

Don't confuse what Microsoft is doing with what the .NET Core teams are marketing.


Wanted to know more about the tools they use for maintaining and scanning, but I guess many of them are internal and not open source?


For the scanning, we (.NET team) use the scanning services provided in Azure Container Registry (ACR). This is an internal ACR, and the results of that are internal, as mentioned in the post.

All the other tooling we use is open source. You can find our build infrastructure at https://github.com/dotnet/docker-tools. There's a tool there called image-builder that provides much of the functionality. I've written a blog post on how we use Azure Pipelines to manage the builds: https://devblogs.microsoft.com/dotnet/how-the-net-team-uses-.... Between image-builder and the pipelines, there's some automation that automatically rebuilds our images whenever a parent image changes.
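
The gist of the parent-image check is simple to sketch in shell (this is an illustration of the idea, not the actual image-builder logic; last-digest.txt is a hypothetical state file):

    # resolve the digest the registry currently serves for the parent tag
    docker pull mcr.microsoft.com/dotnet/runtime-deps:5.0-alpine > /dev/null
    new=$(docker inspect -f '{{index .RepoDigests 0}}' mcr.microsoft.com/dotnet/runtime-deps:5.0-alpine)

    # rebuild if it differs from the digest recorded at the last build
    [ "$new" != "$(cat last-digest.txt)" ] && echo "parent changed, rebuild needed"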


> Comparing image digests won’t work; they will never match.

This is a strong assertion with no further explanation. It reads like a generic truth about container images, but it's certainly possible to achieve this, as referenced later:

> Sidebar: Various folks in the container ecosystem are looking at enabling deterministic images. We welcome that. See Building deterministic Docker images with Bazel and Deterministic Docker images with Go microservices.

I'll agree that docker makes it _really_ difficult to build and consume reproducible images (for a variety of reasons, see https://github.com/google/go-containerregistry/issues/895#is... and https://twitter.com/lorenc_dan/status/1343921451792003073 for a sampling of interesting ones), but there is more to the container ecosystem than docker or Dockerfiles.

Shameless plug: I help maintain ko (https://github.com/google/ko), which can achieve reproducible builds for go projects without much fuss. It also leans heavily on go's excellent support for cross-compilation to produce multi-platform images, trivially.
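
Getting started is minimal (a sketch; the registry and package path are placeholders for your own setup):

    # tell ko where to push images
    export KO_DOCKER_REPO=gcr.io/my-project

    # build and push an image for a Go main package
    ko publish ./cmd/app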

> There are two cases where the container-diff tool will report that the registry and local images that you are comparing are the same (in terms of Docker history lines), but will be misleading because the images are actually different.

While container-diff is great, it can obscure what's really going on a bit. If you're interested in uncovering exactly why the digest of the image you built is different from what was published, please forgive another shameless plug for crane (https://github.com/google/go-containerregistry/blob/main/cmd...), a tool I wrote to expose most of the functionality of go-containerregistry (https://github.com/google/go-containerregistry), which is the library both container-diff and ko use under the hood.

Forgive the sparse documentation, but it should be relatively straightforward for anyone familiar with the registry API and data structures, as the commands map pretty directly to registry functionality. Using crane, you can easily inspect the image in the registry directly to compare the manifests and blobs that make up an image.
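
A few commands cover most of it (the image tag here is just the one from the article; jq is assumed for pretty-printing):

    # the raw manifest (or manifest list) exactly as the registry serves it
    crane manifest mcr.microsoft.com/dotnet/sdk:5.0-alpine | jq .

    # the config blob, including rootfs diff IDs and build history
    crane config mcr.microsoft.com/dotnet/sdk:5.0-alpine | jq .

    # the digest a registry client resolves for this tag
    crane digest mcr.microsoft.com/dotnet/sdk:5.0-alpine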

For example, one reason that the digests might never match is that these images are somewhat strangely wrapped as singleton manifest lists: https://gist.github.com/jonjohnsonjr/ffba104ca504b5bb4a1f227...

It makes some sense to me that they might want to do this to prevent folks from pulling this on Windows, but usually you would only encounter manifest lists for multi-platform images. Even if these builds were reproducible, you would have to compare the digest of what you built with sha256:9a210bb9cbbdba5ae2199b659551959cd01e0299419f4118d111f8443971491a -- not the sha256:fb1a43b50c7047e5f28e309268a8f5425abc9cb852124f6828dcb0e4f859a4a1 that docker outputs, as shown in the article.
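
To see the wrapping for yourself (assuming crane and jq are installed):

    # digest of the tag as served by the registry -- the manifest *list*
    crane digest mcr.microsoft.com/dotnet/sdk:5.0-alpine

    # enumerate the child manifest(s) wrapped inside that list
    crane manifest mcr.microsoft.com/dotnet/sdk:5.0-alpine | jq -r '.manifests[].digest'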

The tag used for this example (mcr.microsoft.com/dotnet/sdk:5.0-alpine) has since been updated. Comparing this with the original using container-diff just tells us that the size changed: https://gist.github.com/jonjohnsonjr/90c2def551833c8cacf3264...

But looking at the actual manifests, config blobs, and layers using crane is often faster and more interesting: https://gist.github.com/jonjohnsonjr/283eab27d996b2f4cc04553...

My intention with crane is for it to be easily composable, so that you can use familiar tools like tar, sort, diff, jq, etc.
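
For example, diffing the file listings of two images is just (IMAGE_A and IMAGE_B are placeholders):

    crane export $IMAGE_A - | tar -tvf - | sort > a.txt
    crane export $IMAGE_B - | tar -tvf - | sort > b.txt
    diff a.txt b.txt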

(To be fair to container-diff, you can use the -t flag to show similar things.)

I realize this is not really the point of the article, but it's a huge pet peeve of mine that everyone has just given up on understanding what's going on with their images because the tooling UX makes everything so opaque. If the digest of something doesn't match, you should know why! It's as if `git push --force` was on by default and everyone has just accepted that reality.

Now to read the rest of the article :)


Good and fair call. I softened the wording on digest comparisons. I sympathize with your pet peeve and like your apt example.

I wrote the article (although you probably figured that out).

Oh, and thanks for caring so much about reproducibility. We talk about this topic a fair bit in the .NET toolchain, and have enabled it over many years of investment. A lot of it comes down to timestamps or pathing (as you know).


Nice insights and I will definitely take a deep look at ko.

Slightly off topic, but since you linked to his Twitter account, everyone should check out Dan Lorenc's articles. There are some interesting articles about Golang, Kubernetes, and other Ops-like things.

https://dlorenc.medium.com

I only wish I could get his stuff into my RSS feed so that I don't miss it.


Dan’s the best. We work together, and we’re hiring. I’m biased but I think we have the most fun at Google. :)


Oh wow! This is such a nice comment to read. Anything in particular you'd like me to write about?


The things you are writing now are fantastic, so keep that up. Anything deep into Kubernetes, Go, GKE, monitoring, security, etc. is a good thing to read about. I found your writing through your Helm article. I am not a fan of Helm, as it has bitten me so many times.

I don't know what you are 'allowed' to write about, but I'd love to know how GKE keeps overriding its settings, and what can be changed and what cannot. We've had problems in the past with the metrics server crashing, and most of the settings we edited by hand were overwritten. Fortunately, we found one way to change something; it stayed, and our metrics server recovered.

Also, insights into what happens when GKE does a node pool upgrade would be useful. We had an outage because of an upgrade a couple of weeks ago that I have to take a deep dive into. I am guessing we are missing node anti-affinity.



I tried a couple of weeks ago and it wasn't working. It must have been a Medium thing, but it's working now!

Thanks!



