I just really wish they would start investing in true static compilation for .NET.
Some serious projects [1] started adopting CoreRT [2], despite Microsoft's neglect for their own runtime. CoreRT seems to really deliver on the single-file fast-startup small-size .NET promise. Getting this project folded into the LTS .NET 6 release should be a priority.
The source generators [3] that are coming to .NET 5 easily cover 99% of our Reflection.Emit use cases, so the JIT is going to be more of a legacy burden once .NET 5 comes out.
I want Go's small size, fast compile times, fast startup, without Go's bs (explicit error checking, seriously?).
Pieces of our software get deployed over a satellite link so yeah, megabytes matter (and that's why I don't even dare to propose using .NET for those parts). Sharing code with the rest of our .NET stack is a PITA though so people are getting itchy to rewrite the rest of our .NET stuff in Go for better sharing (the Reflection.Emit parts would be replaced with "go generate" which is... a source generator). It would be good to get some clarity on the static compilation roadmap in .NET because I like my job, but I also don't want to become a full-time Go developer.
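For anyone who hasn't looked at them yet: a source generator is just a Roslyn component that adds source at compile time instead of emitting IL at runtime. A minimal sketch using the ISourceGenerator API from [3] (the generator and namespace names here are made up):

    using Microsoft.CodeAnalysis;

    // Runs inside the compiler; the generated source is compiled into the
    // consuming assembly, so no IL has to be emitted at runtime.
    [Generator]
    public class GreetingGenerator : ISourceGenerator
    {
        public void Initialize(GeneratorInitializationContext context) { }

        public void Execute(GeneratorExecutionContext context)
        {
            context.AddSource("Greetings.g.cs", @"
    namespace Generated
    {
        public static class Greetings
        {
            public static string Greet(string name) => ""Hello, "" + name;
        }
    }");
        }
    }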
I remember reading about this in Java world a while back. To maintain safety you need to validate all the bytecode on load, so you can't really pre-cache it. Newer versions of Java use "class data sharing", but this only caches code after the local JVM has already validated it. Doesn't help with code files you've never seen before.
You could abandon validation and pre-compile, but then running applications is no safer than C. You could modify the IR to do bad things like leak memory and crash the VM.
JS has the same problem. Parsing and validating JS is a large part of page load times these days.
I'm not sure source generators will be the solution you hope for. Java has supported build time code generation for ages but everybody still reaches for reflection. A few popular libraries like MapStruct use build time generation instead of reflection, but again there's like 20 other popular libraries that do the same thing with reflection.
Java has attempted to fix the reflection hole with modules. You're supposed to pre-declare the classes you're reflecting over in your assembly. But at current adoption rates it's going to be another 20 years before you can count on all your dependencies using the module system correctly.
Trimming unused code is easy without reflection. It's been standard practice on Android for a long time. But every time I've tried to use it on servers I run into random crashes caused by runtime reflection and classloading.
Pretty unfortunate shortcomings of Java and C#. I don't think they'll ever be truly fixed unless somebody goes for the nuclear option of disabling reflection completely.
Disabling runtime reflection is key. Most of what it's used for can be done at compile time anyway. But more importantly, reflection enables circumventing the type system and doing all sorts of unsound things.
As long as you have a compile-time option I don't think it's nuclear, it just makes sense.
For Java's case the benefits of build-time code generation are greatly exaggerated. In many cases it is a lot faster and easier to generate code at runtime using MethodHandles than with build-time code generation. For example, the toString/hashCode/equals methods of record types (being added in Java 14) are all generated at runtime via indy (invokedynamic) instead of having their bytecode generated at compile time.
> For example, the toString/hashCode/equals methods of record types (being added in Java 14) are all generated at runtime via indy (invokedynamic) instead of having their bytecode generated at compile time.
Didn't know that, interesting. I remember when invokeDynamic was added specifically to make VM conversion of dynamic languages easier, I guess JDK maintainers are somewhat guilty of the same laziness as the rest of us.
An aside, from reading about JVM targets of Haxe I learned that MethodHandles have truly terrible performance especially on Android.
I don't think the potential performance advantages of build-time generation are overblown, just that nobody, not even language designers, wants to deal with the added complexity.
I could be wrong of course, but because of this I think the C# idea of generators will end up underutilized as well
CoreRT is definitely the most exciting project in the .NET stable. But there seem to be some people within MS who don't want to go in that direction - mostly because it won't be compatible with all of the existing code out there.
I just wish they'd see that most users would gladly trade some compatibility for smaller size and faster execution.
Constructors are the only odd duck for which there is no efficient alternative. You either generate a delegate at runtime to call the right constructor, or you have to use reflection/Activator.CreateInstance.
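To make that concrete, here's a rough sketch of both options (type names are made up); the compiled-delegate route is fast to invoke but still needs runtime codegen, which is exactly what AOT takes away:

    using System;
    using System.Linq.Expressions;

    static class CtorCache<T>
    {
        // Compiled once per type; after that it's a direct constructor call.
        // Requires T to have a parameterless constructor, and requires
        // runtime code generation to build the delegate.
        public static readonly Func<T> Create =
            Expression.Lambda<Func<T>>(Expression.New(typeof(T))).Compile();
    }

    class Widget { }

    class Demo
    {
        static void Main()
        {
            var fast = CtorCache<Widget>.Create();                        // runtime-generated delegate
            var slow = (Widget)Activator.CreateInstance(typeof(Widget));  // plain reflection
            Console.WriteLine($"{fast} {slow}");
        }
    }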
What's the reason you can't use AoT compilation together with serialization? The only thing that's problematic is reflection, but I wouldn't use reflection in my serialization if I can help it.
I am not going to enter into a debate about the merits of various serializers, but if Newtonsoft.Json doesn't run, that's half of the NuGet packages that cannot run in that app. That pretty much guarantees its failure in terms of adoption.
Yeah I didn’t even consider text based serialization (json, xml). I know there is at least a couple of aot-aware json.net ports but obviously those aren’t going to be what existing packages use. It seems like a pretty small hurdle though. Once AoT is commonplace more packages will support it, including json.net. It might take three years or ten, but it will happen (I consider 5 years to be a blink of an eye in this space...)
C# code still uses too much reflection (serialization, ASP.NET routing, ORM, view model binding) to be statically compilable.
They seem to be doing something against that with code generators
This is exactly what newer versions of Java allow you to do. The problem is, nobody does it. I really like C# as a language, it's better than Java in nearly every way. But this is a rare case where C# is implementing a feature Java has had for a very long time. And just like in Java, I don't think it will have the impact advertised.
The funny thing is that a lot of reflection is made necessary because of the lack of a preprocessor. Considering that everything old comes back after a few decades as a new innovation with a shiny new name I wouldn't be surprised if the preprocessor came back to support static compilation.
C# and Java are both guilty of this. I don't think generators will help nearly as much as hoped, because Java has the same problem even though it has supported build-time code generation through annotation processors for over a decade (since Java 6, I think).
Reflection is just easier. Unless you go out of your way to use libraries that use code generation, Java is just as bad as C# in this regard.
There are a few bright spots, like MapStruct for object->object mappings and Dagger2 for dependency injection, that are built entirely on code generation. But these are exceptions to the norm.
> Pieces of our software get deployed over a satellite link so yeah, megabytes matter (and that's why I don't even dare to propose using .NET for those parts).
Megabytes are a one-time thing, unless your app is HUGE. Once you've deployed the runtime and dependencies, you only need to redeploy your application DLLs.
And a license that's going to run you 1700€ the first year and 440€ on renewal. It's spare change for a successful company, but I have to wonder if it doesn't hamper adoption, especially when there's a seemingly obvious free and open source alternative like Lazarus + Free Pascal.
I'm writing a pretty serious app in FreePascal/Lazarus right now that's working out nicely. At some point the Delphi cost may be justified, but not right now. Plus, the code injection macros in FPC (which Delphi doesn't have) are addictive. I haven't tried on macOS yet, but the ability to compile transparently across Windows/Linux for desktop UI is pretty amazing.
I can see why most would not start a new project in Delphi, but in some regards it's really still unbeatable. Small, self-contained executables definitely is one.
Yes, it's a nice language. The real advantage of Delphi though is the component library; it's the real killer app of Delphi. I find Object Pascal a bit dated, but the VCL and FireMonkey (cross-platform component library) are things that make Delphi awesome.
Even small utilities in .NET Core are ridiculously big and have tens of DLL files. Maybe proper dependency management is needed so that only the parts of the framework that are actually used are included in the executable.
Trimming is already supported and they're investing heavily into making it more aggressive. A web app was about 17 MB in one of the demos at the conference that I saw. You can get it smaller if you make it more aggressive yourself, but you may have to whitelist specific namespaces if it accidentally removes something you're using that it can't detect.
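For reference, the opt-in looks roughly like this in the project file today, if I remember the property/item names right (the rooted assembly name below is made up):

    <!-- In the .csproj: enable trimming, and root an assembly that is only
         reached via reflection so the trimmer doesn't strip it. -->
    <PropertyGroup>
      <PublishTrimmed>true</PublishTrimmed>
    </PropertyGroup>
    <ItemGroup>
      <TrimmerRootAssembly Include="MyCompany.PluginContracts" />
    </ItemGroup>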
I would consider this still VERY big. It's ok for backend stuff. But now you can also use .NET --Core-- with WinForms, WPF and WinUI. That means, GUI Apps.
You'll have several GUI apps installed on a user's system...
UWP already handled that years ago, though.
.msix/.appx supports dependencies properly. If the app targets a .NET version that hasn't been downloaded yet, it just... downloads automatically on install. UWP .NET on Windows also ships inside .appx packages.
Yeah, I often use this feature for "small" utility programs that I want to copy around without an installer and just want to make sure they work without trouble.
You basically have to directly fiddle with the flags to the IL linker to really get the size down. It's a pain. They are working on designs to make it better:
I think these changes can make self contained .NET apps compare more favorably to Go apps, at least for larger applications. It would probably take something more like CoreRT to get app size to be competitive with Rust and C.
In contrast, a pure Win32 Hello World (with GUI) can be 2KB or less with only the addition of some non-default linker options.
That's 3 orders of magnitude difference. Obviously it won't be as much with more complex applications, but it's still funny to see others here considering a dozen MB or so for doing something trivial to be small, when that's the size of a full installation of Windows 3.11 complete with all its built-in apps.
Unfortunately I have doubts. This is one of the few features copied from Java instead of the other way around. You can pre-process code in Java and generate classes at build time instead of reflecting, but I know of just a handful of libraries that leverage this.
What kind of behavior relies on reflection in C#? Is this widespread or isolatable? Is it necessary to use strings to look up symbols in C# or can you distinguish between strings and identifiers?
> We're talking about saving how much disk space anyway?
Surely this is relevant to how the customer values disk space, not the developer.
> What kind of behavior relies on reflection in C#?
Tons. Serialization, for one. And plugin systems are commonplace.
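A trivial sketch of why serializers reach for it: you can walk the properties of an arbitrary object at runtime without knowing its type at compile time (type and method names here are made up):

    using System;
    using System.Linq;

    class Person { public string Name { get; set; } = "Ada"; public int Age { get; set; } = 36; }

    class Demo
    {
        // Reflection-based "serializer": enumerate properties and read values by name.
        static string ToPseudoJson(object obj) =>
            "{" + string.Join(",",
                obj.GetType().GetProperties()
                   .Select(p => $"\"{p.Name}\":\"{p.GetValue(obj)}\"")) + "}";

        static void Main() => Console.WriteLine(ToPseudoJson(new Person()));
    }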
> You have zero grounds to dictate the value of resources.
Nonsense and worse words. We know how much disk costs. It is a rounding error for the overwhelming majority of people and the overwhelming majority of apps. Prioritize what matters.
>We know how much disk costs. It is a rounding error for the overwhelming majority of people and the overwhelming majority of apps.
We know how much disk costs, but at any given moment a certain non-negligible share of businesses will be working with very old equipment in some of their branch offices, self-service terminals, factory floors, labs, etc.
I agree that 17 MB will rarely be a show stopper in isolation. But resource consumption is critical if Microsoft wants .NET to be a universal solution for business computing needs. Memory is far more likely to be the bottleneck in my experience.
If a company has modern equipment in 95% of their locations, but the remaining 5% are difficult to upgrade for some reason (cost of disk is unlikely to be that reason), then those 5% will determine which technologies are even taken into consideration.
Download size and speed matters. Disk space matters especially if you're working with relatively expensive SSDs. Of course, how much of a problem this is depends on which country you're working in, company's laptop policy etc
Surely this is doable statically. What is the advantage of doing this at runtime rather than at compile time?
> Nonsense and worse words. We know how much disk costs. It is a rounding error for the overwhelming majority of people and the overwhelming majority of apps. Prioritize what matters.
Got a citation? Disk space is the only variable that people even know to complain about... Not sure what you're drawing this from but it stinks to high heaven of corporate propaganda.
>Got a citation? Disk space is the only variable that people even know to complain about... Not sure what you're drawing this from but it stinks to high heaven of corporate propaganda.
I'm not sure where "Disk space is the only variable that people even know to complain about" did come from, but games use 100-150 GBs nowadays, so things like 17 MB are basically irrelevant unless you do verrrrrrrrry specific stuff
Not everyone, let's say me, plays games. I find it insane that I am installing software packages that take up 10s of mbs or multiple gigs in some cases. It's just plain lazy in most cases that I saw. Maybe for games it's different, but I have a 256gb ssd in my macbook and it's complaining all the time it's full, without games; I don't find anything like this irrelevant. There is almost never a need for huge packages (maybe outside games, I don't know about that, again).
> I'm not sure where "Disk space is the only variable that people even know to complain about" did come from, but games use 100-150 GBs nowadays, so things like 17 MB are basically irrelevant unless you do verrrrrrrrry specific stuff
If you have a phone with 8G of space like I do, obviously games which require 150G are beyond my means. How does this work towards disk space being irrelevant?
Huh? I complain about RAM usage and garbage collection CPU time in my Electron apps just plenty. And those apps are huge too. I've never once needed to uninstall an application because I needed to reclaim the disk space. Just games and media files.
If you're talking about the server side....well I think you'd save a lot more on less vCPU than a little more attached storage.
So let me put it back on you: except maybe IoT, where do you run into problems where your app takes up too much disk space?
EDIT: Maybe it's an update/bandwidth thing? Or a Docker pull time in CI? I'm trying to play devil's advocate here...
> I complain about RAM usage and garbage collection CPU time in my Electron apps just plenty.
What does this have to do with C#? Surely you hold it to a higher standard than electron of all things—C# has been around for 18 years. Electron is just repackaging a browser as an app. Is this the standard to which microsoft holds themselves? Might as well sell scripts for google docs....
Right. It's not that "normal" C# code uses a lot of reflection, but it's useful in a lot of small places. So the chances of any given program requiring at least some is pretty high.
Yeah, I very rarely write reflection code (and 90% of the time its attribute related) but I can't remember the last time I worked on a non toy project that didn't leverage reflection somewhere.
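A typical example of the attribute-related kind (made-up names): scan the loaded assembly for types carrying a marker attribute, e.g. to auto-register handlers.

    using System;
    using System.Linq;
    using System.Reflection;

    [AttributeUsage(AttributeTargets.Class)]
    class HandlerAttribute : Attribute { }

    [Handler] class PingHandler { }

    class Demo
    {
        static void Main()
        {
            // Find every class in this assembly marked with [Handler].
            var handlers = Assembly.GetExecutingAssembly().GetTypes()
                .Where(t => t.GetCustomAttribute<HandlerAttribute>() != null);
            foreach (var t in handlers) Console.WriteLine(t.Name);
        }
    }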
BTW, one thing I've always liked about .NET is that the binary is an executable and you run that, not the interpreter/VM process. When you look at the list of processes running you see your process, not java or python.
On windows the dotnet runtime is so ubiquitous I'm a bit surprised this is needed.
It's not just the runtime, but dependent assemblies too.
E.g. I have one app that has 4 assemblies (the program + 3 internal libraries), 5 more "internal" general assemblies/libraries of mine, and 12 dependencies from NuGet, for a grand total of 21 (+ 1 wrapper .exe) files. Now double that number if you want the PDBs (debug/symbol files) too. Without the runtime.
Shipping 21 files isn't horrible (I have seen node_modules directories in the hundreds of thousands of files), but shipping one single self-contained .exe would be even nicer.
You can kinda use ILMerge (and some hacking) to achieve a similar result, but it can break in subtle ways (especially when you try to debug something) and does not work with .NET Core as far as I know. And it's not officially supported either.
I'm confused about something -- does this "single-file" contain the full .NET runtime, a-la a Go executable? Or is this single-file just the full set of managed code, packed together, and still requires a runtime to execute?
Yes, it can contain the .NET runtime like a Go executable. You can already (mostly) do this today with .NET Core 3/3.1 (I say "mostly" just because it simply unzips the runtime on first run). I deploy to Linux with everything in a single file and it's been perfect - headache-less deploys. You can read more at https://github.com/dotnet/designs/blob/master/accepted/2020/... and https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-pu.... I haven't had any problems with it at all. The size is a bit bigger, but I don't really care about that and then I can just run it like anything else without worrying about making sure that the .NET runtime is on my deploy system and kept up to date.
You can also use the PublishReadyToRun flag to do AOT compilation so that startup times are faster (https://docs.microsoft.com/en-us/dotnet/core/whats-new/dotne...). It still keeps the IL (intermediate language) around for certain things so the file sizes can be a bit larger, but if startup times are a concern, it's an option.
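For anyone who wants to try it, the whole thing is driven by publish flags, something like this (from memory; check the docs above for the exact combinations your SDK version supports):

    # Self-contained, single-file, ReadyToRun publish for Linux
    dotnet publish -c Release -r linux-x64 \
        -p:PublishSingleFile=true \
        -p:PublishReadyToRun=true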
If we can do that (and don't pay the sluggish startup time that some mentioned we currently do) then yes, absolutely.
It doesn't bloat the executable with binaries that are already installed on the machine, and the executable benefits from future updates to the framework.
On Unix we will have single executables with the runtime bundled inside. This mode is being referred to as "SuperHost".
Windows however will have some files left on disk:
> Ideally, we should use the SuperHost for self-contained single-file apps on Windows too. However, due to certain limitations in debugging experience and the ability to collect Watson dumps, etc., the CoreCLR libraries are left on disk beside the app.
Single exes are nice, but people still don't want to download and run single exes. Once you use an installer or package manager to grab it anyway, doesn't the benefit become smaller?
I guess for some scenarios such as deploying webapps etc, it might be very handy to copy ONE file and run it. But for client apps, is there a visible improvement to having 1 exe instead of 1 exe and 5 dlls, which are the same size together?
I've written a few utilities that have been released to the public. In my eyes the most important aspect of a single executable is simplicity. I personally prefer applications that I can just drag to any folder. I think, even though I don't like the OS as a whole, macOS does a pretty good job at creating a compromise with .app folders. You can keep framework dependencies, embedded resources, etc contained within one area that you can move around easily, but you don't have to package it up in the binary.
I see the whole thing as more of an issue of your target demographic and the scale of your application. Things that run as background services would make sense to distribute as an installable package. Larger applications like Office or Visual Studio are much too large to throw in a single executable. Something else though like Acrobat Reader or FileZilla I think would make sense to distribute as a single executable. Most times I just don't want to install anything. I use FTP so seldomly that I'd rather want to just download a portable FTP client than keep something installed or even have to extract a multi-file archive somewhere.
I wrote an amalgamation tool once that took all my project files hierarchically together and built one big source file, after compiling it became one EXE file. Made it super easy to move projects to different machines for compilation.
On most Windows machines where the .NET Framework is installed you can find a compiler.
my motivation for building this tool was to have access to some basic tools in a locked-down enterprise environment *ducks*
About a decade back I had to write this goofy little custom client-server deal. I hadn't done Visual Basic since 6, but I thought I'd give the client a go in VB.NET, using whatever we had at the time. Aside from feeling like a chimpanzee dropped into the cockpit of a F-22 at the IDE (which kept doing things), the people who wanted the project were fairly annoyed that I couldn't figure out how to smash everything down into a single .exe.
I ended up being so annoyed by the experience that I wrote the server backend in Python and tried out the Python "compilers" for the next client.
When you have something that is going to be scattered across maybe ten or twenty machines tops for a small project, for whatever reason, people just like the .exe and I can't say I blame them.
The first project was VB.NET client, Python server.
Project two was a different problem, and compiled Python client, Python server. I dimly recall using something like "freeze" for Python to generate something with many fewer files. I changed my methodology partially based on my dislike of the IDE that Visual Basic had at the time and partially due to the criticisms of the VB.NET client I had produced.
I actually very recently wrote a project originally as a single-binary .NET WebAPI, but rewrote it in Go because runtime memory usage was comparatively insane. That is likely WebAPI's fault though, so I'm optimistic about sticking with .NET in the future for these single-binary scenarios.
The thing go has though is that single-binary cross-compilation is _much_ easier to grok. It took a lot of reading .net docs to understand what incantations I needed to pass to the compiler.
Was this on .NET Core 2? If yes, then memory usage for moderate apps has gone down significantly. Turns out they just optimized for demanding apps before.
Nothing really, it just loaded up a JSON file into memory and then served 2 endpoints to query that structure. I dont have the .net version uploaded, but here is the go version [1]. The 1gb might have been while running load testing on it, I dont remember entirely. It was very consistently hitting it though.
As a .NET fan, this is one less thing for me to be tempted by Go for.
The other large remaining point is start up time for e.g CLIs and AWS Lambdas. .NET Core has been making great progress there but I think a true native executable will always beat it.
What is truly native is debatable. Go has a runtime too. .NET has some AOT compilation to native code now and it has trimming so that it doesn't include things you don't need.
Go will likely continue to have a bit of an advantage for short-running processes, but .NET and Java are both likely to start getting into this space. Micronaut is advertising "startup in tens of milliseconds with GraalVM". Microsoft is working on integrating all the Xamarin/Mono/Framework/Core work into one .NET and there's a lot of great stuff there. CoreRT isn't going to be productized, but it's likely that a lot of the ideas will become part of .NET in the future. We've seen announcements about compile-time code generation which will help .NET avoid reflection in AOT-compiled scenarios.
Go is very successful for a reason, but Java and .NET aren't ignoring Go's advantages. I think there was a bit of complacency for a while. Java was the open-source statically-typed platform and C# was the Microsoft one. As new languages like Scala, Kotlin, and Go came on the scene, there was renewed interest in pushing Java forward and as Microsoft pivoted away from a Windows-first-and-only state of mind, there were a lot of areas to push C# into (with help from the Mono/Xamarin folk). AOT compilation is going to be important for things like iOS development and WebAssembly which both look like they're going to be big emphases for Microsoft going forward (they've announced how they're going to unify iOS/Android/Windows/Mac development with .NET 6 and Blazor seems like one of the more exciting WASM attempts).
Again, not taking anything away from Go, but I think Java and .NET will both be making strides in startup times and AOT compilation.
.NET 5 unifies Mono and .NET Core, not .NET Core and Framework. Previously Mono and .NET Core had different base class libraries and machine code generators (Mono using LLVM).
Mono is an implementation of the .NET Framework. While the code of v5 might be based on Mono and Core, v5 is also unifying the APIs between Core and Framework.
Most of this is already true because of .NET Standard 2.0, which latest of Mono, Framework, and Core already implement. Large swaths of the extant OSS libraries for .NET have already converted to .NET Standard 2.0. There are mostly only niche use cases left that today that you can't build into .NET Standard libraries and import into programs compiled for any of the extant CLRs.
Currently the single file solution is a zip that unzips into directory with many files. It was a great stop gap solution but not the final thing (hopefully).
Okay, didn't realize that was the case. I had used the single-file solution a few times, but mostly just for the convenience of deployment, never really paying attention to any other aspects of it.
There is a flag that makes it into a single file. But the load time of this exe is very slow. It looks like it just packs all the assemblies together and extracts them during load.
I just did this for a simple project. I indeed noticed that the startup time was extremely slow. The subsequent run-time was faster though, although still not that fast. So something interesting was happening on a per-terminal-session basis.
"Which is why people should use a language that isn't riddled with issues like this."
You can't pick a language based on one characteristic. Show me a language that doesn't have a long series of stupid issues. Usually you have to pick the lesser evil.
[1] https://github.com/dotnet/corert/issues/7200#issuecomment-62... [2] https://github.com/dotnet/corert [3] https://devblogs.microsoft.com/dotnet/introducing-c-source-g...