First Rust code in the Windows 11 kernel (thurrott.com)
206 points by wojtczyk on May 13, 2023 | 165 comments



On a related note, Azure just endorsed Rust to replace C/C++ as the non-garbage-collected language of choice.

https://azure.microsoft.com/en-us/blog/microsoft-azure-secur...

> Rust as the path forward over C/C++

> Decades of vulnerabilities have proven how difficult it is to prevent memory-corrupting bugs when using C/C++. While garbage-collected languages like C# or Java have proven more resilient to these issues, there are scenarios where they cannot be used. For such cases, we’re betting on Rust as the alternative to C/C++. Rust is a modern language designed to compete with the performance of C/C++, but with memory safety and thread safety guarantees built into the language. While we are not able to rewrite everything in Rust overnight, we’ve already adopted Rust in some of the most critical components of Azure’s infrastructure. We expect our adoption of Rust to expand substantially over time.


This is great but I do wish that people could see Rust as more than a better C/C++. It's an amazing language for web services and CLIs.

It's not just that it doesn't have a GC. It also doesn't have some of the worst features of OOP, it also has a sane std library, it also has an expressive type system, etc. Rust helps you write better code, even when compared to languages like Java or Python.


It’s as amazing for web services as C++ is. There might be some niche use case, but it would not be my first, nor second, choice there.

It is still a low-level language, and while it is expressive, that doesn’t stop the low-level constructs from leaking to high level abstractions — which is not a negative in itself, that’s the purpose of the language. It’s just a hard tradeoff when it comes to anything not needing low level considerations.


Anecdotally, async Tide (an HTTP framework for Rust) outperforms any other HTTP framework or library I've ever used in the C/C++ space, including H2O.

Pair that with minimal memory issues with Rust, and guaranteed none if the framework avoids unsafe blocks.
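
For reference, a minimal Tide service looks roughly like this (a sketch based on Tide's documented hello-world shape; it assumes the `tide` and `async-std` crates, and the route and address are arbitrary):

    #[async_std::main]
    async fn main() -> tide::Result<()> {
        let mut app = tide::new();
        // Handlers are plain async closures returning a tide::Result.
        app.at("/").get(|_req| async { Ok("hello") });
        app.listen("127.0.0.1:8080").await?;
        Ok(())
    }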


What point are you trying to make exactly? A framework being able to process a few more requests per second than another one in a synthetic benchmark doesn't change the fact that low level languages are simply not ergonomic for making websites.


* in your opinion.


What's low level about Rust is that memory operations are exposed somewhat. So you don't just have copies or automatic memory management, you have pointers. It's not really that hard, and actually it's wayyyy easier to understand than a lot of the weird reference rules in other languages.

For example, lots of people trip on `append` in Go because it tries to hide pointer semantics.

Having to write `&` sometimes is just not burdensome and it's frankly sad that so many professional developers, many with so-called computer science degrees, seem to think that a basic understanding of computers is too high a burden.
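
To make that concrete, here's a hedged sketch of the kind of `&` in question (all names made up):

    fn total_len(names: &[String]) -> usize {
        // The `&` in the signature says: we only borrow, we don't take ownership.
        names.iter().map(|s| s.len()).sum()
    }

    fn main() {
        let names = vec![String::from("alice"), String::from("bob")];
        let n = total_len(&names); // explicit borrow at the call site
        println!("{} bytes across {} names", n, names.len()); // names still usable
    }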


You are fighting a strawman, and it’s honestly quite the ego to look down on the majority of people working the same profession as you.


What straw man? You made these assertions:

1. Rust is low level

2. Low level details leak through abstractions

3. This creates a difficult trade off for use cases that do not require manipulation of low level constructs

I am making these assertions:

1. Pointers are not particularly "low level"

2. The tradeoffs when pointers are abstracted away are worse

3. It is sad that trained professionals believe that pointers are difficult to understand


The trade-off in complexity and loss of productivity (particularly at prototype time) are just not worth it.


I've been using it for 2 years now and I can't see any loss in productivity. Once you get used to the patterns Rust and the borrow checker force upon you, it's just as productive as C++ (for me). But you have to get up to speed and that's a significant effort (took about 6-12 months for me).

I can't say much about the ecosystem besides that it lacks something in the desktop GUI department (I use egui; it's really good, but it's just not strong enough). So, yeah, sometimes you miss a library and that can hurt productivity.

But the protections against memory and thread issues really help productivity: I tend to make lots of mistakes in these areas and Rust helps to prevent many of them.


> Once you get used to the patterns Rust and the borrow checker force upon you, it's just as productive as C++ (for me)

Well, it makes sense as both are low-level languages. Hell, one might even argue that Rust is “nothing more” than C++’s RAII made into a compiler-enforced primitive.

But C++ is not considered a high-productivity language to begin with, at least not many would choose it for a regular old CRUD app.


I’ve also been using rust full time for the last couple of years after many years in the javascript / web ecosystem. I can’t speak to how it compares to C++, but I am still far less productive in rust than I am in typescript. I use rust for production systems, but prototype in typescript.


Like me: for math-heavy or conceptual reflection, I go to Python to prototype things. Once the prototype is good, I port everything to Rust. I don't know if it makes me overall more or less productive.


> I've been using it for 2 years now and I can't see any loss in productivity

While I am a big fan and all my personal projects are written in Rust, introducing Rust will likely paralyse entire teams for months, until they get used to the borrow checker. ORMs are quite clunky (at least Diesel) and web frameworks are not as user-friendly as ASP.NET MVC, and that’s 80% of what the average developer needs today.


We made a short-term decision to develop a new project in Rust only three months ago.

As we build vending machines and such things, our products usually contain multiple processors that communicate over various connections and there's a lot of interaction with real-world, physical things. Also, as there are a lot of microcontrollers involved, C and C++ are the dominant languages in our development.

So, a lot of our development time actually goes into handling the general unpredictability of the real world and of humans interacting with actual machines.

Rust enables us to focus on solving those problems without getting held up by stupid memory issues. In our case, iterating on new software versions is often slow because it may involve restarting a big machine and doing actual user interaction with it – the extra time Rust takes to compile is easily won back for us.


Personally I regard Rust as strictly less complex than most other languages (to use - not to learn). The space of programs you're searching through is smaller and more constrained (there are fewer valid Rust programs than valid Python programs, and we can make stronger statements about the behavior of a valid Rust program). Ergo, simpler. Internalizing those constraints so they aren't generally slowing you down takes an investment however.

Recently I had a vexing silent bug in a prototype, where I was intermingling strings and ints as keys in a dictionary, and thus not writing to the correct keys. Normally I would be more careful than that, but it was a quick and dirty proof of concept. Because this involved JSON deserialization, Python's modest static type checking couldn't catch it.

That just wouldn't have happened in Rust. (But of course, I wouldn't have had the particular ML libraries I needed, either, so Python remains the best option for this project.)
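
For illustration, a simplified sketch of why that key mixup can't happen with a Rust map (not the actual project code, obviously):

    use std::collections::HashMap;

    fn main() {
        // The key type is pinned down once, here as String.
        let mut scores: HashMap<String, u32> = HashMap::new();
        scores.insert("42".to_string(), 10);

        // scores.insert(42, 10); // compile error: expected String, found integer

        println!("{:?}", scores.get("42")); // Some(10)
    }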


> The space of programs you're searching through is smaller and more constrained

People don’t write programs by scanning through the space of possible ones, so I don’t see why that would not be a hindrance, let alone a help.

Also, static types, or at least optional typing is at this point pretty much more common than dynamic typing, so I fail to see your example making Rust a better choice than “most other languages” for this reason.

Especially since the real strictness of Rust comes from the ownership system, which allows basically only tree-shaped lifetimes, and that’s not a restriction like a program being type-safe, where non-type-safe programs don’t make sense. It is an “arbitrary” constraint that has obvious advantages for non-memory-managed languages, but is absolutely not a tradeoff I would take when not necessary, as random lifetimes are just as valid.


> People don’t write programs by scanning through the space of possible ones

I definitely search when I write programs. Do you write a program from start to finish and it works, or do you iterate and debug and sometimes find you've made a misstep & need to change your approach?

Python's optional typing wasn't capable of spotting my issue, that was my point. Note I didn't say it was a better choice. I said, "strictly less complex." I wouldn't make a claim that Rust is "better." That only has meaning relative to a set of requirements. (Furthermore, I mentioned Python was a better choice for my project.)

Tree shaped lifetimes are a very robust and easy to work with set of lifetimes. Of course you can add more degrees of freedom and write programs with graph lifetimes. That's fine, but it's pretty clearly more complex. I view allowing arbitrary lifetimes as something I don't want unless it's, well, "absolutely necessary" is too strong, but unless there's a compelling reason. It's not something I want to spend my complexity budget on, because it makes it more difficult to reason about and debug a program.


>Of course you can add more degrees of freedom and write programs with graph lifetimes. That's fine, but it's pretty clearly more complex.

How do you measure the complexity you're talking about here?

What if a program using graph shaped lifetimes was 10 times shorter than a program that had to use tree shaped lifetimes? Would you still say that the tree shaped program is less complex?


I don't think "number of tokens in the source code" is a great measure, no. More like, how many possibilities are there for the way this program will execute? How many potential branches are there? How many things can go wrong, and do we know if and when they will go wrong?

With Python for instance, it's just really difficult to say, because you may have arbitrary exceptions at any point. When writing long lived services, this is very significant; you need to handle all the exceptions that may be raised, but you can't actually determine what that set is. I've had to read the source code of my dependencies on more than one occasion to figure this out, and still, you never get all of them before production.

With Rust this is much more manageable. Technically you could have an arbitrary panic at each step, but it's much less common to catch and handle panics than it is exceptions, and recoverable errors are handled via return values. Relative to Python, it's a breeze to figure out when errors may happen and what to do about them.
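
A tiny sketch of what "recoverable errors via return values" looks like (the function name is made up):

    use std::num::ParseIntError;

    // The possible failure is right there in the signature; callers must
    // either handle it or pass it up with `?`.
    fn parse_port(s: &str) -> Result<u16, ParseIntError> {
        s.trim().parse::<u16>()
    }

    fn main() {
        match parse_port("8080") {
            Ok(p) => println!("listening on {}", p),
            Err(e) => eprintln!("bad port: {}", e),
        }
    }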

Similarly with my example before, what type are the keys in a Python dictionary? Probably the ones you meant to put there, but who really knows. What types are in the keys of a Rust dictionary? The ones you put there explicitly, full stop. There's no way for that to get screwed up or change from under you.


I kinda answered the wrong question here, that's how you would determine which program was more complex. Regarding whether trees or graphs are more complex - graphs are a superset of trees with additional degrees of freedom. Degrees of freedom are complexity. Graphs are more complex than trees and trees are more complex than lists, because we're drilling down into increasingly restricted subsets where we can make stronger and stronger generalizations.


I understand what you're getting at. A more restrictive language allows you to exclude a lot of things that cannot possibly happen and therefore you don't have to think about them any longer, which means less complexity.

But I'm still not sure whether this idea is sufficient to explain away the situation where you have to write a far longer program in order to work around those restrictions.

For instance, safe Rust cannot express linked lists in the usual way, i.e. using pointers. So in order to write a linked list in safe Rust, you would have to reimplement some of the functionality that pointers provide using array indices (or similar).

This program would be very long indeed, plus it would cause many of the same safety issues that safe Rust was supposed to avoid. Essentially, what this program would do is create an unsafe memory management system in safe Rust.
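
A minimal sketch of that index-based approach (hypothetical and unoptimized; indices play the role of pointers):

    // A singly linked list in safe Rust: nodes live in a Vec,
    // and "pointers" are indices into it.
    struct Node<T> {
        value: T,
        next: Option<usize>, // index of the next node, if any
    }

    struct List<T> {
        nodes: Vec<Node<T>>,
        head: Option<usize>,
    }

    impl<T> List<T> {
        fn new() -> Self {
            List { nodes: Vec::new(), head: None }
        }

        fn push_front(&mut self, value: T) {
            let idx = self.nodes.len();
            self.nodes.push(Node { value, next: self.head });
            self.head = Some(idx);
        }

        fn iter(&self) -> impl Iterator<Item = &T> {
            let mut cur = self.head;
            std::iter::from_fn(move || {
                let i = cur?;
                cur = self.nodes[i].next;
                Some(&self.nodes[i].value)
            })
        }
    }

    fn main() {
        let mut list = List::new();
        list.push_front(3);
        list.push_front(2);
        list.push_front(1);
        for v in list.iter() {
            println!("{}", v);
        }
    }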

I believe that the dominant factor for complexity of a program is its length. If you have two programs that produce the same output for the same input and one is 10 tokens long while the other one is 10,000 tokens long then the first program will always be less complex.

I say "I believe" because I'm a bit out of my depth here. These claims (mine and yours) clearly need formal proofs and some formal definition of complexity.


I actually don't think it's as difficult as people often suggest to write a doubly linked list in Rust (yes, I have done it, and read Learning Rust by Implementing Entirely Too Many Linked Lists). I think it's just surprising to people that something that's a CS101 data structure takes some advanced features to implement without unsafe.

But the thing is, you don't write C programs in Rust and you don't generally use doubly linked lists. (Linked lists turn out to be a very niche data structure that doesn't work well with modern CPUs anyway, but I digress.) You'd probably use a Vec or a BTree from the stdlib anywhere you're thinking of using a linked list.

So I don't think it's really the case that programs are significantly longer in Rust. Rust is more explicit, so you'll end up moving some things that existed as comments or documentation or just in your own head into code - that's a win that doesn't increase complexity, only exposes it. That program may look larger, but it's only because you can see it better.

It really depends on what those 10 tokens are doing. If I have a token that creates a new universe, seeds it with life, and creates the conditions for life to develop an optimal linked list - it might solve the problem in one step, but our atomic unit here is absolutely massive.

Similarly, if I compile a program to assembly I'll generally get many more tokens. But I can't really buy that I've increased the complexity of the program here.

I'm pretty satisfied with this understanding but I understand your desire for greater rigor.


>But the thing is, you don't write C programs in Rust and you don't generally use doubly linked lists.

The usefulness of linked lists is entirely beside the point. They just serve as an example for situations where additional restrictions can cause a program to be longer or more difficult to understand.

>It really depends on what those 10 tokens are doing. If I have a token that creates a new universe, seeds it with life, and creates the conditions for life to develop an optimal linked list - it might solve the problem in one step, but our atomic unit here is absolutely massive.

This example shows why I think that your claims lack a definition of complexity. You're now saying that the output of a program determines its complexity. I don't think that's a useful measure of complexity because it doesn't allow us to compare the complexity of two programs that produce the same output.


Not the particular output, no. It was how much was going on inside that hypothetical instruction, how massive of an abstraction it was, the unpredictability of the output. If you like you can think of it was a giant decision tree and imagine counting the branches to measure the complexity.


> Especially since the real strictness of Rust comes from the ownership system, which allows basically only tree-shaped lifetimes, and that’s not a restriction like a program being type-safe, where non-type-safe programs don’t make sense. It is an “arbitrary” constraint that has obvious advantages for non-memory-managed languages, but is absolutely not a tradeoff I would take when not necessary, as random lifetimes are just as valid.

Rust's tree-shaped lifetimes are a big benefit for me, as someone who values not wasting RAM.

(Seriously. Why did I have to upgrade my computer to 32GiB of RAM when I haven't appreciably changed what I use it for since I bought it with 4GiB of RAM and sized that amount based on intent to run a VirtualBox VM or two? ...JavaScript bloat. ...though I will admit that I also upgraded the CPU, RAM, and motherboard in the transition from 4GiB to 16GiB... but that was because a RAM socket went bad.)

I think of borrow-checker errors as "Wait a minute. Can you clarify what you intended here? Did you want multiple ownership? A copy? Was accessing it this far down a mistake?"


There are infinitely many Rust and Python programs, just as there are infinitely many English sentences. Search is a really weird way to talk about programming.


I'm gunnuh address this because I think these are interesting nitty gritty details, but note that these are metaphors expressed in a shared vocabulary that happens to be technical, let's not get too caught up in how many Rust programs there are, that's a more literal interpretation than I intended.

If you cap the number of tokens, there are fewer Rust programs. If you prefer: things that may be done implicitly in Python must be done explicitly in Rust - there are fewer valid choices when constructing a program; or equivalently, as you write a Python program, the options for you to proceed fan out faster.

Really we aren't searching the whole space of programs though. We understand that adding no-ops to a program doesn't move us out of the equivalency class of that program (unless we're actually using those for timing or another side effect, of course, but then they are no-ops in name only). We're only searching through programs that reasonably solve the problem at hand. Maybe that space is infinite, I don't think so.

I think of programming as being a fuzzy beam search (among other ways I sometimes think about programming). Your salience may vary, feel free to drop in a metaphor of your choosing there.


We can model programming as search, but programming as humans do it is more like design than search. I mean this in the qualitative and experiential sense.

I don’t sit in front of the screen as candidate programs flash by until I see the one that I want.

I also don’t edit by AST transformations from one valid program to another, like edges in a graph.

I really don’t see the possible program space as all that relevant.


> I also don’t edit by AST transformations from one valid program to another, like edges in a graph.

I don't really see how else you would edit a program? That's what you're doing when you type at your IDE, right?

But I think you're understanding how I think of it, feel free to take it or leave it. I'm not gunnuh die on a hill of insisting my way of thinking of it is right, it's just right for me.

> I really don’t see the possible program space as all that relevant.

In my example, I fell into a silently invalid state. In a more constrained environment, I wouldn't have. It's easier to go bowling with the bumpers turned on, and if your goal was to make as many strikes as possible (rather than to be sporting), surely you'd only ever bowl with the bumpers.

It's a Murphy's law thing. The more invalid states you have, the easier it is to get mired there.


Most programmers (including myself) are editing unstructured text and after that text is written the editor will tell them if it’s valid code or not.

Edit: I cannot reply, but there are structured program editors out there you may find interesting!


Yes, of course I do too, I live on the same planet, but that's an unstructured interface to changing the AST.

(You can reply to comments when the reply button is hidden by clicking the link to go directly to the post. I consider it a polite request by dang to consider whether a conversation is getting too heated, which I don't think it is here at all. Cheers, I'll try to check those editors out.)


I'd rephrase this as Rust enabling reasoning about program validity locally, while many other languages (regardless of type system, e.g. C, C++ and Python/Ruby/JS are typical examples) don't allow you to reason about program validity in localized increments. This promotes more modular designs, clearer interface boundaries and less "change anxiety".


> enabling reasoning about program validity locally

Only about a certain few properties. It is impossible in the general case.

I agree regarding “change anxiety” compared to dynamic languages, but that’s just static typing. Due to lifetime annotations leaking into API boundaries, the rest is not true though.


This really depends on your use case for it, as all things - it’s hard to just make blanket statements like this


On par with "Rust helps you write better code" in the parent comment.


Not at all. I added this context:

> It also doesn't have some of the worst features of OOP, it also has a sane std library, it also has an expressive type system, etc.

You made a baseless assertion, I at least gave examples of features/antifeatures that helped me draw my conclusion.


There is no noteworthy loss of productivity after some practice in my experience, except when comparing with languages like Python. But among languages I would consider for serious projects Rust is simply difficult to compete against. In the end you always pay the price and often knowing it upfront can help.

With prototyping it's a double-edged sword: it can be a bit more cumbersome to change types, but the same type-checks are helpful for pointing out details during refactoring. For me it helps keeping things simple at first, expanding types later on when needed is easy.


I obviously disagree based on years of writing Rust services.


I'm creating an operating system in Rust and have had more productivity than ever before, especially compared to when I started writing it in C. YMMV.


All true but it’s not as productive as a GC language from the ML family. If you don’t need the performance / latency guarantees I’m not sure it’s worth it.


The way I have seen it explained before: Rust is not the perfect applications language, but there actually aren't any good mainstream applications languages out there, so Rust is perhaps the best we've got.

What I want:

    * Powerful, strict typesystem (typeclasses, sum types, type-inference, closures)
    * Constrained mutation, no 'spooky action at a distance'
    * Safe multithreading
    * Fast
    * Memory-safe
    * Native binaries
    * Large, deep ecosystem
    * Great tooling
    * Jobs
Rust comes closer than most.


Funnily enough, Haskell also fulfills all the listed qualities.


That is not a coincidence :) I like Haskell a lot, but at this point I am willing to bet that I will never be paid to write it. It feels like the community of 'people like me' has collectively decided that Rust, not Haskell, is its best bet for a better life, and I'm fine with that.


How many jobs are out there where they expect you to know Haskell?


More than for Rust, given its age.


Not sure that this is true now, but willing to bet my career that this won't be true in 3 years.


Depends on how much people feel like writing userspace code with the usability of a language designed for kernels, drivers and GPGPU programming.

https://wiki.haskell.org/Haskell_in_industry


Rust is awesome but the ecosystem and job market are tiny.


> a GC language from the ML family.

What other option is there? I thought the same and wanted a higher-level GC'd language than Rust, and played some with OCaml, Haskell and F#. In theory they should be more productive, and I'm sure they are for things like compilers and parsers where they have top-quality libraries, but for most things I got the impression that even if I took the time to learn one of those languages well, the quality of libraries, documentation and tooling is so far ahead in Rust that it more than makes up for the productivity hit of Rust making you concern yourself more with low-level details.


Scala? It has access to every JVM library, and that may well be two orders of magnitude larger than Rust’s.


Scala is great, but the tooling is what ruined it for me. SBT is far from simple, Scala 3 basically reset the IDE support, and the compiler is abysmally slow (if you think the Rust compiler is not fast, don't look at Scala, which is like an order of magnitude worse).

And also the cost of FP is quite high on JVM. Paying 10-20x performance price for functional transformation chains over similar ones in Rust is a bit too much for me.


Scala suffers greatly from the "it lets you do too much".


F# has all of .NET available, too.


I come from a traditional "unix hacking" background and don't know the JVM or .NET ecosystems at all, and share the irrational distrust of them many unix hackers have, though I'd get over it if I could actually get stuff done faster with them. But some projects I do are lower level, or need high performance, reasonably small statically linked binaries, or instant startup time (CLI utilities), so rust is a better choice. For most of the rest a JVM or .NET language would be fine and should be more productive as it's a higher level language, but I don't know if I'd use it enough to learn the language and ecosystem and tooling well enough to be as productive as in python or rust, which cover pretty much anything I need to do at the moment.

I think that is also why the rust ecosystem will keep expanding and improving a lot. They said about python that it's not the best language for anything but the 2nd best for everything, which isn't really true, as python can't be used when you'd need C or C++, but rust really can be used for anything.


> “unix hacking" background and don't know at all the JVM or .NET ecosystems, and share the irrational distrust of them many unix hackers have

I sort of understand the distrust of .NET, but why the JVM? It was/is pretty much the epitome of open-source.


> I sort of understand the distrust of .NET, but why the JVM? It was/is pretty much the epitome of open-source.

Not the OP, but basically a few things:

First, for Java's formative years, https://www.gnu.org/philosophy/java-trap.html applied. (On a related note, see also http://endsoftpatents.org/2014/11/ms-net/ )

(TL;DR: The JVM was NOT "the epitome of open-source" for many years and it's still struggling with the knock-on effects of spending so many years being one of the only things that you couldn't install through your package manager for purely legal reasons.)

Second, I can't remember which blog post it was, but Eric S. Raymond has mentioned that the reason he found Python much more appealing than Java is that Java's standard library embodied an attempt to push people to write portable code in ways that made life difficult when your goal was explicitly to do POSIX-specific things.

As a Linux user who saw C# for the "Microsoft's take on Java after the Visual J++ lawsuit" that it was and, thus, never really saw any interest in it, I can say that C# gives that same impression of "ill-fitted for POSIX-native stuff" (packing its CLR bytecode into .EXE files doesn't help, even before you get to the assumption that you'll have Wine and Mono fighting over who should open .EXE files when double-clicked.)

Rust, by contrast, produces truly native binaries, has Cargo come standard and makes `cargo add libc` or `cargo add nix` trivial and reliable, etc. etc. etc.

Third, the JVM's tunings, optimized for long-running processes, and the start-up time and approach to memory allocation that resulted, gave it a reputation for being slow and bloated. The POSIX ecosystem has a history of encouraging composition of short-lived processes via shell scripts.

Fourth, Java has always let the quality of the GUI experience on X11 languish.

I still remember how you needed to set environment variables to work around "Java applications produce empty grey windows under non-reparenting window managers" for years and years.

I still remember when you had to open part of the JVM in a hex editor and replace XINERAMA with some nonsense string that doesn't match anything to un-break Java applications on multi-monitor systems.

TO THIS DAY, I still can't find a Java GUI widget toolkit that doesn't have a noticeable responsiveness/input latency problem under X11.

(Swing, SWT, JavaFX... that annoying fraction-of-a-second sluggishness is one reason I'm planning to write my own replacement for the parts of the new JavaFX-based version of PlayOnLinux that I actually use once I stop using the old Python version.)

I haven't tried QtJambi, but given that SWT, which should be using GTK on the backend, exhibits the problem, I don't hold out much hope.

(In essence, in the same way that Swift is ill-suited for stuff outside Apple devices, .NET has gained a stigma of "ill-suited for anything outside Microsoft platforms" (Is Unity's Linux support still only viable for targeting it, not developing on it?), and Java similar, but for JavaEE servers... and since your average POSIX developer sees AbstractThingFactoryBeans as a tired but too-accurate joke about what hell it is to write Java... you do the math.)


From your Java trap link:

“Since this article was first published, Sun (now part of Oracle) has relicensed most of its Java platform reference implementation under the GNU General Public License, and there is now a free development environment for Java. Thus, the Java language as such is no longer a trap. You must be careful, however, because not every Java platform is free. Sun continues distributing an executable Java platform which is nonfree, and other companies do so too”

Also, Stallman is quite an “extremist” and he is often himself the enemy of open-source by his gatekeeping, so there is that.

Re the GUIs: I think that says a bit more about the state of GUIs on linux than the reverse - and I say that as someone who has been using linux on every computer I own since forever.

You made some great points, but I think the real reason is even simpler: old graybeard linux users are very conservative in their technology takes. The overabundance of C software (and the unavoidable memory vulnerabilities that come with it) even in places where it makes zero sense is a clear sign of that, but so are the “systemd/pulse/wayland hater fangroups”.


First, note that I also pointed to Java spending years developing a reputation as one of the only things you couldn't install through the package manager for legal reasons.

People who don't care about Stallman's zealotry do still care about that and it presented a similar lasting problem to Java's Linux uptake as the problem D had more broadly with two competing and mutually incompatible standard libraries (Phobos and Tango) in the early years.

As for the GUIs, I listed three different ways unique to Java that Java GUIs were subpar on Linux:

1. Needing weird workarounds for applications to not display blank grey windows on non-reparenting window managers. (Basically, if the WM was something like a tiling WM that didn't reparent it from the root window to a WM-provided "titlebar and borders" parent window, the app would refuse to render anything without enabling the relevant hack.)

2. Caring so little about solving known bugs that slipped through QA that, for a shamefully long window of time, users with more than one monitor literally had to hex-edit their JVM to intentionally no-op the XINERAMA detection to make applications work. (i.e. XINERAMA being the X11 extension that allows Windows/Mac-style "one desktop stretched across multiple monitors" multi-head instead of the older "Behaves like a software KVM and applications are trapped on the monitors they opened on" Zaphod mode.)

3. Having some kind of input-response latency that I've never seen in another language or toolkit, which is apparently caused by something so fundamentally Java that it's present in every Java toolkit I've checked.

As far as greybeards go, I think we'd need more data points. Eric S. Raymond is quite an old-guard guy and even had some "jumps to conclusions"-y, "it's not enough like C"-ish reasons for choosing Go over Rust (eg. No `select` as part of the standard library, ignoring how Rust intentionally keeps the standard library minimal), but had no problem with using Python where it was suitable, while he didn't like Java because it was more of the ChromeOS or Android school of doing stuff on top of a POSIX base.


F# has the C# problem and is sloppy, with all that reflection and exceptions. Sure, C# is moving in the right direction, but it will take longer for the ecosystem to become non-sloppy.


Erlang / Elixir.


I tried Elixir and enjoyed it. I don't have strong feelings either way on static vs dynamic typing; I think the current craze for static typing is largely because of people's experience with javascript vs typescript. With Elixir I of course had runtime type errors, but trivial ones that show up on the first run; I don't think I had any hard-to-find bugs, or bugs that surface later, that ML typing would've caught. I'd absolutely pick it over rust for web backends, but I do very little of that; I'm mostly writing cli utilities on unix.

I've been experimenting with common lisp lately and it's a lot of fun: as fast to prototype with as python (when it has the libraries you need), performance around the level of java or go, a great development environment if you use emacs, and instant startup time for cli utilities, unlike beam or jvm. I think if it were made easier to make and cross-compile fully statically linked binaries, and it got something like maturin for python or rustler for elixir for easier access to a bigger ecosystem when needed, it'd be great for my uses, but at the moment I can't use it for much.


In my case, it's because I've been burned by Python's maintainability issues in ways MyPy isn't sufficient to fix.

It's not just Rust's type system, but:

1. Unlike with Haskell, the Rust compatibility promise has set the tone for how the ecosystem approaches API breakage.

2. Go-like "statically link everything into a single binary" compilation means that "just keep using the old build until things are fixed" is a valid answer to "an upgrade broke the build process".

3. Rust's type system enables design patterns like the typestate pattern, for encoding as many invariants in the type system for proving at compile time as possible (see the sketch after this list).

4. Rust's design prioritizes removing the need for global reasoning about program behaviour.
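
To illustrate point 3, a toy typestate sketch (all names hypothetical, not from any real API):

    use std::marker::PhantomData;

    struct Closed;
    struct Open;

    // The connection's state is part of its type, so misuse is a compile error.
    struct Conn<State> {
        addr: String,
        _state: PhantomData<State>,
    }

    impl Conn<Closed> {
        fn new(addr: &str) -> Self {
            Conn { addr: addr.to_string(), _state: PhantomData }
        }
        // Consumes the closed connection and returns an open one.
        fn open(self) -> Conn<Open> {
            Conn { addr: self.addr, _state: PhantomData }
        }
    }

    impl Conn<Open> {
        fn send(&self, msg: &str) {
            println!("sending {:?} to {}", msg, self.addr);
        }
    }

    fn main() {
        let conn = Conn::new("127.0.0.1:9000").open();
        conn.send("hello");
        // Conn::new("x").send("hello"); // would not compile: Conn<Closed> has no send()
    }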

Same reason I recently spent $10 on a used copy of the O'Reilly lex & yacc book to learn the warts of LR parsing. I plan to write my parsers via something like LALRPOP or grmtools instead of using nom (parser combinators) or pest (PEG parsing), which are currently more popular in the Rust world. (As with borrow-checking errors, shift/reduce and reduce/reduce conflicts aren't the bug, they're the feature. I read an article about how LR parsing allows the most detection of ambiguities in the grammar at compile time. Cry in the dojo, laugh on the battlefield.)

I'd rather pay up-front to avoid the stress of having a Sword of Damocles hanging over my head and feeling satisfied with Rust takes FAR less time than either burning out trying to replicate its type system in unit tests or playing bug whac-a-mole over the lifetime of the project.


I agree with this but the ownership model is really a huge learning curve for somebody who wants to make an application where it's OK if it crashes when there's a bug.

It would be cool if there was a "relaxed rust" which provided interop with standard rust, but used some designated smart pointer for everything automatically (at the crate level, for example). I'm not sure how/if this could work, but I think it would make transitioning from GC languages -> Rust much easier.


You can more or less do this - just wrap everything in Arc<Mutex<...>> and clone everywhere (or use Rc for strictly single-threaded code). I'm sure someone will point out where it doesn't work, but I suspect it would work fairly well, with obvious costs.
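
Roughly this style, as a hedged sketch (paying the refcount and locking costs mentioned):

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Shared mutable state without ownership gymnastics:
        // clone the Arc handle wherever another owner is needed.
        let counter = Arc::new(Mutex::new(0u64));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || *counter.lock().unwrap() += 1)
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        println!("count = {}", *counter.lock().unwrap());
    }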


Exactly, but I'd like to see it done automatically (possibly via a special glyph enabled via crate config?), so that users of this "relaxed rust" don't have to think about ownership at all, but can still participate in the rest of the Rust ecosystem, and there's a clear "off-ramp" to standard rust.


D tried that. The fact that so much of its ecosystem non-optionally depended on the optional garbage collection was part of what killed its chances for adoption beyond "like Java or C#, but without a giant corporation pushing for it".


Rust can have some bad effects on your brain. I noticed I can't read Java code as easily as before. E.g. yesterday I had to read it multiple times before I understood what was going on and why they are calling setters on an item that goes out of scope:

    List<Something> list = ...;
    for (...) {
        Something item = new Something();
        list.add(item);     // the list keeps a reference to item...
        item.setFoo(...);   // ...so these setters still matter, even though
        item.setBar(...);   // the local variable goes out of scope each iteration
    }


Would you say...your Java is getting rusty?


Haha I think that's just normal slow context-switching. I always start writing semi-colons in python after writing PHP for too long.


Any sign of async functions in traits without the macro? One kinda would expect first-class async support for web services, and honestly something like Swift probably has a better balance between usability and speed for most web services.



    async fn do_health_check_par<HC>(hc: HC)
    where
        HC: HealthCheck + Send + 'static,
        HC::check(): Send, // <-- associated return type
    {
        tokio::task::spawn(async move {
            if !hc.check().await {
                log_health_check_failure().await;
            }
        });
    }

Hmm, I think for 95% of people, sticking to something like Swift would work out better for them.


Are you saying 95% of use cases require spawning new tasks within the trait implementation based on argument futures? Or are you making a more general comment about rust generics? The MVP looks about as clean as I could hope for most use cases.


I am saying that in web services you will frequently use async functions that look like async fn do_blah(db_repo: impl DBRepo ...) {...} and are executed on async-std or Tokio


That will work. It only needs the additional bounds if you want to spawn that as a new task.


so if I am doing async calls on DBRepo inside that function they can not be moved to a different thread by a runtime like tokio?


One would typically not spawn those - as I understand it you need the type constraints if calling “tokio::spawn” (or some equivalent) but not if simply calling an async function in the same task.


Why is the macro not a solution?


Don't you get an extra heap allocation per call with the code that macro expands to?
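
Roughly, as I understand it, `#[async_trait]` desugars each method into one returning a boxed future, something like this (simplified sketch):

    use std::future::Future;
    use std::pin::Pin;

    // Roughly what `#[async_trait]` turns `async fn check(&self) -> bool`
    // into: a normal method returning a boxed, dynamically dispatched future.
    trait HealthCheck {
        fn check<'a>(&'a self) -> Pin<Box<dyn Future<Output = bool> + Send + 'a>>;
    }

    struct AlwaysHealthy;

    impl HealthCheck for AlwaysHealthy {
        fn check<'a>(&'a self) -> Pin<Box<dyn Future<Output = bool> + Send + 'a>> {
            Box::pin(async { true }) // one heap allocation per call
        }
    }

    fn main() {
        let hc = AlwaysHealthy;
        let _fut = hc.check(); // the Box::pin allocation happens here, every call
        // (driving the future to completion needs an executor; omitted)
    }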


> > Rust as the path forward over C/C++

I hope not, but it's the popular memory safe language we have now, and ironically Stroustrup's quote applies "There are only two kinds of languages: the ones people complain about and the ones nobody uses".

I would like Val to succeed: https://www.val-lang.dev/ With a Swift/Kotlin style syntax, the learning curve is likely easier as well.


Rustbros just keep winning


Damn! If I'm correctly informed, this is ahead of Linux actually shipping Rust code (as opposed to just having the infrastructure in place).


It's easier for Windows to do it because it only targets a small number of architectures (maybe only x86-64 and AArch64 these days?), and only Microsoft need to be able to build it. The Linux kernel has a huge number of targets and the build system needs to be solid for a huge number of people.


Windows does a lot of things in the kernel that Linux lets userspace handle. Part of that is the way UI is dealt with: for Windows UI is part of the core system, whereas on Linux it's a userspace tool you can install if you really fancy it.

There's also the advantage of being able to add Rust to the build pipeline without someone complaining that you broke the build for their TI-83+ calculator because there's no Rust compiler for that platform. Windows deals with amd64, x86, aarch64, and maybe some simple 32-bit ARM, but that's about it. Nobody is going to care about support for MIPS or Power9 or Motorola 68000 or SPARC or Xtensa.


I feel like there is a way more vibrant driver developer community for Windows anyway. It could be the architecture Windows has for kernel drivers, or that many people use it for malware and game cheat development. I have a fond attachment to all the tools you have at your disposal once loaded into kernel space on Windows: win32kbase.sys being at a certain memory location for all processes, KeUserModeCallback, etc. All around really fun and interesting to hack with. Doing DKOM on the PEB or TEB of kernel or user processes to do things you shouldn't be able to do, like calling win32 APIs from the kernel while pretending to be a process to draw things using GDI, etc.

Even though I primarily am a Linux user and I prefer using Linux on a daily basis, dabbling in Linux kernel development failed to create this feeling of awe and infinite possibilities (and exploitability) in my mind.

I guess my point is, Windows Kernel development in Rust excites me. Same with Linux would be very cool but it doesn't create the same feeling of wonder in me. Perhaps that's why this is happening first.


https://github.com/SubconsciousCompute/poc-windows-rust-filt...

We have been playing with Rust and minifilters for Windows. It's almost there. Microsoft has been working on a Rust driver kit.


Not surprising. Linux is one of the few monolithic kernels.

It’s kind of a worst case scenario for adding in a new language as a result, since the language can’t be added in a very isolated way.


> Not surprising. Linux is one of the few monolithic kernels.

This is an amusing claim, given that Microsoft's first use of Rust in the NT kernel is in win32k.sys, which is the in-kernel code that used to live in userspace back when NT was actually a microkernel. So pre-NT4, which was released in 1996.


Windows still follows a microkernel-like architecture with LPC across subsystems, and nowadays the kernel runs in its own sandbox, while a select set of drivers are also sandboxed.

Then there are the whole set of userspace drivers, including graphics.

How are those X Windows driver crashes holding up?


The kernel/user space split in graphics drivers is pretty much the same between Linux and Windows, and has been this way for many, many years.


Windows under the hood with the NT Kernel is actually regarded among some experts as technically superior to Linux with the driver model and personalities subsystem; it’s just that the stuff above the kernel isn’t so great…


The one place microsoft's advertising branch can't get to!


Their scheduler is pure bullcrap. It favors CPU-intensive applications, meaning it's good for gaming but horrible for every other use case.


That makes sense if your main target use case is desktops where a small number of CPU-intensive apps are running, right? And even in many server apps where you have one main app being the reason the server is booted, this makes broad sense.

Don’t get me wrong, I bleed Unix-like systems and have done for decades, but what you’re saying actually makes sense for MS as a design decision…


How so? I believe I understand Windows' thread scheduler and this conclusion isn't obvious to me at all; from my understanding it seemed perfectly reasonable.


Put Windows on a 4-core system. Open two terminals. Run cargo build in one terminal. Run ls in the other terminal. ls will hang for a good 10 seconds before printing output.


I would agree, if we would still be talking about Windows NT 4.0 or Windows 2000.


For now

Syscalls powered by MSNBC


The stuff above is actually pretty good, the problems come mostly from the obsession with supporting a flawed 30-year-old paradigm (the DOS world with its simpleton prompt), a failed system-management concept (the Registry), and the recent steer towards being adware.


Why is the Registry a failed concept?


It tries to be both a filesystem and a database and fails miserably at both. The layout of entries is a total mess that not even Microsoft pays attention to anymore.

Relevant:

Why the Windows Registry sucks … technically https://news.ycombinator.com/item?id=32275078

https://rwmj.wordpress.com/2010/02/18/why-the-windows-regist...


Yet, the GNOME folks decided to copy it.


Using a database is not a bad idea, especially for many reads and few writes.

The problems with the windows registry are orthogonal to that.


Never underestimate the lack of imagination in the Linux-desktop world.


Agreed. Years and years of objectively bad UX and UI with the likes of GNOME, KDE, etc. I cannot understand why the organisations creating Linux distros have never at any point thought to hire a designer or two, so their desktop apps don't look like someone just slapped them onto a window with either far too much padding (even more than HN loves to complain about on the web) or not enough, making usability harder.


The issue is likely getting the people working for free to listen to a designer


I think the problem is that they listened too much to designers.


No they really didn’t. No designer is making UIs that don’t align, are full of inconsistencies, and have confusing UX.


GNOME is infamous for blindly copying things that appeal to them without understanding the originator's rationale and, in the process, making a mess of things... they just usually lean more toward copying Apple.

(eg. Changing their save icon to an "arrow pointing at hard drive" one that's harder to visually distinguish from their download icon because they don't understand that you shouldn't privilege affordances (which only give benefit while learning a new system of symbols) over consistency with existing established iconography like the "save icon" (diskette) used by every other system.)

...or GNOME 3 ruining the "Cancel/OK in the bottom-right corner of the dialog, in that order for LTR languages" they borrowed from Apple with their "action buttons in dialog header bars" idea. (TL;DR: Dialog boxes are laid out to read like paper forms, following the writing order of prose text in the user's native language. Action buttons go in the reading-order-terminal corner for a similar reason to why the signature field on a paper form is at the bottom and why business letters are supposed to end with what you want the reader to do.)

Likewise, for dialogs where they stretch OK and cancel to full width. Now you've required the user to move the cursor much further during normal operation.

See also https://uxmovement.com/buttons/why-ok-buttons-in-dialog-boxe...

This is quite literally stuff that was covered in my introduction to Human-Computer Interaction course at university.


How so? I am curious to know more.



Citations needed.


VMS under the hood


How’s Rust for desktop application development? Are there frameworks like MFC or ATL or WTL or WinRT? Is COM development or using UWP significantly more difficult in Rust than C++ or C#?


There are OK bindings to some toolkits such as Qt and Gtk, which are the most mature right now. But there is also promising progress in pure-Rust GUI libraries, for example Slint or egui.


I should have mentioned I’m only interested in native Windows APIs. I want to use UWP features, lots of COM (both as client and server), etc…

I see Microsoft has done some work and there’s a Windows crate, but I’m not seeing IDL support for COM and it looks like you end up with a ton of unsafe blocks. It makes me wonder if a programmer who sticks with smart pointers in C++ is all that much worse off than a Rust developer on Windows?


There is a lot more to Rust than just “safety”. That gets talked about a lot, but Rust is full of great features that make it worthwhile to use even when you are working in unsafe Rust.

On top of that, it should still be possible to write the application logic in safe Rust, even if you need to use unsafe for FFI with Windows stuff.
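
For what it's worth, the usual pattern looks something like this minimal, Windows-only sketch (GetTickCount is a real kernel32 API; the hand-written declaration here stands in for what the windows/windows-sys crates would generate):

    // Keep the unsafe FFI at the boundary; expose a safe wrapper to the rest.
    #[link(name = "kernel32")]
    extern "system" {
        fn GetTickCount() -> u32;
    }

    // Safe wrapper: GetTickCount has no preconditions, so the
    // unsafety is fully contained inside this function.
    fn uptime_millis() -> u32 {
        unsafe { GetTickCount() }
    }

    fn main() {
        println!("system up for {} ms", uptime_millis());
    }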


Are there any Windows GUI applications written in Rust that are a good place to start for someone who wants to learn more?


Good question. Specifically GUI code tends to have many cycles in object graphs. Do idioms already exist in Rust to deal with advanced cases of that?


Huh? linux is the windows kernel or what?


Shows up? I haven't used Windows for close to 15 years, but I have not heard that their kernel would be open source now.


Ok, we've made the title not show up above.


Interesting. I thought editorializing titles is not accepted on HN.

My criticism was directed towards the article, not the submission.

Edit: Ah, the guidelines say unless it is misleading.

Well, I would not have called it truly misleading, just typical headline style you can often see. I did not like it, but I did not feel misled.


I'm glad you noticed the "linkbait/misleading" exception in that guideline because it's an important one!


Win32k.sys in rust would kill so many future bugs. RIP.


Why Rust? Doesn't Microsoft have their own similar language, C#? And then isn't there some lore about the reason Windows Vista took so long to release was due to them trying to write the entire OS in C# or something and then having to redo everything?


> And then isn't there some lore about the reason Windows Vista took so long to release was due to them trying to write the entire OS in C# or something and then having to redo everything?

It seems like you're confusing a few different things there.

The 'redo' step that you're talking about happened because in the early 2000s Microsoft was getting blasted by security breaches left and right.

Pretty much every senior developer in the Windows org was pulled to work on security patches. An in-between update to the consumer version of Windows (codename Longhorn) was planned to hold customers over until the team had the time to do a proper iteration on the next OS.

As the senior devs were wrapping up their security fixes and coming back to see about shipping Longhorn they decided that it wasn't up to their standards and more importantly they didn't think it was moving in the right direction because it didn't include all the security fixes they had been doing.

Longhorn had essentially 'forked' XP pre-security fix and it was decided that it would be easier to throw out all of the Longhorn code and start fresh than to try and port all the security fixes over.

So they did just that: they threw out everything that had been worked on, 'reset' the project to be based on the latest code with all the security fixes present, and then started the development of Vista fresh.


A small correction on the redo step.

What you are talking about was called the Windows Security Push and this is how it worked. Basically, every developer (not just the senior ones) had to help review every line of code in Windows. First, we were shown presentations which explained why security was important, where Microsoft products had failings, what were common security bugs and how to look for them. I think the presentations were done by Michael Howard. They were very good.

Then we were given a copy of Writing Secure Code to read. It enumerated all of the known types of security vulnerabilities and told us how to fix them. It also taught us how to write a threat model, validate input from untrusted sources, reduce our attack surface, use the principle of least privilege, etc.

Finally, we spent three months reviewing Vista's code. Each team was responsible for reviewing its own code. We filed bugs as we found them and then they were fixed.

Note that porting the security fixes to Longhorn from Windows XP took very little work. Windows had one code base and you could move changes from release A to release B (Windows XP and Windows Vista in this case).

Also, Longhorn was reset at some point, but all of the work was not thrown out. Basically, some teams reduced the scope of their work (i.e. cut features) and some projects were cancelled. The reset did not occur because the security fixes were missing. It occurred because Longhorn was an out-of-control project which had been going on for 2 to 3 years and was not close to shipping.


> Longhorn had essentially 'forked' XP pre-security fix and it was decided that it would be easier to throw out all of the Longhorn code and start fresh than to try and port all the security fixes over.

Microsoft internal emails from Jim Allchin (who ran Windows at the time) to Bill Gates reveal that the issue was performance.

> LH is a pig and I don’t see any solution to this problem.

https://web.archive.org/web/20210427171552/http://blog.seatt...

Eventually, the solution was to drop the use of a garbage collected runtime (desired for memory safety) and go back to C/C++ (necessary for performance).

Now they are moving to a language that gives you the memory safety without taking the performance hit.


Longhorn was slow but I doubt garbage collection was the main cause for its poor performance. First, it was mostly written in C and C++ (remember, Microsoft does not rewrite Windows, it evolves it). Second, the reason it was slow was some teams checked in code which was not ready. What I mean is the code was buggy, had exceptionally poor performance, etc. The code should never have been shared with the entire Windows organization.

Note teams could have private source code repositories and Windows builds. Changes did not have to be shared with the entire Windows team. The better teams shared their changes when they worked and were ready. On Vista, a lot of teams shared things which were far from ready. Note that this was unusual and did not happen on Windows XP.


> Longhorn was slow but I doubt garbage collection was the main cause for its poor performance.

Longhorn was referred to as Cairo.net, as a callback to a previous Cairo project from the 90's that failed to ship, and as a reference to the drive to improve security through the use of memory-safe, garbage-collected code.


I never heard that term internally (it's possible it was used in some parts of the organization, but it was not used in my part). I know some new APIs used .NET and I suspect that there were some apps which used .NET/C#, VB, etc. However, I do not think that accounted for the performance problems or the massive increase in memory use (from about 128 MB to 1-2 GB).

If you are going to claim garbage collection caused performance problems, you need to state what part of the operating system used .NET or GC and why using those technologies caused problems.


> I suspect that there were some apps which used .NET/C#

Several of the tentpole features of Longhorn were based on .NET and managed code.

> First and foremost, while Windows Server 2003™ embraced managed code by being the first operating system to ship with the .NET Framework preinstalled, Longhorn is the first operating system whose major new features are actually based on the .NET Framework.

... Microsoft appears to have concentrated their development effort in Vista on native code development. In contrast to PDC03LH, Vista has no services implemented in .NET and Windows Explorer does not host the runtime, which means that the Vista desktop shell is not based on the .NET runtime. The only conclusion that can be made from these results is that between PDC 2003 and the release of Vista Beta 1 Microsoft has decided that it is better to use native code for the operating system, than to use the .NET framework.

https://web.archive.org/web/20051212045841/http://www.grimes...


C# isn't even remotely similar. A managed language running in a heavy VM runtime versus a direct-to-machine-code compiled language...


I don't think the runtime is that big. The classlib is.


Have you heard of native AOT compilation?


1. That still needs a GC, and thus heap space, unless you make everything stackalloc, which is impossible in kernel programming.

2. It is heavily pinned by unnecessary metadata. Well, actually some maybe necesdary to facilitate GC, but it is bloated anyway.

3. RyuJIT is not as optimized as LLVM, and RyuJIT produces the AOT code. There is an abandoned project called LILAC attempting to use LLVM as a second AOT generator, but it was never heard from again after 2019.

4. Native interoperability in .NET requires JIT as well, to generate things like call sites and runtime thunks (function pointers to restore CLR context). Not only does that waste more resources, but also not all P/Invoke calls can be AOT compiled.


To elaborate on point 1: technically speaking, you need a unified heap space. Compare the Linux kernel, where you have vmalloc and kmalloc, which hand out virtually contiguous memory and real, page-aligned physically contiguous memory respectively. So technically this means I would have to pre-allocate everything, but that is not going to work: how do you save space?


- Singularity

- Midori

- Oberon

- Mesa/Cedar

- Topaz

It is a matter of actually wanting to make it happen, or giving up to the voices that think otherwise.


That list just shows that people have been experimenting with such things for decades, including a major push with Midori (over 100 devs across Microsoft and Microsoft Research). And while things have been learned and fed into the development of runtimes and operating systems, it hasn't changed the fact that all the major OS kernels are written in non-memory-safe languages. Rust, on the other hand, is starting to directly impact these kernels.

Joe Duffy himself said that with Midori they "started with C# and .NET [but] were forced to radically depart in the name of security, reliability, and performance" and that at one point they had 11 different garbage collectors. The jury is still out (and has been deliberating for an awfully long time) on whether you can build, or even contribute to, a successful, mainstream, general-purpose OS kernel in a garbage-collected language, but a distinctive selling point of Rust is that you can start adding memory safety without having to settle that question. And the fact that this is actually starting to happen to more than one such kernel while Rust is still a fairly new language could be interpreted as evidence that maybe, just maybe, the GC really was the problem all along.
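As a minimal sketch of what "memory safety without GC" means in practice (my illustration, not the commenter's): the borrow checker rejects use-after-free at compile time, with no runtime machinery involved.

    fn main() {
        let s = String::from("kernel");
        let r = &s; // immutable borrow of `s`
        // drop(s); // uncommenting this fails to compile:
        //          // `s` cannot be moved/freed while `r` still borrows it
        println!("{}", r); // safe: `s` is guaranteed alive here
    }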


Joe Duffy also said in his RustConf keynote that even with Midori running in front of their eyes, many on the Windows kernel developer team dismissed it as impossible.

Maybe, just maybe, the problem isn't technical but human, and that can only be tackled one funeral at a time.


Programming is a human activity and will remain so for the foreseeable future, afaics. I've had a taste of the approach to development required to get good performance in GC'ed languages, and it wouldn't surprise me if it's just incompatible with the way most systems programmers think. If that's the case, it's a real problem for that approach, and I doubt funerals will help unless there's a concrete reason for later generations of the kinds of developers needed for kernel development to think differently. "You can't get memory safety without GC, so they'll be forced to learn" no longer applies (if it ever did), thanks to Rust.


As proven by their being dragged to write Swift and Java/Kotlin code on mobile devices, or by the way WASM is being pushed into CNCF projects, a little steering, with more painful alternatives if they keep to their old ways, helps a lot.

By contrast, newer generations don't think C and assembly are the only way to program a mobile device or embedded system; see MicroPython and JavaScript in the maker community among school kids.


Never underestimate the inertia of developers whose livelihood depends on code only they know and have written.


Still requires a substantial runtime.


Which in many cases has already been proven by making it part of the kernel, even if the Xerox, DEC/Olivetti, ETHZ, and MSR projects weren't made available to wider audiences.

One can do as Google does and push it no matter what, or give up and take the adoption path that makes it easier to bring the safety luddites along for the ride.


I don’t fundamentally oppose such a thing, but I assume doing so would require very careful effort. It’s much harder than integrating Rust.


For me, Rust is a nicer Ada-like effort, a kind of compromise.

We can wait, at the usual progress rate of one funeral at a time, until a new generation buys into fully managed OSes, or until another big spender like Google or Apple forces one onto developers no matter what; most likely I will not live through either.

So we're left with the compromise of allowing managed languages only in userspace, while adopting something like Rust for the "only over my dead body" folks opposed to any form of automatic memory management.

Given that even the success stories of bare-metal deployments of managed runtimes don't change their minds anyway.


Yeah idk why people are so against this, I guess old habits die hard.


C# is much more capable of serving this purpose today than 10 years ago (perf, codegen quality, and infrastructure have improved by orders of magnitude). However, it cannot compete with Rust when it comes to systems programming.

Mainly because it has different tradeoffs and its high level abstractions and other features like LINQ, interfaces, non-struct generics and async/await are very much not zero-cost, unlike in Rust.
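For example (a sketch of my own, not from the comment): a Rust iterator chain like the one below monomorphizes and inlines down to a plain loop, with no enumerator allocations or virtual dispatch, which is roughly what the equivalent LINQ query would cost in C#.

    // Sums the squares of the even numbers; despite the high-level style,
    // this compiles to a tight loop with no heap allocation.
    fn sum_of_even_squares(xs: &[i64]) -> i64 {
        xs.iter()
            .filter(|&&x| x % 2 == 0) // closures are monomorphized, not boxed
            .map(|&x| x * x)
            .sum()
    }

    fn main() {
        assert_eq!(sum_of_even_squares(&[1, 2, 3, 4]), 20);
    }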

In addition, it does not offer crucial features one would want in kernel development: deterministic memory usage, compile-time safety guarantees for writing concurrent data structures and general systems-programming-first language design.

While C# is a perfectly viable choice for writing userspace OS components, applications, UI, etc., it is a bad choice for kernelspace when C++ or Rust exist.
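On the compile-time concurrency guarantees mentioned above, a minimal sketch (again my own illustration, not the commenter's): Rust's Send/Sync rules force shared mutable state behind a thread-safe wrapper, or the program does not compile.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Shared counter: without Arc<Mutex<..>> (e.g. sharing a &mut u64),
        // the compiler rejects the program rather than risking a data race.
        let counter = Arc::new(Mutex::new(0u64));
        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    *counter.lock().unwrap() += 1; // exclusive access enforced
                })
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        assert_eq!(*counter.lock().unwrap(), 4);
    }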


Microsoft never tried to rewrite Windows in C# during the Vista project. I know this because I worked on Vista and my team wrote code in C++.

Vista did include some new C# APIs and Frameworks. The best ones were Windows Presentation Foundation and PowerShell (it uses a lot of .NET technologies).

Vista took a long time for a lot of reasons. I do not know all of them but my observation was there were a few basic problems:

1. Teams were allowed to check in buggy code. This led Vista to be unstable while it was being developed. Note the Kernel code was usually very good. The shell was frequently difficult to use because of the bugs and because of poor performance.

Note that when it shipped, it worked much better than it did during development. I remember being shocked at how well it worked. I also remember going back to Windows XP and realizing I actually liked Vista better (it had a better user interface, and I liked the improved Windows Update app).

2. Very poor project management. Basically, no one knew what needed to be done or how long it would take.

3. A lot of overly ambitious projects and features. Some teams really tried to do revolutionary things. Sometimes they succeeded (The Windows Display Driver Model is an example of this). Often, they failed (Media Foundation is a great example of a mediocre API which came out of Vista).

4. Some teams took dependencies on immature APIs or frameworks. The problem was that the framework and the applications using it were being developed at the same time. This led to a lot of rework because applications kept having to be updated as their dependencies changed or were cancelled.

5. Poor leadership - Will Poole led the Windows Client team (AKA the desktop version of Windows). Will Poole previously led the Windows Media Player team. My impression was he valued political skill and empire building over competence and technical excellence.

After Windows Vista shipped, he was "promoted" to working on Windows for Emerging Markets. After that, he "managed" his administrative assistant. He then "retired".

I am sure I missed a lot of things and I certainly do not know everything because I worked on a small part of a huge project. Windows had thousands of software engineers, program managers, testers and managers and they worked on a lot of different things.


It's funny how you can tell who's garbage and who was kickass at Microsoft just based on what they're doing nowadays.

Dave Plummer from Dave's Garage channel on YouTube is a great example of this. Sure, just creating Task Manager alone would get you on the books for being a kickass programmer, but everything else he did, and the fun YouTube stuff he's doing now, you can tell he was really a force behind the scenes in the early days.


Technically WPF was part of .NET Framework 3.0 and ran “fine” on XP, no?

(Fine meaning as well as it ran anywhere else, which was not well at all)


You are correct. It was part of the .NET Framework, and it did work on Windows XP. The team which created it was part of the Windows organization, and I believe it required changes to Windows to get it to work or run fast. I suspect this is why it has not been ported to Linux like the rest of .NET Core / .NET 7.x.


I preferred Vista's interface to Windows 7's.

Vista SP1 fixed most problems -- but you needed 2GB RAM.


You should read about: https://en.wikipedia.org/wiki/Midori_(operating_system)

Tested, and failed. There are a lot of interesting learnings and blog posts from this project, and a lot of them actually landed in different Microsoft products.


Tested, and it successfully ran the Asian Bing network nodes during its lifetime.

If anything, it failed at Windows business unit politics.

Unfortunately, it lacked the kind of management that has pushed Java no matter what on Android, or "Swift or the highway" on iOS.


C# is akin to Java, not Rust.


Good then, given that PTC and Aicas have been selling bare-metal JVMs for the last 20 years, with soft real-time deployments.


Maybe because it solves about 80% of the PITA security bugs that have beset Windows and other systems software since the dawn of software development? C# is garbage collected with a big runtime. It's not even an option.


Xerox PARC would think otherwise; a pity it lost to a PDP-11 OS.


Rust is gaining mindshare, and curmudgeons from C++ backgrounds still believe OS code can't be written in GC-supporting languages. But maybe it's better not to look a gift horse in the mouth: replacing unsafe code with something better still works toward slowly establishing memory safety as table stakes in OS code.


Any large company has bored people who will sneak in random languages and frameworks. The Windows network stack used to have an embedded Prolog interpreter, just because. I'm willing to bet there's a small Brainfuck codebase somewhere in the depths of Google.



