Hacker News
ARM64EC: Building Native and Interoperable Apps for Windows 11 on ARM (windows.com)
134 points by sjmulder on June 29, 2021 | hide | past | favorite | 96 comments



This is not like Universal 2 in Apple's transition, where you can have the x64 and ARM versions of a program in the same binary (a "fat" binary). With Universal 2, the app runs as either 100% Intel or 100% ARM, depending on the platform.

Instead, what ARM64EC gives you is the ability to put some ARM code inside a normally-x64 binary and ship that to ARM customers like Surface Pro X users. ARM64EC doesn't run on Intel. It's just meant to speed up the porting of Intel programs to ARM, because you don't need to make everything ARM, just some of it, to get going. The Intel bits will be translated using Microsoft's slower-than-Rosetta-2-on-already-slow-hardware translator.
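For the curious, the trick that keeps existing x64 tooling working is, per Microsoft's documentation, that an ARM64EC binary still identifies itself as a plain x64 image in its PE header (COFF machine = 0x8664). A toy sketch of checking that field; the header bytes below are hand-built for illustration, not a real binary:

```python
import struct

# COFF machine values from the PE/COFF spec.
IMAGE_FILE_MACHINE_AMD64 = 0x8664   # x64 -- also what ARM64EC binaries report
IMAGE_FILE_MACHINE_ARM64 = 0xAA64   # "classic" Windows ARM64

def pe_machine(blob):
    """Return the COFF machine field of a PE image."""
    # e_lfanew at offset 0x3C points at the "PE\0\0" signature.
    (e_lfanew,) = struct.unpack_from("<I", blob, 0x3C)
    assert blob[e_lfanew:e_lfanew + 4] == b"PE\0\0"
    (machine,) = struct.unpack_from("<H", blob, e_lfanew + 4)
    return machine

# Fabricate a minimal header: zeroed DOS stub, e_lfanew = 0x40,
# then the PE signature and a COFF header that claims to be x64.
blob = bytearray(0x40)
struct.pack_into("<I", blob, 0x3C, 0x40)
blob += b"PE\0\0" + struct.pack("<H", IMAGE_FILE_MACHINE_AMD64)

machine = pe_machine(bytes(blob))
```

A real ARM64EC DLL carries extra metadata elsewhere in the image, but to any existing x64 tool it just looks like an x64 binary.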

How many people will use this? Considering how successful Windows on ARM has been, coupled with how apathetic most Windows developers are... I give it a low chance, but who knows?


Microsoft's biggest problem, IMHO, is the question of why on earth a developer would give a rip about supporting Windows on ARM.

It's not like Apple, who said they were going full ARM on all devices: get on board or run 30% slower than you could, and the translator won't be around forever. The investment in supporting Apple ARM obviously pays off.

Whereas, this? You're looking at how few people have Snapdragon devices, the (by comparison to the M1) truly lethargic performance, slow translation, and the fact that your x64 binaries will still work via the translator. What's the business case for adding ARM support? Almost nobody uses it, and those that do don't, strictly speaking, need it for your app to work.

The only reason Microsoft has really given for supporting ARM is that you'll do it out of the goodness of your heart for those few ARM users, in the hopes that you'll help build their ARM platform for them. Tough sell.


I mostly agree that that's the current state of things, but I'm a little more optimistic about the future.

The software support for ARM in Windows has taken big leaps and bounds in the last ~6 months; this, the more general x64 emulation, Office, WPF, and probably many others that I'm forgetting. Hardware-wise, we're finally getting a cheap ARM developer kit and we're seeing some very cheap ARM laptops like the Galaxy Book Go.

The whole thing's frustratingly slow compared to Apple's transition, but I would not be surprised if ARM is a sizeable chunk of new Windows machines a few years from now.


I would hope so too, but, not to spoil your hope, I'm less optimistic.

The cheap hardware development kit comes with a Qualcomm Snapdragon 7c. For a development kit, that chip is a joke. It's slower than almost every Windows on ARM laptop you can buy right now, which already aren't known for having good performance. It also comes with only 4GB of RAM and 64GB of storage, hardly enough to even put Visual Studio on it.

Microsoft's idea is that you'll only remote-deploy to it for testing because it's not powerful enough to actually do most programming on it. Compared to the $499 Apple Developer Kit which gave you an A12Z, 16GB of RAM and a 512GB SSD... this thing better be, what, $200 at most? It's hardly a development kit if you can't actually develop on it.

You also need to remember that Microsoft's and Intel's fates are bound together. Intel can put up with Microsoft's affairs and flirtations, but if it ever started getting serious, Intel wouldn't permit it to get to even minor threat level.


VS doesn't currently run on ARM (which sucks, but that's definitely a very high priority for the team; I suspect the recent 64-bit migration is setting the stage for ARM support), so there isn't much point in speccing it to run VS. Agreed that remote development/debugging is not ideal.


Visual Studio has run on Windows on ARM64 since the very beginning; it had broken local debugging for quite a while though... (and of course it doesn't run natively, but under binary translation)

Officially, though, it is not considered supported.


Like I said though about Intel making sure it never becomes a viable threat:

https://arstechnica.com/information-technology/2017/06/intel...


The Snapdragon 7c DevKit looks poor, but it forces app developers to optimize their apps for poor, cheap machines. That's somewhat a good thing, but personally I wouldn't want to live with a 7c.


Unless they just don't buy the development kit, because like I said earlier, there isn't much motivation to support Windows on ARM anyway. If your experience is miserable trying to support something you have almost no business case for, why bother?


I don't know why you criticize this improvement by saying "but who buys ARM Windows?". These improvements are how they intend to sell more ARM Windows devices in the future.


Why is their translator slower than Rosetta 2? What would the technical reasons be?


There are a few reasons. To put it simply, Rosetta 2 is a much more technically advanced translator.

1. It came out with 64-bit support right out the gate when Microsoft had spent over 2 years on their translator only supporting 32-bit apps, and only just recently rolled out 64-bit support for translation.

2. Unlike Microsoft's translator, Rosetta 2 scans the file ahead of time and essentially creates a small "map" of the binary to help speed things up. This results in a ~15-20 second delay when you open an app in Rosetta 2 for the first time, but much faster performance afterward, while Microsoft's translator doesn't do any of this pre-mapping and has no idea what's coming next.

3. Apple's M1 SoC implements many "shortcuts" within the hardware, such as having native ability to remap Intel memory ordering. This means Rosetta 2 doesn't even have to convert x86 to ARM fully, but instead can convert x86 binaries into a hybrid of x86 and ARM together and can skip steps like memory reordering that Microsoft's translator has to do.

Some Windows fans initially said that comparing Microsoft's emulator to Rosetta 2 was unfair because Rosetta 2 "is cheating." Of course, this line of reasoning didn't last long because any cheats to get to the same end results are fair game.
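To make point 2 concrete, here's a deliberately simplified sketch (not how either real translator works internally, just the caching idea): an ahead-of-time pass translates every block before execution starts, while a lazy translator pays the translation cost on the first execution of each block and memoizes the result.

```python
# Toy model of AOT vs. lazy binary translation. "Translating" a block
# stands in for the expensive x86 -> ARM recompilation step; both
# strategies cache results, but the AOT pass front-loads all the cost.

def translate(block):
    # Pretend recompilation: rewrite each instruction for the new ISA.
    return ["arm_" + insn for insn in block]

class LazyTranslator:
    def __init__(self, program):
        self.program = program        # label -> list of instructions
        self.cache = {}
        self.translations = 0

    def run_block(self, label):
        if label not in self.cache:   # first execution: translate now
            self.cache[label] = translate(self.program[label])
            self.translations += 1
        return self.cache[label]

class AotTranslator(LazyTranslator):
    def __init__(self, program):
        super().__init__(program)
        for label in program:         # the up-front "map" pass
            self.run_block(label)

program = {"entry": ["mov", "add"], "loop": ["cmp", "jne"]}

aot = AotTranslator(program)    # every block translated before first run
lazy = LazyTranslator(program)  # nothing translated until it executes
```

The real systems are vastly more complicated (block discovery, self-modifying code, JIT fallback), but the cost profile is the same: the AOT approach pays a one-time startup delay, the lazy approach pays repeatedly at first-execution time.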


> This means Rosetta 2 doesn't even have to convert x86 to ARM fully, but instead can convert x86 binaries into a hybrid of x86 and ARM together and can skip steps like memory reordering that Microsoft's translator has to do.

I mean, there's no real "hybrid" thing going on at all; it's just a matter of what instructions get emitted by the translator to match the externally observable semantics. In fact, there are already ARM platforms besides the M1 that support a TSO (total store ordering) memory model; Nvidia's Jetson platforms are one such example, and a Rosetta-like translation layer on that platform would be able to take advantage of it all the same.

I guess it's just a matter of wordsmithing at the end of the day, though!


> Rosetta 2 doesn't even have to convert ARM to x86 fully

Think this is just a typo - you meant the other direction?


Fixed. I meant x86 to ARM.


Are there finally fat binaries? Once again you would need to build and deploy separately for ARM, and put some logic into your installer or your software to decide which version to install.

macOS, iOS and Android have had built-in solutions for years to publish packages with multiple architectures, where the system executes the appropriate binary.


On modern Windows, you can have multi arch packages as .appxbundle or .msixbundle.


And so far nobody is using that.


Yeah, MSIX is a mess and I'm not surprised by the low uptake. It's evolved from the ultra-locked-down AppX days slowly and in a very messy way; documentation is often out of date/incomplete, VS tooling is buggy, and the kicker is that App Installer (required to install an MSIX package with a double-click) doesn't even ship on all versions of Windows.

I have mostly learned to live with it, but I certainly do not like it.


It is not being used because Microsoft isn't Apple, so everyone keeps using InstallShield and friends.


Also, every installer solution is somewhat hated.


The whole point of the new ABI is to not require that much rebuilding. You can keep the main application in x64 and only the performance-critical functions in an ARM64EC library. Or you can have an ARM64EC application that loads an x64 plugin or third-party lib.


Yeah, but ARM64EC applications won't run on x64 Windows. So you need to be able to deploy both (an x64 version and an ARM64EC version).

Or did I mix up something?


ARM64EC is so that you can ship a semi-ARM version of your app without having to do a full ARM port. You still need to ship two versions of your app - you can't create a binary that runs on x64 and ARM like Apple's Universal 2.


Exactly how I understood it. So you cannot just xcopy-deploy an EXE (or, on macOS, an .app). You need to build an installation package that wraps both architectures and install it. During installation the correct architecture is chosen.

My prediction: there won't be ARM builds for most applications for the next 10 years. It must become super easy, like in Xcode or Android Studio, where you just tick a checkbox for another architecture and it gets built and bundled automatically.


Nope, you can't. You would need to make an installation package and install the right one. No unified .app for Windows.

Honestly, I've become more appreciative over the years how on Macs everything is a .app in the Applications folder instead of being scattered across the OS, and if you want to back them up or uninstall them you can just move them around like files. I was hoping just a little that with Microsoft's sandboxing efforts we'd get a sandboxed .app-like file for Windows, but no luck.

The biggest problem, I think, that Microsoft has is that if I'm a company, there is literally almost zero reason to support ARM. It's technically complex, the userbase is tiny, the performance is slow on the few computers that have it, and those users don't technically need it for your app to run. So why bother?

It's almost the same level of asking a company to support Linux machines. There are probably more Linux machines floating around than Windows on ARM. It would almost make more financial sense to port your app to MacOS than Windows on ARM if you do cost/benefit.


For software developers it must be easy and cheap to support ARM. Nobody will change their complete development and deployment pipeline just to support a few ARM users.

Every packaging framework I've used on Windows has introduced its own problems so far. It can literally take hundreds (!) of development hours until your InstallShield, MSI or MSIX package works on all your customer systems. If you have a big user base, customer support can end up spending thousands of hours analysing and fixing installer issues.


Windows on ARM already had CHPE binaries, which mix ARM and x86 in the same address space - I wonder if this new ABI is a related effort.


> Are there finally fat binaries?

No, it's more than that. It's a mix-and-match of x86 and ARM binary code, so you don't have to keep two separate copies of every binary.


> It's a mix-and-match of x86 and ARM binary code, so you don't have to keep two separate copies of every binary.

That's what a fat binary is: multiple architectures supported by a single binary. In the simplest case, simply by including both, though some deduplication of data segments can be managed.
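For reference, a Mach-O universal ("fat") binary really is just that: a small big-endian index followed by the per-architecture images back to back. A minimal parser sketch; constants match Apple's mach-o/fat.h, and the blob here is hand-built for illustration rather than a real binary:

```python
import struct

FAT_MAGIC = 0xCAFEBABE  # big-endian magic at offset 0 of a fat binary

def parse_fat_header(blob):
    """Return the architecture entries of a Mach-O fat binary."""
    magic, nfat = struct.unpack_from(">2I", blob, 0)
    if magic != FAT_MAGIC:
        raise ValueError("not a fat binary")
    archs = []
    for i in range(nfat):
        # struct fat_arch: cputype, cpusubtype, offset, size, align
        cputype, cpusubtype, offset, size, align = struct.unpack_from(
            ">5I", blob, 8 + i * 20)
        archs.append({"cputype": cputype, "offset": offset, "size": size})
    return archs

# Build a toy two-arch header: x86_64 (0x01000007) and arm64 (0x0100000C),
# each slice pointing at where its image would live in the file.
blob = struct.pack(">2I", FAT_MAGIC, 2)
blob += struct.pack(">5I", 0x01000007, 3, 0x1000, 0x4000, 12)
blob += struct.pack(">5I", 0x0100000C, 0, 0x8000, 0x4000, 14)

archs = parse_fat_header(blob)
```

The loader just picks the slice matching the running CPU, which is why from the user's point of view a universal binary "just works" on both architectures.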


> though some deduplication of data segments can be managed.

Not just deduplication - they want to make the gigatons of binary-only software and libraries available for WinARM without their authors having to do a thing.

Most commercial companies would simply not care enough about WinARM to move a finger.


Is it though? It is a binary format for Windows ARM, so you can mix x64 and ARM code there.

x64 or x86 Windows will still require the old binary format. So you once again have to build and deploy two versions.


So is there any reason at all to use the original ARM64 ABI? Adding another ABI seems like a huge undertaking. Isn't the original ARM64 windows ABI a fairly recent invention, too? Too bad they didn't make it "ARM64EC" from the get-go?

Will there be two versions of every system dll, one for each ABI?


Compatibility with the rest of the ARM development & tooling world is one fairly large reason to stick with the original ARM64 ABI. I would imagine, for example, that GCC does not currently work with ARM64EC, and therefore you're getting locked into a Microsoft world by using this.


It hardly matters; it isn't as if one OS's executables are blindly being executed on another platform anyway.

Native code isn't bytecodes.


I can actually imagine a SysV x64 flavor of ARM64EC becoming a thing in the future, assuming of course there’s no patent barrier.

GCC won't work with it for now, given the devs haven't even seen it yet, but there is no reason they wouldn't add it. It does support stdcall on x86, after all!


https://www.linkedin.com/pulse/emulators-paradox-new-archite...

ARM64EC will be a documented ABI for 3rd-party compiler developers.


DLLs shipped with the OS (and quite some 3rd-party ones too) are ARM64X. This means that they expose both the ARM64 and ARM64EC ABIs.


ARM support for Windows seems like it would predate this emulation effort by quite a while, but I'm not sure.

Historically, system DLLs on Windows aren't vulnerable to this kind of ABI skew because there's a special ABI specifically for Windows APIs, referred to as "stdcall"; everything exported from user32.dll etc. is exposed via that ABI. I would expect the same to be true here. Note also that historically there aren't separate copies of all the system DLLs on Windows for 32- and 64-bit; most of the 32-bit DLLs are full of thunks that jump to the 64-bit ones if you're running on 64-bit Windows. So that approach would also be possible here.


Not the case. Actual thunking is only done when calling into the kernel, you can't load a 32-bit DLL into a 64-bit process or vice versa.


I think GP is thinking more of how Windows 9x did 16 and 32 bit, which was full of funky thunking and only really had one set of libraries AFAIK (awaiting inevitable correction).



I'm not sure what you mean, it says right there in the article that the 32->64 stuff is done in user mode and that there are thunks in the 32-bit binaries.


Microsoft is very likely going to make a second attempt at mobile hardware, but this time with a different strategic approach - one that allows everyone who bet on Windows 11/ARM to run whatever they like without the headaches so common on Android/iOS.


Surface Duo is basically that.

In what concerns tablets, everyone that I see not carrying an iPad is most certainly carrying a hybrid/foldable laptop with Windows on it.


I wonder if the next version of the raspberry pi would serve as a good system to run Windows on Arm on. If so, I could see opportunity for it to capture a large part of the low-end Windows market.


You can already do this (unofficially) on a Raspberry Pi 4 with Windows 10 [0]:

It has also now been done (unofficially) on Windows 11 and it works. [1] Obviously not for the faint of heart and is completely 'unsupported'.

[0] https://www.worproject.ml/

[1] https://www.youtube.com/watch?v=zLb0d7zTsRY


Indeed so, but I'm talking about a fair bit of speed improvement, better I/O (e.g. USB Type-C), and official support for Windows.


Well you could wait for them to eventually support it. (Could take years)

Or you could do it yourself, today.


If this happens, the RPi-way of booting could end up enshrined as the way to boot ARM devices in the same way the PC BIOS did until EFI came along. But maybe not, Microsoft will probably support EFI on ARM all the way. They do, don't they?


That's backwards, because the Raspberry Pi SoC is backwards. The reason there is an "RPi way of booting" is that the RPi is not an "ARM computer"; it is a VideoCore "computer" (technically the VideoCore is just a graphics chip) running a proprietary OS called ThreadX that just happens to have ARM cores.


Yeah. The first Windows RT ARM devices (Windows 8 era) used UEFI + Secure Boot. I don't see why they would change the model.


AFAIK Windows ARM devices are currently all EFI based.


> even a module can freely mix and match ARM64EC and x64 as needed

Looks great. It helps the ARM migration for software that depends heavily on native plugins, maybe like DAWs.


Is there any decent ARM CPU in works for this Windows on ARM that can be comparable to M1 in terms of performance?


Nothing you can buy today. However, I imagine AWS's ARM Neoverse-based Graviton2s are quite sprightly, but you can't run Windows on them yet.

I suspect the successor to the Qualcomm 8cx Gen 2 will integrate X2 cores, which should give the M1 a run for its money.


> I suspect the successor to the Qualcomm 8cx Gen 2 will integrate X2 cores, which should give the M1 a run for its money.

And by that time I guess the M2 will drop... As much as I dislike Apple's walled garden, I'm having a hard time finding a reason to choose Windows on ARM over it.


Quite possibly, but progress isn't linear. Apple managed to leapfrog ARM and Qualcomm this generation, but there is nothing to say that some manufacturer won't be able to leapfrog Apple in the next. No doubt, they've had teams carefully dissecting the M1 and drawing inspiration for future generations.

However, yes I suspect Apple will retain dominance for at least the next few years. Thankfully for incumbents though, Apple doesn't want to be an OEM, else they'd be in some real trouble!


You, and everyone else. The SQ1 and SQ2, which are upclocked versions of the 8cx by Microsoft, are lethargic compared to an M1. And the device they come in is more expensive.

https://www.cpubenchmark.net/compare/Microsoft-SQ2-vs-Micros...


> I suspect the successor to the Qualcomm 8cx Gen 2 will integrate X2 cores, which should give the M1 a run for its money.

Qualcomm has been trying to catch up to Apple ever since the Apple A7. I don't think they'll manage to do it, but I'd love to be wrong. They'll probably catch up to the M1 when its successor, or even the successor of that, is available.


I find it troubling that Qualcomm seems to be the only viable ARM option right now, and even then they are a ways behind the M1 at the moment.


Classic (pre-Mac OS X) Mac applications were 68k code blobs that lived in the resource fork of an application. (The resource fork was basically a type/id database from which arbitrary data could be loaded into memory and purged, or prevented from being purged, when memory was low.)

When the PowerPC transition took place, much of the system remained in emulated 68k. PowerPC applications were typically demand-paged from the data fork (a regular file), but they could also live as resources. The point being, once you got some code in memory, you were able to call into it 68k -> PPC or PPC -> 68k using something known as the Mixed Mode Manager.

Since a big part of the way the classic MacOS was designed was to supply function pointer callbacks, there was a tedious phase where you had to wrap any function pointers supplied to the system with a thunk that would perform the ABI translation for you.

You typically wouldn't have to write those yourself - all the toolbox APIs had a macro / inline defined that would create the thunk for you - the legacy of this can still be seen in the vestigial CarbonCore/MixedMode.h and related headers.

Anyway, it was fairly trivial to wind up with code that not only mixed processor architectures and ABIs, but also supported in process loading of third party code (like PowerPC accelerated photoshop plugins, or After Dark screen saver modules.)


Microsoft one step closer to conquering the mobile space (who wouldn't want an Android-compatible, ARM Windows phone that also serves as a desktop?)


I was happy with the ARM Windows phone deal, without having to deal with Android Java, NDK primitive tooling that makes me miss Symbian C++, JNI boilerplate, that resource hog duo known as Android Studio/Gradle.

But that is now gone, and I never bothered to make use of my DeX enabled devices in such context.


I also would have liked that, but developers never jumped on that train. Most apps reside on Android/iOS, so the appeal for the consumer and developer would be if they also worked on a Windows phone.

Android Java performs really well nowadays. It will be messy to make everything so cross-compatible but I appreciate being able to use my Windows apps on a phone that also integrates with my Android apps. Also, it might save me having to upgrade my desktop. My mobile device might be all I need from now on


Android Java performs really well when one is happy to deal with Java 8 subset and cherry picked features up to Java 11, while Java 17 is getting ready to be released, Java is getting a JNI replacement, explicit SIMD, value types, GPGPU support, all stuff that will never show up on Android Java, Google's own J++ flavour.

I am looking forward to seeing if "Google for Games Developer Summit" will finally bring any improvement to the now 10-year-long NDK pain of writing JNI boilerplate by hand, and of C APIs to what is actually written as C++ underneath.


Kotlin on Android isn't too bad, but it wouldn't be as necessary if Android's Java compatibility had kept pace with the times. My guess is that they halted that train once the lawsuit from Oracle really picked up steam.


Except for the little detail that Kotlin and Android Studio wouldn't exist without the Java ecosystem, and JetBrains proved their skill in doing a full-stack implementation when they borked the design of Kotlin/Native with an incompatible memory model.


Agreed. I am one of those who is happy to deal with Java 7/8. Newer versions of Java also come with their own set of problems. But the very old Java 7/8 runs really fast on Android and also on Android emulators


Windows 11 doesn't run on any device with a screen smaller than 9", according to the Minimum System Requirements.

Can you brute-force it? Yes. Mobile devices? Maybe the tablet market, but Apple has that market owned in the US for premium tablets, and Microsoft's closest competitor is a more expensive and much slower device called a Surface Pro X, or a Surface Pro 7 with Intel even though the tablet experience on that thing is pretty abysmal. It's a much better laptop than tablet software-wise.


My wife has one of the HP ARM WindowsPhones that can serve as a desktop ("continuum"), which was cute but not actually all that useful in practice; it's a shame they never admitted that running Android apps was more important until it was far too late.

The original migration strategy was "make everyone write in C# for UWP, then it'll run everywhere". While that's almost true, only Apple can force that kind of migration.


Making everyone write C# with UWP would never have worked out like that, because UWP uses .NET Native AOT compilation, and there are several UWP APIs that were only exposed to C++/CX (now replaced by C++/WinRT).


I don’t want an android anything.


Microsoft could make the uptake of their ARM64 hardware much more likely by offering all their software products on it for free for a few years until it gains some traction. I quite like my 1st gen SPX, but it's certainly got some limitations. Windows 11 preview works pretty well on it so far, FWIW.


Super interesting! I wonder how close it is to the Rosetta 2 ABI mapping.


The idea of mixing and matching architectures in one address space is the cool thing here.


Great to have some healthy competition, as long as they don't force us to login using an Outlook account.


At this point I feel like supporting Windows in any way is probably immoral.


Do you mean "supporting Microsoft", or do you mean being a part of the Windows ecosystem by writing software that runs on Windows? Could you explain why you feel it is immoral?


Because WinTel is so tied to closed source, backward binary compatibility, and binary distribution, Microsoft is forever trapped in the x86 space.

For Microsoft to move to ARM, they would need the whole Windows ecosystem - every software maker - to create and maintain ARM versions of everything.

ARM64EC is a very desperate move to try working around that.

On Linux, it's simple. ./configure, make, and go for a coffee break.

Their decision to go with binary-centric model now came back to bite them. They have no future in the ARM dominated world.


> Because WinTel is so tied to closed source, backward binary compatibility, and binary distribution, Microsoft is forever trapped in the x86 space.

Because Linux is so tied to open-source, just-recompile-everything software distribution, it will never take over the Desktop. Linux is forever trapped on the server (and embedded systems which break with the aforementioned assumptions).

> ARM64EC is a very desperate move to try working around that.

This is a narrow view. I wish software developers would spend more time in the "real world" to see how a lot of software is actually being used. Not everything is SaaS in a web browser.

> On Linux, it's simple. ./configure, make, and go for a coffee break.

Yes, and when you return, you will most likely find yourself with a set of build errors unique to your system, not a working binary.

> Their decision to go with binary-centric model now came back to bite them. They have no future in the ARM dominated world.

I don't think so. They'll actually make it work without breaking a lot of stuff. I value that. It's one of the reasons why I don't use Mac OS.


Microsoft have, for some years now, realised that shipping bytecode is the future, with easy cross-platform targeting; they've been gradually working towards that with C# and dotnet core. It just takes a very long time to turn the tanker around, and they care more about not forcing all their developers to do the work of transition and their customers to upgrade.

> ./configure, make

Nothing involving autoconf is ever simple.

But for simple apps, "dotnet build" now achieves that level of simplicity.


Microsoft will never achieve that, thanks to WinDev's political agenda against anything .NET. That was already the original design of .NET back when it was called Ext-VOS; that is why Managed C++ was part of the package since the early days.

Ext-VOS was supposed to be the runtime that would unify VB, C++, COM and everything else that was going to come.

Their sabotage of projects like Longhorn, Managed DirectX, XNA, Singularity, Midori, C++/CLI stagnation, C++ only APIs on WinRT, are a clear indication that it will never happen.

I would gladly be proven wrong, but I have been watching this movie since Visual Studio .NET was released.


Dotnet doesn't work on WinARM


Are you sure? It seems like that's at least at a working proof of concept stage, if not beyond.

https://sinclairinat0r.com/2020/02/05/compiling-net-core-for...

https://github.com/dotnet/runtime/issues/36699



> They have no future in the ARM dominated world.

Beware: Microsoft has so much money. Wars are won with money; if you have enough of it, the other side will eventually lose. I don't think they can be written off just yet. Even considering Linux has been running well on ARM for a long time, the ARM world is not irrecoverable for Microsoft.


Nokia also had sooo much money


Yeah, that is why they are still around and own Bell Labs now, UNIX's birthplace.


The competition had more.


Apple was only about 90 days from bankruptcy when they finally brought Steve Jobs back. They then managed to get a loan and scrape together the funds to bring us Mac OS X, then the iPod, then the iPhone, then the iPad, bringing themselves back to stability. Cash isn't everything; talent, luck, and strategy play a factor.


Cash isn't everything but it is definitely critical. If apple were 90 days shorter on money, the world today could be a bit of a different place.


It's not about closed or open source. It's about MS's past priorities. Apple is also closed source, but the need to transition from PowerPC to Intel forced them to have multi-platform app packages (universal binaries). Nothing prevents MS from doing the same. The legacy of Wintel apps reflects MS's priorities in the past. They have UWP, but in the usual MS way: ambiguous messages to the devs adopting the platform.


> Nothing prevents MS from doing the same

There is a big difference between Apple's transitions between architectures and Microsoft's failed attempts - Apple is also the sole creator of its hardware, not only the software. After they announced the transition, no more PowerPC Macs would be sold a few years later. Microsoft can only dream of banning hardware producers from making x86 laptops/desktops.


> the need to transition from PowerPC to Intel, forced them to have multi-platform app packages (universal binaries)

Classic MacOS had them before that, during the transition from 68k to PPC.

NeXTSTEP called them Multi-Architecture Binaries, since NeXT ran on 68k, x86, and a couple of RISC CPU flavors.


How would Apple be any different, in particular when talking about closed source and binary-centric models? I think the core of Microsoft's difficulty lies elsewhere; two factors that come to mind are fragmentation and a willingness to maintain software compatibility ad libitum (kind of like if Apple were still maintaining Mac OS Classic mode on Apple Silicon).



