
I'm skeptical about moving existing apps seamlessly between x86 and ARM processors, because you'd need guarantees about process memory layout that I don't think any current compiler makes. Imagine the memory image of a process running on the ARM chip. It has some instructions and some data:

    |        Data          |         ARM instructions        |
You could certainly remove the ARM instructions and replace them with x86 instructions. However, the ARM instructions will have hard-coded certain offsets into the data buffer, like where to look for global variables, and you would have to be sure the x86 instructions used exactly the same offsets. Another issue: if the data buffer contains any function pointers, then the x86 and ARM functions had better start at exactly the same offsets. And if any alignment requirements differ between x86 and ARM (I don't know whether they do), then the data had better be aligned to the stricter standard on both chips.
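
To make the function-pointer problem concrete, here's a minimal C sketch (all the names are invented):

    #include <stdio.h>

    static int greet(void) { return 42; }

    /* a "data" region that hard-codes an address into the code region */
    static struct callbacks { int (*fn)(void); } table = { greet };

    int main(void) {
        /* swap the instruction region for another ISA's code and this
           stored address stays valid only if the new greet() starts at
           exactly the same offset */
        printf("%d\n", table.fn());
        return 0;
    }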

None of these problems are impossible to solve. They could be solved easily by adding a layer of indirection, at the cost of some speed, and then Apple could go back and do the real difficult-but-fast implementation later.
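
For instance, continuing the sketch above with invented names: the data could store an index into a per-ISA jump table that the OS rebuilds after a migration, rather than a raw code address:

    #include <stdio.h>

    static int greet(void) { return 42; }

    /* rebuilt by the OS for whichever ISA is currently executing */
    static int (*jump_table[])(void) = { greet };

    /* the data region stays ISA-neutral: an index, not an address */
    static struct callbacks { int handle; } table = { 0 };

    int main(void) {
        /* one extra lookup per call: the speed cost mentioned above */
        printf("%d\n", jump_table[table.handle]());
        return 0;
    }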

However, why would it? When its ARM cores are essentially desktop-class, there's no need to have an x86 chip other than compatibility with legacy code. Looking at Apple's history, it seems pretty clear that it likes to have full control of its own destiny, and designing its own chips is a logical part of that, so having its own architecture could be considered a strategic move too.

So given the difficulty of implementing it well, and assuming that Apple eventually wants to have exclusively Apple-designed ARM chips in all of its products, if I were in their shoes, I wouldn't bother to make switching work. I might have a product with both kinds of chips, but I would just have the x86 chip turn on for x86 apps, and off when there were no x86 apps running, and know that eventually those apps would go away. (And because I'm Apple, I have no problem pushing vendors to switch to ARM faster than they want to, so this won't be a long transition.)

However, an even cooler move would be to make LLVM IR the official binary representation of OS X, and compile it as part of the install step of a new program. That gives Apple several neat capabilities:

1) They can optimize code for the specific microarchitecture of your computer. Maybe not a huge deal, but nice.

2) They can iterate on their microarchitecture without having to care about the ISA, because the ISA is an implementation detail. This is the technically correct thing that everyone should have done years ago (yes, I'm annoyed).

3) They can keep more secrets about their chips. It's obnoxious, but Apple would probably care about that.

So, there's my transition plan for Apple to move to its own chips. It probably has many holes, but the biggest one is still the question of what Apple gains from this. Intel still has the best fabs, and as long as that's true, there will be some advantage in sticking with them. Whether the advantage is big enough, I don't know. (And when it ends in a few years, then who knows?)




Programmers old enough will remember the DEC VAX-to-Alpha binary translators. When DEC produced the Alpha, you could take existing VAX binaries, run them through a tool, and have a shiny new Alpha binary ready to go.¹

Given such a tool, which existed in 1992, it seems simple enough to do the translation once, on first launch, and cache the result. Executable code is a vanishingly small fraction of the disk use of an OS X machine.
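
The launch path could be as simple as this sketch, where the cache location and the binary-translate tool are made-up stand-ins for whatever Apple would actually ship:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* usage: run <x86-binary> <content-hash> [args...] */
    int main(int argc, char **argv) {
        if (argc < 3) return 1;

        char cached[512], cmd[1200];
        snprintf(cached, sizeof cached,
                 "/var/cache/translated/%s", argv[2]);  /* keyed by hash */

        struct stat st;
        if (stat(cached, &st) != 0) {
            /* first launch: pay the translation cost once */
            snprintf(cmd, sizeof cmd,
                     "binary-translate %s -o %s", argv[1], cached);
            if (system(cmd) != 0) return 1;
        }
        execv(cached, argv + 2);  /* every later launch hits the cache */
        return 1;
    }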

Going forward, Apple has long experience with fat binaries for architecture changes: 68k→PPC, PPC→IA32, IA32→x86-64. I don't think x86-64→ARMv8 is anything more than a small bump in the road.

As far as shipping LLVM IR and letting the machines do the last step, that should make software developers uncomfortable. Recall that one of the reasons OpenBSD needs so much money² for their build farm is that they keep a lot of architectures going, because bugs show up in the different backends. I know I want to have tested the exact stream of opcodes my customer is going to get.

¹ I think there was also a MIPS to Alpha tool for people coming from that side.

² In the sense that some people think $20k/yr for electricity is a lot.


Yep, and Apple has already done dynamic binary translation once before, with Rosetta during the PPC to x86 switch.


And 68k to PPC.


  Going forward, Apple has long experience with fat binaries for architecture
  changes: 68k→PPC, PPC→IA32, IA32→x86-64. I don't think x86-64→ARMv8 is
  anything more than a small bump in the road.
Using the lipo[0] tool provided as part of the Apple developer tools, it's pretty easy for any developer to create an x86/ARM fat binary. Many iOS developers have used this technique to create libraries that work on both the iOS simulator and an iOS device.
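
Roughly (file names made up):

    # build each slice separately, then glue them together
    lipo -create libfoo-x86_64.a libfoo-arm64.a -output libfoo.a

    # confirm what's inside the fat file
    lipo -info libfoo.a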

[0]: http://ss64.com/osx/lipo.html


> As far as shipping LLVM IR and letting the machines do the last step, that should make software developers uncomfortable.

Why? This is how Windows Phone 8 already works (MDIL), and Android soon will as well (ART).

In WP8's case, MDIL binaries are ARM/x86 binaries with symbolic names left in the executable. The symbolic names are resolved into memory addresses at installation time by a simplified on-device linker.
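
A toy version of that install-time step might look like this; the image layout, symbol name, and resolver are all invented to illustrate the idea, not the actual MDIL format:

    #include <stdio.h>
    #include <string.h>

    /* machine code image with a hole where an address belongs */
    static unsigned char image[64];

    struct reloc { const char *symbol; size_t offset; };
    static struct reloc relocs[] = { { "String_Concat", 16 } };

    /* stand-in for the device's symbol-table lookup */
    static void *resolve(const char *sym) { (void)sym; return (void *)0x4000; }

    int main(void) {
        for (size_t i = 0; i < sizeof relocs / sizeof relocs[0]; i++) {
            void *addr = resolve(relocs[i].symbol);
            memcpy(image + relocs[i].offset, &addr, sizeof addr);
        }
        puts("symbolic names patched to concrete addresses");
        return 0;
    }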

Android's ART, already made the default in the public development tree, compiles dex to machine code on the device at installation time.


This was also the entire premise of the Transmeta CPUs.


> However, an even cooler move would be to make LLVM IR the official binary representation of OS X...

It's worth revisiting http://lists.cs.uiuc.edu/pipermail/llvmdev/2011-October/0437... ("LLVM IR is a compiler IR"), from a core LLVM developer, explaining why LLVM IR is unsuitable for this task.


Ah, thanks for posting the email. You're right. :-)


> However, an even cooler move would be to make LLVM IR the official binary representation of OS X, and compile it as part of the install step of a new program.

I've wondered the same thing. In this respect the IR is analogous to Java bytecode or .NET's CIL: all of them let multiple languages target the same runtime.

This would possibly open up iOS to being more easily targeted by languages that aren't Objective-C. As long as a language compiles down to LLVM IR, the resulting "binary" is language-agnostic. (Actually, for all I know, things like RubyMotion do this today. I haven't delved into them to find out.)


> I'm skeptical about moving existing apps seamlessly between x86 and ARM processors, because you'd need guarantees about process memory layout that I don't think any current compiler makes. Imagine the memory image of a process running on the ARM chip. It has some instructions and some data:

There was a paper at ASPLOS 2012 where they did something like this, but for ARM+MIPS [1]. Each program would have identical ARM and MIPS code (which took some effort), with identical data layout.

1 - http://cseweb.ucsd.edu/users/tullsen/asplos2012.pdf


> However, an even cooler move would be to make LLVM IR the official binary representation of OS X

The IR isn't architecture-portable right now. That is, you can't treat it like an interpreted language's bytecode, because the code the front end produces makes assumptions about the target architecture before final binary translation.
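
A concrete illustration in C: the front end resolves target-specific facts before any IR exists, so the same source yields different IR on different targets:

    #include <stdio.h>

    int main(void) {
        /* clang folds sizeof(long) into a constant when emitting IR:
           8 on x86-64, 4 on 32-bit ARM, so the .ll/.bc file is
           already target-specific */
        printf("sizeof(long) = %zu\n", sizeof(long));

    #ifdef __LP64__
        /* the preprocessor ran before IR generation, so only one of
           these branches exists in the bitcode at all */
        puts("64-bit path");
    #else
        puts("32-bit path");
    #endif
        return 0;
    }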

It would be fantastic if Apple fixed LLVM so the IR were portable. It would be amazing for general-purpose software if you could ship LLVM IR and have your end users compile it, or have web services do it for target devices on demand.


Google's Portable Native Client essentially does something similar: https://developers.google.com/native-client/dev/


> However, an even cooler move would be to make LLVM IR the official binary representation of OS X, and compile it as part of the install step of a new program.

So a user installing Firefox or Chrome or some other complex application would need to wait tens of minutes before they could use it? It's more likely Apple will just reuse the existing dual-architecture fat binary approach, but instead of PPC/x86 it'll be x86/ARM…


OpenStep supported four processor architectures before it got ported to PowerPC, and used to support "quad fat binaries" that would run seamlessly on all four architectures.



