Apple almost certainly have a MacBook Air based on one of these chips at prototype stage.
There are many reasons in favour, including keeping price negotiations with Intel interesting, but the main thing stopping them from running with it is probably an inability to manufacture the chips fast enough, which is a major concern in mobile land. This is why many Android devices exist in variations based on different chips: hedges by the device manufacturer on long-term availability.
I don't think that Apple will completely get away from x86 for a long time. Attempting to emulate x86 on an ARM would be terribly slow.
I do, however, think that they will eventually include both ARM and x86 processors in the MacBook Air. That way, backwards compatibility is preserved and low-power apps can run on the ARM. The current MacBook Pros have dynamic switching of GPUs; there's no reason they couldn't use the ARM as a coprocessor or even run the full OS on it.
Here are a few technical points:
* LLVM - You can compile once for both architectures and then the JIT takes over, compiling for the specific architecture (take a look at LLVM in the OpenGL pipeline in OS X).
* Full-screen apps - When an app is full screen (if it's a low-power app), the x86 could sleep and the ARM could take over running the app.
* App Nap - If all x86 apps are asleep, switch over to running the ARM processor exclusively.
* Saving state - It's possible to save the state of apps in OS X; a similar mechanism could be used to seamlessly migrate apps between processors.
This is pure speculation, but it is feasible. There would be many technical challenges that Apple would have to solve, but they are capable of it. The advantage Apple has is absolute control over both platforms.
> I don't think that Apple will completely get away from x86 for a long time. Attempting to emulate x86 on an ARM would be terribly slow.
It's not as if you're emulating an entire operating system: the operating system (and many libraries) are native, but the application code is emulated. It's faster than you think, and Apple has already done it twice: once in 1994, and once in 2005 (exercise for the reader: try extrapolating).
Apple's applications would be 100% native long before the ARM version shipped. Some intensive tasks (text rendering, audio/video/image encoding and decoding, HTML layout, JavaScript) would also be native in third-party apps, since you just have to write the right glue into the emulator. And this would be a lot easier than the 2005 switch from PowerPC to x86, which involved emulating a system with 4x as many GPRs and the opposite endianness: 64-bit ARM has 2x the GPRs of x86-64.
Sure, a bunch of apps will see reduced performance. Some will break. But remember: Apple has only been on x86 for ten years. We had the same problems during the PowerPC->x86 transition: you had to wait about two years to get a version of Photoshop that ran on x86 + OS X.
I'm willing to bet that Apple has been testing OS X on ARM for years now.
> but the application code is emulated. It's faster than you think, and Apple has already done it twice: once in 1994, and once in 2005 (exercise for the reader: try extrapolating).
Well, in the PowerPC->x86 transition, x86 was the faster chip; the emulation cost was discounted by some amount. If you go from x86 to ARM, ARM is the slower chip, so there's never going to be an _improvement_ in performance compared to x86. I don't see why you're equating the two.
At one point I had a machine which allowed movement of data between apps running in Windows 3.1 on a DX4 and RISC OS on a StrongARM. I'm wondering how hard it would be with OS X (a far stricter OS than Win 3.1 or RISC OS) to allow a sort of process-level separation, where the OS and any ARM-compatible processes run on an ARM, but you keep an x86 around and spin it up for certain processes.
> I do, however, think that they will eventually include both ARM and x86 processors in the MacBook Air. That way, backwards compatibility is preserved and low-power apps can run on the ARM. The current MacBook Pros have dynamic switching of GPUs; there's no reason they couldn't use the ARM as a coprocessor or even run the full OS on it.
Battery life of the MacBook Air is already outstanding. I think I want something more than just better battery life for all that complexity, but eventually it will all be just "taken care of" by the toolchain, so why not?
I think the low-hanging fruit would be running iOS apps natively on the MacBook Air.
I'm skeptical about moving existing apps seamlessly between x86 and ARM processors, because you'd need guarantees about process memory layout that I don't think any current compiler makes. Imagine the memory image of a process running on the ARM chip. It has some instructions and some data:
| Data | ARM instructions |
You could certainly remove the ARM instructions and replace them with x86 instructions. However, the ARM instructions will have hard-coded certain offsets into the data buffer, like where to look for global variables, and you would have to be sure the x86 instructions used exactly the same offsets. Another issue: if the data buffer contains any function pointers, then the x86 and ARM functions had better start at exactly the same offsets. And if any alignment requirements differ between x86 and ARM (I don't know whether they do), then the data had better be aligned to the less permissive standard on both chips.
None of these problems are impossible to solve. They could be solved easily by adding a layer of indirection, at the cost of some speed, and then Apple could go back and do the real difficult-but-fast implementation later.
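To make that concrete, here's a minimal C sketch of my own (nothing Apple has shown) of both the hazard and the indirection fix: raw function pointers stored in data only make sense to the architecture that wrote them, while stable integer IDs resolved through a table each binary builds for itself would survive a migration.

```c
#include <stdio.h>

/* The hazard: compiled code bakes text-segment addresses into data.
 * Each entry below is the address of a function in *this*
 * architecture's image; an x86 build of the same program would lay
 * its functions out at different offsets, so a migrated copy of
 * this table would point into garbage. */
typedef void (*handler_t)(void);

static void on_click(void) { puts("click"); }
static void on_close(void) { puts("close"); }

static handler_t handlers[] = { on_click, on_close };

/* The indirection fix: keep only stable integer IDs in migratable
 * data, and let each architecture's binary rebuild the ID->function
 * mapping for itself. Migration then only has to preserve the data,
 * at the cost of one extra table lookup per call. */
enum { H_CLICK = 0, H_CLOSE = 1 };
static int saved_handler = H_CLOSE; /* portable across architectures */

int main(void) {
    handlers[saved_handler]();      /* resolved per-architecture */
    return 0;
}
```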
However, why would Apple bother? Once its ARM cores are essentially desktop-class, there's no need for an x86 chip other than compatibility with legacy code. Looking at Apple's history, it seems pretty clear that it likes to have full control of its own destiny, and designing its own chips is a logical part of that, so having its own architecture could be considered a strategic move too.
So given the difficulty of implementing it well, and assuming that Apple eventually wants to have exclusively Apple-designed ARM chips in all of its products, if I were in their shoes, I wouldn't bother to make switching work. I might have a product with both kinds of chips, but I would just have the x86 chip turn on for x86 apps, and off when there were no x86 apps running, and know that eventually those apps would go away. (And because I'm Apple, I have no problem pushing vendors to switch to ARM faster than they want to, so this won't be a long transition.)
However, an even cooler move would be to make LLVM IR the official binary representation of OS X, and compile it as part of the install step of a new program. That gives Apple several neat capabilities:
1) They can optimize code for the specific microarchitecture of your computer. Maybe not a huge deal, but nice.
2) They can iterate on their microarchitecture without having to care about the ISA, because the ISA is an implementation detail. This is the technically correct thing that everyone should have done years ago (yes, I'm annoyed).
3) They can keep more secrets about their chips. It's obnoxious, but Apple would probably care about that.
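To give a flavor of how simple the mechanics could be, here's a hedged sketch against LLVM's C API of an install step that turns shipped bitcode into an object file tuned for the host CPU (point 1 above). The file names app.bc and app.o are invented, and this is obviously not any actual Apple mechanism:

```c
/* Sketch: install-time codegen from shipped LLVM bitcode to an
 * object file targeting the exact host CPU. Assumes LLVM's C API;
 * "app.bc" and "app.o" are hypothetical names.
 * Build: cc installer.c $(llvm-config --cflags --ldflags --libs) */
#include <stdio.h>
#include <llvm-c/Core.h>
#include <llvm-c/BitReader.h>
#include <llvm-c/Target.h>
#include <llvm-c/TargetMachine.h>

int main(void) {
    LLVMInitializeNativeTarget();
    LLVMInitializeNativeAsmPrinter();

    char *err = NULL;
    LLVMMemoryBufferRef buf;
    if (LLVMCreateMemoryBufferWithContentsOfFile("app.bc", &buf, &err)) {
        fprintf(stderr, "read: %s\n", err);
        return 1;
    }

    LLVMModuleRef mod;
    if (LLVMParseBitcode(buf, &mod, &err)) {
        fprintf(stderr, "parse: %s\n", err);
        return 1;
    }

    /* Target the exact triple and CPU of the machine we're
     * installing onto, not a lowest-common-denominator baseline. */
    char *triple = LLVMGetDefaultTargetTriple();
    char *cpu = LLVMGetHostCPUName();
    LLVMTargetRef target;
    if (LLVMGetTargetFromTriple(triple, &target, &err)) {
        fprintf(stderr, "target: %s\n", err);
        return 1;
    }
    LLVMTargetMachineRef tm = LLVMCreateTargetMachine(
        target, triple, cpu, "",
        LLVMCodeGenLevelAggressive, LLVMRelocPIC, LLVMCodeModelDefault);

    char out[] = "app.o";
    if (LLVMTargetMachineEmitToFile(tm, mod, out, LLVMObjectFile, &err)) {
        fprintf(stderr, "emit: %s\n", err);
        return 1;
    }
    return 0;
}
```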
So, there's my transition plan for Apple to move to its own chips. It probably has many holes, but the biggest one is still the question of what Apple gains from this. Intel still has the best fabs, and as long as that's true, there will be some advantage in sticking with them. Whether the advantage is big enough, I don't know. (And when it ends in a few years, then who knows?)
Programmers old enough will remember the DEC VAX to Alpha binary translators. When DEC produced the Alpha, you could take existing VAX binaries, run them through a tool, and have a shiny new Alpha binary ready to go.¹
Given such a tool, which existed in 1992, it seems simple enough to do the recompile once on the first launch and cache it. Executable code is a vanishingly small bit of the disk use of an OS X machine.
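The scaffolding around such a tool is the easy part. Here's a hedged C sketch of translate-once-and-cache at first launch; translate_x86_to_arm() is a stand-in stub (it just copies bytes) for the genuinely hard translator, and the cache location is invented:

```c
/* Sketch of "translate on first launch, cache forever". Everything
 * interesting hides inside translate_x86_to_arm(); here it's a stub
 * that copies bytes, standing in for a real binary translator. */
#include <stdio.h>
#include <string.h>

/* Stand-in for the hard part: rewrite an x86-64 image as ARM. */
static int translate_x86_to_arm(const char *src, const char *dst) {
    FILE *in = fopen(src, "rb"), *out = fopen(dst, "wb");
    if (!in || !out) return -1;
    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out);
    fclose(in);
    fclose(out);
    return 0;
}

int launch(const char *exe) {
    /* Real code would key the cache on a content hash, so an
     * updated app invalidates its stale translation. */
    char cached[4096];
    const char *base = strrchr(exe, '/');
    snprintf(cached, sizeof cached, "/tmp/%s.arm",
             base ? base + 1 : exe);

    FILE *f = fopen(cached, "rb");
    if (f)
        fclose(f);                              /* hit: reuse */
    else if (translate_x86_to_arm(exe, cached)) /* miss: pay once */
        return -1;

    printf("exec %s\n", cached);   /* then exec the ARM image */
    return 0;
}

int main(int argc, char **argv) {
    return argc > 1 ? launch(argv[1]) : 1;
}
```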
Going forward, Apple has long experience with fat binaries for architecture changes: 68k→PPC, PPC→IA32, IA32→x86-64. I don't think x86-64→ARMv8 is anything more than a small bump in the road.
As far as shipping LLVM and letting the machines do the last step, that should make software developers uncomfortable. Recall that one of the reasons OpenBSD needs so much money² for their build farm is that they keep a lot of architectures going, because bugs show up in the different backends. I know I want to have tested the exact stream of opcodes my customer is going to get.
¹ I think there was also a MIPS to Alpha tool for people coming from that side.
² In the sense that some people think $20k/yr for electricity is a lot.
> Going forward, Apple has long experience with fat binaries for architecture changes: 68k→PPC, PPC→IA32, IA32→x86-64. I don't think x86-64→ARMv8 is anything more than a small bump in the road.
Using the lipo[0] tool provided as part of the Apple Developer tools, it's pretty easy for any developer to create an x86/ARM fat binary. Many iOS developers have used this technique to create libraries that work on both the iOS simulator and an iOS device.
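The fat container itself is almost trivially simple, which is part of why the approach has survived three transitions: a big-endian header counting the slices, then one record per architecture. Here's a minimal reader sketch; the two structs are redeclared from <mach-o/fat.h> so it compiles off-Mac as well:

```c
/* Minimal inspector for Mach-O fat ("universal") binaries: prints
 * the slices lipo packed into the file. Layout per <mach-o/fat.h>;
 * all header fields are stored big-endian regardless of host. */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h> /* ntohl */

#define FAT_MAGIC 0xcafebabeU

struct fat_header { uint32_t magic, nfat_arch; };
struct fat_arch { uint32_t cputype, cpusubtype, offset, size, align; };

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <binary>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    struct fat_header h;
    if (fread(&h, sizeof h, 1, f) != 1 || ntohl(h.magic) != FAT_MAGIC) {
        printf("not a fat binary (thin, single-architecture file?)\n");
        return 0;
    }
    for (uint32_t i = 0; i < ntohl(h.nfat_arch); i++) {
        struct fat_arch a;
        if (fread(&a, sizeof a, 1, f) != 1) break;
        printf("slice %u: cputype=%u offset=%u size=%u\n",
               i, ntohl(a.cputype), ntohl(a.offset), ntohl(a.size));
    }
    return 0;
}
```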
> As far as shipping LLVM and letting the machines do the last step, that should make software developers uncomfortable.
Why? This is how Windows Phone 8 works (MDIL) and Android will as well (ART).
In WP8's case, MDIL files are ARM/x86 binaries with symbolic names left in the executable. The symbolic names are resolved into memory addresses at installation time by a simplified on-device linker.
Android's ART, already made the default in the public development tree, compiles dex to machine code at installation time on the device.
> However, an even cooler move would be to make LLVM IR the official binary representation of OS X, and compile it as part of the install step of a new program.
I've wondered the same thing. In this respect the IR is analogous to Java bytecode or .NET's CIL: conceptually similar in that they allow multiple languages to target the same runtime.
This would possibly open up iOS to being able to be more easily targeted by languages-that-aren't-Objective-C. As long as it compiles down to LLVM IR then this "binary" becomes language agnostic. (Actually, for all I know things like RubyMotion do this today. I haven't delved into it to find out.)
> I'm skeptical about moving existing apps seamlessly between x86 and ARM processors, because you'd need guarantees about process memory layout that I don't think any current compiler makes. Imagine the memory image of a process running on the ARM chip. It has some instructions and some data:
There was a paper at ASPLOS 2012 where they did something like this, but for ARM+MIPS [1]. Each program would have identical ARM and MIPS code (which took some effort), with identical data layout.
> However, an even cooler move would be to make LLVM IR the official binary representation of OS X
The IR isn't architecture-portable right now. That is, you can't treat it like a portable interpreted language, because the code it produces makes assumptions about the target architecture before final binary translation.
It would be fantastic if Apple fixed LLVM so the IR was portable. It would be amazing for general-purpose software if you could ship LLVM IR and have your end users compile it, or have web services do it for target devices on demand.
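A tiny example of the problem: the target leaks into the IR before you ever see it. Compile the C below with clang -S -emit-llvm once for x86-64 and once for 32-bit ARM and diff the output; the sizeof is already a frozen constant in the IR:

```c
/* The same C source produces different LLVM IR on different targets,
 * because sizes, alignment, and other ABI details are resolved
 * during C-to-IR lowering, not at final codegen. */
#include <stdio.h>

struct record {
    char tag;
    long value; /* 8 bytes on LP64 x86-64, 4 bytes on 32-bit ARM */
};

int main(void) {
    /* Frozen into the IR as a constant: e.g. 16 on x86-64, 8 on
     * 32-bit ARM. No later translation step can undo that. */
    printf("sizeof(struct record) = %zu\n", sizeof(struct record));
    return 0;
}
```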
> However, an even cooler move would be to make LLVM IR the official binary representation of OS X, and compile it as part of the install step of a new program.
So a user installing Firefox or Chrome or some other complex application would need to wait tens of minutes before they could use it? It's more likely they'll just reuse the existing dual-architecture approach, but instead of PPC/x86 it'll be x86/ARM…
OpenStep supported four processor architectures before it got ported to PowerPC, and used to support "quad fat binaries" that would run seamlessly on all four architectures.
> I don't think that Apple will completely get away from x86 for a long time.
The switch from PPC to x86 was less painful than many thought. I don't see any particular reason why the "fat binary" approach (which is technically gross, but quite practical) would not work for switching from x86 to ARM.
Developers will hate it, but the whole app store farce shows what Apple thinks of developers.
The real challenge will be figuring out the compatibility question for third-party software. Do you make everybody recompile before their stuff works on the new hardware? Do you ship an emulator for x86-64 for a while, like they did when they went PPC->Intel?
OS X and its predecessors have already been ported to run on a bunch of architectures: x86-64, i386, PPC64, PPC32, PA-RISC, SPARC, and Motorola 68k. All, or nearly all, of the CPU-dependent parts of the OS are the parts that are shared between OS X and iOS. Making OS X run on ARM would be trivial for Apple, and I'd be shocked if they didn't have it up and running already as a hedge.
The emulation question is the main reason -- not the only reason, but the main reason -- I don't see Apple really moving in this direction with the Mac line any time soon. But I don't think it's just a matter of third-party software for OS X that's the issue; Windows compatibility is a pretty big deal. Apple doesn't push Macs as being great Windows machines currently (although they did for a while), but they're certainly aware of this use case and how it's been a minor but measurable plus for the Mac line. Even as much as Apple is known for burning down the old to make way for the new, I think they'd be pretty cautious with this one.
The notion that this CPU is really about giving ample power to the iOS line makes more sense to me. There will eventually be an "iPad Pro," whether or not it gets stuck with that moniker, and it won't have to worry about backward compatibility with anything other than previous iOS devices.
I don't see low-end users (think entry-level MacBook Air) running virtualization or Boot Camp. The well-known Mac developers will recompile their apps as soon as Xcode is updated, and huge apps like Office will probably get a pre-release version beforehand. So a cheaper ARM MBA should be usable for most of its customers.
> I'd be shocked if they didn't have it up and running already as a hedge.
This seems very likely to me. They presumably have a fair bit of the work done with iOS, which shares a kernel codebase and so on, and really, it'd be worth the effort just to have it to show to Intel executives during negotiation.
That was my thought, too. It seems like if they do this then they'll be going down the same path as Microsoft did with the Surface RT and Surface Pro, right?
Not exactly. Remember that the Surface RT deliberately restricted the applications that could be run via Win32 on the familiar desktop. Just about every third party application needed to adopt the new RT APIs and run "Metro"-style.
If they go down this road, Apple will not put constraints like that on their Mac developers: I'd expect that all the most commonly used frameworks would continue to be available, and porting most applications would amount to a recompile against a new Xcode.
I never once gave much thought to the idea that Apple would somehow shut out unsigned apps from its desktop computers, for the main reason that there would be too many existing apps that would suddenly lose functionality and cause a backlash.
With a port to ARM, though, Apple can simply say that the developers haven't ported their app to the new architecture yet, and that more apps will become available as developers catch up. Meanwhile, all those apps that currently require some kind of lower-level access could be banned.
On the other hand, I think that would be a huge boon for non-power users; it would be nearly impossible for malware to get onto the computer, and even annoyances like Adobe's and Microsoft's auto-updaters would finally get funneled through the App Store, which would prevent programs like that from constantly occupying memory/CPU/network.
For this to work, though, I think Apple would be wise to include some kind of developer mode feature (perhaps even forking over the $99/year fee) that would allow unsigned or potentially dangerous apps to run.
All of this (admittedly wild; sorry, coffee is kicking in) speculation definitely has me excited for the future of Apple, though! I haven't felt like that since Steve was around.
"All of this (admittedly wild; sorry, coffee is kicking in) speculation definitely has me excited for the future of Apple, though! I haven't felt like that since Steve was around."
Wacky. Your vision of shutting out unsigned apps has me looking for the exits. I've been an Apple user since the 80s and I'm pretty sure this would finally make me jump ship. Apple's current Big Brother trajectory frightens me to no end.
I'm pretty sure this won't change your mind, but my view is more-or-less "better the devil I know." Trading in Microsoft and Adobe's phone-homers for Apple's is fine in my book.
I don't use Microsoft or Adobe software as it stands. Getting rid of a couple of third-party updaters I don't even use in exchange for giving Apple complete control over whether or not I'm allowed to run something is an awful tradeoff. It's just barely tolerable on iOS because I use those devices more as appliances than computers, but it's not workable for me on a real computer.
I wouldn't imagine porting OS X to ARM would be the hurdle, as most of the work is already done: iOS is most of OS X. Apple has full control of that source code base and probably has already done this, like they did during the transition to IA-32.
It would be the third party applications that would be a hurdle. Without a transparent layer like the deprecated Rosetta, they would not be able to run existing applications and would have to explain that a new MacBook Air can't run all of the old MacBook Air applications.
Maybe if they come out with an ARM powered laptop, they will call it something other than MacBook Air to avoid that confusion.
Don't forget that Apple has spent years getting everyone on Xcode, and disciplining them to avoid the parts that would complicate a straightforward recompile & test.
They also have a "good enough" office suite (for many people), as well as the iWork apps.
Such a device could potentially also run iPad apps without much change.
Given Apple's history with this stuff, there's a very good chance that that's already done. Intel was supported internally for years before the PPC -> Intel change.
Makes perfect sense, really; not only would it be a hedge against having to adopt Intel, but just the value of having it around to show IBM people when negotiating over the PPC would probably have been worth the cost.
I don't think they will go that path. It would make much more sense to introduce a desktop version of iOS itself. iOS already has everything that's needed for a light desktop OS. It would provide a much better experience than ChromeOS; there are already thousands of applications doing about anything, and it would require very little effort to adapt iOS to run as a desktop OS and to adapt the existing applications. I think a hint of that is the recent port of Microsoft Office. If users want a more sophisticated OS, they can just use the 13" MacBook Pro. It's quite easy to guess that the next version will be thinner, lighter and probably cheaper - very similar to the current MacBook Air.
The volumes for desktop/laptop are almost trivial compared to the volumes for handheld. (At least for Apple.) So making the processors in volume for a Macbook Air would probably not be a problem.
Also, Apple probably have little problem getting preferential treatment from manufacturers.