It's of course impossible not to respect Linus' opinion and first hand experience in this space, but doesn't this whole post completely ignore the 100 ton blue whale in the room? Namely smartphones. That's an entire enormous segment of the industry and it's nearly 100% (or entirely 100%?) literally develop-on-x86-deploy-on-ARM. Smartphones also fit
>"This isn't rocket science. This isn't some made up story. This is literally what happened"
right? I mean, I can see arguing that going up into the cloud is different in some ways than going down to smartphones (although the high-end ones now outperform plenty of old dev machines in burst power). There are certainly differences in scaling and such. But the maturity of the tech for cross development of high level software isn't the same as it was in that era either. And if we're talking about bottom-to-top revolutions, embedded and smartphones seem to be at a lower level and much higher volume than PCs.
Finally, there is clearly a disruptive fusion event coming due to wearable displays. When "mobile" and "PC" get merged, ARM certainly looks to be in a strongly competitive position for some big players, and having more powerful stuff up the stack will matter to them as well.
None of which is to say he won't be right at least in the short term, but it still is kind of odd to not even see it addressed at all, not even a handwave.
Smartphones are a good argument for both views IMHO. Native development (as in native machine code executables) on Android is still a terrible experience even though they had a decade to fix it. It's much better on Apple platforms, maybe because they actually cared about developer-experience and native code is a "first class citizen" there.
It goes beyond the different instruction set of course and most of the time this is indeed mostly irrelevant (unless you've arrived at processor-specific optimizations), but the "develop on the same platform you are running on" still has the least painful workflow IMHO.
I think the ARM-based Macs are inevitable, although it might be called "iPad Pro Developer Edition".
This is Jeff Atwood's argument: https://blog.codinghorror.com/the-tablet-turning-point/ ; Apple tablet performance at Javascript is now catching up to and exceeding desktop performance. Apple have also sunk a lot of money into developing their own processor line, and they have experience in force-migrating all their customers between architectures. At some point you might not be able to buy an Intel-based Apple laptop any more. Given the immense brand loyalty among web developers, they are likely to shrug and carry on .. and start demanding ARM servers with high Javascript performance.
I just wonder if Apple can design laptop chips that perform well (per watt) at 45W TDP or desktop chips at 2-3x that and with multiple sockets. If not, then what’s the point?
I won’t move to an ARM Mac, personally. I will move to Windows or Linux on x86 for all the reasons Linus gives and also for games. Sorry, but an ARM Mac may finally push me where crappy keyboards and useless anti-typist touch bars have not quite done.
Native (NDK) development on Android is hard on purpose, as a means to increase platform security and to target multiple SoCs.
NDK level programming is explicitly only allowed for scenarios where ART JIT/AOT still isn't up to the job like Vulkan/real time audio/machine learning, or to integrate C and C++ from other platforms.
In fact, with each Android release, the NDK gets further clamped down.
I would like a better NDK experience, in view of iOS and UWP capabilities, on the other hand I do understand the security point of view.
Yeah, right, just like the PS3 was intentionally hard to develop for to keep away the rabble. That worked out really great (at least Sony did a complete 180 and made the PS4 SDK a great development environment).
As long as Android allows running native code via JNI, the security concerns are void anyway. If they are really concerned about security, they would fix their development tools (just like Apple did by integrating clang ASAN and UBSAN right into the Xcode UI).
One article is about enforcing the exclusive use of public APIs. The rest is about hardening the C/C++ code of AOSP. I do not see any "clamping down" here. What am I missing?
Except they allow nearly everything for regular Android apps since libc lets you access nearly every syscall.
Nothing was meaningfully "clamped down" there. You can't directly invoke some obsolete syscalls anymore, and you can't pass arbitrary syscall numbers, but nearly any actual real syscall is still accessible and nothing indicates that it won't be.
As long as libc can do it so can you, since you & libc are in the same security domain. Or anything else that an NDK library can do in your process, you can go poke at that syscall, too.
It'd almost always be stupid to do that instead of going through the wrappers, but you technically can
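To make that concrete, here's a minimal sketch in plain C (nothing Android-specific assumed beyond a Linux libc): the wrapper and the raw syscall hit the same kernel entry point from the same process, so a seccomp filter can only discriminate by syscall number and arguments, never by whether libc or your code issued it.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        /* Both calls land in the same kernel entry point and run under the
           same seccomp policy, i.e. the same security domain as libc. */
        long via_wrapper = (long)getpid();       /* libc convenience wrapper */
        long via_syscall = syscall(SYS_getpid);  /* direct, no wrapper */
        printf("getpid(): %ld, syscall(SYS_getpid): %ld\n", via_wrapper, via_syscall);
        return 0;
    }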
This is mostly setgid/setuid, mount point and system clock related stuff. Except for syslog and chroot, I see no syscalls there that you should be using in a user process anyway.
So technically, this is clamping down Android, but it seems like a pretty reasonable restriction and far from a heavy handed approach.
I bet Apple's experience moving from PowerPC to X86 gave them a leg up as well, and in both cases (PowerPC/X86, MacOS/iOS) they had the power to force developers to cross-develop to maintain access to their platform. Nobody is in a position to force server-side developers to switch to ARM.
From the other perspective too, Google clearly seemed to want the flexibility to change the details of Android architecture on a "whim", seeming to settle on the Linux kernel at the last minute and expecting to support both ARM and x86 and whatever else they felt they wanted. Google's focus on the JVM/Dalvik and making Native hard in Android seems quite intentional, forcing developers to cross-develop in a different way by obfuscating as much code as possible into a virtual machine that they could 100% control abstracted from underlying architecture and even kernel.
What do you think can be improved in Android? Unlike iOS, Android actually runs on diverse hardware. All iOS devices are Arm, whereas Android will also run on x86. That alone makes it more of a hassle.
The command line C/C++ toolchain is fine, at least now where this is basically reduced to clang and libc++.
The problem is basically everything else:
- The ever changing build systems. And every new "improvement" is actually worse than before (I think currently it is some weird mix of cmake and Gradle, unless they changed that yet again).
- Creating a complete APK from the native DLL outside Gradle and Android Studio is arcane magic. But both Android Studio and Gradle are extremely frustrating tools to use.
- The Java / C interop requires way too much boilerplate (see the sketch below this list).
- Debugging native code is still hit and miss (it's improved with using Android Studio as a standalone debugger, but still too much work to setup).
- The Android SDK only works with an outdated JDK/JRE version; if the system has the latest Java version, it spews very obscure error messages during the install process, and nothing works afterward (if it needs a specific JDK version, why doesn't it embed the right one?).
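To illustrate the boilerplate point from the list above, here is a minimal JNI sketch in C (the class and method names are made up for illustration): every native method needs the mangled export name, the JNIEnv/jclass parameters, and jint marshalling even for a trivial call, plus a matching "native" declaration on the Java side.

    #include <jni.h>

    /* Hypothetical counterpart of a Java declaration in com.example.Native:
         public static native int addNumbers(int a, int b);                 */
    JNIEXPORT jint JNICALL
    Java_com_example_Native_addNumbers(JNIEnv *env, jclass clazz, jint a, jint b)
    {
        (void)env;    /* required by the JNI calling convention, unused here */
        (void)clazz;
        return a + b;
    }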
The Android NDK team should have a look at the emscripten SDK, which solves a much more exotic problem than combining C and Java. Emscripten has a compiler wrapper (emcc) which is called like the command line C compiler, but creates a complete HTML+WASM+JS "program". A lot of problems with the NDK and build system could be solved if it would provide a compiler wrapper like emcc which produces a complete APK (and not just a .so file) instead of relying on some obscure magic to do that (and all the command line tools which can do this outside gradle are "technically" deprecated).
...hrmpf, and now that I recalled all the problems with Android development I'm grumpy again, thanks ;)
MIPS support has been officially dropped but x86 is still very alive - mostly due to the emulator these days, though, but some Android TV hardware was using it for a while, too.
Native development sucking on android is mostly an android problem (and to some extent a Qualcomm problem since their smartphone SOCs don’t support anything else.)
He did address the so-called 100-ton blue whale at the end:
> End result: cross-development is mainly done for platforms that are so weak as to make it pointless to develop on them. Nobody does native development in the embedded space. But whenever the target is powerful enough to support native development, there's a huge pressure to do it that way, because the cross-development model is so relatively painful.
except that developing for ARM SBCs natively is normal these days. Even the lowly Raspberry Pi encourages you to plug in an HDMI monitor, a USB mouse and keyboard, and boot into Raspbian, where things like WiringPi further extend your cross-development reach (LOL), while you develop the code for your peripheral directly on the computer that's going to run it.
It's a bit of a Matryoshka doll in that I have both Propeller chip and FPGA HATs for my Pi and use both the Propeller IDE and the IceStorm toolchain to natively cross-develop for ACTUALLY embedded devices, since the ARM device is the main computer already and not the embedded device anymore, lol.
Linus is mostly wrong except for HPC. Very few dev pipelines for folks result in native executables.
The vast majority of code is delivered either as source (Python, Ruby, etc.) or as bytecode (JVM, Scala, etc.).
And the Xeon-class machines folks deploy to in data center environments are a world apart from their MacBooks.
These truths are true for Linus, but not for the majority of devs.
Even for those creating native binaries, this is done through CI/CD pipelines. I have worked in multi-arch environments: Windows NT 4 on MIPS/Alpha/x86, iOS, Linux on ARM. The issues are overblown.
--- I accidentally deleted this comment, so, I've re-written it. ---
Disclaimer: I'm a HPC system administrator in a relatively big academic supercomputer center. I also develop scientific applications to run on these clusters.
> Linus is mostly wrong except for HPC. Very few dev pipelines for folks result in native executables. The vast majority of code is delivered either as source (Python, Ruby, etc.) or as bytecode (JVM, Scala, etc.).
Scientific applications targeted for HPC environments contain the most hardcore CPU optimizations. They are compiled according to CPU architecture and the code inside is duplicated and optimized for different processor families in some cases. Python is run with PyPy with optimized C bindings, JVM is generally used in UI or some very old applications. Scala is generally used in industrial applications.
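As one hedged illustration of that per-CPU-family duplication: GCC (and recent Clang) can emit several clones of a hot kernel and pick the best one for the host at load time, which is the compiler-assisted version of what many HPC codes do by hand or through their build systems.

    #include <stddef.h>

    /* Function multiversioning: the compiler emits one clone per listed
       target (AVX2, SSE4.2, and a baseline fallback) plus a resolver that
       dispatches to the best clone for the CPU the binary lands on. */
    __attribute__((target_clones("avx2", "sse4.2", "default")))
    void saxpy(float a, const float *x, float *y, size_t n)
    {
        for (size_t i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }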
> And the Xeon-class machines folks deploy to in data center environments are a world apart from their MacBooks.
No, they aren't. Xeon servers generally have more memory bandwidth and more resiliency checks (ECC, platform checks, etc.). Considering the MacBook Pro has a same-generation CPU as your Xeon server at a relatively close frequency, per-core performance will be very similar. There won't be special instructions, frequency-enhancing gimmicks, or different instruction latencies. If you optimize well, you can get the same server performance from your laptop. Your server will scale better and will be much more resilient in the end, but the differences end there.
> Even those creating native binaries, this is done through ci/cd pipelines.
Cross compilation is a nice black box which can add behavioral differences to your code that you cannot test in-house, especially if you're doing leading/cutting-edge optimizations at the source code level.
Isn't turbo boost an issue when comparing/profiling? My experience with a video generation/encoding run of about 30 seconds was that my MacBook outperformed the server Xeons... if left to cool down for a few minutes between test runs. Otherwise a test run of 30 seconds would suddenly jump up to over a minute.
The Xeons, though, always took about 40 seconds... but were consistent in that runtime (and were able to do more of the same runs in parallel without losing performance).
> Isn't turbo boost an issue when comparing/profiling?
No. In HPC world, profiling is not always done over "timing". Instead, tools like perf are used to see CPU saturation, instruction hit/retire/miss ratios. Same for cache hits and misses. For more detailed analysis, tools like Intel Parallel Studio or its open source equivalents are used. Timings are also used, but for scaling and "feasibility" tests to test whether the runtime is acceptable for that kind of job.
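For a feel of what counting instead of timing means, here is a minimal Linux-only sketch sitting on the same perf_event interface that perf itself uses (error handling trimmed; the loop is just a stand-in workload):

    #define _GNU_SOURCE
    #include <linux/perf_event.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_INSTRUCTIONS;  /* retired instructions */
        attr.disabled = 1;
        attr.exclude_kernel = 1;

        /* perf_event_open has no libc wrapper, hence the raw syscall. */
        int fd = (int)syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

        volatile uint64_t sum = 0;                 /* stand-in workload */
        for (uint64_t i = 0; i < 1000000; ++i) sum += i;

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
        uint64_t count = 0;
        read(fd, &count, sizeof(count));
        printf("retired instructions: %llu\n", (unsigned long long)count);
        close(fd);
        return 0;
    }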
OTOH, In a healthy system room environment, server's cooling system and system room temperature should keep the server's temperature stable. This means your timings shouldn't deviate too much. If lots of cores are idle, you can expect a lot of turbo boost. For higher core utilization, you should expect no turbo boost, but no throttling. If timings start to deviate too much, Intel's powertop can help.
> my experience with video generation/encoding run of about 30 sec was that my macbook outperformed the server xeons...
If the CPUs are from the same family and their speeds are comparable, your servers may have turbo boost disabled.
> otherwise a testrun of 30 seconds would suddenly jump up to over a minute.
This seems like thermal throttling due to overheating.
> The Xeons, though, always took about 40 seconds... but were consistent in that runtime (and were able to do more of the same runs in parallel without losing performance)
Servers have many options for fine-tuning CPU frequency response and limits. The servers may have turbo boost disabled, or, if you saturate all the cores, turbo boost is also disabled due to the in-package thermal budget.
If you have any more questions, I'd do my best to answer.
I am not sure we are disagreeing on much, but the 4 core i7 in my dev MacBook is a whole lot different than the dual socket, 56 core machines we run on.
Optimizations that need to happen don't happen locally; they get tuned on a node in the cluster. Look at all the work Goto has done on GotoBLAS.
We agree on HPC; however, I also agree with Linus about non-HPC loads. Software and developers are always more expensive than hardware, but scaling beyond a certain point in hardware (number of servers, or the GPUs you need) drives the hardware and maintenance cost up, hence the difference becomes negligible, or the maintenance becomes unsustainable. This is why everyone is trying to run everything faster with the same power budget. In the end, after a certain point, everyone wants to run native code at the backend to reap the power of the hardware they have. This is why I think Linus is right about ARM. That's not to say I'm not supporting them, but they need to be able to run some desktops or "daily driver" computers which support development. Java's motto was write once, run everywhere, which was not enough to stop the migration to x86. Behavioral uniformity is peace of mind, and that's a very big piece of it TBH.
What I wanted to say is, unless the code you are writing consists of interdependent threads and the minimum thread count is higher than what your laptop can provide, you can do 99% of the optimization on your laptop. On the other hand, if the job is single-threaded or the threads are independent, the per-core performance you obtain on your laptop is very similar to the performance you get on the server.
For BLAS stuff I use Eigen, hence I don't have experience with xBLAS and libFLAME, sorry.
From a hardware perspective, a laptop and a server are not that different. Just some different controllers and resiliency features.
Even in a bytecode language, there is no guarantee that an application is write-once run-everywhere. I converted a small app that was running on Windows with Oracle JDK to run on Linux with OpenJDK and it was not plug-and-play. It was close, but there were a few errors particularly surrounding path resolution (and yes, the Windows application was already using Unix-style paths, this was actually a difference in how paths were resolved). Similarly, there are small differences between Tomcat and Jetty and so on.
This wasn't showstopping by any means, but it did take a couple of hours to tweak it until it ran properly, and this was just a small webapp not really doing anything exceptional.
Our main line-of-business app (on Java) runs on SPARC/Solaris in production, so we have on-premises test servers so we can test this... and yes, there have been quite a few instances where we identified significant performance anomalies between developer machines running x86/Windows and our Sparc/Solaris test environment, and had to go rewrite some troublesome functions.
Correct, we need to stabilize all these factors in order to ensure stable, bug free deployment. A Good Post.
Oh, you meant that just because there is one other thing that you might slip up and forget to control for, we shouldn't bother trying to control anything? No, wait, that's actually A Very Bad Opinion.
He calls that out though "even if you're only running perl scripts". It's not the cross-compilation that's a factor, it's wanting the environment to be as similar as possible.
Even if your code is Java bytecode, that's still running on a different build of the JVM, on a different build of the OS (possibly a different OS). There is opportunity for different errors to crop up. They might be rare, but they'll be surprising and costly when they happen exactly because of that.
The question then is how successful the JVM is. I think you're underestimating it. Torvalds' attitude is certainly justified regarding plenty of other types of software though -- just building a C++ project on a different distro can be a pain.
Someone else [0] points out that Java (in the right context at least) is so successful in isolating the developer from the underlying platform, that it isn't a problem if the developer isn't even permitted to know what OS/hardware their code will run on.
Could they accidentally write code that depends on some quirk of the underlying platform? I think it's not that likely. Nowhere near as likely as in C/C++, where portability is a considerable uphill battle that takes skill and attention on the part of the developer.
> They might be rare, but they'll be surprising and costly when they happen exactly because of that.
Ok, but you can say the same for routine software updates. It's a question of degree.
He just said it's not worth it, and as a developer, if I could choose to develop on the platform that will run my code, I would choose it even if it's slightly more expensive, granted that I could be the same level of productive on both.
We had those problems when developing, in a scripting language on Windows, code that would run on Linux, because at some point we needed something that called into native code and caused problems with different behavior. After some of that experience we tried to get everybody the same environment, close to what runs in production.
Mainframes have been the pioneers of using bytecode as a distribution format, either with the CPUs microcoded for the specific bytecode set (e.g. Xerox PARC / Burroughs), or with JIT/AOT compilation at deployment time like IBM i and IBM z (aka OS/400, OS/390).
So while Linus' opinion is to be respected, mainframes, and the increasing use of bytecode distribution formats (with compilation to native code at deployment time) on smartphones, smartwatches and GPS devices, show another trend.
Yeah, it's kind of weird he talks about how stuffing a beige box PC in the corner was the impetus for x86 servers. But the modern-day equivalent of that is either a cheap $5/month VPS, a Raspberry Pi, or an OpenWRT router, any of which could compile/run ARM code.
I think fundamentally, the error he's making is comparing the current market to the late 90s/early 2000s market. Back then a RISC Unix machine cost thousands of dollars. It was cost prohibitive to give one to each dev/admin. Nowadays a RISC Linux PC is $5.
Actually, you make the best case for ARM servers of anyone else in this thread.
The starving college kid in a Helsinki dorm working on his EE degree can't afford 600-1000 dollars for another Laptop/Desktop to experiment with. A 35 dollar ARM SBC and a monitor that doubles as his TV is right in his price range...
That doesn't invalidate his point. He's just saying that is basically what needs to happen for ARM servers to start taking off. The next step is for companies to start deploying ARM workstations. That part still seems to be a good way off, MS abandoning their Windows ARM port didn't help the cause.
> The starving college kid in a Helsinki dorm working on his EE degree can't afford 600-1000 dollars for another Laptop/Desktop to experiment with. A 35 dollar ARM SBC and a monitor that doubles as his TV is right in his price range...
35 dollars will buy you an oldish x86 beige box that will absolutely flat out murder a Raspberry Pi performance-wise. Cheap, fast hardware is not a problem anymore.
> but doesn't this whole post completely ignore the 100 ton blue whale in the room? Namely smartphones.
This is patently false. Mobile developers do test their apps on smartphones, even though Google and Apple offer VMs. You'd be hard pressed to find a mobile app software house that doesn't have a dozen or so smartphones available to their developers to test and deploy on the real thing.
Surely this would be the same for server software: If prod was running on ARM, then you'd probably have your CI server running ARM too. But that wouldn't stop you developing on x86 if that was what was convenient.
> Surely this would be the same for server software: If prod was running on ARM, then you'd probably have your CI server running ARM too.
CI/CD is already too far along in the pipeline to be useful here. CI is only a stage where you ensure that whatever you've developed passes the tests you already devised; by that point you have already tested and are convinced that nothing breaks.
The type of testing that Linus Torvalds referred to is much earlier in the pipeline. He is referring to the ability to spin up a debugger and check if/when something is not working as expected. Developers do not want to deal with bugs that are only triggered somewhere in a CI/CD pipeline and that they can't reproduce on their own machines.
No, I'm saying that mobile development is also a clear example that developers do want to develop for platforms that they actually can test, which was the point that Linus Torvalds made.
> That's an entire enormous segment of the industry and it's nearly 100% (or entirely 100%?) literally develop-on-x86-deploy-on-ARM.
I'm not sure I agree with this. My coding environment is on x86, and I build on x86, but my Run/Debug cycle is on ARM. No one is really encouraged to test on the simulator even though it's available; you are almost entirely expected to test on your actual ARM device, run it, and see the results of your work.
Linus is making the argument that people want their release builds to run in the same environment as their daily test builds, and I don't see smartphone development as an exception to that rule.
I don't see this happening. PCs are tools for getting real work done. Mobiles are mostly communication and entertainment devices.
I like to fall back on this Steve Jobs quote, employing a car/truck metaphor for computers:
When we were an agrarian nation, all cars were trucks, because that's what you needed on the farm. But as vehicles started to be used in the urban centers, cars got more popular … PCs are going to be like trucks. They're still going to be around, they're still going to have a lot of value, but they're going to be used by one out of X people.
I've always thought PCs will become business workstations. Meaning you use them for office work, but everything else will be "cars", as your quote put it. Meaning internet browsing, social media, viewing/editing photos, and the like will be done on some mobile device. Windows is already the de facto business workstation and I don't see it going away.
There’s already a whole generation or two who will likely have little to no experience with PCs.
Communication is also work, especially as you go up the management value chain. I think maybe people should refer to the thing that PCs do and mobiles don't as "typing".
It isn't just the keyboard which PCs hold as an advantage, it's the mouse as well. There are a lot of tasks that workers do on PCs with a mouse that can't be done reliably with a touchscreen.
Maybe an iPad Pro with its stylus could perform a lot of those mouse-driven tasks, but using the stylus for long periods of time is going to be exhausting and injury-prone. By using a mouse your arm can rest comfortably and allow you to work for long periods of time with minimal effort and no strain.
We've known about Fitts's Law since the dawn of the GUI and have decades of study on it. It's not any more efficient to need to "headshot" everything you need in an application 100% of the time, and in fact it is often rather the opposite: it gets in the way of actual efficiency.
Mousing through most "mobile" applications is great, whether "first class" or not.
Desktop and mobile OSes don't need to remain separate, and it's really past time that a lot of super-cramped "desktop apps" got the death they deserved for their decades old RSI problems, accessibility issues, and garbage UX.
It's friendly but it's not space efficient. For applications with a huge number of features, a touch UI can't handle them. Touch screens don't have right click, so you can't get context menus.
It's more than that, though. A touch screen UI for the iPhone makes zero sense on a 32" display. I'd much rather have a true multiwindow, multitasking operating system than that. Really, I wouldn't use a 32" iOS device at all. That's probably why Apple doesn't make them.
User studies from the dawn of the GUI continue to harp that user efficiency is inversely correlated to space efficiency. It doesn't matter if an application can show a million details to the individual pixel level if the user can't process a million details or even recognize individual pixels.
> Touch screens don't have right click, so you can't get context menus.
You don't need "right click" for context menus.
Touch applications have supported long-press for years as context menu. Not to mention that macOS has always been that way traditionally because Apple never liked two+ button mice.
Then there are touch applications that have explored more interesting variations of context menus, such as slide gestures and something of a return to relevance of pie menus (it's dumb that those never took dominance in the mouse world, and probably proof again that mice are too accurate for their own good when it comes to real efficiency over easy inefficiency).
> I'd much rather have a true multiwindow, multitasking operating system
Those have never been mutually exclusive from touch friendly. It's not touch friendliness that keeps touch/mobile OSes from being "true multiwindow/multitasking", it's other factors in play such as hardware limitations and the fact that tiling window managers and "one thing at a time" are better user experiences more often than not, and iOS if anything in particular wants to be an "easy user experience" more than an OS.
(I use touch all the time on Windows in true multiwindow/multitasking scenarios. It absolutely isn't mutually exclusive.)
Sure they can, but why bother? When I use Windows, I use real Windows applications with desktop UIs. The touch UI mobile apps are a joke on a desktop monitor.
Very few people are primarily messaging as their job. Even outside developers, designers and other creatives, the majority of people work on some mix of spreadsheets, presentations and traditional docs on a daily basis. I guess you can do a little bit of word processing on a phone but it gets ugly pretty fast.
I would expand “reading” to “consumption” because mobile devices are frequently used for audio and video in addition to reading (which is probably more “browsing” than long-form reading).
I'm genuinely very sorry for missing this comment (9 hours ago as I write this) because I think it's a really important and interesting next area of development. Since this article is still front page though, I hope I'm not too late to have some discussion here particularly since none of the other replies have taken the analysis approach I do.
If we're trying to predict the future, I think one effective approach to avoid being trapped in the present paradigm is to extrapolate from foundations of physics and biology that we can count on remaining constant over the considered period. Trying to really get down to the most fundamental question of end user computing, I think it's arguable that the core is "how do we do IO between the human brain and a CPU?" With improving technology, effectively everything else ultimately falls out of the solution to creating a two-way bridge between those two systems. The primary natural information channel to the human brain is our visual system, with audio as secondary and minimal use of touch, and the primary general purpose outputs we've found are our hands and sometimes feet, with voice now an ever more solid secondary and gestures/eye movements very niche. Short of transhumanism (direct bioelectric links, say) those inputs/outputs define the limits of our information and control channels to computers, and the most defining of all is the visual input.
Up until now, the screen has defined much of the rest, and a lot of computing can be thought of as "a screen, and then supporting stuff depending on the size of the screen." A really big screen is just not portable at all, so the "supporting stuff" can also be not portable, which means expansive space, power, and thermal limits, as well as having the screen itself able to be modularized (but even desktop AIOs can pack fairly heavy-duty hardware). Human input devices can also be modularized. Get into the largest portable screen size and now the supporting gear must be attached, though it can still have its own space separate from the screen. But already the screen is defining how big that space is and we're losing modularity. That's notebooks. Going more portable than that, we immediately move to "screen with stuff on the back, as thin and light as feasible" for all subsequent designs, be it tablets, smartphones, or watches. The screen directly dictates how much physical space is available and in turn how much power and how much room to dissipate heat. And that covers nearly the entire modern direct user computing market.
Wearable displays, capping out at direct retinal projection, represent a "screen" that can hit the limits of human visual acuity while also being mobile, omnipresent, and modularized. I'm actually kind of surprised more people don't seem to think this represents a pretty seismic change. If we literally have the exact same maximized (no further improvements possible) visual interface device everywhere, and the supporting compute/memory/storage/networking hardware need not be integrated, how will that not result in dramatic changes? It's hard to see how "Mobile" and "PC" won't blur in that case. Yeah, entering your local LAN or sitting at your desk may seamlessly result in new access and additional power becoming available, as a standalone box (or boxes) with hundreds of watts/kilowatts becomes directly available vs the TDP that can be handled by your belt or watch or whatever form mobile support hardware takes when it no longer is constrained to "back of slab", but the interfaces don't need to necessarily change. Interfaces seem like they'll depend more on human output options than input, but that seems likely to see major changes with WDs too, because it will also no longer be stuck in an integrated form factor.
WDs definitely look like they're getting into the initial steeper part of the S-curve at last. Retinal projection has been demoed, as well as improvements in other wearables. We're not talking next year I don't think or even necessarily the year after, but it certainly feels like we're getting into territory where it wouldn't be a total shock either. And initial efforts like always will no doubt be expensive and have compromises, but refinement will be driven pretty hard like always too. I don't think the disruptive potential can possibly be ignored, nobody should have forgotten what happened the last few such inflection points.
>I don't see this happening. PCs are tools for getting real work done. Mobiles are mostly communication and entertainment devices.
This line of reasoning though is fantastically unconvincing. Heck even ignoring the real work mobiles are absolutely being used for, and given the context of this article, I pretty much heard what you said repeated word for word in the 90s except that it was "SGI and Sun systems are tools for getting real work done, PCs are mostly communication and entertainment devices".
Maybe the interesting part of the smartphone ARM story is the degree to which Apple has used custom silicon to optimize speed and power for their own specific workloads and software.
Why couldn't ARM-based servers do the same thing? I understand why a generic ARM-based CPU might not win against a generic x86 CPU at running cross-compiled code in Linux. But what if the server has a custom ARM-based chip that is a component of a toolchain that is optimized for that code, all the way down to the processor?
Imagine a cloud service where instead of selecting a Linux distro for your application servers, you select cloud server images based on what type of code you're running--which, behind the scenes, are handing off (all or part of) the workload to optimized silicon.
I don't have the technical chops to detail how this would work. But I think my understanding of Apple's chip success is correct: that they customize their silicon for the specific hardware and software they plan to sell. They can do that because they own the entire stack.
I think if any company is going to do that in the server space, it would have to be the big cloud owners. No one else would have the scale to afford the investment and realize the gains, plus control of the full stack from hardware to software to networking. And sure enough, that is who is embarking on custom chip projects:
So, maybe the result won't be simply "ARM beats x86," but rather "a forest of custom-purpose silicon designs collectively beat x86, and ARM helped grow the forest."
Not disagreeing, but answering the question of whether all phones are ARM: no, there's Intel too. Source: I had to add Intel build support for our Android SKUs to run on said phones. Some Unity stats from about 6 months ago indicated:
ARMv7: 98.1%
Intel x86: 1.7%
I think a lot of the Intel stuff has been discontinued, not sure what is actively being developed outside of ARM right now.
This is strange. Maybe like a Windows phone? Where does it get those device metrics? < 2% makes me think it might just be the Android-x86 emulator project.
I have an Asus ZenFone 2, complete with its "Intel Inside" logo. Not particularly special, but when I bought it, it met the criteria of "Gorilla Glass + Pokemon Go for below $150". Nice case too.
(and there were a lot of small brands that used to make them, that I don't believe get represented in that list)
Having said that, for it to reach >1%, it was more likely a combination of Intel Android tablets (which were fairly common for a while) and Chromebooks.
There may be people somewhere doing Android/ChromeOS/Fuchsia development on ARM Chromebooks, following the Google model of using a mostly cloud-based toolchain together with a local IDE. There’s none of this happening inside Google itself, though, yet—but that’s just because Google issues devs Pixelbooks, and they’re x86 (for now.)
But, since Pixelbooks (and ChromeOS devices in general) just run web and Android software (plus a few system-level virtualization programs like Crouton) there’s nothing stopping them from spontaneously switching any given Chromebook to ARM in a model revision. So, as soon as there’s an ARM chip worth putting in a laptop, expect the Pixelbook to have it, and therefore expect instant adoption of “native development on ARM” by a decent chunk of Googlers. It could happen Real Soon Now (hint hint.)
Actually, Linus does not ignore the smartphone space at all. In fact he refers to it by pointing out that people are likely to ONLY use cross compiling if the deployment is to an embedded device (which a smartphone is), because native development on the embedded device may not be possible.
> End result: cross-development is mainly done for platforms that are so weak as to make it pointless to develop on them. Nobody does native development in the embedded space. But whenever the target is powerful enough to support native development, there's a huge pressure to do it that way, because the cross-development model is so relatively painful.
Good point; from Linus' opinion it can still happen after ARM is king of the client market. Well, they're well on their way to doing that, with every client except the desktop being ARM, while Intel is having trouble with 7 nm.
He's also apparently assuming that ARM-based Chromebooks will never be a useful developer environment. I wouldn't take that bet -- a lot of the newer ones will support Linux VMs out of the box well enough to support at least a half-decent development environment (via Crostini). (You can get a Pixelbook with 8GB RAM and a 512GB SSD, if you're wondering about storage space. And while Crostini still has issues with in-VM driver support for things like the Chromebook's own audio and camera, that's stuff that server software wouldn't use much anyway.)
Between that and the much-rumored ARM Macs, this could turn pretty quickly...
I don't think he's assuming any such thing. His argument is simply that ARM can't win until it has a reasonable dev box. He makes no speculation about if/when such a box is coming.
> but with smartphones you don't have a choice. so it's different.
Exactly. Linus' point is that Arm has no real advantage in the server space to compensate for the problems with cross-development. That's completely different for smartphones, which is why Arm won that space.
The argument isn't "same instruction set". The argument is "same development & deployment environment", by the logic of which the Apple argument fails because not many people deploy to Apple servers.
So you run a Linux VM, just as lots of Mac-using developers do today. But the instruction set of the VM has to match the instruction set of the host, unless you’re in the mood for slow emulation.
> So you run a Linux VM, just as lots of Mac-using developers do today
I hear far more make do with just homebrew.
> unless you’re in the mood for slow emulation
I run an embedded OS (made for a quad-core ARM Cortex-A53 board) on both Real Hardware and on my ThinkPad (via systemd-nspawn & qemu-arm). I found (and confirmed via benchmarks) the latter to be much faster than the former — across all three of compute, memory, and disk access.
Possibly, it does seem that way for web dev at least. There's plenty of programmers out there (the majority?) not doing web dev and never touching Macs however. In a 20 year game development career I've never had cause to use a Mac for work purposes. Perhaps the share of developers using Macs as their primary development machines exceeds their 10% market share of laptops but I doubt it's a majority.
I'm not sure if this is what you were implying, but I don't know of any x86 processors that can compete with the Arm processors that are in use on power consumption to performance ratio. Take e.g. Apple's A12, which competes with their MacBooks in performance and assuredly draws much less power.
You haven't been paying attention. In order to go faster ARM started using more power. A lot more power.
Turns out power usage was never an ARM vs. x86 thing, it was purely a "how fast do you want to go" thing. ARM started at the "very slow" end of the spectrum which made it a good fit for mobile initially since x86 didn't have anything on the "very slow" end of things. By being very slow it was very low power. But then the push to make ARM fast happened, and now ARM is every bit as power hungry as x86 at comparable performance levels.
The power cost is for performance. The actual instruction set is a rounding error.
> I don’t know of any x86 processors that can compete with the Arm processors that are in use, on power consumption to performance ratio
Not anymore, but there was a time when x86 was (barely) able to compete in that area and there were some x86-based smartphones and tablets. But it was too little too late: x86 already was a niche. Developers absolutely had to support ARM, but x86 was optional, so many apps were not available for x86, and that was pretty much it for those devices.
You don't have much choice now, sure, but it's not as if there weren't any efforts at x86 smartphones (like the ZenFone). Nor is it as if there wasn't a long run-up of phones leading to the modern smartphone either. And even so, how is this not directly relevant to the case of x86?
I mean, we're directly doing a comparison to the RISC/MIPS/etc era yeah? Couldn't back then someone say "well but with PC you don't have a choice, so it's different"? x86 got heavy traction on the back of WinTel, then moved up to bigger iron, which didn't really fight hard in the lower end lower margin space. Does there really seem to be no deja vu with that vs ARM gaining heavy traction in iOS/Android/embedded then moving up to PCs and servers, where Intel/AMD didn't really play in the lower end lower margin space? There was a period with plenty of choice in servers, but then x86 won.
And again, it's not as if someone can't come up with compelling arguments; x86 has some real moats even beyond pure performance. There is enormously more legacy software for x86, for example, and the ISA will be under legal protection for a long time to come, which complicates running it on ARM. But it's hard to say how much that matters in much of the cloud space, particularly if we're imagining 5-10 years further down the line. The x86 takeover didn't happen overnight either, and the first efforts were certainly haphazard. But momentum and sheer volume matter. It just seems like something that needs to be addressed at any rate, more deeply than you have and certainly more than Linus did.
Windows does make a huge attempt to make stuff backwards compatible, however. I run a copy of Cardfile copied from Windows NT4 on Windows 10 just fine.
I don't think that in any way contradicts his position
>And the only way that changes is if you end up saying "look, you can deploy more cheaply on an ARM box, and here's the development box you can do your work on".
Sure, as soon as these merge and you have a development platform as productive as a desktop computer that allows you to natively build for ARM, then absolutely, it could displace x86. And maybe when (if) the two platforms really merge that could be a real possibility.
Intel also had the KNC instruction set (AVX-512-like) on Xeon Phi (these CPUs were available on PCIe cards). They abandoned it in favour of good old x86. One of the important factors was the difficulty related to tooling, especially the necessity of cross-compilation.
He's shuffling smartphones in under embedded I believe.
His thesis is that if you want a platform to take off, start shipping developer boxes of the platform. So mobile and pc will merge when and only when you can do all your development on a mobile platform.
>"This isn't rocket science. This isn't some made up story. This is literally what happened"
right? I mean, I can see arguing that going up into the cloud is different in some ways then going down to smartphones (although the high end ones are now going to outperform plenty of old dev machines in burst power). There are certainly differences in scaling and such. But the maturity of the tech for cross development of high level software isn't the same as it was in that era either. And if we're talking about bottom-to-top revolutions, embedded and smartphones seem to be at a lower level and much higher volume then PCs.
Finally there is clearly an upcoming disruptive fusion event coming due to wearable displays. When "mobile" and "PC" gets merged, it certainly looks like ARM is in a strongly competitive position for some big players, and having more powerful stuff up the stack will matter to them as well.
None of which is to say he won't be right at least in the short term, but it still is kind of odd to not even see it addressed at all, not even a handwave.