> Some people think that "the cloud" means that the instruction set doesn't matter. Develop at home, deploy in the cloud. That's bullshit. If you develop on x86, then you're going to want to deploy on x86, because you'll be able to run what you test "at home" (and by "at home" I don't mean literally in your home, but in your work environment).
But developers can ssh to the cloud, and develop there, right?
And he's wrong. In the enterprise, the vast majority of devs are on Windows (Java devs, for example). That doesn't mean their deployment target is Windows; most of the time it's Linux. And the development differences, the chance of bugs, and the performance profiles vary far more between OSes than they do between CPU architectures.
I can install a reasonably performing x86 Linux VM on an x86 Windows host. If the same can be said for an ARM VM on an x86 host, please share how; I'd love to learn that trick.
> Most of those enterprise devs don't even get to touch the test/staging/prod environment, yet they still do their jobs.
They can usually run at least part of the stack locally; maybe not the full thing, but some of it.
> Regarding the VM, today I guess it would be a bit ugly through Qemu but tooling is always solved if there's a need.
Tooling doesn't magically fix perf. Have you actually tried that specific setup? Searching online I get results like https://raspberrypi.stackexchange.com/a/12144, which aren't impressive and which I don't count as "reasonably performing".
Those results also jibe with my experience: even apps with the meager demands of mobile are outrageously slow inside an ARM emulator running on x86, every time I've tried it, although I'll admit I haven't tried it with Qemu specifically. I can only imagine the horror of trying to run a full ARM server stack on x86 via an emulator; perf is bad enough when it's native. For mobile, the first thing I do is fix the x86 builds (so I can use an x86 device emulator), buy a real ARM device out of my own pocket, and/or educate (or fire) my employer for their outrageously wasteful use of my time if they're serious about me and my coworkers using mobile ARM emulators on x86 for any serious length of time. That kind of waste of an expensive resource like developer time can't possibly be a sustainable business decision.
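If you want to eyeball that overhead yourself, here's a minimal sketch: a trivial CPU-bound loop, compiled once natively and once for ARM64 with a cross compiler, then run under QEMU's user-mode emulation. The toolchain and QEMU names in the comments (aarch64-linux-gnu-gcc, qemu-aarch64, the -static trick) are the usual Debian/Ubuntu ones and are assumptions about your setup; the point is only to compare wall-clock times on your own machine, not to claim numbers.

    /* busyloop.c - trivial CPU-bound loop for eyeballing emulation overhead.
     * Native build:        gcc -O2 busyloop.c -o busyloop-native
     * ARM64 cross build:   aarch64-linux-gnu-gcc -O2 -static busyloop.c -o busyloop-arm64
     * Run the ARM64 build on an x86 host via user-mode emulation:
     *   qemu-aarch64 ./busyloop-arm64
     * (package and binary names vary by distro)
     */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        volatile unsigned long long acc = 0;   /* volatile so the loop isn't optimized away */
        clock_t start = clock();
        for (unsigned long long i = 0; i < 500000000ULL; i++) {
            acc += i ^ (i >> 3);
        }
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("acc=%llu, elapsed=%.2fs\n", acc, secs);
        return 0;
    }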
You're comparing a bytecode virtual machine to a processor architecture. He's talking about why a particular hardware platform becomes dominant. You could say the same thing about any interpreted language, not just Java. How about Python or JavaScript? Java didn't displace x86.
> That doesn't mean that their deployment target is Windows, most of the time it's Linux.
I don't feel like that's true. I don't have data to say otherwise, but my impression of the majority of non-tech businesses I encounter is that they are very much still Windows shops, and still on-prem.
I've worked for a few, consulted for others, and have friends working for yet others. It's a mix. Java shops tend to go Windows for dev and Linux for deployment; .NET is generally Windows for both, for now.
He isn't wrong, since most Linux servers run on the x86 architecture and obviously most Windows desktops do too.
To your last point, if that is true, then what will the differences be when both the OS and the CPU architecture are different? I suspect it will create even more headaches.
I don't see ARM winning the server space anytime soon, or ever, considering how established and dominant x86 is.
> But developers can ssh to the cloud, and develop there, right?
Then why do they insist on 32GB i9 laptops running macOS or Linux? One must assume that there is some practical purpose, unless one is rather cynical and believes that maybe they just like shinies.
... develop remotely? I can, if I wish. I have an extremely high-bandwidth link to the servers. But there's still latency, and that makes all the difference. So in practice I always develop locally. I use Git, VNC, and sshfs-mounted filesystems to access the remote machines, but my tools and editors are always local. I can build remotely, but not work remotely. And I still like to build locally all the same, during actual development.
I find it hilarious that this is the developer attitude for their own work, but then they turn around and expect users to just suck it up and accept latency because cloud!
There's a difference between using a text editor remotely and doing just about anything else remotely. I can log in and build remotely, execute remotely, use remote git servers, even run GUIs remotely (testing my application over VNC). But using a text editor remotely, with a few hundred milliseconds of RTT... now, that is very different.
There is nothing stopping you from developing on x86 and SSHing to a remote ARM computer for final validation. Cross compilation has been a thing for decades.
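As a sketch of that workflow (the host name and toolchain below are placeholders, not anything from this thread): cross-compile locally, scp the binary over, and run it on the ARM box via ssh. The __aarch64__ / __x86_64__ macros are the ones GCC and Clang predefine, which makes a handy sanity check that you really built for the target.

    /* whicharch.c - sanity-check which architecture a binary was built for.
     * Cross build for the remote ARM box (toolchain name is an assumption):
     *   aarch64-linux-gnu-gcc -O2 whicharch.c -o whicharch
     * Ship it and run it ("devbox-arm" is a placeholder host):
     *   scp whicharch devbox-arm:/tmp/ && ssh devbox-arm /tmp/whicharch
     */
    #include <stdio.h>

    int main(void) {
    #if defined(__aarch64__)
        puts("built for ARM64 (aarch64)");
    #elif defined(__x86_64__)
        puts("built for x86-64");
    #else
        puts("built for some other architecture");
    #endif
        return 0;
    }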
At the Linux level it might be hard, but thousands of iOS developers do it all the time. At the application level, if you're just writing C, my experience with cross-platform C is that if it works on one platform and breaks on the other, there's probably a bug in your software that only surfaces on one platform, usually because you're depending on undefined behavior or relying on implementation-defined behavior.
But my “experience” with native cross-platform programming is two decades old: x86 and whatever the old DEC VAX and Stratus VOS platforms were running.
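To give a concrete example of the implementation-defined behavior mentioned above (a minimal sketch, not anything from the thread): plain char is signed on x86-64 Linux but unsigned on ARM/AArch64 Linux under the usual GCC/Clang ABIs, so the same source can take a different branch on the two targets.

    /* char_sign.c - plain char's signedness is implementation-defined and
     * really does differ between common targets:
     * signed on x86-64 Linux, unsigned on ARM/AArch64 Linux. */
    #include <stdio.h>

    int main(void) {
        char c = 0xFF;  /* -1 where plain char is signed, 255 where it is unsigned */
        if (c < 0)
            puts("plain char is signed here (typical x86-64)");
        else
            puts("plain char is unsigned here (typical ARM/AArch64)");
        printf("c promoted to int: %d\n", c);
        return 0;
    }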
Not without an internet connection, not in an isolated testing environment, not without jumping through cloud-specific hoops to set up accounts, not without installing your initial ssh keys, not without getting IT sign-off for access and/or to spin up more cloud resources with the related charges, not without extra IDE configuration (if even possible, depending on the cloud) or resorting to raw gdb, ...
When I want to test C++ stuff on ARM, I connect an Android device, because that's somehow less painful.