Please don't use titles like this. It's against the HN guidelines: "Please use the original title, unless it is misleading or linkbait; don't editorialize." (https://news.ycombinator.com/newsguidelines.html)
If you want to say what you think is important about an article, that's fine, but it's not ok to use the title box for that. Here are three better options:
(1) make a text post, describing what you observed (in this case, the Intel ad), providing appropriate links, and giving your point of view;
(2) submit the URL with the original title and add a comment to the thread explaining why you posted it. Then your view will be on a level playing field with everyone else's: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
(3) put up a blog post somewhere and submit that to HN instead (this is basically the same as (1) but on an external site).
This ad reminds me of the way it felt to work at Sun Microsystems around 2002, when all the engineers knew that cheap GNU/Linux boxes were the future. The marketing people had similar FUD-based ads about how the TCO of Linux was higher, how certain HPC jobs wouldn't run well, etc. Super similar to this Intel ad today.
I remember all that! Wow, it's been a while. Yes there was a spate of things about -
"Linux might be free but look at our sponsored study which says that actually our hardware gives you more uptime and our devtools are better and our patch system does this and our FS is faster and we can cluster this amount of cores and ..."
But in the end the systems were still 5x or more the cost of an x86 server, and everyone was just having a go with Linux at work and at home anyway, because they already had x86 hardware lying around.
(IIRC Microsoft tried a similar TCO gambit a few times as well)
Well, even when Zones existed in open-source Solaris, almost nobody used them. That said, they were only available in a few places, and OpenSolaris never fully took off.
I think the reason is that people thought of them as an operational tool: basically a cheap VM. What led to the crazy Docker adoption, however, was its use as a dev tool, making it really easy to run simple stuff in development.
Only then did people start thinking seriously about deploying applications at mass scale with Docker. And then they realized that containers on Linux were just patchwork and the security story was a mess.
Zones were developed to serve as VM replacements, so for them security and resource management were very important.
For years and years, Intel's approach has seemed to be to just add an extra 100MHz of base clock and call it a new model. You appear to be getting largely the same chip on the same process, for more or less the same performance, compared to a chip from 18-24 months ago.
Meanwhile, AMD and Apple have come along with something new and are essentially eating Intel's lunch.
And do it within the same power envelope as AMD! I can't run a 12600K without upgrading my PSU, but a 5600X sips power and runs just fine. New Intel chips are exciting but not practical when it comes to power, cost, and availability, not to mention the lack of motherboards in the SFF space.
Drop x86 as the instruction set. Make a fixed-width instruction set that is closer to their current micro-ops instruction set. Then provide a compatibility layer to still run x86 based binaries.
This is an interesting challenge. The problem is, even if they manage to move to a new ISA, can they maintain their margins? I'd argue probably not; i.e. the problem is losing the ISA monopoly rather than a performance/technology disadvantage.
Itanium is an interesting historical example. I think Intel learnt the wrong lesson though: ‘cling to x86 at all costs’ when it was really that Itanium was just the wrong product. If they’d pushed a high quality Intel only 64 bit RISC design instead of Itanium at the time they might have succeeded.
Edit: Just to expand on this a bit. At the time of Itanium they had a clear process lead. They could have offered the best server CPUs on an Intel only RISC ISA and invested heavily in helping customers migrate. Arm got customers from aarch32 to aarch64 seamlessly.
x86 makes decoders way more complex than a fixed-width instruction set does. In turn this leads to hacks like hyperthreading to let the decoders run in parallel and keep the whole core utilized. Notice how it is absent from the M1, since Apple was able to throw more decoders at a single instruction stream and make single-threaded performance great.
Hyperthreading isn't just about decoders, it's about occupying all the units as much as possible. Some Arm (RIP ThunderX2), POWER and SPARC server chips have 4-wide and even 8-wide SMT.
Multiprocess python code massively benefits from hyperthreading and it's not bottlenecked by instruction decoding at all. It's bottlenecked by chasing pointers.
Does Intel have engineers on board who could innovate? I get the impression Intel has become such a top-heavy, marketing-oriented company that all the engineers capable of designing a brand new ISA left a long time ago, and the competent engineers that are left just work on revving the x86.
To be fair, this is easy to say in hindsight; but it also does not account for companies that successfully embrace new technologies/paradigm shifts/etc, and for all the fads that incumbent companies rightfully dismissed.
Edit: I felt they'd found one organisational solution to this after the NetBurst bust, with their R&D branch in Israel (?) working on an alternative (the Pentium M and what would become the 'Core' architecture) without too much disturbance, avoiding the pitfalls of working on competing products.
I was and still remain pretty confused. From an outsider's perspective, my thought was that Intel had incredibly massive profit margins pre-Ryzen. So my thought was: all Intel has to do is slash margins and it can continue to sell chips. Is it not that simple? I mean, there has to be some price point at which whatever Intel wants to sell is what customers want to buy, right? In the mid to long term, Intel continues to innovate and eventually gets to the smaller nodes: 7nm, 5nm, ...
Is the margin that Intel has not as big as I had imagined? Is the idea of reduced margins so terrible that the money behind Intel (the stock market?) would rather let Intel die than accept a significantly lower rate of return?
I know I'm missing something here and I don't know what it is...
Part of the missing picture in the datacenter market is TCO (total cost of ownership). The margin Intel used to have was driven by what it would cost you to recreate the performance they could provide by other means. If Intel had a 20% performance lead at the same power, you'd have to deploy 25% more of the closest competitor to match it. That's 25% more racks, space, electricity, heating, etc.
In the competing system, the CPU might reasonably represent 1/5 to 1/3 of total cost. How much higher could Intel price their chip while still being competitive from the TCO perspective? The answer is nearly 2x on a per-unit basis using those numbers.
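A rough sketch of that break-even arithmetic (the 20% lead / 25% extra servers and the 1/5-1/3 CPU cost share are just the illustrative assumptions from above, not real pricing data):

```go
package main

import "fmt"

func main() {
	// Assumption from above: a 20% performance lead means the competitor
	// needs ~25% more servers for the same throughput.
	const extraServers = 1.25

	// Assumption from above: the CPU is 1/5 to 1/3 of a server's total cost.
	for _, cpuShare := range []float64{1.0 / 5.0, 1.0 / 3.0} {
		other := 1.0 - cpuShare // racks, space, power, etc. per server
		// Break-even price: intelCPU + other == extraServers * (cpuShare + other)
		intelCPU := extraServers*(cpuShare+other) - other
		fmt.Printf("CPU at %.0f%% of server cost: Intel can charge %.2fx the competitor's CPU price\n",
			cpuShare*100, intelCPU/cpuShare)
	}
}
// Prints roughly 2.25x and 1.75x, i.e. "nearly 2x".
```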
That calculation was relatively easy when all of the chips had very similar number of cores, and the advantage was squarely in single-core performance. It's slightly harder now, in a world where there is considerable divergence in things like number of cores and energy efficiency in addition to single core performance. But the implications of those equations have obviously swung strongly against Intel for large parts of the market. There are AMD/arm chips that are probably 2x or more the current multi-core performance of the Intel chips that were at similar prices last year. Intel has advantages around lock-in and brand, but the economic incentive of transition now can be quite staggering, and people are waking up to it.
Whether Intel will be able to make chips that would be competitive in that market, even with no margin, is yet to be seen.
This is what I thought a few months ago as well, and I bought INTC (Intel stock) on the dip ... and it kept dipping.
This is my guess now: yes, Intel is a massively profitable company. But I think what the market is factoring in is that it missed the technology shift to smaller chips (5nm?) which its competitors (Apple, TSMC, etc) are able to do. Further, Apple & ARM are eating into the mobile space, and AMD and NVidia are biting away at Intel's data center share.
I think without a major tech break-through, Intel looks like it might miss this next semiconductor shift, and that's why the market is selling because it is starting to look like an also-ran at this point. Think IBM.
I also bought Intel on the dip - sold some AAPL to do it - and am going along for the ride. But part of my reasoning has less to do with the state of Intel's tech, or even their competitive chances in the near term, than with the fact that they're an undervalued strategic asset for America in the context of an increasingly warlike China threatening Taiwan.
People don't appreciate how massive Intel's operations are.
I'd guess that they have teams making both ARM and RISC-V designs just to hedge their bets and offer something to those markets. People seem to forget that Intel used to have an entire ARM division with a license to do custom designs.
Intel is a value stock at this point. I have been loading up on shares, but it looks like the big money isn't interested in companies that make a lot, and more interested in "future potential for growth." I think some big hedge funds must be betting on intel failing.
Yeah, I don't see a breakout for the stock anytime soon. I, too, bought a large (for me) amount of shares just on existing company fundamentals. I should have bought some of their debt instead, because they continue to print money; it's just that they're still behind in many areas, so I don't see the stock multiple changing anytime soon.
Nvidia is enabling the datacenter market to use data parallelism via hw/sw co-design investments - chips+libs for AI, ETL, SQL, network processing, etc. Those were historically too hard for most devs, and the hw was too edge-case-y (ex: Wall St was trying ~15 years ago), so they were considered largely CPU-only. With the new toolchains it's pretty easy, so it's already the default in AI, and that's been extending deeper into data pipelines.
That is the difference between a "manager" and a leader / entrepreneur. Most CEOs are at best half-decent managers.
It is funny: the person with the idea of reducing margins and investing to innovate and compete would never have risen to the top of the organisation, for all sorts of political and shareholder-value reasons. I think this is the number one reason why startups mature into enterprises and then fade over a long period of time, without the vision of founders who make these bold strategic decisions.
At one point they'll have to face the "choose the lesser evil" situation so IMO doing it sooner rather than later can help them. But hey, we'll wait and see.
They aren't trying to convince you that the competitor is worse. They're trying to convince you that switching isn't worth the hassle.
It feels weird to watch Intel competing again, but we can't really expect them to sit on their hands. In a weird way, they have become an underdog while still dominating their core markets.
They aren't dominating technically though, in engineering-driven markets. They are "dominating" because of likely anticompetitive agreements, inertia, and the like.
Their ad (already a sort of plea/admission of weakness) basically relies on that inertia and structure. I guess it is good they aren't completely dismissive of AMD and Apple+ARM.
The barbarians are at the gate. IBM was once invincible too.
I should also note that ARM is particularly dangerous in EC2/AWS. Setting up an ARM instance to try out is really easy compared to an x86 instance. All of a sudden, you realize an ARM platform on Linux takes little additional effort from a package standpoint. Then some software makers realize that setting up ARM test platforms is near-effortless in CI builds.
- we switched our nodejs workload to graviton (c6g from c5), and it resulted in needing more instances, which ended up costing nearly exactly the same price
- ARM is annoying in practice for some stuff (we build our docker images on our CI which is on premise and not ARM)
Those are HTTP APIs which are mostly reading from MySQL and S3.
They are running on Node.js 12 (soon 16 ^^). They use quite a lot of CPU, probably mostly due to having to parse large amounts of JSON, plus a bunch of cryptographic operations (mostly HMACs and AES - we only use Node.js's built-in crypto primitives) - but I don't know for sure, to be honest - hence the choice of C-type instances.
This workload looks really well suited for x86 cores. The x86 architecture has big advantages in both crypto and fast string parsing. It's interesting that graviton got cost parity given what you said.
My experience with today's cloud ARM offerings (Graviton/Ampere) is that Node.js overperforms on ARM relative to benchmark scores. Crypto has a definite handicap on ARM. I've found that AMD underperforms on Node.js. The ARM offerings seem slanted toward integer-style workloads.
Other packages and databases have brought me pain to use on arm. And anywhere I encounter pain I just don't migrate.
I'd say it's a good idea to try out on your specific workload and figure out what works best for your use case. The good news is that it's probably very very easy to switch and try all the CPUs offered by your cloud provider.
The only "big advantage" I can think of is that libraries like simdjson are usually still written for AVX first, NEON later. Otherwise it's not so clear cut, especially if you're not using any libraries like that.
Also, the datapath is a lot wider: NEON only does 16 bytes per vector, while Intel has fixed most of its frequency issues and can go up to 32 bytes per vector without throttling. AVX is also a lot more expressive, and can do more per instruction.
Haha at home I'm an AMD fanboy - personal laptop is a Ryzen 4750u - and I happen to collect ARM single board computers, so I think I'm pretty open minded when it comes to CPU choice (and btw in those c6a instances I'd love to try, the "a" means AMD ^^)
It was on one of the slides at re:Invent and I remember it was 4x%. But strange, I can't find it either (I thought it would be easy). The percentage was of new installations. Will dig this up once I have time to fact-check.
They are scared of aggressive Graviton2 marketing by AWS. I was present at a meeting with AWS during which it was pushed rather strongly to us with "it's so much faster and so much cheaper".
After the meeting it took me some effort to beat sense into some people, i.e. to get them to run some testing on our workloads instead of shifting stuff ASAP to Graviton instances.
If those ads make people pause and run actual tests, it's a win for them
The pricing for all of the instances types is controlled directly by Amazon. I can see this developing into a competition law issue with possible anti-trust action down the line given the size of the cloud providers.
Of course they're scared. Even if Intel is able to close performance gaps, cloud computing is often an easy sell for ARM and Amazon is likely going to be able to get the chips cheaper given that they're looking for markup on the service end while Intel needs markup on the chip.
If I'm writing new software using Java, C#, Python, JavaScript, etc., it's easy to deploy to ARM. With Apple moving to ARM and 25% of developers using Macs (41-45% using Windows)*, there's going to be good ARM support for what developers need. Sure, some things might run better on Intel and some things might not easily port, but that feels like it isn't the case for the majority of stuff that developers are going to be doing.
It seems likely that Microsoft and Google will introduce ARM-based instances. We know Google has dipped its toe in with its Tensor processor for the Pixel. We've seen reports that Microsoft has been working on ARM processors for servers. Qualcomm is gearing up for a major push into better ARM processors to compete with Apple on the desktop/laptop, but presumably they'd be happy to get cloud workloads as well. Oracle, while not a major cloud provider, has ARM-based instances already.
Intel has had a virtual monopoly for 20 years. Now they're facing a resurgent AMD stealing x86 marketshare and ARM processors that are quickly becoming a better price/performance option - and Apple is ensuring that everything developers need will be ARM compatible. I don't think it's an existential threat, but there's a huge difference between the margins you get as a virtual monopoly and the margins you get when you have to compete against strong competitors.
Plus, I'd guess that AWS is probably making sweet margins off those Graviton instances. Let's say that their margin on Intel instances is X and that margin on Graviton is 2X. That leaves AWS a lot of room to start pushing Intel out of their datacenters as old hardware is replaced. Let's say you're a company spending $100M/year with AWS and you're renewing your contract. Maybe Amazon suggests that they could give you a 20% discount off Intel boxes and a 40% discount off Graviton boxes - which they say are already 20% better price/performance so it's basically 52% off!
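Spelling out that hypothetical discount math (all the percentages here are the made-up numbers from the paragraph above):

```go
package main

import "fmt"

func main() {
	// Hypothetical figures from the comment above.
	const (
		intelDiscount    = 0.20 // 20% off Intel instances
		gravitonDiscount = 0.40 // 40% off Graviton instances
		pricePerfEdge    = 0.20 // Graviton assumed 20% cheaper for the same work
	)

	// Effective spend for the same amount of work, relative to Intel list price.
	intelEffective := 1 - intelDiscount                               // 0.80
	gravitonEffective := (1 - gravitonDiscount) * (1 - pricePerfEdge) // 0.48

	fmt.Printf("Intel:    %.0f%% of list\n", intelEffective*100)
	fmt.Printf("Graviton: %.0f%% of list, i.e. \"basically %.0f%% off\"\n",
		gravitonEffective*100, (1-gravitonEffective)*100)
}
```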
It also gives AWS negotiating power with Intel. If 20% of AWS customers move to Graviton, you can go to Intel and say, "we're finding Graviton processors really cheap to make and operate and customers are loving them...more and more they're realizing they don't need your Intel processors...if we don't get cheaper Intel processors, we might have to accelerate our incentives to get people onto Graviton boxes...the more we accelerate that, the more Google Cloud and Azure will likely want to prioritize ARM boxes..."
They're scared because it's the type of thing that snowballs until their lock-in is near meaningless. If ARM becomes 50% of the server market, there goes Intel's ability to demand high margins. 5-10 years from now, most deployments could use Intel or ARM based on price/performance rather than instruction set - and that makes life a lot harder for Intel.
The problem with this ad/page is that the ship has already sailed. Once Apple introduced its M1 MacBooks, we were getting first-class ARM support for everything developers would need going forward. It didn't happen overnight, but with Apple going all-in on ARM chips and with Macs being such a big player in developer usage, the war was lost that day. Now it's just a matter of time. Once all your devs are running and testing their stuff on an M1 MacBook, why not deploy it to an ARM instance?
I think the raspberry pi started moving compatibility/arm adoption far earlier than apple's migration. Maybe not the dev tools, sure, but I've seen a steady stream of more and more projects supporting arm and I attribute that to the pi.
This is true, don't know why you're getting downvoted.
Raspberry Pi, BeagleBone and various ARM-based embedded systems played a much bigger role in getting most Linux and BSD tools working under ARM.
Same thing that’s happening now with RISC-V and SiFive’s boards.
It's not the Apple M1 that's making this migration possible; it's just the cherry on top of the cake.
The cake was all those consumer ARM boards and dev kits that got into the hands of various OS developers and software maintainers, who put in their time to port their codebases to ARM and ensure compatibility.
It's probably because Linux on ARM has been around longer than the Pi, but a lot of people seem to hold this idea that it was pioneering.
It was pioneering in how low-cost and mass-market it was at launch, but before the Pi there was plug computing, largely based around Marvell Kirkwood processors. There were also a variety of hackable NAS devices; I used some with Orion5x CPUs in them. Then before plug computing there was the Linksys NSLU2 with its Intel XScale IXP chip, and all the hackable Linksys routers going back to 2002, and that's just some of the stuff I've played with over the years. Before that I imagine there were previous generations of Linux ARM support.
The Pi and other hobby boards are really great, don't get me wrong, but there's a tendency to credit them with a bit too much.
Plug computing and Linksys routers weren't cheap, and the Pi was. That's the entire difference when your target audience is kids with a couple of dollars worth of allowance or broke college kids.
> Plug computing and Linksys routers weren't cheap, and the Pi was.
For their time (20 years ago), they certainly were cheap. $100 or so for a small power efficient box that could run almost a whole Linux distribution was an amazing bargain. Before that, you needed a good-sized desktop or server to do anything involving a Unix-like OS, or paid dearly for the convenience of a laptop.
And it wasn't much before that, that running Unix on any microcomputer was just a dream.
It's hard to comprehend just how amazingly affordable computing has become in such a short amount of time.
Do you realise that $100 is a fortune to the demographic that I specified? For that price, those devices are so out of reach that they might as well not exist.
I remember that time in my life very well. I literally had around $50 in my local currency in the coin box, and that was it. Hand me down computers were unavailable where I was.
Ok, but that doesn’t mean that the Pi was important in getting most Linux or BSD tools working under ARM, as was the claim above.
Yes, it reached a wide audience, they did a lot of great stuff getting capable dev boards out to a lot of people. But fully functional Linux on ARM pre-dates the Pi significantly.
(I’d argue the NSLU2 and sheevaplug were both pretty affordable, at sub-$100 too, but the Pi definitely upped the game there)
In the chicken and egg problem of getting support for ARM software, the Raspberry Pi Foundation was the one that unilaterally plopped a ready-to-go (if very weak), low cost chicken in front of everyone. For that, I think they deserve some credit. Sure they built in top of the work of others - so let's acknowledge everyone involved in the process rather than pretend that one party deserves fame exclusively. It was a community effort :)
It is my contention that this support was largely already present. That's all.
Pi did great work getting cheap boards out to people, and put a lot of effort into education and community building. They exposed a huge number of people to the possibilities of non-x86 computing that otherwise might not have been. So indeed let's credit them with that, it's all great stuff!
But it's quite revisionist to say they helped get linux or BSD or their tools onto the platform, IMHO.
This. The ARM ecosystem didn't get a good prebuilt Docker container story until the Pi4, but it was somewhat doable on the Pi3 if you were willing to recompile 80% of containers from scratch. I can only guess that additional adoption fueled the critical mass of incentives for building for ARM as a rule rather than as an exception.
The Pi 3 CPU used an older ARM spec, which had issues with producing good binaries for e.g. Go-based builds (I remember that quite vividly).
Beyond that, I can only hypothesize that Docker on non-x86_64 (or multiarch Docker in general) was very difficult to kick-start for whatever reason. This may have been compounded by the fact that Docker wasn't as broadly adopted by hobbyists back then, so demand for ARM builds wasn't that great.
But the systems already worked on Arm, that's the point!
Getting more attention to the platform was definitely a good thing, exposing more people to a different architecture, to having a machine cheap enough to just play with, to all sorts of great stuff.
But Linux and BSD themselves were already mature on arm by that point.
Indeed. Debian had full ARM support for years before the Pi, very mature, and was really the only reason the Pi project could even choose ARM. If not for Debian's work, which the Pi relies upon, they'd have never been able to bootstrap.
I used to run my own mail server on one and use another as a media server. I even did a small hardware mod and soldered a wifi card to one of the internal USB headers on one of them. Much fun!
I don't imagine the processing power stands up to modern devices though :)
This started long before the Pi. StrongARM existed 20 years ago, and the new generation of application processors that pushed ARM forward started around 2005 (Atmel, TI, Freescale, Qualcomm and others) with the ARM9 core.
Pi entered a mature ecosystem, it just made it a bit more consumer/newbie facing but the system level foundations were all there.
Ironically the Pi isn’t even particularly well supported in terms of drivers compared to commercial offerings.
While StrongARM and others have been around longer, none had any real mass-market hold. With the Pi & co, you can build a (slow) server for $35. You can build a k8s cluster for $150. That's why we have cross-platform packages for almost anything server related.
You could do just that before. Cross compiling for ARM is an ancient, dark craft. All that changed for the better (and admittedly much better for the hobbyists) is getting prebuilt binaries in the major distributions.
Before the Pi & co, cross-compiling was a black art. You often needed to build your own toolchain, and the target environments were often not standardized either, so you would need different compilers for each target.
Fast forward to today: you can apt-get cross-compilers and build tools.
While it was a hassle, you could use pre-built cross compilers on debian before the Pi came along, as I was in 2010/11. Multi-arch helped a lot when it was introduced too.
Are you going to run Apple software on your servers? Most servers leverage the Linux ecosystem, most of which is open-source and maintained by developers who do not have big $$$.
M1 probably enables the same transition for customer-oriented, proprietary apps. But it has little to do with cloud computing, except proving that performance and ARM are not mutually exclusive, thus providing more acceptance.
A lot of stuff got prioritized for ARM/M1 compatibility, and fast, as soon as the first M1 Macs were released: Docker and Node, in addition to Apple software.
Well, to be honest it was probably macOS+M1 platform support. Docker runs a virtual machine with Linux in it on macOS and Windows; I'm not sure improvements there directly translate to server workloads. I'm not sure about Node, but it was surely working very well on ARM before that, although it might gain some optimizations thanks to the M1.
Although Node.js existed for ARM, often there wouldn't be pre-built binaries for whatever version/native dependency you needed. M1 has significantly changed that picture.
Huh? I am developing natively on macOS for a Linux server using Rust. Everything just runs fine on both. Unless I need to go really low into syscalls, I have no reason to be writing and testing my code on Linux.
As someone who has worked with Apple "Enterprise" / "Business" / "Pro" products before..... I'll wait. It will start half decent, get a push to 90% of everything you need, and then slowly get rewritten and simplified to a level that is too basic to do any real work, or just abandoned.
Yeah no. There was a time (early 2000s) when Apple made half-decent server hardware and operating systems. Those days are over. Apple has vocally abandoned the server market, and trying to use MacOS as a server today is an exercise in futility. Too many things require the GUI, and Apple's intense focus on security theater makes everyday server operations a nightmare.
> This is true, don't know why you're getting downvoted.
Apples are upset that the revolutionary catalyst for the Arm revolution isn't their $2000+ laptop but instead a crappy $25 computer that's been around for a decade.
N-gate has no love for Pis either. The article would've been something like: Apples and Pis fight over whose shitty processor paved the way for more underpowered devices, while Intel continues its war against its users.
I am not the n-gate maintainer, just someone who enjoys the humor. Last I heard the maintainer has been busy and unable to dedicate enough time. They hang out in #cat-v on oftc.net.
The Raspberry Pi didn't have serious resources until the 3/4 era. I have every iteration of the board, and while they're all useful in their own regard, only the latest boards can handle serious, biggish projects, resource-wise.
Of course Pi has started a big movement and prepared the groundwork, but M1, which I also use, is something else in terms of performance, and it caused this build-up to become an avalanche. Cloud systems will inevitably ride this wave, and will seriously eat into Intel's market share.
Just provisioned a small Docker lab on an Oracle Cloud Ampere system. This thing is seriously snappy at first glance.
The Raspi and Apple's ARM chips may be the ones that get the most coverage, but as others have pointed out, the groundwork for this has been laid much earlier: even the first Android phones and iPhones back in 2007 already used ARM-based SoCs and had Linux kernel support from day one, and the support only grew from there. The initial Raspi could only be sold at that low price because it used a mass-produced ARM SoC.
So the only thing that is new is that ARM architecture is finally starting to "spill over" into the notebook/desktop and server market and preparing to eat Intel's (and AMD's) cake. But the wide-scale adoption of the architecture already has a long history, so presenting Raspi and Apple M1 as "pioneers of ARM introduction" is a bit misleading (Apple was a pioneer, but already back in 2007).
From the hardware perspective you're certainly right, but while the iPhone and Android did the initial groundbreaking work in the kernel(s) and userland(s), the Raspberry Pi arguably brought it to "touch distance" for the layman.
Before that there were already ARM servers too, but they were few and far between.
So what I was trying to say is that the Raspberry Pi and the Apple M1 made ARM "touchable", with standard development tools and utilities, for normal people out there. This interaction accelerated acceptance and software porting, and grew the ecosystem faster.
> From the hardware perspective you're certainly right, but while the iPhone and Android did the initial groundbreaking work in the kernel(s) and userland(s), the Raspberry Pi arguably brought it to "touch distance" for the layman.
Everyone already had an iPhone or Android, how was it not touch distance for the layman?
Saurik and the original jailbreakers did all the heavy lifting, APT, SSH, and the usual CLI tools were ported so they could enjoy their handheld Unix computers.
The Pi was an image on a microSD and some computer interfaces; it was very slow, burned out the microSD card, and was discouraging for me to use when I had a much better computer. The phones were easy to invest effort in; if I had a really crappy computer I wouldn't do too much with it. Most people throw PiHole on one and never touch it again.
> Neither was very hard, Android came with root in developer tools at first, has very easy root, and you could always just chroot if that was too hard.
Been there, done that. Doesn't matter. Tinkering with a critical life infrastructure device vs, tinkering with a device made for tinkering is different. Way different.
> Jailbreaking was so easy someone like Justin Beiber did it
Again, doesn't matter because of the reasons I've said earlier. I'm using both platforms for a decade, and won't do them to my primary devices either. Because these devices are handling a lot of stuff for me, tinkering with them is not an option.
> The Pi was an image on a microSD and some computer interfaces; it was very slow, burned out the microSD card, and was discouraging for me to use when I had a much better computer. The phones were easy to invest effort in; if I had a really crappy computer I wouldn't do too much with it. Most people throw PiHole on one and never touch it again.
Actually, with a semi-decent SD card, burning one out with a Raspberry Pi is almost impossible. If you're worried about that, you can add a ZRAM, move swap and some folders to it (as Raspbian does), and sync during reboots, or periodically. If you're downloading world on it, a USB drive can alleviate the worry too.
On the slowness side, a first generation Raspberry Pi has a comparable performance to a Pentium II - 266MHz system, with similar performance to GPUs of that era. Considering the things I've done (live webcasting for example) with a P2 at that time, A Raspberry Pi is a powerhouse for its size.
You may have a much better computer with way higher specs, but you're missing the point. A Pi is a literally unbrickable and forgettable Linux PC with decent performance and no noise for many tasks many people do. I'm running a home infrastructure on an OrangePi Zero, and the bottleneck is the 512MB of RAM, not the CPU. With a decent SD card, it's not slow either.
I can replace all of it with a Raspberry Pi 1 or 2, but the OrangePi is much smaller.
Just because it can't race with a specially built workstation, doesn't mean it's useless.
>Again, doesn't matter because of the reasons I've said earlier. I'm using both platforms for a decade, and won't do them to my primary devices either. Because these devices are handling a lot of stuff for me, tinkering with them is not an option.
If you use them for a lot of things, wouldn't you want to tinker with them to make them run better? I certainly do on Linux.
>You may have a much better computer with way higher specs, but you're missing the point. A Pi is a literally unbrickable and forgettable Linux PC with decent performance and no noise for many tasks many people do. I'm running a home infrastructure on an OrangePi Zero, and the bottleneck is the 512MB of RAM, not the CPU. With a decent SD card, it's not slow either.
I don't fear my PC being bricked, and it's not noisy either; you can use water cooling on a desktop or use a modern laptop, many of which are fanless (can't say the same about the Pi 4). I have, for example, a router I put custom firmware on that can do everything a PiHole can do, and more, and it just works quietly. It could also act as a smart hub, but I don't see the point.
>I can replace all of it with a Raspberry Pi 1 or 2, but the OrangePi is much smaller.
I find the problem with SBCs is that they are too slow to be worthwhile; Android phones are much faster, have all the inputs, and just work. It's not hard to buy an extra one or use an old one if you fear tinkering.
>Been there, done that. Doesn't matter. Tinkering with a critical life infrastructure device vs, tinkering with a device made for tinkering is different. Way different.
What is "made for tinkering"? The idea that a Unix phone that can run all the software I want, a Linux phone with root access, or my PSP (which was made to play PSP and PS1 games and has excellent homebrew) count as "not made for tinkering" seems alien. What can't they do? Throw in a breakout board if you need the pins; I don't get the issue with using them. A one-click root or unlock is way less complex than buying the parts required to make an SBC work.
>On the slowness side, a first generation Raspberry Pi has a comparable performance to a Pentium II - 266MHz system, with similar performance to GPUs of that era. Considering the things I've done (live webcasting for example) with a P2 at that time, A Raspberry Pi is a powerhouse for its size.
>Just because it can't race with a specially built workstation, doesn't mean it's useless.
No, it's just not very useful for me; it's hard to want to eat ground pork when there is steak to be had. I don't see the point of it really when my computer does everything better. Why would someone want to use it to webcast, for instance, or use the PII-era GPU? Sure, you can emulate games on it, but I can on my computer already.
>I'm running a home infrastructure on an OrangePi Zero, and the bottleneck is the 512MB of RAM, not the CPU. With a decent SD card, it's not slow either.
You aren't disagreeing with me: you use it for a purpose (like a PiHole), not as a computer. I can use slower boards for purposes like yours too; what do you use it for? I don't really understand the benefit of a smart home hub. What are you running on it, for what automation/smart devices?
Raspberry Pi was a huge boost, but most of the ARM porting work had been done earlier on other Linux/ARM platforms. We had Debian running on StrongARM iPAQ and Novell Netwinder devices 10+ years before RPi 1.
Either way, OP has an extremely Apple centric view on this and completely ignores decades of work on Linux and many others to get us here.
For example, consider that golang has been able to cross-compile to ARM since almost the very start. That alone has probably contributed more to actual ARM server deployment than M1 (which is by the way not available for cloud or any server systems).
And it's just used to run Apple-ecosystem-only things, not real production load for your typical API (unless you have loooots of $$$ to burn and give as charity to Apple)
(already prepared for the comment "I do use AWS M1 to serve production load, not just build/test Apple software")
Pretty much all of the ARM-based SBCs have ensured excellent Linux coverage for the architecture starting over a decade ago. The Pi popularized it with a more mainstream (i.e. plug and play) audience.
The big jump for ARM adoption was the iPhone and later Android. A lot of important C libraries got first-class ARM support because they were battle tested on the iPhone.
Can that not be attributed to Android or even Arduino? If we are talking about operating systems I would exclude Arduino. In terms of Apple, they made the switch from PPC to x86 and then ARM; the App Store made developers money, and that allowed a huge repository of applications to exist so that Apple could move their computers over.
Even before Pi my router had DDWRT, I don't think Pi did very much.
Easy, cost effective entry point with tons of applications and options all marketed as a learning/hobby board. Other than pi's, I'm on x86 but could see that changing if I can get away from Windows Server on my homelab.
correlation != causation. ARM has been gaining mindshare for the past 15 years without the Pi. I love rpi but to claim it is what started ARM being more prevalent will take a lot more than just stating it. There were lots of ARM embedded systems before pi was a sparkle in someone's eye.
FWIW, it's unlikely AWS will ever shut down its Intel hardware. In a growing cloud market, Graviton is likely just a way for AWS to access more semiconductor production than other clouds. They will likely keep buying from Intel at Intel's price point, as long as it remains profitable for them.
The biggest thing not being talked about that hurts Intel is AWS's pool of Intel CPUs that currently run S3, SQS, Dynamo, etc. (m3s, c4s, etc.) all being released to the public and offered at such a discount (especially on the spot market) that new on-prem purchases (i.e. new Intel chips) are not cost effective.
Disclaimer: I used to work at Amazon but never in AWS. This is all based on public speculation.
AWS and other compute providers will continue to buy what customers want, but:
1) if their managed services (RDS, DynamoDB, etc) can run on ARM and it provides a better price/performance ratio, they'll use that internally, thus needing less x86 hardware
2) if most customers can run on ARM (interpreted languages run out of the box; compiled ones need more effort but it's still doable) and it's better priced than x86, there is less demand for x86 again.
Right. But even then it does not become that much worse.
Just tell Go to use the appropriate cross compilation chain for the C bits, e.g. `apt install gcc-aarch64-linux-gnu` and set the `CC=aarch64-linux-gnu-gcc` environment variable to compile for arm64. Compilation becomes a bit slower, but it's still manageable.
It's that easy thanks to the foundations laid by the distribution that ships a solid cross-compiler setup (in this example, thank you Debian!) and of course the brilliance of the Go design and toolchain.
Relying on libraries written in C is not an issue if the library doesn't make assumptions about / isn't tied to, a particular architecture. Some do but most (especially newer codebases written to C11 and beyond) don't.
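For illustration, a minimal sketch of the cgo case (the `greet` function is just a made-up example): the toy program below calls into C, so cross-building it for 64-bit Arm Linux from an x86 box needs the cross toolchain and `CC` setting mentioned above, roughly `CGO_ENABLED=1 GOOS=linux GOARCH=arm64 CC=aarch64-linux-gnu-gcc go build`, while a pure-Go program only needs `GOOS=linux GOARCH=arm64 go build`.

```go
package main

/*
#include <stdio.h>

// A trivial C function: anything pulled in via cgo like this is what forces
// the use of a C cross-compiler (CC=aarch64-linux-gnu-gcc) when targeting arm64.
static void greet(void) {
	puts("hello from C, built for linux/arm64");
}
*/
import "C"

func main() {
	C.greet()
}
```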
There's also the issue of performance-optimised assembly routines, which impacts many native libraries doing things such as crypto, image processing, etc.
The issue is that someone has to initially write the code to emit the native assembly/machine code for each architecture that a given compiler supports. (i.e. assuming it actually emits native code and not code for another compiled language like C/C++) The flag simply means that someone has already done that work for a particular language on a particular architecture.
The way I see it, in the long run AWS will stop net-adding Intel instances. Only decommissioning and re-adding the existing ones. While Graviton-based instances will continue growing with the growth of the cloud market (and global economy), Intel will be considered legacy the same way we view mainframes as legacy.
I think when discussing “Intel instances” over a long enough time horizon we need to not couple it with x86.
I do think x86 will be phased out as you suggest. I feel that either Intel's foundry division and/or its chip designs for ARM and/or RISC-V will very likely get incorporated into Graviton-X (and similar), if only as a bargaining tactic between Intel vs TSMC and Intel vs Qualcomm vs ARM.
> cloud computing is often an easy sell for ARM and Amazon is likely going to be able to get the chips cheaper given that they're looking for markup on the service end while Intel needs markup on the chip.
> Plus, I'd guess that AWS is probably making sweet margins off those Graviton instances.
> It also gives AWS negotiating power with Intel.
It's not just about pricing. Another convincing argument for the move towards ARM [1] is that the big cloud providers like AWS have much more insight into cloud workloads than Intel could ever have, with vast amounts of metrics, so custom-designing chips would be a somewhat natural path by itself.
[1] I heard this first in a presentation by Brendan Gregg, but I can't find the slides at the moment.
This right here is what I'm working on at my company right now. We're a multi-cloud and on-prem SaaS provider, we meet the customer where they want to be.
Our entire deployment stack is Java/Python on Linux.
I'm looking at our deployments and thinking, we sure would save a lot of money, and likely deliver a superior product if we switched to ARM on both AWS and OCI. Better product due to beefing up the Postgres clusters with part of the savings.
My hope is that by the end of Q2, our default deployment is on ARM on OCI. We're looking at nearly a 50% cost reduction versus the current m5 RDS and other EC2 instances.
I have not. I'll add that to my test matrix. I don't care who provides the compute(Intel/AMD/ARM). I care about QoL and QoS for customers, costs for delivery for us.
If switching to ARM means general parity on per core performance, I can easily sell rolling a portion of that savings into a bigger baseline config for all customers. Especially on Oracle, where their charges for RAM on A1 are peanuts.
This sounds more and more like an existential threat to Intel.
I thought the industry's consensus was a key component of Intel's dominance is they leveraged the best fab tech in the world, made possible by huge volume. That's a virtuous cycle on the way up and terrible as volume decreases...
> Plus, I'd guess that AWS is probably making sweet margins off those Graviton instances. Let's say that their margin on Intel instances is X and that margin on Graviton is 2X.
I’d guess like everything they do, they’re taking a bath on margin in the short term to solidify a monopoly in the long term. Amazon made basically $0 for over a decade. Investors were fine with that because everyone knew they were building a monopoly to later be exploited.
I don't know personally how it works, but some stuff only works on AWS, so it can have monopoly status in some server instances, but not all. They were a monopoly for years, but no longer are at least for general cloud computing.
I actually have a few production workloads running exclusively on ARM servers and have saved money doing so.
The applications themselves are in TypeScript, and Node as a runtime is cross-arch without any extras. The hardest part is building Docker images from my GitHub Action.
I wouldn't be surprised to find out that AWS is secretly switching the hardware running their serverless offerings to use their in-house chips. Things like Lambda (assuming source is distributed), Aurora Serverless, Dynamo, Cognito, CloudFront, etc.
I don't believe this is much of a secret. A lot of internal services have been gradually shifted over the past couple years, there was a re:Invent talk this year about it.
> Apple is ensuring that everything developers need will be ARM compatible
Let's rather say that Apple helps developers realize that the vast majority of open source dev tools they need have been working seamlessly on arm platforms for decades thanks to the work of linux and bsd distros.
Funny how the makers of cloud hardware didn't want to get into the cloud from the start. An Intel Cloud would have seemed more normal than a bookseller getting into it.
No risk, no reward. Intel execs wanted less risk and were happy with the easy stream of profits they were getting from their monopoly. Other companies opted to forego taking profits and kept spending on R&D for new avenues for the business to earn money.
I worked for Savvis who initially bought an Intel datacenter in the 90s to get started. Company lore was that Intel couldn't make a profit in the colo space because they over engineered everything. Savvis cut DC staff and cut other services until the data center was profitable.
I'd rather see x86 staying. X86 is much more open than ARM. Can you install Windows or Linux on ARM Macs? How easy is to replace the operating system on an ARM mobile device?
"Danger" undersells it a bit - it's already happening, quite explicitly spelled out in Microsoft's legalese regarding Secure Boot. A lot of the Linux world was very afraid on the introduction of Secure Boot, as it was seen as the final anti-Linux coup-de-gras - Microsoft could, at their whim, simply turn off the ability to install Linux! This fear was widely mocked as hysterical. Now they have done exactly that, on all ARM devices that run Windows.
It's funny. Gras (pronounced "gra") means fat (as in animal fat, not a fat panda), and the real saying is coup de grâce (pronounced "grass", meaning grace). For some weird reason English speakers pronounce coup de grâce improperly, skipping the s sound at the end (funnily, it's usually the French language that cuts sounds), including in movies (like Kill Bill).
I firmly believe that this is their endgame on x86 too. The elevated Windows 11 hardware requirements had the effect of normalising hardware setups capable of end-to-end DRM, which is only a step to the side from Secure Boot restrictions.
You can cast doubt on TPM/Secure Boot implementations with evidence, but it very much is real security controlled by the user via mokutil. We need more of this kind of security, the kind that's controlled by the user, if we're ever going to get away from locked bootloaders.
They give you the keys now. They could change that overnight if they mandate the same "Secure Boot only with a Microsoft key" that they mandated on early ARM devices. Don't be mistaken. They are very much the Don of x86, and when they choose to alter the deal, you'll be SOL.
That's why I consider alternative ecosystems (that don't have exorbitant prices) like RISC-V and Raspberry Pi to be critical to the survival of general-purpose computing. Once your ability to run on bare metal disappears (via Secure Boot or otherwise), you're in grave danger of simply not having physical hardware to convert new users.
I hope the regulations being discussed right now[1] pass and we can just call SecureBoot what it is instead of fearing what it might become (atleast on x86).
A lot of people who have lived entirely in the x86 world are completely unaware that a single CPU architecture can have many, many, many different incompatible types of system architecture - x86 means a few sub-types of BIOS + UEFI-based PCs, while ARM, Power, etc have far more variance.
The fact that you can't install Windows on an ARM Mac is mainly a licensing issue, as in, Microsoft will not sell you an ARM license. Apple has said it's open to the idea of ARM Boot Camp, and people have pretty successfully run ARM Windows in VMs.
This is not an architecture issue but an ecosystem one. Current PC/x86 systems have historically been built in the open. ARM has no such baggage; vendors are free to build closed and incompatible systems which are awesome as long as you stay within the garden.
It certainly does. An architecture isn't just about the ISA, but the culture around it too. The culture around ARM mobile phones and tablets heavily emphasized bootloader locking, which accelerated its spread to x86 too - which works contrary to the notion of openness.
> The culture around ARM mobile phones and tablets heavily emphasized bootloader locking
Virtually all of which are user-unlockable. But keep telling me how unopen they are?
(BTW, those of us who actually want secure systems very much appreciate the fact that malware can't overwrite my boot sector without a hardware-level vulnerability.)
I am aware of exactly 0 ARM motherboard makers that make boards with socketed (read upgradeable) CPUs.
I know they are made for PowerPC. I know there are tons of x86 ones.
For a while I bought motherboards with upgradable CPUs. I quit doing that after a while because by the time I wanted to upgrade, the newer chips required a new socket, I needed faster memory that the motherboard didn't support, etc.
>Very easy if you have an Android phone
Very easy if you have an Android phone from the correct manufacturer that's popular enough to have custom ROMs for it, or if you know how to port custom ROMs to other devices*
If you have Android, you have to have an excessive number of stars aligned just right for a successful OS replacement. On x86, the replacement involves toggling some firmware settings, rebooting, and using a point and click installer. On Android:
1. Have bootloader unlock available via official channels. If you are in the 1% with a supported phone variant, congratulations. Service box unlocks, while often possible unofficially, rarely result in a critical mass of developers that's needed for the next step.
2. Have a ROM ready to go. If lucky, it'll be Lineage OS. If less lucky, some unofficial build of Lineage OS with potentially some hardware not supported (but there's hope!). If you are really unlucky, the phone is so unpopular that a ROM developer hasn't taken enough interest in it to produce a usable ROM.
3. Flash the ROM, keeping all idiosyncrasies of the particular hardware in mind. Hope you got the right partition table/kernel/bootloader/ fragile fragment, because if you flash the one meant for GEX5546ab on a hardware that corresponds to GEX5546cb, you just killed your phone. Run through contingency procedures for unbricking the phone in case you need them.
4. Congratulations. You now have a firmware that works better than the factory OS, but it also potentially doesn't support small things like Bluetooth or the camera blobs. You may get support in 2-3 months, hopefully. Your secure enclave keys are also gone, so bank apps will be suspicious and Netflix will refuse to serve higher-quality content.
>1. Have bootloader unlock available via official channels. If you are in the 1% with a supported phone variant, congratulations. Service box unlocks, while often possible unofficially, rarely result in a critical mass of developers that's needed for the next step.
Untrue for most of the phones on XDA; the S3 was the most popular Android phone and came locked.
>2. Have a ROM ready to go. If lucky, it'll be Lineage OS. If less lucky, some unofficial build of Lineage OS with potentially some hardware not supported (but there's hope!). If you are really unlucky, the phone is so unpopular that a ROM developer hasn't taken enough interest in it to produce a usable ROM.
Luck to me is more than LOS; that's the bare minimum. But I have gotten unlucky with unpopular phones. It's not hard to look at ROMs when you're checking whether you should root or install a custom bootloader.
>3. Flash the ROM, keeping all idiosyncrasies of the particular hardware in mind. Hope you got the right partition table/kernel/bootloader/ fragile fragment, because if you flash the one meant for GEX5546ab on a hardware that corresponds to GEX5546cb, you just killed your phone. Run through contingency procedures for unbricking the phone in case you need them.
You could just run Halium to run Linux with all the drivers, or use a chroot to have Linux on it too. The safety measures that have existed since 2012 prevented me from ever having a single issue like that, and I never bricked a phone in over 20 flashes.
>4. Congratulations. You now have a firmware that works better than the factory OS, but it also potentially doesn't support small things like Bluetooth or the camera blobs. You may get support in 2-3 months, hopefully. Your secure enclave keys are also gone, so bank apps will be suspicious and Netflix will refuse to serve higher-quality content.
Basically, if you spend 10 minutes looking up the relevant information and buy a popular phone with good support, you will face zero problems. You are making a mountain out of a molehill.
Yes, you can on some. Apple's T1 was put on x86 to prevent other OSes from installing. ARM isn't as open; RISC-V is. But don't worry, x86 will still be staying, in the way that PPC, Amiga, and the ARM-based RISC OS machines stayed.
>Can you install Windows or Linux on ARM Macs?
ARM Windows sucks; I wouldn't even try. Asahi can be installed (but I don't get why you would, since all the Unix stuff already works): https://asahilinux.org/about/
For example docker for arm has received a major push, with more images becoming available by default, more testing of dependencies, etc. Generally, a lot of developer tooling now get exercised on arm a lot more, making it more robust and accessible. And once you have that, it’s much easier to just build and deploy on arm as well.
Earlier there were a million paper cuts on ARM that couldn’t be solved by any single developer. So the friction to trying out ARM was high unless your use case already worked well. When millions of developers switched to M1 in the last year, most of these paper cuts were fixed. Almost everything just works now.
So the conversation went from “what’s the cost of getting our systems running on ARM and how does that compare to cloud pricing” to “our systems work everywhere, so who’s giving us the best deal”
Even if Apple doesn't support writing code for Arm + Linux, you could use Go on a new Apple Silicon system and simply cross-compile for Linux for deployment.
Just an anecdote - in my surroundings (development for Linux servers with Java and Python), I usually see around 50% Mac, 40% Linux, 10% Windows.
I was using a Mac when I worked for a Fortune 500 enterprise. The choice was between a top of the line Macbook Pro with sudo rights and some underpowered Thinkpad where I'd have to request software to be installed by someone.
Not exactly what I suspected, but in retrospect my comment wasn't clear. I should have mentioned that e.g. iOS engineers / Mac developers have no option, which would account for some share of the market.
On this point: I just deployed my first end-to-end ARM app. It's a small aggregator, but it's written in Go on an M1 Mac and deployed to a Graviton2 instance.
The killer is that there was no friction at all and the target is cheaper. It all just worked. Absolutely no issues at all.
The thing that scares intel is that they’re not special any more.
In my experience the difference between Intel laptop parts and Xeons isn't that much, at least for AVX-512, which would be a good analogue for the situation with SVE.
>It seems likely that Microsoft and Google will introduce ARM-based instances.
Actually, Google was first. It's called the TPU (Tensor Processing Unit) and has been available on their ML platform for a long time. Interestingly, the TPU replaces Nvidia in this case.
What does this say to an AWS customer who hasn't previously considered Graviton (probably the majority)? I'd argue something like:
‘Wow. Intel is conceding that Graviton is faster for some workloads and seems to be really concerned about it. We should do a detailed cost benefit analysis.’
Reminds me of ARM’s campaign against RISC-V which backfired badly. Like this anti-Graviton ad, the copy for the anti-RISC campaign had the unintentional effect of underlining the major advantages of the technology they were attempting to criticize.
You may have heard that it’s no big deal to port your code from one architecture to another. However, porting takes time and money. It’s a complicated, intense process. Plus, ARM doesn’t run every software program out there, so when you’re ready to add new functionality from Intel, you’ll be stuck with the limitation of ARM and the long-term cost and hassle of managing multiple code bases.
Then turning around and fabbing RISC-V cores is pretty sickening. If I were at SiFive I would be unenthusiastic about having Intel as a partner.
Intel benefits from decades of being the preferred platform for people to optimize server software on. I would believe that they outperform Graviton on workloads that have done this (especially when they have been optimized with AVX instructions), but tend to be near performance parity on other workloads.
Databases and ML are likely winners for Intel due to this optimization, while stuff like nginx performs just fine on the ARM chips.
I don't have much data here that's public. That said, Intel's AVX instructions generally give big advantages to range scans and other analytical workloads (as well as ML inference), while they don't help much for transaction processing. It depends on the DB and the workload. I would expect transactional benchmarks to look good for Graviton.
I had hoped Cavium would challenge Intel in the server space back around 2017, but they went HPC. Then I thought Ampere would do it, but their volumes are just minuscule, for reasons I don't understand. I am surprised to see Amazon finally scare Intel.
Because if Amazon had adopted Cavium or Ampere, it wouldn't have fundamentally changed their strategic outlook. Different, cheaper supplier, but still a supplier.
By vertically integrating, Amazon gets to pocket that high margin revenue, as well as decide their own destiny.
If the Cavium experience of AWS HSM is at all reflective of what a Cavium-dominated server space would look like... I'd probably just switch to gardening at that point.
What, you expect them to compete fairly? It's Intel; they always pick the most... "favorable" benchmarks to compare, regardless of realism or apples-oranges issues.
Reminds me of the years Apple spent praising the superior performance of Motorola chips over Intel when running a couple of very specific Photoshop tasks.
That they claim as one of their benefits that Intel chips have better security than ARM is just laughable in light of the myriad speculative execution vulnerabilities present in x86 chips.
Yes, I chuckled the most at this one. Given that Graviton2 doesn't have SMT, so they sell you actual physical cores instead of Intel's inherently and irreparably insecure hyperthreaded crap cores, this point is just laughable.
Intel were by far the worst affected, but the basic speculation attack (Spectre) will be with us for years. Basically all modern processors are iterations on roughly the same idea; there is nothing unique to x86 in this regard.
That's not the point. When told to jump, most companies tend not to ask "how high?", but rather "why, and how did you get into my office anyway?". They simply won't bin useful hardware because of hypothetical (to them) security issues.
Why wouldn't Intel be scared if it's really 40% more perf for the same price? Once companies test the waters in AWS, they may all wonder, "why not for our on-prem workloads too?"
Yes; like Arm's anti-RISC-V campaign, it just looks like amateurish and/or desperate marketing. In the Arm case I think some marketeer had gone rogue, and the engineering org had the clout to shut it down.
Qualcomm might be interested if Graviton becomes the de facto standard in AWS. You could end up ordering Quanta computers with "Qualcomm Inside" (which is basically what Nuvia was set up to achieve).
It’s interesting that customers subscribing to Amazon prime eventually created a mega trust that is now trying to compete with the semiconductor industry.
It would be fun to play this out where Amazon starts a bank, a car company, a rocket ship company, etc. just to see how large it could possibly get.
UNH, Anthem, CVS, Humana, Cigna, Centene all earn low single digit profit margins, are extremely heavily regulated, and deal with healthcare providers that frequently have monopolies in their region.
I doubt it is worth trying to get into that business to try and capture the extra 3% profit margin they are currently paying to a managed care organization (aka health insurer).
There is always waste clouding margin. The fact that all these big players cannot figure out how to wring it out and get more than a few percentage points of profit with 10+ years of experience and data means it must be very hard to identify and fix.
Tech can usually come in and clean house in a new business due to introducing highly scalable processes that drive down marginal costs to near zero. I do not see how that dynamic would be possible in the current healthcare business in the US involving patients, doctors, hospitals, government, and lots of bureaucracy due to the politics of how healthcare resources are allocated and who is liable.
My observation has been that employee tenure is a huge drag on the insurance side. These are generally jobs for life, with all the organizational problems and inertia that spawns. Consequently, the internal thinking goes, why work to wring blood from a stone when there's a zero chance the company or your job disappears if you don't? They optimize for stability, not efficiency.
I'd chalk the failure of tech to penetrate up to the lack of standard interfaces. The entire insurance-provider industry, after you pry the lid off and look closer, is essentially bespoke person-person for almost any transaction.
It's gotten better, but it's still a far cry away, compared to the industries tech is used to disrupting. And you can't Uber-ize everything because (a) labor supply is a high-skill position with credentialing and negotiating power & (b) regulation prohibits overly disruptive moves (thank god).
So far they're all-in on Qualcomm, and right now really held back by Qualcomm's unimpressive performance (mildly customized/rebadged Cortex cores with tiny caches, for some reason), but let's see what happens once the Nuvia cores arrive.