Many people are seeing all these different vendor-specific CPUs as a win. I'm a bit more skeptical, but perhaps that is unwarranted.
But Apple, Google, and Amazon are now creating their own ARM CPUs for their own products. So most of FAANG, all if you count the ones with consumer products.
The first version of these CPUs will be very ARM compatible, as they are trying to drive adoption of their silicon. Once they get a leg up they will start adding patented operations to their stuff. And then we'll end up with a fragmented CPU field driven by corporate greed.
And because they are not OEMing this hardware, they really have no incentive to be cooperative with the others. Similar to older gaming consoles that had custom and experimental architectures.
However, it is obvious why this is all happening: Intel and Qualcomm inhibited growth and squeezed too much profit from the market.
There's a lot of ignorance in some of the threads here, but the reality is that proliferation of "custom" (but under license) ARM chips _is the end goal of ARM_. That's their whole business model. They (Arm, the company) don't manufacture anything. They just design and license their designs.
To be clear, there is little risk of anyone manufacturing truly proprietary chips. ARM licensing ranges from "you shall produce exactly to spec" for those wishing to vertically integrate commodity parts to "you may freely modify" -- but in either case it's the ARM design being licensed. It does nothing but harm the licensee to build in novel and strange functionality.
This system has been working quite well for Arm for decades at this point. If anything, the fact that well-known brands like Apple and Google are spinning ARM chips speaks volumes about Arm's excellent business model. Nothing we've seen from either tech giant is truly revolutionary -- the smaller fabs have been doing the same things for years.
Why anyone ever sold Arm, I will never understand.
There's survivorship bias: the success of ARM ignores the relative failure of other alternative instruction sets. SPARC, Itanic, MIPS... the road to today is littered with instruction sets that didn't make it the way ARM has.
Oracle and Fujitsu will still sell your business SPARC servers, but I don't know that I'd buy that business unit (never mind that I don't have that kind of money). It's easy to buy the lottery ticket after you've got the winning numbers. ARM's successful now, but there was a lot of luck and hard work to get there.
The comment above is talking about ARM's business model, which Intel with Itanic definitely didn't have, Sun didn't have for SPARC (they later freed it, after it was no longer commercially relevant), and MIPS as a company didn't follow, either.
IIRC Motorola couldn't keep up in terms of producing competitive performant designs and lost out to competitors. ARM hasn't fallen into that trap. Then again, empires rise & fall, and the end of something is usually just a matter of time. But in the tech world the "end" is also some after-life embedded in another organization or pivoted to a different direction. For now, ARM is rock solid on a foundation made from shifting sands of eternal tech change.
> For now, ARM is rock solid on a foundation made from shifting sands of eternal tech change.
ARM as an architecture may be on solid ground, but their future as a company may be uncertain given that their IP seems to have been appropriated by the CCP.
ARM and ARM China are separate companies. In software terms, while ARM China may have forked the ARM IP, they're not going to get any commits from upstream and it'll wither on the vine.
On the other hand, there's a global supply chain appetite for cheap products, including CPUs. Upstream commits aren't required to dump cheap CPUs on the market or to develop compute-intensive businesses around them that undercut the competition on price.
High-end CPUs, hardly. That would require significant design efforts to keep up with ARM's development.
Not that it'd be impossible for China to develop their own strong processor-designing forces, driven commercially or by the state. But so far it seems far from a trivial task.
High end isn't needed. Simply a marginal improvement and the CCP comes out ahead. In the performance-per-watt-per-cost calculation, the cost factor has no lower bound. Cost can be subsidized to near zero, as in many other industries under their control.
We're talking about instruction sets with ARM, correct? It's not anywhere near the level of investment as next generation litho tech for a chip foundry?
I don't underestimate the ability to innovate. Stolen tech can be improved just as well as in-house R&D'd tech.
The good news is that intellectual "property" isn't truly rivalrous. If ARM China "steals" ARM IP, ARM is still capable of licensing to its western clients, as it's unlikely any of them will be licensing from ARM China.
> If ARM China "steals" ARM IP, ARM is still capable of licensing to its western clients
If the smartphone market is an indicator, too bad that this will just mean that the majority of OEMs will buy their chips from ARM China, demand for ARM IP will drop accordingly, and meanwhile this IP appropriation will be used to develop independent design capabilities.
ARM China is independent from ARM, and just happens to have whatever IP it was able to run away with. To compete they're going to have to match ARM, given that nobody is going to write software for their custom fork.
The domestic Chinese market was not able to support homegrown TD-SCDMA even without the silicon manufacturing restrictions that now exist for Chinese companies, so what makes you think that a company with rapidly aging chip designs, and access only to domestic silicon fabs which are trapped on older 14nm+ processes, is going to be able to compete outside of the low end of the budget segment?
Even Intel had trouble surviving on 14nm, hence all the contra-revenue spent to directly subsidize Intel tablets (whether they were $100 HP Stream Windows tablets, or $50 Walmart special Android tablets) to try and not get locked out of that space.
> (...) what makes you think that a company with rapidly aging chip designs and access only to domestic silicon fabs (...)
Well, maybe the fact that not so long ago it had none of that and it clearly looks like both the company and the political regime aren't having many problems getting their hands on all the missing pieces.
Plenty of people do (they're some of the most beloved chips of all time), but their peak was pre-1990, so most people don't bring them up in discussions like this.
The 68K line was the Itanium of its time. It overpromised and underdelivered and was crushed by the 286 and 386. Many vendors made machines based on it (Atari ST, Amiga, Mac, Sun Microsystems, Sinclair QL, ...) and all of those vendors either went out of business or transitioned to RISC architectures in a hurry. It was one of the many near-death experiences the Mac platform had.
It was more successful than beautiful losers such as the TMS9900, iAPX 432, i860, and NS32000, but it hit the end of the track and left everyone in the lurch.
Performance per clock cycle was much better on 68k. I understood that they lost out because the world adopted DOS and DOS ran only on x86. Then... consequences. The only surviving platform that used to run on 68k is the Mac, which was a minor player even at the time.
68k chips were extremely expensive at the time; that's why they lost the desktop market to their 80x86 killer. An additional factor was that the XT and then the AT became an open architecture.
From what I remember, 68k processors were used by PalmPilot starting from the US Robotics days. I wonder if they had any particular power efficiency to make it better for Palm. Either that or I am remembering it wrong.
They are also used in automotive applications as part of the ColdFire CPU series.
But I think a lot of this usage goes back to the days when embedded CPU families were a lot more fragmented, and companies usually picked one family from their favorite supplier (based on pricing, fulfilling the use case, etc.) and then just stuck with it, since code wasn't particularly portable.
Since that time, the number of CPU families actually in use has shrunk drastically, and it's a lot more likely that all of those use cases just pick ARM.
>There's a lot of ignorance in some of the threads here,
Not just this thread, but on HN. Hardware and hardware business models are topics where the signal-to-noise ratio is extremely low on HN.
If it weren't for a few people working at ASML, ARM, or in embedded programming adding immensely valuable input to balance things out, hardware discussion on HN would be no different from any other forum on the internet.
Apple is one of the founders of ARM with very different licensing pricing and rights. Apple using ARM is very different from Google or Samsung using ARM.
Apple was involved in the founding of ARM, but their licensing isn't special. Many companies, including Samsung, have ARM architecture licenses which allow them to use the ARM instruction set with whatever custom design they want.
I don't think Apple has any ownership stake in ARM anymore, regardless of their initial investment in 1990. (Though I'm not sure how much they still had up through 2016 when SoftBank bought them)
Do we know that Apple has a special arrangement not available to other licensees?
Near as I can tell, in terms of licensing rights the only major difference is that they purchase an "architecture license" that gives them the right to create more custom designs, but this is a license available to and purchased by other companies like Qualcomm, Intel, and others. And if Apple has lower royalty rates, it may simply be a product of their scale rather than special treatment.
But I don't know this for sure-- only that a little bit of searching didn't find anything obvious about special arrangements.
Copyright and patents do serve some good purposes; this is why they were introduced in the first place.
It's the endless extension of copyright (hello, Mickey Mouse) and granting excessively wide patent rights (e.g. on whole classes of chemical compounds) that does not serve the initially envisioned good purposes. Overdose is bad, whether you use table salt, a medicine, or a law.
ARM in particular is not known for abuse of the system, so indeed they are a positive example.
I'm not sure why this is an example of positive use of IP law. I might agree that it's an example of IP law not being ridiculously abused (though I really don't have enough information). It might be positive for ARM shareholders as well, but generally positive for society? You say that removing IP laws would harm legitimate business models; sure, but having IP laws also harms legitimate business models, and which ones would be more "legitimate" or "positive" is not straightforward to say.
Why is this being downvoted? I think it’s a good point. It seems like ARM’s success is due in part to minimal fragmentation, which is due in turn to ARM’s licensing strategy.
Is it really the case that ARM is less fragmented than x86? Or, even if it is, x86 is wildly successful in the face of a rogue licensee spinning off an unofficial 64-bit extension to its ISA.
No, not “it makes me money” but “it succeeded in building an entire ecosystem of partners inheriting from a central producer of intelligence work, all of it succeeding in producing better output than the established player, Intel.” This is “good enough” as a measure of success; I don’t care where the money flows as long as an industry is being built. Actually that’s the sole role of money, which could be replaced by anything you like (a point system, exchange, central planning). I’m just noting that IP protection enabled this industry, this time, despite my being generally against it.
>"This is an excellent example of positive use of IP law, and why removing copyright and patent would harm"
And there are countless counter-examples when IP law causes harm to consumers and inventors.
>"legitimate business models"
And what makes them legitimate? IP law is an artificial construct which currently serves a select few. Why shouldn't I be able to "invent" something and sell it without worry just because somebody else happened to have the same idea? And many of the existing patents, especially in the software field, could be "invented" by any reasonably intelligent John Doe in a few minutes if there were a need.
> The first version of these CPUs will be very Arm compatible, as they are trying to drive adoption of their silicon.
All these CPUs will be Arm compatible, otherwise they will be breaking their license.
> Once they get a leg up they will start adding patented operations to their stuff. And then we'll end up with a fragmented CPU field driven by corporate greed.
Maybe they will add accelerators as Apple has done but Arm compatible code will still run on all these CPUs.
> And because they are not OEMing this hardware, they really have no incentive to be cooperative with the others.
Yes they do because they need Arm code to run on them.
There are thousands and thousands of Arm designs in use and they _all_ run Arm code as specified by Arm Ltd.
(And yes I know that there is fragmentation in other things that go on the SoC but that is a different point).
There is some minor breakage, like Apple CPUs requiring the VHE feature of ARMv8.1-A to be on and not supporting it being off. But that was like… the sole issue on that front.
What you are allowed to do is make extensions with the highest tier of architecture licenses; you cannot, however, break the ISA.
VHE isn't even observable for userspace code, and Apple is very quickly going to drop support for people being in the kernel, so I would argue this probably isn't even that important.
It’s called an architectural license. There are a few publicly announced license holders, including Apple and Qualcomm. Refer to the Arm wiki page for a list.
> Companies can also obtain an ARM architectural licence for designing their own CPU cores using the ARM instruction sets. These cores must comply fully with the ARM architecture. (Wikipedia)
It doesn't need a reference, as the ARM core by itself is not the whole platform that can be compatible or incompatible. Anyone who has tried to change CPUs or microcontrollers finds that it's not the core itself that matters most, but the peripherals and the whole thing as a system.
At base ISA level perhaps. But there will also be extensions for AI, DSP, media encoding, etc which will be incompatible. You nominally wouldn't have to use those, but the CPU will be nothing special without all that. This problem is already quite pervasive in the ARM ecosystem, and it'll get worse over time. There's no way around this. In a power-sensitive environment the only realistic way out currently is specialized co-processing and ISA extensions.
You’re getting completely confused between ISA extensions (very rare) and other functionality on the SoC such as GPUs which are obviously universal and very diverse.
Apple has already done this to an extent. M1 has undocumented instructions and a semi-closed toolchain (Assuming they have some, M1 tuning models for LLVM and GCC are nowhere to be seen afaik)
Intel publish a very thick optimization manual, which is a good help.
Compilers aren't great at using the real parameters of the chip (i.e. LLVM knows how wide the reorder buffer is, but I'm not sure if it actually can use the information), but knowing latencies for ISel and things like that is very helpful. To get those details you do need to rely on people like (the god amongst men) Agner Fog.
Apple also contributes plenty to LLVM, more than Intel actually, naively based on commit counts of @apple.com and @intel.com git committer email addresses. This isn't very surprising given that Chris Lattner worked at Apple for over a decade.
LLVM is Apple's baby, beyond its genesis under Lattner. They hate the GPL, that's it.
The thing about their contributions is that they upstream stuff they want other people to standardize on, but they aren't doing it out of love, as far as I can tell. E.g. Valve has a direct business interest in having Linux kicking ass; Apple actively loses out (psychologically at least) if a non-Apple toolchain is as good as theirs.
Apple M1 also has that x86 emulation mode where memory accesses have the same ordering semantics as on x86. It's probably one of the main things giving Rosetta almost 1:1 performance with x86.
TSO support at the hardware level is a cool feature, but it's a bit oversold here. Most emulated x86 code doesn't require it, and usually not at every memory instruction when it does. For instance, the default settings in Windows' translation implementation do not do anything to guarantee TSO.
Rosetta is also a long way from 1:1 performance; even your own link says ~70% of the speed. That's closer to half speed than it is to full speed.
The M1's main trick to being so good at running x86 code is that it's just so god damn fast for the power budget that it doesn't matter if there is overhead for emulated code; it's still going to be fast. This is why running Windows for ARM in Parallels is fast too: it knows basically none of the "tricks" available, but the emulation speed isn't much slower than the Rosetta 2 emulation ratio even though it's all happening in a VM.
In a fun twist of fate 32 bit x86 apps also work under Windows on the M1 even though the M1 doesn't support 32 bit code.
> The M1's main trick to being so good at running x86 code is it's just so god damn fast for the power budget it doesn't matter if there is overhead for emulated code it's still going to be fast.
M1 is fast and efficient, but Rosetta 2 does not emulate x64 in real time. Rosetta 2 is a static binary translation layer where the x86 code is analysed, translated and stashed away in a disk cache (for future invocations) before the application starts up. Static code analysis allows multiple heuristics to be applied at binary translation time, when time is plentiful. The translated code then runs at near-native ARM speed. There is no need to appeal to varying deities or invoke black magic and tricks – it is that straightforward and relatively simple. There have been mentions of the translated code being further JIT'd at runtime, but I have not seen proof of that claim.
Achieving even 70% of native CPU speed whilst emulating a foreign ISA _dynamically (in real time)_ is impossible on von Neumann architectures due to the unpredictability of memory access paths, even if the host ISA provides hardware assistance. This is further compounded by the complexity of the x86 instruction encoding, which is where most of the benefits of hardware-assisted emulation would be lost (it was already true for 32-bit x86, and is more complex for amd64 and the SIMD extensions).
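To make the "analyse once, translate, stash in a disk cache" flow described above concrete, here is a minimal C++ sketch of that shape. It is purely illustrative and assumes invented details (the cache key, the placeholder translate step, the /tmp/aot_cache path); it is not a description of how Rosetta 2 actually works.

```cpp
// Conceptual sketch: translate an x86 binary once, cache the result on disk,
// and reuse the cached artifact on every later launch.
#include <filesystem>
#include <fstream>
#include <functional>
#include <iostream>
#include <sstream>
#include <string>

namespace fs = std::filesystem;

// Placeholder cache key: a hash of the input binary's bytes.
std::string CacheKey(const fs::path& x86_binary) {
    std::ifstream in(x86_binary, std::ios::binary);
    std::ostringstream buf;
    buf << in.rdbuf();
    return std::to_string(std::hash<std::string>{}(buf.str()));
}

// Placeholder for the expensive part: whole-binary analysis and translation.
void TranslateToNative(const fs::path& x86_binary, const fs::path& out_native) {
    std::ofstream out(out_native, std::ios::binary);
    out << "<translated arm64 code for " << x86_binary.string() << ">\n";
}

// Returns the path of a native artifact, translating only on the first launch.
fs::path EnsureTranslated(const fs::path& x86_binary, const fs::path& cache_dir) {
    fs::create_directories(cache_dir);
    fs::path cached = cache_dir / (CacheKey(x86_binary) + ".native");
    if (!fs::exists(cached)) {               // first launch: pay the AOT cost once
        TranslateToNative(x86_binary, cached);
    }
    return cached;                           // later launches: reuse the cached result
}

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: aot_cache <x86-binary>\n"; return 1; }
    std::cout << EnsureTranslated(argv[1], "/tmp/aot_cache").string() << "\n";
}
```

The point is simply that the expensive analysis happens once per binary, off the critical path of every subsequent launch.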
> This is why running Windows for ARM in parallels is fast too, it knows basically none of the "tricks" available but the emulation speed isn't much slower than the Rosetta 2 emulation ratio even though it's all happening in a VM.
Windows for ARM is compiled for the ARM memory model, is executed natively, and runs at near-native M1 speed. There is [some] hypervisor overhead, but there is no emulation involved.
x86 apps with JITs can run [1]. For instance I remember initially Chrome didn't have a native version, and the performance was poor because the JITted javascript had to be translated at runtime.
Windows decided to go the "always JIT and just cache frequent code blocks" method though. In the end whichever you choose it doesn't seem to make a big difference.
> Windows for ARM is compiled for the ARM memory model, is executed natively, and runs at near-native M1 speed. There is [some] hypervisor overhead, but there is no emulation involved.
This section was referring to the emulation performance not native code performance:
"it knows basically none of the "tricks" available but the _emulation speed_ isn't much slower than the Rosetta 2 emulation ratio "
Though I'll take native apps any day I can find them :).
> Windows decided to go the "always JIT and just cache frequent code blocks" method though. In the end whichever you choose it doesn't seem to make a big difference.
AOT (or static binary translation before the application launch) vs JIT does make a big difference. JIT always carries the pressure of the «time spent JIT'ting vs performance» tradeoff, which AOT does not. The AOT translation layer has to be fast, but it is a one-off step, so it can invariably afford to spend more time analysing the incoming x86 binary and applying more heuristics and optimisations, yielding a faster native binary, as opposed to a JIT engine that has to do the same on the fly, under tight time constraints and under a constant threat of unnecessarily screwing up CPU cache lines and TLB lookups (the worst-case scenario being a freshly JIT'd instruction sequence spilling over into a new memory page).
> "it knows basically none of the "tricks" available but the _emulation speed_ isn't much slower than the Rosetta 2 emulation ratio "
I still fail to comprehend which tricks you are referring to, and I also would be very much keen to see actual figures substantiating the AOT vs JIT emulation speed statement.
Breaking memory ordering will break software. If a program requires it (which is already hard to know), how would you know which memory is accessed by multiple threads?
It's not just a question of "is this memory accessed by multiple threads" and call it a day for full TSO support being mandated; it's a question of "is the way this memory is accessed by multiple threads actually dependent on memory barriers for accuracy, and if so, how tight do those memory barriers need to be". For most apps the answer is actually "it doesn't matter at all". For the ones where it does matter, heuristics and loose barriers are usually good enough. Only in the worst-case scenario, where strict barriers are needed, does the performance impact show up, and even then it's still not the end of the world in terms of emulation performance.
As far as applying it goes, the default assumption for apps is that they don't need it, and heuristics can try to catch the ones that do. For well-known apps that do need TSO, it's part of the compatibility profile to increase the barriers to the level needed for reliable operation. For unknown apps that do need TSO you'll get a crash and a recommendation to try running in stricter emulation compatibility, but this is exceedingly rare given that the above two things have to fail first.
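For concreteness, here is a minimal C++ sketch (hypothetical, not taken from any translator) of the kind of pattern that actually depends on ordering: a flag/data handoff that plain x86 stores give you for free under TSO, but that needs release/acquire (or a hardware TSO mode) once it is running under ARM's weaker memory model.

```cpp
// Minimal flag/data handoff between two threads. Under x86 TSO, two plain
// stores are not reordered with each other, so the original binary can use
// ordinary MOVs. Translated naively to relaxed ARM accesses, the reader could
// observe ready == true while data is still 0; release/acquire (or hardware
// TSO) restores the guarantee.
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int>  data{0};
std::atomic<bool> ready{false};

void writer() {
    data.store(42, std::memory_order_relaxed);
    ready.store(true, std::memory_order_release);  // relaxed here = the "dropped barrier" case
}

void reader() {
    while (!ready.load(std::memory_order_acquire)) { /* spin */ }
    assert(data.load(std::memory_order_relaxed) == 42);  // can fail on ARM if both sides are relaxed
}

int main() {
    std::thread t1(writer), t2(reader);
    t1.join();
    t2.join();
}
```

If a translator emitted both sides as relaxed ARM accesses, the assert could fire on ARM hardware even though the original x86 binary was correct; with release/acquire, or with TSO enabled in hardware, it cannot.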
Yes, it absolutely can. Shameless but super relevant plug. I'm (slowly) writing a series of blog posts where I simulate the implications of memory models by fuzzing timing and ordering: https://www.reitzen.com/
I think the main reason why it hasn't been disastrous is that most programs rely on locks, and they're going to be translating that to the equivalent ARM instructions with a full memory barrier.
Not too many consumer apps are going to be doing lockless algorithms, but where they are used, all bets are off. You can easily imagine a queue that two threads grab the same item from, for instance.
Heuristics are used. For example, memory accesses relative to the stack pointer will be assumed to be thread-local, as the stack isn’t shared between threads. And that’s just one of the tricks in the toolbox. :-)
The result of those is that the expensive atomics aren't applied to every access on hardware that doesn't expose a TSO memory model.
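A toy C++ illustration of that heuristic (all names invented; this is not any real translator's code): decide per memory operand whether to emit a plain ARM load or an acquiring one, based on whether the address is stack-pointer-relative and whether a stricter compatibility profile is in force.

```cpp
// Sketch of the "stack slots are thread-local" heuristic: pick a cheap ARM
// load for rsp-relative operands and a more expensive acquiring load otherwise.
#include <cstdint>
#include <iostream>
#include <string>

struct X86MemOperand {
    std::string base_reg;        // e.g. "rsp", "rbp", "rax"
    int32_t     displacement = 0;
};

enum class ArmLoadKind {
    Plain,    // LDR  - only ARM's weak ordering guarantees
    Acquire,  // LDAR - preserves x86-like load ordering at a cost
};

ArmLoadKind ChooseLoad(const X86MemOperand& op, bool conservative_mode) {
    if (conservative_mode)        return ArmLoadKind::Acquire;  // strict compatibility profile
    if (op.base_reg == "rsp")     return ArmLoadKind::Plain;    // stack slot: assume thread-local
    return ArmLoadKind::Acquire;                                // could be shared memory: be safe
}

int main() {
    X86MemOperand stack_slot{"rsp", 16};
    X86MemOperand heap_ref{"rax", 0};
    std::cout << (ChooseLoad(stack_slot, false) == ArmLoadKind::Plain)   << "\n"; // 1: thread-local
    std::cout << (ChooseLoad(heap_ref,   false) == ArmLoadKind::Acquire) << "\n"; // 1: possibly shared
}
```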
Nitpick: relative speed differences do not add up; they multiply. A speed of 70 is 40% faster than a speed of 50, and a speed of 100 is 42.8571…% faster than a speed of 70 (corrected from 50. Thanks, mkl!). Conversely, a speed of 70 is 30% slower than a speed of 100, and a speed of 50 is 28.57142…% slower than one of 70.
=> when comparing speed, 70% is almost exactly halfway between 50% and 100% (the midpoint being 100%/√2 ≈ 70.7%)
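Spelled out with the same numbers, just in formula form:

```latex
\frac{70}{50} = 1.40 \;\text{(70 is 40\% faster than 50)}, \qquad
\frac{100}{70} \approx 1.4286 \;\text{(100 is about 42.9\% faster than 70)}, \qquad
\sqrt{0.50 \times 1.00} = \tfrac{1}{\sqrt{2}} \approx 0.707 \;\text{(geometric midpoint of 50\% and 100\%)}
```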
Not sure what the nit is supposed to be, 70% is indeed less than sqrt(1/2) hence me mentioning it. And yes, it's closer to 3/4 than half or full, but the thing being pointed out wasn't "what's the closest fraction" rather "it's really not that close to 1:1".
Meh, in the context of emulation, which ran at 5% before JITs, 70% is pretty close to 1:1 performance. Given the M1 is also a faster CPU than any x86 MacBook, it's really a wash (yes, recompiling under ARM is faster...)
No, it's not. Switching to TSO allows for fast, accurate emulation, but if they didn't have that they would just go the Windows route and drop barriers based on heuristics, which would work for almost all software. The primary driver behind Rosetta's speed is extremely good single-threaded performance and a good emulator design.
My guess: if the situation is similar to Windows laptops, they just use a subset of OEM features and provide a sub-par experience (like lack of battery usage optimizations, flaky suspend/hibernate, second-tier graphics support, etc)
Now, I'm typing this on a GNU/Linux machine, but let's face it, all of the nuisances I mentioned are legit and constant source of problems in tech support forums.
If the extra instructions also operate on extra state (e.g. extra registers), a kernel needs to know about their existence so it can correctly save and restore that state on context switches.
It doesn’t need to use them, but it must be aware of them, insofar as they may introduce security problems.
As an example, if the kernel doesn't know about DMA channels that require setup code to prevent user-level code from using them to copy across process boundaries, the kernel will run fine, but have glaring security problems.
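As a sketch of the save/restore obligation above (all names here are invented for illustration, not a real kernel API), the essential shape in C++ looks like this:

```cpp
// Why a kernel must know about extra architectural state: the context switch
// has to save and restore every register file an ISA extension adds, or one
// thread's values leak into, or get clobbered by, another's.
#include <array>
#include <cstdint>
#include <iostream>

struct VendorVectorState {
    // Stand-in for extra registers added by a vendor extension,
    // e.g. 32 x 512-bit vector registers.
    std::array<std::array<uint8_t, 64>, 32> regs{};
};

struct ThreadContext {
    // ... general-purpose registers, FP/SIMD state, program counter ...
    VendorVectorState ext;   // the kernel must reserve space for the new state
};

// In a real kernel these would be privileged instructions; a global stands in
// for the hardware register file so the sketch is self-contained.
static VendorVectorState g_hw_regs;
void SaveExtensionState(VendorVectorState& out)         { out = g_hw_regs; }
void RestoreExtensionState(const VendorVectorState& in) { g_hw_regs = in; }

void ContextSwitch(ThreadContext& prev, ThreadContext& next) {
    SaveExtensionState(prev.ext);      // skip this and 'next' sees stale or foreign data
    RestoreExtensionState(next.ext);
    // ... switch stacks, page tables, and resume 'next' ...
}

int main() {
    ThreadContext a, b;
    g_hw_regs.regs[0][0] = 42;         // thread A writes an extension register
    ContextSwitch(a, b);               // A's value is saved; B starts from its own (zeroed) state
    std::cout << int(g_hw_regs.regs[0][0]) << " " << int(a.ext.regs[0][0]) << "\n"; // prints "0 42"
}
```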
What DMA channel doesn't require mapping registers into the user-space process to work? There aren't usually magic instructions you have to opt into disabling, as far as I know.
And I would like to emphasize that when Apple used Intel, even then it was not commercially viable to use their platform. Bringing in ARM changed less than one would think at first.
> Once they get a leg up they will start adding patented operations to their stuff. And then we'll end up with a fragmented CPU field driven by corporate greed.
Why would they go out of their way to make it harder for devs to target their ecosystems?
Here's what I see happening: the majority of application development now takes place within architecture-agnostic languages/frameworks/platforms. The platforms solve the problem of multi-architecture once, and then everyone else can get on with their business without thinking about it. Fragmenting the hardware itself mainly serves to open the door for more finely-grained optimizations by these platforms (interpreters, browsers, LLVM, etc).
Look to graphics APIs for comparison: OpenGL was a relatively high-level API because that's what game devs and others were usually writing directly. Now that most game dev happens in engines like Unity and Unreal, or even on platforms like the Web, only the platform/engine developers have to write actual graphics code a lot of the time (excluding shaders). And they want more fine-grained control, so we're switching to lower-level graphics APIs like Metal and Vulkan because the needs for the baseline API have shifted.
So what I see is not a difference in the total fragmentation from the average dev's perspective, just a shifting of where the abstraction happens.
> Many people are seeing all these different vendor-specific CPUs as a win. I'm a bit more skeptical, but perhaps that is unwarranted.
Yeah, I think we fail to see this common pattern of negative (even monopolistic) long-term effects because we are distracted by the trade-offs or benefits the company promises us initially (but betrays later).
> Many people are seeing all these different vendor-specific CPUs as a win. I'm a bit more skeptical, but perhaps that is unwarranted.
Mixed feelings on that.
Upside: software gets made more portable across architectures, and with that, the choice of an ARM or x86 desktop also makes porting to other CPU types less of an effort. Then drivers, and let's cut to the crux: support for other hardware via drivers is what holds any other architecture back from the start.
Downside: moving towards more and more closed/secret-source CPUs that are more locked in, and if they become the norm, then a whole level of development becomes way harder.
Whilst I'm sure there are many more upsides and a few other downsides, the real upside in all this will be even better ARM support by commercial apps. With that, I really do think Apple with the M1 has done wonders for ARM and the whole ARM ecosystem.
The extensions present in the M1 are not used at all by Apple’s public compilers. From the perspective of a 3rd-party macOS developer that doesn’t reverse engineer, it’s just a regular arm64 machine.
This allows Apple to deprecate/change those extensions over time without developer involvement, and even use Cortex reference designs where it would make sense for them.
These alarmist comments seem to be comically oblivious of much of computing history. If you have something open and better, it will prevail over closed products; if you have something open and worse, you need to work on making it open and better, screaming “but they’re closed” doesn’t help.
> If you have something open and better, it will prevail over closed products
The problem is that "better" is being defined by the companies spreading the marketing propaganda about their products, and that has an unfortunate effect on the perception of the users. It's sad that most of the population would simply take it at face value if a company told them something was "better", but if you realise that making users unquestioning and under their control is ultimately a great way to extract more $$$ from them, it all makes sense.
> The problem is that "better" is being defined by the companies spreading the marketing propaganda about their products
Partially, but I don't think this is nearly as big of a factor as lots of FOSS people on the internet these days seem to think. Remember that RMS wrote a lot of the original GNU tools by himself and wrote the beginnings of GCC. Stallman could have just chosen to reject C and not write GCC altogether the way a lot of modern FOSS advocates seem to act about rejecting closed software. Unix and C could have been cast as proprietary enemies to reject, but he understood at the time that to gain traction, FOSS needed software that was usable for similar purposes as proprietary software. We can cry conspiracy all day but the hard work of building good products is what ultimately wins the day.
(Network effects on modern social/messaging platforms are a much more complicated story however.)
That's not exactly the truth. "Closed and worse" often does better than "open and better." It really just comes down to who has better salesmen.
Microsoft, for example, licensed Windows to vendors for decades with the requirement they not have another operating system pre-installed. This killed both "closed and better" and "open and better" systems alike!
It's about who has more capital to aggressively out-price the competition, even by taking ridiculous losses. Or buy promising startups, or litigate potential competitors into oblivion.
Diapers.com for example. It's not about anything close to sales and marketing.
I'm not sure that in computing history there's ever been the situation we have now of a software vendor owning so much of the market(s) that they can realistically afford to make this kind of move.
Microsoft in the 90s might have been the only company positioned appropriately to try but compared with Amazon, Google, and Apple, they did not have as much of an "in" into people's daily lives the way all three of those companies do today.
Unregulated capitalism leads to the company store, which is I think effectively where GP was suggesting things were headed.
This is the way most of the computing market has worked most of the time. Mainframes and minis were this way, then almost all the workstation vendors had their own CPU architecture and OS (often a flavour of Unix). The only real exception, and I admit it is a huge one, is Wintel, but outside that, in-house hardware/software development in tandem has been the norm.
Furthermore, this model is fundamental to the ARM architecture. The whole point is for licensees to develop SoCs with their own custom components on the same die. That's literally what an SoC is.
Windows is a massive “in”, your work and home device, your gateway to the *internet*.
As an aside, that is monopolistic behaviour, which at least California and the EU are on top of. Companies have to be careful exploiting their dominance in an area.
Maybe an “in” and dominance are different; if they are, it doesn’t matter.
> But Apple, Google, and Amazon are now creating their own ARM CPUs for their own products. So most of FAANG, all if you count the ones with consumer products.
The F in FAANG - Facebook - has consumer products (notably, the Oculus VR headset).
Also Netflix spun out their consumer product into another company (Roku).
All the x86 extensions are freely implementable by any manufacturer. There's some stuff like SSE4a which is not available on Intel processors, and others that Intel has implemented and AMD chose not to. But they are published as extensions and part of the standard.
“x86-64/AMD64 was solely developed by AMD. AMD holds patents on techniques used in AMD64; those patents must be licensed from AMD in order to implement AMD64. Intel entered into a cross-licensing agreement with AMD, licensing to AMD their patents on existing x86 techniques, and licensing from AMD their patents on techniques used in x86-64. In 2009, AMD and Intel settled several lawsuits and cross-licensing disagreements, extending their cross-licensing agreements.”
⇒ I think only Intel and AMD currently can freely implement x64.
“However, there have been reports that some companies may try to emulate Intel’s proprietary x86 ISA without Intel’s authorization. Emulation is not a new technology, and Transmeta was notably the last company to claim to have produced a compatible x86 processor using emulation (“code morphing”) techniques. Intel enforced patents relating to SIMD instruction set enhancements against Transmeta’s x86 implementation even though it used emulation.”
It was a message to Microsoft in 2017. The patents for x86-64 and SSE2 should have all expired since then. I don't think most software needs SSE3/4/AVX.
I believe you misunderstood the comment. Intel also has undocumented extensions and functionality, that requires reverse engineering. It’s exactly the same as the vendor specific cases here. You were thinking of documented and well-specified vendor-specific extensions, but I don’t think that’s the main concern.
I don't see you complaining about ASML being the only way of getting photolithography machines, or TSMC being basically the only manufacturer at scale.
Some industries are awfully complex and getting into it requires insane amounts of work. x86 processor making is one of these. It's the same as making new browser engines/js JIT/etc. There is just so much work that catching up with the incumbents is almost impossible.
I'm not complaining about anything, just saying there are two companies that have licenses to make x86¹, so it's not true that anybody could implement a CPU with SSE4 extensions.
¹Not sure what happened to that x86 license that Cyrix had.
>> But Apple, Google, and Amazon are now creating their own ARM CPUs for their own products.
It all leads to technical debt. They will make their own devices and keep them in-house. Then they will modify software to leverage these devices. Eventually they will lose touch with the standards. This will go well, for a while. Then something will go wrong. Their chip development will run into a roadblock and they will want to return to the standard. But by that point the cost of reverting will be far too high. They will be stuck with their in-house chips for better or worse. This isn't healthy. Sticking with the community standards on fungible hardware, or better yet contributing to them, is always the better long-term bet.
In some tech sectors, the double margin problem is a very real issue. Merchant silicon vendors have a margin they need to hit, and the vendors have a gross margin they need to hit, and so on. This is especially an issue in HPC and networking.
> But Apple, Google, and Amazon are now creating their own ARM CPUs for their own products. So most of FAANG, all if you count the ones with consumer products.
Google is doing this for Chromebooks. If they want to lock out other vendors from making Chromebooks, they don't have to make a custom CPU, they'd just have to fork Chrome OS and Android. But they don't want to corner the Chromebook market, they want wide adoption.
> And because they are not OEMing this hardware, they really have no incentive to be cooperative with the others. Similar to older gaming consoles that had custom and experimental architectures.
What exactly is this referring to? Are you saying that game consoles should only use chips that are available to other manufacturers, or that every game console should be an IBM-PC compatible, like the first Xbox?
No, your fears aren't warranted. In fact, when Apple launched their ARM SoC for their laptop / desktop platform, I pointed out that Intel / AMD x86-AMD64 processors are now the final bastion of computing freedom, as they build general-purpose CPUs and their current business model prevents locking them down to any specific platform.
With custom ARM SoCs we are going to see extreme platform lockdowns. (What most people here don't get, or are being deliberately obtuse about, is that ARM's IP is used to build System-on-Chips where the ARM processor is just one of the units. It is the other custom parts of the SoC that allow you to lock down the SoC and make it incompatible with other ARM SoCs.)
This is how Apple ARM SoCs only fully support iOS / macOS, Google's will only support Android / ChromeOS, Microsoft's ARM SoC will only fully support Windows, etc., and will enforce platform lockdown.
To understand why this is happening, we have to recognize the shift in the business model of BigTech: they want to turn everything into a service that will earn them recurring revenue. Platform lockdown will ensure they can force users' data onto the cloud by pushing cloud services on these devices and limiting their hardware. While initially they will lock personal data onto their specific cloud services too, regulation may force them to allow "data transfer" between the BigTech services to give consumers the illusion of choice.
Remember, PRISM showed the US government how valuable it is to have access to personal data from corporates and how easy it is when it is in the "cloud". BigTech's worldwide reach will allow them to literally spy on anyone in a few decades, and the US needs that ability to maintain its influence on world politics.
I predict a very bleak future for personal computing, where the concept of ownership of computers or computing device will no longer exist - your device in the future will be fully controlled by the BigTech, and we will have no computing freedom to even install any open source code on "our" device.
BigTech is already experimenting with "cloud coding"; this means they will even have access to all your code in the future too. In such a scenario no competitor will emerge in the future to be able to dislodge them for centuries, until they lose political backing.
The x86-64 patents also expired last year, so it's not only a general computing architecture, but a free-as-in-freedom one too. It's also the architecture everyone uses for pretty much everything. Developers are finally free and united in the language we use to encode computer programs. But how long will this Tower of Babel be permitted to stand? Writing code for things other than x86-64 (e.g. GPUs, DSP/ML chips, phones, Apple SDKs) usually requires entering into a legal agreement or obtaining some kind of authorization. History won't move backwards once the platforms we've fought hard to learn and use have finally opened up, just because Google and Apple want to save battery on their laptops.
Just because the patents have expired doesn’t make it ‘free as in freedom’ - someone has to design the CPU and there is no modern open source x86-64 implementation.
So right now you can buy from Intel / AMD (completely closed source and with ME nonsense etc.), license your IP from Arm for a not-excessive fee and inspect the code, or use an open source RISC-V implementation.
And finally writing code for Arm / RISC-V doesn’t require any sort of legal agreement or authorisation.
Only if you're coming to open source with the mindset of a taker rather than a giver. Nevertheless, there are open source implementations of the x86 architecture. I've written one. Bochs and QEMU have written others. If you feel that it must be implemented in hardware to count, then a director at Oracle actually wrote a Verilog implementation of the 80186 a few years ago.
If you want to go back to 1982 then fine. I'll keep running on my Arm and RISC-V cores thanks.
I do find the enthusiasm for the utterly closed oligopoly of x86 (with all the dubious ME stuff), with literally no modern open source hardware implementation, odd when you could be getting behind RISC-V.
This means more competition right? I mostly think good will come from this for consumers. Things felt like they really stalled out for a while with Intel
Seriously asking - does a fragmented CPU field even matter in today's software ecosystem of high-level languages, frameworks and extensive compiler infrastructures/virtual machines ?
Wouldn't such a divergence merely increase the demand for compiler-backend/VM engineers, making this job segment more attractive?
Qualcomm used its modem patents and licensing terms to strongarm companies into purchasing modems and CPUs together (since buying just a modem or just a CPU ended up costing nearly as much). That kind of behavior pushes companies to find alternatives and reduces choice in the market.
> However, it is obvious why this is all happening: Intel and Qualcomm inhibited growth and squeezed too much profit from the market.
Or, it's just the regular old way capitalism works. Any sufficiently large company will start to vertically integrate to maximize their profits. If not Apple / Google, it could have been Intel / Qualcomm who started vertically integrating by offering their own laptops, phones, OSes and App Stores.
The big fish eats the small ones. That's capitalism 101.
It won't, and shouldn't. Wasm meets very few of its goals, and is nowhere near native speeds under real-world workloads. Even speedups with SIMD are hindered, and the entire design and implementation of it in the Clang/LLVM compiler is very experimental.
Go work with wasm in the browser; you will pull your hair out at the fact that Emscripten is still the most-used tool. It's all very hacky, and if something breaks, you won't fix it.
I have worked with wasm in the browser in production. Yes, there are many issues, yes I have some critiques of the multithreading proposal, but in the long run portable workers solve lock-in.
100% this. Cross-platform binaries that "just work" everywhere in a hardware agnostic manner is going to be huge.
Browser-based games and real-time 3D applications are the ones I'm most excited about personally, but that may be because I'm developing a WASM startup in this space.
> However, it is obvious why this is all happening: Intel and Qualcomm inhibited growth and squeezed too much profit from the market.
It's not that intel and qualcomm have squeezed too much profit, it's that Apple, Google, Facebook, etc have too much profit. Apple and Google are worth $2 trillion. Facebook is worth more than $1 trillion. Intel and Qualcomm are worth $300 billion put together. These tech companies are big enough and monopolistic enough to own their entire hardware stack. It's like when Rockefeller and Standard Oil got so big that they "bought" the railroads that transported their oil.
Apple and Google each make more profit than Intel and Qualcomm's combined revenue.
FAANG companies will primarily be differentiated based on network or software, rarely commodity hardware. They don't need to go all the way to silicon if they can just tune the designs for their workloads. We're early on in that process, and I think the internet giants will progress up the stack into the metaverse, not down into the physical science of it.
Semiconductors are a capital intensive and brutal business, as evidenced by Intel's recent fall - they basically made one architectural mistake and that slip up cost them the lead on multi-threaded performance for probably 5-8 years, assuming they can get it back.
Separately, Rockefeller's coercion of the railroads was not something he did because he was big, it was something he did to become big. He would strong-arm his way into controlling or coercive positions at railroads, then cut off his competitor's ability to transport their product. When they were struggling, he would buy them up at distressed prices and turn the rails back on.
The worry I have is that these deep pocketed vendors will lock up access to top tier fast designs. There will no longer be a high end CPU open market, making it almost impossible for a new entrant to build anything competitive.
x64 is still decoupled, but how long will it survive with ARM designs like the M1 or Graviton blowing it out of the water?
The thing we really have to beware of is an oligopoly of vertically integrated vendors locking up all high end fab capacity, making it impossible to even fab a competing design if you can design one.
Not only do we need a revival of antitrust, but we need competitors to TSMC badly. Samsung and Intel are close on its heels but that’s not enough.
We’ve moved from a less diverse world (where Intel and AMD have an oligopoly) to one where a large number of firms can use Arm’s ISA and base designs (eg Neoverse) to build a decently competitive CPU. This is the opposite of what you’re suggesting is happening.
The argument from M1 is that "competitive CPU" will cease to be a thing as markets close; you'll get vertically integrated hardware. The M1 isn't a competitive CPU because it doesn't compete as a CPU; it competes only as the sealed unit of "Apple laptop"
The CPUs in the M1 are competitive because they build on Arm's IP, which others can do and are doing (Google, Amazon, Qualcomm, Samsung already, and others can join them), which is a much less oligopolistic position than Intel / AMD having the market to themselves.
But you can’t source the M1 for another product. If you could you’d see them in servers already as they (or at least their performance cores) would be fantastic chips for many server workloads.
That was my point. Diversity doesn’t matter. It’s diversity of what can be sourced on the open market that matters.
You do know that Graviton is essentially a slightly modified Arm Neoverse core or that you can buy an Ampere CPU based on Neoverse today or that Qualcomm and Nvidia will almost certainly be launching competitive Arm based server chips in the near future. Nothing to stop other vendors coming in and licensing Neoverse too.
So tell me that isn’t a more open market than say 3 years ago when you basically had to buy Intel.
I find these arguments bizarre - we’ve had many, many years of Intel monopoly on the desktop and server and finally we have some competitors and people are worried about the market being closed up? Seriously?