Google developing own CPUs for Chromebook laptops (nikkei.com)
402 points by jonbaer on Oct 2, 2021 | 383 comments



Many people are seeing all these different vendor-specific CPUs as a win. I'm a bit more skeptical, but perhaps that is unwarranted.

But Apple, Google, and Amazon are now creating their own ARM CPUs for their own products. So most of FAANG, all if you count the ones with consumer products.

The first version of these CPUs will be very ARM compatible, as they are trying to drive adoption of their silicon. Once they get a leg up they will start adding patented operations to their stuff. And then we'll end up with a fragmented CPU field driven by corporate greed.

And because they are not OEMing this hardware, they really have no incentive to be cooperative with the others. Similar to older gaming consoles that had custom and experimental architectures.

However, it is obvious why this is all happening. And that is that Intel and Qualcomm inhibited growth and squeezed too much profit from the market.


There's a lot of ignorance in some of the threads here, but the reality is proliferation of "custom" (but under license) ARM chips _is the end goal of ARM_. That's their whole business model. They (Arm, the company) don't manufacture anything. They just design and license their designs.

To be clear, there is little risk of anyone manufacturing truly proprietary chips. ARM licensing ranges from "you shall produce exactly to spec" for those wishing to vertically integrate commodity parts to "you may freely modify" -- but in either case it's the ARM design being licensed. It does nothing but harm the licensee to build in novel and strange functionality.

This system has been working quite well for Arm for decades at this point. If anything, the fact that well-known brands like Apple and Google are spinning ARM chips speaks volumes to Arm's excellent business model. Nothing we've seen from either tech giant is truly revolutionary -- the smaller fabs have been doing the same things for years.

Why anyone ever sold Arm, I will never understand.


There's survivorship bias - the success of ARM ignores the relative failure of other alternative instruction sets. SPARC, Itanic, MIPS - the road to today is littered with instruction sets that didn't make it in the way ARM has.

Oracle and Fujitsu will still sell your business SPARC servers, but I don't know that I'd buy that business unit (never mind that I don't have that kind of money). It's easy to buy the lottery ticket after you've got the winning numbers. ARM's successful now, but there was a lot of luck and hard work to get there.


The comment above is talking about ARM's business model, which Intel on Itanic definitely didn't have, Sun didn't have for SPARC (they later freed it, after it was no longer commercially relevant), and MIPS as a company didn't follow, either.


Anyone remember the Motorola families of chips? 68xxx and 88xxx.


IIRC Motorola couldn't keep up in terms of producing competitive performant designs and lost out to competitors. ARM hasn't fallen into that trap. Then again, empires rise & fall, and the end of something is usually just a matter of time. But in the tech world the "end" is also some after-life embedded in another organization or pivoted to a different direction. For now, ARM is rock solid on a foundation made from shifting sands of eternal tech change.


> For now, ARM is rock solid on a foundation made from shifting sands of eternal tech change.

ARM as an architecture may be on solid ground, but their future as a company may be uncertain given that their IP seems to have been appropriated by the CCP.

https://www.iotworldtoday.com/2021/09/08/arm-loses-china/


ARM and ARM China are separate companies. In software terms, while ARM China may have forked the ARM IP, they're not going to get any commits from upstream and it'll wither on the vine.


Optimistically, yes.

On the other hand, there's a global supply chain appetite for cheap products, including CPUs. Upstream commits aren't required to dump cheap CPUs on the market or to develop compute-intensive businesses around them that undercut the competition on price.


Controllers, maybe.

High-end CPUs, hardly. That would require significant design efforts to keep up with ARM's development.

Not that it'd be impossible for China to develop their own strong processor-designing forces, driven commercially or by the state. But so far it seems far from a trivial task.


High end isn't needed. Simply a marginal improvement and CCP comes out ahead. In the performance per watt per cost calculation, the cost factor has no lower bound. Cost can be subsidized to near zero like many other industries under their control.

We're talking about instruction sets with ARM, correct? It's not anywhere near the level of investment as next generation litho tech for a chip foundry?

I don't underestimate the ability to innovate. Stolen tech can be improved just as well as in-house R&D'd tech.


The good news is that intellectual "property" isn't truly rivalrous. If ARM China "steals" ARM IP, ARM is still capable of licensing to its western clients, as it's unlikely any of them will be licensing from ARM China.


> If ARM China "steals" ARM IP, ARM is still capable of licensing to its western clients;

If the smartphone market is an indicator, too bad this will just mean that the majority of OEMs will buy their chips from ARM China and thus demand for ARM's IP will predictably drop, and meanwhile this IP appropriation will be used to develop independent design capabilities.


"just buy their chips from ARM china"

I'm really not sure what that means. Does ARM china have fabs?


ARM China is independent from ARM, which just happens to have whatever IP it was able to run away with. To compete they're going to have to match ARM, given that nobody is going to write software for their custom fork.


They can win the domestic Chinese market, which has its own home-grown software for everything.

Soon, those companies will move beyond China as well.


The domestic Chinese market was not able to support homegrown TD-SCDMA even without the silicon manufacturing restrictions that now exist for Chinese companies, so what makes you think that a company with rapidly aging chip designs, and access only to domestic silicon fabs that are stuck on older 14nm+ processes, is going to be able to compete outside of the low end of the budget segment?

Even Intel had trouble surviving on 14nm, hence all the contra-revenue spent to directly subsidize Intel tablets (whether they were $100 HP Stream Windows tablets, or $50 Walmart special Android tablets) to try and not get locked out of that space.


> (...) what makes you think that a company with rapidly outdating chip designs and access only to domestic silicon fabs (...)

Well, maybe the fact that not so long ago it had none of that and it clearly looks like both the company and the political regime aren't having many problems getting their hands on all the missing pieces.


They still have access to TSMC ( for now ).


That's pretty much what I mean by a solid foundation... Built on sand.


Plenty of people do (they're some of the most beloved chips of all time), but their peak was pre-1990, so most people don't bring them up in discussions like this.


They should.

The 68K line was the Itanium of its time. It overpromised and underdelivered and was crushed by the 286 and 386. Many vendors made machines based on it (Atari ST, Amiga, Mac, Sun Microsystems, Sinclair QL, ...) and all of those vendors either went out of business or transitioned to RISC architectures in a hurry. It was one of the many near-death experiences the Mac platform had.

It was more successful than the beautiful losers such as the TMS9900, iAPX 432, i860, NS32000, but it hit the end of track and left everyone in the lurch.


Performance per clock cycle was much better on 68k. I understood that they lost out because the world adopted DOS and DOS ran only on x86. Then... consequences. The only surviving platform that used to run on 68k is the Mac, which was a minor player even at the time.


68k was extremely expensive at the time; that's why it lost the desktop market to its 80x86 killer. An additional factor was that the XT and then the AT became an open architecture.


I never understood why Motorola couldn't keep up with Intel in the speed race. If they had, 68K would be the dominant architecture today.


From what I remember, 68k processors were used by PalmPilot starting from the US Robotics days. I wonder if they had any particular power efficiency to make it better for Palm. Either that or I am remembering it wrong.


They are also used in automotive as part of the Coldfire CPU series.

But I think a lot of this usage goes back to the days when embedded CPU families were a lot more fragmented, and companies usually picked one family from their favorite supplier (based on pricing, fulfilling the use-case, etc.) and then just stuck with it because code wasn't particularly portable.

Since that time the number of CPU families actually in use has shrunk drastically, and it's a lot more likely that all of those use-cases just pick ARM.


what is the main product line of PalmPilot?



Thank you for the link.


>There's a lot of ignorance in some of the threads here,

Not just this thread, but HN generally. Hardware and hardware business models are topics where the signal-to-noise ratio is extremely low on HN.

If it weren't for a few people working at ASML, ARM, or in embedded programming adding some immensely valuable input to balance things out, hardware on HN would be no different from any other forum on the internet.


> Why anyone ever sold Arm, I will never understand.

To make a quick buck, and who cares about timeframes other than short term? ;)

/s


ARM clearly shows that Apple's aggressive supply chain integration is perhaps not the optimal model from the economy's perspective.


Why? Apple’s use of Arm IP would seem to have a minimal impact on the economy.


Apple is one of the founders of ARM with very different licensing pricing and rights. Apple using ARM is very different from Google or Samsung using ARM.


Apple was involved in the founding of ARM, but their licensing isn't special. Many companies, including Samsung, have ARM architecture licenses which allow them to use the ARM instruction set with whatever custom design they want.

https://en.wikipedia.org/wiki/Arm_Ltd.#Arm_architectural_lic...


I don't think Apple has any ownership stake in ARM anymore, regardless of their initial investment in 1990. (Though I'm not sure how much they still had up through 2016 when SoftBank bought them)

Do we know that Apple has a special arrangement not available to other licensees?

Near as I can tell, in terms of licensing rights the only major difference is that they purchase an "architecture license" that gives them the right to create more custom designs, but this is a license available to and purchased by other companies like Qualcomm, Intel, and others. And if Apple has lower royalty rates, it may simply be a product of their scale rather than special treatment.

But I don't know this for sure-- only that a little bit of searching didn't find anything obvious about special arrangements.


This is an excellent example of positive use of IP law, and why removing copyright and patent would harm some legitimate business models.


Copyright and patents do serve some good purposes; this is why they were introduced in the first place.

It's the endless extension of copyright (hello, Mickey Mouse) and granting excessively wide patent rights (e.g. on whole classes of chemical compounds) that does not serve the initially envisioned good purposes. Overdose is bad, whether you use table salt, a medicine, or a law.

ARM in particular is not known for abuse of the system, so indeed they are a positive example.


I'm not sure why this is an example of positive use of IP law. I might agree that it's an example of IP law not being ridiculously abused (though I really don't have enough information). It might be positive for ARM shareholders as well, but generally positive for society? You say that removing IP laws would harm legitimate business models; sure, but having IP laws also harms legitimate business models, and which ones are more "legitimate" or "positive" is not straightforward to say.


Why is this being downvoted? I think it’s a good point. It seems like ARM's success is due in part to minimal fragmentation, which is due in turn to ARM's licensing strategy.


Is it really the case that ARM is less fragmented than x86? Or, even if it is, x86 is wildly successful in the face of a rogue licensee spinning off an unofficial 64-bit extension to its ISA.


How do you downvote on Hacker News? There is no downvote button, right?


After a certain karma threshold on your account you get the ability to downvote.


Oh ok thanks for letting me know.


Get 500 karma. I’m pretty sure that’s the level to unlock downvoting.


If it relies on Monopoly power, I have trouble regarding it as "legitimate".

"It makes me money" is not in and of itself a successful business model.


No, not “it makes me money” but “it succeeded in building an entire ecosystem of partners inheriting from a central producer of intelligence work, all of it succeeding in producing better output than the established player, Intel.” This is “good enough” as a measure of success; I don’t care where the money flows as long as an industry is being built. Actually that’s the sole role of money, which could be replaced by anything you like (a point system, exchange, central planning). I’m just noting that IP protection enabled this industry, this time, despite me being generally against it.


Is that not the literal definition of a successful business model?


>"This is an excellent example of positive use of IP law, and why removing copyright and patent would harm"

And there are countless counter-examples when IP law causes harm to consumers and inventors.

>"legitimate business models"

And what makes them legitimate? IP law is an artificial construct which currently serves a select few. Why shouldn't I be able to "invent" something and sell it without the worries just because somebody else happened to have the same idea? And many of the existing patents, especially ones in the software field, can be "invented" by any reasonably intelligent Joe Doe in a few minutes if there is a need.


Sorry, this is unwarranted alarmism.

> The first version of these CPUs will be very Arm compatible, as they are trying to drive adoption of their silicon.

All these CPUs will be Arm compatible, otherwise they will be breaking their license.

> Once they get a leg up they will start adding patented operations to their stuff. And then we'll end up with a fragmented CPU field driven by corporate greed.

Maybe they will add accelerators as Apple has done but Arm compatible code will still run on all these CPUs.

> And because they are not OEMing this hardware, they really have no incentive to be cooperative with the others.

Yes they do because they need Arm code to run on them.

There are thousands and thousands of Arm designs in use and they _all_ run Arm code as specified by Arm Ltd.

(And yes I know that there is fragmentation in other things that go on the SoC but that is a different point).


You can definitely get a license that allows you to go hog wild and make breaking changes. I believe both Apple and Google pay for that.

But it doesn't help anyone to fragment too much.


> You can definitely get a license that allows you to go hog wild and make breaking changes. I believe both Apple and Google pay for that.

Reference?


There is some minor breakage, like Apple CPUs enforcing the VHE feature of ARMv8.1-A to be on, and not supporting it being off. But that was like… the sole issue on that front.

What you are allowed to do is make extensions with the highest tier of arch licenses, you however cannot break the ISA.


VHE isn't even observable for userspace code, and Apple is very quickly going to drop support for people being in the kernel, so I would argue this probably isn't even that important.


For macOS at Reduced Security, kernel extensions will no longer be supported.

fuOS/CustomKC will stay in the long term, and that's what matters for other OSes.


Thanks! Very interesting on VHE.


It’s called an architectural license. There are a few publicly announced license holders, including Apple and Qualcomm. Refer to the Arm wiki page for a list.

Probably outdated article on this: https://www.electronicsweekly.com/news/business/finance/arm-...


> Companies can also obtain an ARM architectural licence for designing their own CPU cores using the ARM instruction sets. These cores must comply fully with the ARM architecture. (Wikipedia)

That's not going 'hog wild'!


It does not need a reference, as ARM itself is not the whole platform that can be compatible or incompatible. Whoever has tried to change CPU/microcontroller finds that it's not the core itself that matters most, but the peripherals and the whole thing as a system.


We’re talking about licences for Arm cores.


> All these CPUs will be Arm compatible

At base ISA level perhaps. But there will also be extensions for AI, DSP, media encoding, etc which will be incompatible. You nominally wouldn't have to use those, but the CPU will be nothing special without all that. This problem is already quite pervasive in the ARM ecosystem, and it'll get worse over time. There's no way around this. In a power-sensitive environment the only realistic way out currently is specialized co-processing and ISA extensions.


You’re getting completely confused between ISA extensions (very rare) and other functionality on the SoC such as GPUs which are obviously universal and very diverse.


Apple has already done this to an extent. M1 has undocumented instructions and a semi-closed toolchain (Assuming they have some, M1 tuning models for LLVM and GCC are nowhere to be seen afaik)


Have the tuning models in open-source toolchains for Intel CPUs been released by Intel, or figured out over time by open-source developers?


Intel publish a very thick optimization manual, which is a good help.

Compilers aren't great at using the real parameters of the chip (i.e. LLVM knows how wide the reader buffer is but I'm not sure if it actually can use the information), but knowing latencies for ISel and things like that is very helpful. To get those details you do need to rely on people like (the god amongst men) agner fog.


That should say reorder buffer above.


Intel contributes optimizations to gcc and llvm.


Apple also contributes plenty to LLVM, more than Intel actually, naively based on commit counts of @apple.com and @intel.com git committer email addresses. This isn't very surprising given that Chris Lattner worked at Apple for over a decade.


LLVM is Apple's baby, beyond its genesis under Lattner. They hate the GPL, that's it.

The thing about their contributions is that they upstream stuff they want other people to standardize on, but they aren't doing it out of love, as far as I can tell. E.g. Valve has a direct business interest in having Linux kicking ass; Apple actively loses out (psychologically at least) if a non-Apple toolchain is as good as theirs.


Apple used to contribute plenty to LLVM.

Nowadays less so, for them C++ support is good enough for what they make out of it (MSL is based on C++14 and IO/Driver Kit use an embedded subset).

The main focus is how well it is churning Objective-C, Swift and their own special flavour of bitcode.

None of them end up upstream as one might wish for.


They do now; I think I remember the time when they didn't.


Apple M1 also has that x86 emulation mode where memory accesses have the same ordering semantics as on x86. It's probably one of the main things giving Rosetta almost 1:1 performance with x86.

https://mobile.twitter.com/ErrataRob/status/1331735383193903...


TSO support at the hardware level is a cool feature but it's a bit oversold here. Most emulated x86 code doesn't require it and usually not at every memory instruction when it does. For instance the default settings in Window's translation implementation do not do anything to guarantee TSO.

Rosetta is also a long way from 1:1 performance; even your own link says ~70% of the speed. That's closer to half speed than it is to full speed.

The M1's main trick to being so good at running x86 code is that it's just so goddamn fast for the power budget that it doesn't matter if there is overhead for emulated code; it's still going to be fast. This is why running Windows for ARM in Parallels is fast too: it knows basically none of the "tricks" available but the emulation speed isn't much slower than the Rosetta 2 emulation ratio even though it's all happening in a VM.

In a fun twist of fate, 32-bit x86 apps also work under Windows on the M1 even though the M1 doesn't support 32-bit code.


> The M1's main trick to being so good at running x86 code is it's just so god damn fast for the power budget it doesn't matter if there is overhead for emulated code it's still going to be fast.

M1 is fast and efficient, but Rosetta 2 does not emulate x64 in real time. Rosetta 2 is a static binary translation layer where the x86 code is analysed, translated and stashed away in a disk cache (for future invocations) before the application starts up. Static code analysis allows multiple heuristics to be applied at binary code translation time, when the time to do so is plentiful. The translated code then runs natively at near-native ARM speed. There is no need to appeal to varying deities or invoke black magic and tricks – it is that straightforward and relatively simple. There have been mentions of the translated code being further JIT'd at runtime, but I have not seen proof of that claim.

Achieving even 70% of native CPU speed whilst emulating a foreign ISA _dynamically (in real time)_ is impossible on von Neumann architectures due to the unpredictability of memory access paths, even if the host ISA provides hardware assistance. This is further compounded by the complexity of the x86 instruction encoding, which is where most of the benefits of hardware-assisted emulation would be lost (it was already true for 32-bit x86, and is more complex for amd64 and SIMD extensions).

> This is why running Windows for ARM in parallels is fast too, it knows basically none of the "tricks" available but the emulation speed isn't much slower than the Rosetta 2 emulation ratio even though it's all happening in a VM.

Windows for ARM is compiled for the ARM memory model, is executed natively and runs at near-native M1 speed. There is [some] hypervisor overhead, but there is no emulation involved.


> but I have not seen proof of that claim.

x86 apps with JITs can run [1]. For instance I remember initially Chrome didn't have a native version, and the performance was poor because the JITted javascript had to be translated at runtime.

[1]: https://developer.apple.com/documentation/apple-silicon/abou...?


> There have been mentions of the translated code being further JIT'd at the runtime, but I have not seen the proof of that claim.

I've seen mentions of a JIT path but only if the AOT path doesn't cover the use case (e.g. an x86 app with dynamic code generation) not as an optimization pass. https://support.apple.com/guide/security/rosetta-2-on-a-mac-...

Windows decided to go the "always JIT and just cache frequent code blocks" method though. In the end whichever you choose it doesn't seem to make a big difference.

> Windows for ARM is compiled for the ARM memory model, is executed natively and runs at the near native M1 speed. There is [some] hypevisor overhead, but there is no emulation involved.

This section was referring to the emulation performance not native code performance:

"it knows basically none of the "tricks" available but the _emulation speed_ isn't much slower than the Rosetta 2 emulation ratio "

Though I'll take native apps any day I can find them :).


> Windows decided to go the "always JIT and just cache frequent code blocks" method though. In the end whichever you choose it doesn't seem to make a big difference.

AOT (or, static binary translation before the application launch) vs JIT does make a big difference. JIT always carries the pressure of a «time spent JIT'ting vs performance» tradeoff, which AOT does not. The AOT translation layer has to be fast, but it is a one-off step, thus it can invariably afford to spend more time analysing the incoming x86 binary and applying more heuristics and optimisations, yielding a faster performing native binary, as opposed to a JIT engine that has to do the same on the fly, under tight time constraints and under a constant threat of unnecessarily screwing up CPU cache lines and TLB lookups (the worst case scenario being a freshly JIT'd instruction sequence spilling over into a new memory page).

> "it knows basically none of the "tricks" available but the _emulation speed_ isn't much slower than the Rosetta 2 emulation ratio "

I still fail to comprehend which tricks you are referring to, and I also would be very much keen to see actual figures substantiating the AOT vs JIT emulation speed statement.


Rosetta 2 also emulates 32-bit x86 code, you can try it out with CrossOver (CodeWeavers’ commercial version of Wine)


No, it does not. It is CrossOver letting you run 32-bit x86 (Windows apps only, I think), not Rosetta 2.


Yes it does. Rosetta includes full emulation for 32-bit instructions, allowing Crossover to continue to not have to be an emulator.

(The parent commenter also works at CodeWeavers, so he would know :P)


Breaking memory ordering will break software - if a program requires it (which is already hard to know), how would you know which memory is accessed by multiple threads?


It's not just a question of "is this memory accessed by multiple threads", with full TSO mandated if so; it's a question of "is the way this memory is accessed by multiple threads actually dependent on memory barriers for accuracy, and if so, how tight do those memory barriers need to be?" For most apps the answer is actually "it doesn't matter at all". For the ones where it does matter, heuristics and loose barriers are usually good enough. Only in the worst-case scenario, where strict barriers are needed, does the performance impact show up, and even then it's still not the end of the world in terms of emulation performance.

As for applying it, the default assumption is that apps don't need it, and heuristics can try to catch the ones that do. For well-known apps that do need TSO, it's part of the compatibility profile to increase the barriers to the level needed for reliable operation. For unknown apps that do need TSO you'll get a crash and a recommendation to try running with stricter emulation compatibility, but this is exceedingly rare given that the above two things have to fail first.

Details here https://docs.microsoft.com/en-us/windows/uwp/porting/apps-on...


> For unknown apps that do need TSO you'll get a crash

Sure about that? Couldn't it lead to silent data corruption?


Yes, it absolutely can. Shameless but super relevant plug. I'm (slowly) writing a series of blog posts where I simulate the implications of memory models by fuzzing timing and ordering: https://www.reitzen.com/

I think the main reason why it hasn't been disastrous is that most programs rely on locks, and they're going to be translating that to the equivalent ARM instructions with a full memory barrier.

Not too many consumer apps are going to be doing lockless algorithms, but where they're used, all bets are off. You can easily imagine a queue where two threads grab the same item, for instance.
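
To make that concrete, here's a minimal C++ sketch (mine, not taken from any real emulator) of the classic message-passing pattern. The relaxed atomics stand in for plain x86 loads/stores that an emulator translated to ARM without inserting any barriers:

    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<int> data{0};
    std::atomic<int> flag{0};

    // Producer: publish the payload, then raise the flag.
    // On x86 (or under TSO emulation) these two stores cannot be reordered.
    void producer() {
        data.store(42, std::memory_order_relaxed);
        flag.store(1, std::memory_order_relaxed);
    }

    // Consumer: wait for the flag, then read the payload.
    void consumer() {
        while (flag.load(std::memory_order_relaxed) == 0) { /* spin */ }
        int r = data.load(std::memory_order_relaxed);
        // The original x86 program always sees r == 42. Translated to ARM
        // with no barriers, r == 0 is a legal outcome, because the stores
        // (or the loads) may be observed out of order.
        std::printf("r = %d\n", r);
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
    }

A queue built on the same publish-then-signal pattern is exactly where two threads can end up claiming the same item.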


Heuristics are used. For example, memory accesses relative to the stack pointer will be assumed to be thread-local, as the stack isn’t shared between threads. And that’s just one of the tricks in the toolbox. :-)

The result of those is that the expensive atomics aren’t applied to all accesses at all on hardware that doesn’t expose a TSO memory model.
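
A toy illustration of that heuristic (everything here is invented for the example, not how any real translator is structured): the translator only pays for an acquire load when the address might be shared.

    #include <cstdio>
    #include <string>

    enum class Base { StackPointer, Other };

    // Pick the ARM load to emit for an x86 load being translated.
    std::string emit_load(Base base) {
        if (base == Base::StackPointer) {
            return "ldr";   // plain load: stack assumed thread-local, no ordering cost
        }
        return "ldar";      // load-acquire: preserves x86-like ordering for
                            // memory that might be shared between threads
    }

    int main() {
        std::printf("mov eax, [rsp+8] -> %s\n", emit_load(Base::StackPointer).c_str());
        std::printf("mov eax, [rbx]   -> %s\n", emit_load(Base::Other).c_str());
    }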


Nitpick: relative speed differences do not add up; they multiply. A speed of 70 is 40% faster than a speed of 50, and a speed of 100 is 42.8571…% faster than a speed of 70 (corrected from 50. Thanks, mkl!). Conversely, a speed of 70 is 30% slower than a speed of 100, and a speed of 50 is 28.57142…% slower than one of 70.

=> when comparing speed, 70% is about exactly halfway between 50% and 100% (the midpoint being 100%/√2 ≈ 70.7%)
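
Spelling out where the √2 comes from (my arithmetic): the multiplicative midpoint r between 50 and 100 is the speed that sits the same ratio above 50 as it sits below 100:

    \[
      \frac{r}{50} = \frac{100}{r}
      \quad\Longrightarrow\quad
      r = \sqrt{50 \times 100} = \frac{100}{\sqrt{2}} \approx 70.7
    \]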


Not sure what the nit is supposed to be, 70% is indeed less than sqrt(1/2) hence me mentioning it. And yes, it's closer to 3/4 than half or full, but the thing being pointed out wasn't "what's the closest fraction" rather "it's really not that close to 1:1".


> a speed of 100 is 42.8571…% faster than a speed of 50

I think you mean either "100% faster" or "faster than a speed of 70" there.


Meh, in the context of emulation, which ran at 5% before JITs, 70% is pretty close to 1:1 performance. Given the M1 is also a faster CPU than any x86 MacBook, it's really a wash (yes, recompiling under ARM is faster...)


No, it's not. Switching to TSO allows for fast, accurate emulation, but if they didn't have that they would just go the Windows route and drop barriers based on heuristics, which would work for almost all software. The primary driver behind Rosetta's speed is extremely good single-threaded performance and a good emulator design.


That’s a single ACTLR bit present in the publicly released XNU kernel source code.


I'm asking out of naiveté here -- how were they (kernel maintainers) able to get the Linux kernel to support M1 with undocumented instructions?


My guess: if the situation is similar to Windows laptops, they just use a subset of OEM features and provide a sub-par experience (like lack of battery usage optimizations, flaky suspend/hibernate, second-tier graphics support, etc)

Now, I'm typing this on a GNU/Linux machine, but let's face it, all of the nuisances I mentioned are legit and constant source of problems in tech support forums.


A kernel doesn't need to use all the instructions a CPU offers -- only the ones it needs.


If the extra instructions also operate on extra state (e.g. extra registers), the kernel needs to know about their existence so it can correctly save and restore that state on context switches.
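
A toy model of the point (all names hypothetical, nothing to do with Apple's actual extensions): if the context-switch code doesn't know about the extra registers, their contents silently leak from one thread to the next.

    #include <array>
    #include <cstdint>
    #include <cstdio>

    struct CpuState {
        std::array<std::uint64_t, 31> gpr{};      // base architectural registers
        std::array<std::uint64_t, 4>  ext_reg{};  // hypothetical extension registers
    };

    struct ThreadContext {
        std::array<std::uint64_t, 31> gpr{};
        std::array<std::uint64_t, 4>  ext_reg{};  // drop this field and the extension
                                                  // state crosses thread boundaries
    };

    // Save the outgoing thread's state and load the incoming thread's state.
    void context_switch(CpuState& cpu, ThreadContext& prev, ThreadContext& next) {
        prev.gpr = cpu.gpr;
        prev.ext_reg = cpu.ext_reg;
        cpu.gpr = next.gpr;
        cpu.ext_reg = next.ext_reg;
    }

    int main() {
        CpuState cpu;
        ThreadContext a, b;
        cpu.ext_reg[0] = 0xdeadbeef;   // thread A's value lives in extension state
        context_switch(cpu, a, b);     // a kernel that models ext_reg: B sees zeros
        std::printf("ext_reg[0] after switch: %#llx\n",
                    static_cast<unsigned long long>(cpu.ext_reg[0]));
    }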


Not necessarily/really, the custom extensions still need to be enabled by the kernel before they can be used.

As such, it isn’t actually an issue.


You're confusing MSRs (which don't have to be saved/restored on context switch) with GPRs (which do).


Well, if there is a truly undocumented extension, how do I know it doesn't come with its own registers (e.g. like a floating point unit does)?


Apple reserves certain MSRs that apply on a per-process basis, and thus must be applied on context switch.


It doesn’t need to use them, but it must be aware of them, insofar as they may introduce security problems.

As an example, if the kernel doesn’t know of DMA channels, and it requires setup code to prevent user-level code from using them to copy across process boundaries, the kernel will run fine, but have glaring security problems.


What DMA channel doesn't require mapping registers into the user-space process to work? There aren't usually magic instructions you have to opt into disabling, as far as I know.


Those are not security problems, they are insecurity features. *readjusts tinfoil*


I'm assuming they aren't using them or they've reverse engineered the ones they need to use.


Don't use them. The instructions necessary to run Linux are likely inherited from the normal ARM set.


The undocumented instructions aren't required in order to use the hardware


The instructions are (all?) for acceleration I think.


Not all of them, but the ones that aren't (GXF, memory compression, etc.) are opt-in.


And I would like to emphasize that even when Apple used Intel, it was not commercially viable to use their platform. Bringing in ARM changed less than one would think at first.



That's for the instruction selection patterns, no? I couldn't see a pipeline model in there last time I checked.

P.S. GCC descs are a goldmine for old arch's.


That is true, but I think this is the case for all the ARM cores. I didn't spot a scheduler for Cortex-A73, for example.


> Once they get a leg up they will start adding patented operations to their stuff. And then we'll end up with a fragmented CPU field driven by corporate greed.

Why would they go out of their way to make it harder for devs to target their ecosystems?

Here's what I see happening: the majority of application development now takes place within architecture-agnostic languages/frameworks/platforms. The platforms solve the problem of multi-architecture once, and then everyone else can get on with their business without thinking about it. Fragmenting the hardware itself mainly serves to open the door for more finely-grained optimizations by these platforms (interpreters, browsers, LLVM, etc).

Look to graphics APIs for comparison: OpenGL was a relatively high-level API because that's what game devs and others were usually writing directly. Now that most game dev happens in engines like Unity and Unreal, or even on platforms like the Web, only the platform/engine developers have to write actual graphics code a lot of the time (excluding shaders). And they want more fine-grained control, so we're switching to lower-level graphics APIs like Metal and Vulkan because the needs for the baseline API have shifted.

So what I see is not a difference in the total fragmentation from the average dev's perspective, just a shifting of where the abstraction happens.


It was already like that with the UNIX wars.

The only issue is with commercial software, but even there JIT enabled emulation is a possible solution.


> Many people are seeing all these different vendor-specific CPUs as a win. I'm a bit more skeptical, but perhaps that is unwarranted.

Yeah, I think we fail to see this common pattern of negative (even monopolistic) long-term effects because we are distracted by the trade-offs or benefits the company promises us initially (but betrays later).


Do they even have to betray anything? It's not like Apple has been using interoperability as a selling point...


Undocumented features and functionality become the norm


> Many people are seeing all these different vendor-specific CPUs as a win. I'm a bit more skeptical, but perhaps that is unwarranted.

I'm mixed on that.

Upside - software gets made more portable across architectures, and with that, the choice of an ARM or x86 desktop also makes porting to other CPU types less of an effort. Then drivers - and let's cut to the crux - support for other hardware via drivers is what holds any other architecture back from the start.

Downside - moving towards more and more closed/secret-source CPUs that are more locked in, and if they become the norm, then a whole level of development becomes way harder.

Whilst I'm sure there are many more upsides and a few other downsides, the real upside in all this will be even better ARM support by commercial apps. With that, I really do think Apple with the M1 has done wonders for ARM and the whole ARM environment.


The extensions present in the M1 are not used at all by Apple’s public compilers. From the perspective of a 3rd-party macOS developer that doesn’t reverse engineer, it’s just a regular arm64 machine.

This allows Apple to deprecate/change those extensions over time without developer involvement, and even use Cortex reference designs where it would make sense for them.


Very true, and it's easy to blur the lines of thought between SoC and CPU these days with ARM. Thank you for the poignant reminder.


These alarmist comments seem to be comically oblivious of much of computing history. If you have something open and better, it will prevail over closed products; if you have something open and worse, you need to work on making it open and better, screaming “but they’re closed” doesn’t help.


> If you have something open and better, it will prevail over closed products

The problem is that "better" is being defined by the companies spreading the marketing propaganda about their products, and that has an unfortunate effect on the perception of the users. It's sad that most of the population would simply take it at face value if a company told them something was "better", but if you realise that making users unquestioning and under their control is ultimately a great way to extract more $$$ from them, it all makes sense.


> The problem is that "better" is being defined by the companies spreading the marketing propaganda about their products

Partially, but I don't think this is nearly as big of a factor as lots of FOSS people on the internet these days seem to think. Remember that RMS wrote a lot of the original GNU tools by himself and wrote the beginnings of GCC. Stallman could have just chosen to reject C and not write GCC altogether the way a lot of modern FOSS advocates seem to act about rejecting closed software. Unix and C could have been cast as proprietary enemies to reject, but he understood at the time that to gain traction, FOSS needed software that was usable for similar purposes as proprietary software. We can cry conspiracy all day but the hard work of building good products is what ultimately wins the day.

(Network effects on modern social/messaging platforms are a much more complicated story however.)


That's not exactly the truth. "Closed and worse" often does better than "open and better." It really just comes down to who has better salesmen.

Microsoft, for example, licensed Windows to vendors for decades with the requirement they not have another operating system pre-installed. This killed both "closed and better" and "open and better" systems alike!


Salesmen? Nah. They don't play that game anymore.

It's about who has more capital to aggressively outprice the competition, even by taking ridiculous losses. Or buy promising startups, or litigate potential competitors into oblivion.

Diapers.com for example. It's not about anything close to sales and marketing.


I'm not sure that in computing history there's ever been the situation we have now of a software vendor owning so much of the market(s) that they can realistically afford to make this kind of move.

Microsoft in the 90s might have been the only company positioned appropriately to try but compared with Amazon, Google, and Apple, they did not have as much of an "in" into people's daily lives the way all three of those companies do today.

Unregulated capitalism leads to the company store, which is I think effectively where GP was suggesting things were headed.


This is the way most of the computing market has worked most of the time. Mainframes and minis were this way, then almost all the workstation vendors had their own CPU architecture and OS (often a flavour of Unix). The only real exception, and I admit it is a huge one, is Wintel, but outside that, in-house hardware/software development in tandem has been the norm.

Furthermore this model is fundamental to the ARM architecture. The whole point is for licensees to develop SoCs with their own custom components on the same die. That's literally what an SoC is.


Windows is a massive “in”, your work and home device, your gateway to the *internet*.

As an aside, that is monopolistic behaviour, which at least California and the EU are on top of. Companies have to be careful exploiting their dominance in an area.

Maybe an “in” and dominance are different; if they are, it doesn’t matter.


> But Apple, Google, and Amazon are now creating their own ARM CPUs for their own products. So most of FAANG, all if you count the ones with consumer products.

The F in FAANG - Facebook - has consumer products (notably, the Oculus VR headset).

Also Netflix spun out their consumer product into another company (Roku).


Is this worse than the current state of things? Intel has its own special extensions to its chips, right? They all do.


OP's point is exclusivity. Intel chips are not exclusive to Intel branded end consumer computers.


All the x86 extensions are freely implementable by any manufacturer. There's some stuff like SSE4a which is not available on Intel processors, and others that Intel has implemented and AMD chose not to. But they are published as extensions and part of the standard.
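
For example, feature detection works the same on both vendors' chips - code probes CPUID bits and falls back when an extension is absent. A rough sketch with GCC/Clang on x86 (bit positions are the commonly documented ones; double-check against the vendor manuals):

    #include <cpuid.h>
    #include <cstdio>

    int main() {
        unsigned eax, ebx, ecx, edx;

        // Standard leaf 1: SSE4.1 / SSE4.2 feature bits in ECX.
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            std::printf("SSE4.1: %s\n", (ecx & (1u << 19)) ? "yes" : "no");
            std::printf("SSE4.2: %s\n", (ecx & (1u << 20)) ? "yes" : "no");
        }
        // Extended leaf 0x80000001: SSE4a (AMD-only) feature bit in ECX.
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
            std::printf("SSE4a:  %s\n", (ecx & (1u << 6)) ? "yes" : "no");
        }
    }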


https://en.wikipedia.org/wiki/X86-64#Licensing:

“x86-64/AMD64 was solely developed by AMD. AMD holds patents on techniques used in AMD64; those patents must be licensed from AMD in order to implement AMD64. Intel entered into a cross-licensing agreement with AMD, licensing to AMD their patents on existing x86 techniques, and licensing from AMD their patents on techniques used in x86-64. In 2009, AMD and Intel settled several lawsuits and cross-licensing disagreements, extending their cross-licensing agreements.”

⇒ I think only Intel and AMD currently can freely implement x64.

There’s also https://newsroom.intel.com/editorials/x86-approaching-40-sti...:

“However, there have been reports that some companies may try to emulate Intel’s proprietary x86 ISA without Intel’s authorization. Emulation is not a new technology, and Transmeta was notably the last company to claim to have produced a compatible x86 processor using emulation (“code morphing”) techniques. Intel enforced patents relating to SIMD instruction set enhancements against Transmeta’s x86 implementation even though it used emulation.”

That was seen as a message to Microsoft (https://arstechnica.com/information-technology/2017/06/intel...)


It was a message to Microsoft in 2017. The patents for x86-64 and SSE2 should have all expired since then. I don't think most software needs SSE3/4/AVX.


Maybe it does not “need” them, but it definitely “wants” them - using these as part of code optimization is quite common.


I believe you misunderstood the comment. Intel also has undocumented extensions and functionality that require reverse engineering. It’s exactly the same as the vendor-specific cases here. You were thinking of documented and well-specified vendor-specific extensions, but I don’t think that’s the main concern.


You mean implementable by AMD, the other x86 manufacturer.


I don't see you complaining about ASML being the only way of getting photolithography machines, or TSMC being basically the only manufacturer at scale.

Some industries are awfully complex and getting into them requires insane amounts of work. x86 processor making is one of these. It's the same as making new browser engines/JS JITs/etc. There is just so much work that catching up with the incumbents is almost impossible.


I'm not complaining about anything, just saying there are two companies that have licenses to make x86¹, so it's not true that anybody could implement a CPU with SSE4 extensions.

¹Not sure what happened to that x86 license that Cyrix had.


VIA Technologies is making x86 CPUs using Cyrix's licence (they purchased Cyrix 20 years ago).


... and

The proprietary CPUs will only run software obtained from the corresponding app store.

You can take that to the bank.


Yeah. This is more about full stack DRM to provide rentseeking opportunities than Intel's fat (but historically tolerable) margins.


Yeah, this won’t be happening.


It effectively is already happening: that's the iOS model. The fact that it's not enforced by incompatible instructions is irrelevant.


Because macOS is not iOS.


Why?


>> But Apple, Google, and Amazon are now creating their own ARM CPUs for their own products.

It all leads to technical debt. They will make their own devices and keep them in-house. Then they will modify software to leverage these devices. Eventually they will lose touch with the standards. This will go well, for a while. Then something will go wrong. Their chip development will run into a roadblock and they will want to return to the standard. But by that point the cost of reverting will be far too high. They will be stuck with their in-house chips for better or worse. This isn't healthy. Sticking with community standards on fungible hardware, or better yet contributing to them, is always the better long-term bet.


In some tech sectors, the double margin problem is a very real issue. Merchant silicon vendors have a margin they need to hit, and the vendors have a gross margin they need to hit, and so on. This is especially an issue in HPC and networking.


> But Apple, Google, and Amazon are now creating their own ARM CPUs for their own products. So most of FAANG, all if you count the ones with consumer products.

Microsoft too! https://www.microsoft.com/en-us/surface/business/surface-pro...

And of course RaspberryPi: https://www.arm.com/blogs/blueprint/raspberry-pi-rp2040


> And then we'll end up with a fragmented CPU field driven by corporate greed.

In marked contrast to the blissful years of the utopian Intel/AMD duopoly? (:


Google is doing this for Chromebooks. If they wanted to lock other vendors out of making Chromebooks, they wouldn't have to make a custom CPU; they'd just have to fork Chrome OS and Android. But they don't want to corner the Chromebook market, they want wide adoption.

> And because they are not OEMing this hardware, they really have no incentive to be cooperative with the others. Similar to older gaming console that had custom and experimental architectures.

What exactly is this referring to? Are you saying that game consoles should only use chips that are available to other manufacturers, or that every game console should be an IBM-PC compatible, like the first Xbox?


No, your fears aren't warranted. In fact, when Apple launched their ARM SoC for their laptop/desktop platform, I pointed out that Intel/AMD x86/AMD64 processors are now the final bastion of computing freedom, as they build general-purpose CPUs and their current business model prevents locking them down to any specific platform.

With custom ARM SoCs we are going to see extreme platform lockdowns. (What most people here don't get, or are being deliberately obtuse about, is that ARM's IP is used to build systems-on-chip where the ARM processor is just one of the units. It is the other custom parts of the SoC that allow you to lock it down and make it incompatible with other ARM SoCs.)

This is how Apple's ARM SoCs only fully support iOS/macOS, Google's will only fully support Android/ChromeOS, Microsoft's ARM SoCs will only fully support Windows, etc., and how they will enforce platform lockdown.

To understand why this is happening, we have to recognize the shift in the business model of BigTech - they want to turn everything into a service that will earn them recurring revenue. Platform lockdown will ensure they can force users' data onto the cloud by pushing cloud services on these devices and limiting their hardware. While initially they will lock personal data onto their specific cloud services too, regulation may force them to allow "data transfer" between the BigTech services to give consumers the illusion of choice.

Remember, PRISM showed the US government how valuable it is to have access to personal data held by corporations and how easy that is when it is in the "cloud". BigTech's worldwide reach will allow them to literally spy on anyone in a few decades, and the US needs that ability to maintain its influence on world politics.

I predict a very bleak future for personal computing, where the concept of ownership of computers or computing devices will no longer exist - your device in the future will be fully controlled by BigTech, and we will have no computing freedom to even install open source code on "our" devices.

BigTech is already experimenting with "cloud coding" - this means they will even have access to all your code in the future too - and in such a scenario no competitor will emerge to dislodge them for centuries, until they lose political backing.


The x86-64 patents also expired last year, so it's not only a general computing architecture, but a free-as-in-freedom one too. It's also the architecture everyone uses for pretty much everything. Developers are finally free and united in the language we use to encode computer programs. But how long will this Tower of Babel be permitted to stand? Writing code for things other than x86-64 (e.g. GPUs, DSP/ML chips, phones, Apple SDKs) usually requires entering into a legal agreement or obtaining some kind of authorization. History won't move backwards once the platforms we've fought hard to learn and use have finally opened up, just because Google and Apple want to save battery on their laptops.


Just because the patents have expired doesn’t make it ‘free as in freedom’ - someone has to design the CPU and there is no modern open source x86-64 implementation.

So right now you can buy from Intel / AMD (completely closed source and with ME nonsense etc.), license your IP from Arm for a not-excessive fee and inspect the code, or use an open-source RISC-V implementation.

And finally writing code for Arm / RISC-V doesn’t require any sort of legal agreement or authorisation.


Only if you're coming to open source with the mindset of a taker rather than a giver. Nevertheless, there are open-source implementations of the x86 architecture. I've written one. Bochs and QEMU have written others. If you feel that it must be implemented in hardware to count, then a director at Oracle actually wrote a Verilog implementation of the 80186 a few years ago.


Emulators != Open source core

If you want to go back to 1982 then fine. I'll keep running on my Arm and RISC-V cores thanks.

I do find the enthusiasm for the utterly closed oligopoly of x86 (with all the dubious ME stuff), with literally no modern open source hardware implementation, odd when you could be getting behind RISC-V.


This means more competition right? I mostly think good will come from this for consumers. Things felt like they really stalled out for a while with Intel


Seriously asking - does a fragmented CPU field even matter in today's software ecosystem of high-level languages, frameworks and extensive compiler infrastructures/virtual machines ?

Wouldn't such a divergence merely increase the demand for compiler-backend/VM engineers, making this job segment more attractive?


>Squeezed too much profit from the market.

And what should they have done instead? Is it up to companies themselves to forge their own competition? Or did you snooze at the opportunity.


Qualcomm used its modem patents and licensing terms to strongarm companies into purchasing modems and CPUs together (since buying just a modem or just a CPU ended up costing nearly as much). That kind of behavior pushes companies to find alternatives and reduces choice in the market.


Embrace and extend.


>However it is obvious why this is all happening. And that is Intel and Qualcomm inhibited growth and squeezed too much profit from the market.

Or, it's just the regular old way capitalism works. Any sufficiently large company will start to vertically integrate to maximize their profits. If not Apple / Google, it could have been Intel / Qualcomm who started vertically integrating by offering their own laptops, phones, OSes and App Stores.

The big fish eats the small ones. That's capitalism 101.


> And then we'll end up with a fragmented CPU field driven by corporate greed.

So x86, basically.


> But Apple, Google, and Amazon are now creating their own ARM CPUs for their own products

Oracle too, whose CPUs btw are ~20% faster than AWS Graviton2. Looks like they use Ampere Altras.


I do agree. Greed is not always the best.


You're right, but all of this will drive WASM adoption. I'm fine with papering over incompatibility as long as the costs continue to go down.


It won't, and shouldn't. Wasm meets very few of its goals, and is nowhere near native speeds under real-world workloads. Even speedups with SIMD are hindered, and the entire design and implementation of it in the clang/LLVM compiler is very experimental.

Go work with wasm in the browser; you will pull your hair out at the fact that emscripten is still the most used tool. It's all very hacky and if something breaks, you won't fix it.


I have worked with wasm in the browser in production. Yes, there are many issues, yes I have some critiques of the multithreading proposal, but in the long run portable workers solve lock-in.


WASM adoption is not something everyone agrees is a great thing in the context of physical CPUs.


100% this. Cross-platform binaries that "just work" everywhere in a hardware agnostic manner is going to be huge.

Browser-based games and real-time 3D applications are the ones I'm most excited about personally, but that may be because I'm developing a WASM startup in this space.


They should think of a new slogan, maybe "Write once, run anywhere". Yea that sounds good.


WASM runs on 3 billion devices!


This x 1000%.


It was a joke on the Java installer saying Java runs on 3 billion devices for ages, not sure if you got it or not.


Pascal P-Code.


I think so too.


> However it is obvious why this is all happening. And that is Intel and Qualcomm inhibited growth and squeezed too much profit from the market.

It's not that Intel and Qualcomm have squeezed too much profit, it's that Apple, Google, Facebook, etc. have too much profit. Apple and Google are worth $2 trillion. Facebook is worth more than $1 trillion. Intel and Qualcomm are worth $300 billion put together. These tech companies are big enough and monopolistic enough to own their entire hardware stack. It's like when Rockefeller and Standard Oil got so big that they "bought" the railroads that transported their oil.

Apple and Google each make more profit than Intel and Qualcomm's combined revenue.


FAANG companies will primarily be differentiated based on network or software, rarely commodity hardware. They don't need to go all the way down to silicon if they can just tune the designs for their workloads. We're early on in that process, and I think the internet giants will progress up the stack into the metaverse, not down into the physical science of it.

Semiconductors are a capital intensive and brutal business, as evidenced by Intel's recent fall - they basically made one architectural mistake and that slip up cost them the lead on multi-threaded performance for probably 5-8 years, assuming they can get it back.

Separately, Rockefeller's coercion of the railroads was not something he did because he was big, it was something he did to become big. He would strong-arm his way into controlling or coercive positions at railroads, then cut off his competitor's ability to transport their product. When they were struggling, he would buy them up at distressed prices and turn the rails back on.


> FAANG companies will primarily be differentiated based on network or software, rarely commodity hardware.

I know. I didn't say they were going to sell commodity chips. Just that they were big enough to own their own stack.

> Separately, Rockefeller's coercion of the railroads was not something he did because he was big, it was something he did to become big.

No. It was something an already-large SO did to get bigger. If Standard Oil wasn't big, it could never have coerced the railroads to begin with.

> When they were struggling, he would buy them up at distressed prices and turn the rails back on.

I know. But small, insignificant Standard Oil didn't do this. It was big Standard Oil on its way to even bigger and better things.


Nice take, I infer them the same.


The worry I have is that these deep pocketed vendors will lock up access to top tier fast designs. There will no longer be a high end CPU open market, making it almost impossible for a new entrant to build anything competitive.

X64 is still decoupled but how long will it survive with ARM designs like the M1 or Graviton blowing it out of the water?

The thing we really have to beware of is an oligopoly of vertically integrated vendors locking up all high end fab capacity, making it impossible to even fab a competing design if you can design one.

Not only do we need a revival of antitrust, but we need competitors to TSMC badly. Samsung and Intel are close on its heels but that’s not enough.


We’ve moved from a less diverse world (where Intel and AMD have an oligopoly) to one where a large number of firms can use Arm’s ISA and base designs (eg Neoverse) to build a decently competitive CPU. This is the opposite of what you’re suggesting is happening.


The argument from M1 is that "competitive CPU" will cease to be a thing as markets close; you'll get vertically integrated hardware. The M1 isn't a competitive CPU because it doesn't compete as a CPU; it competes only as the sealed unit of "Apple laptop"


M1 isn't a CPU, it's an SoC.

The CPUs in the M1 are competitive because they build on Arm's IP, which others can and are doing (Google, Amazon, Qualcomm, and Samsung already, and others can join them), which is a much less oligopolistic position than Intel/AMD having the market to themselves.


But you can’t source the M1 for another product. If you could you’d see them in servers already as they (or at least their performance cores) would be fantastic chips for many server workloads.

That was my point. Diversity doesn’t matter. It’s diversity of what can be sourced on the open market that matters.


You do know that Graviton is essentially a slightly modified Arm Neoverse core or that you can buy an Ampere CPU based on Neoverse today or that Qualcomm and Nvidia will almost certainly be launching competitive Arm based server chips in the near future. Nothing to stop other vendors coming in and licensing Neoverse too.

So tell me that isn’t a more open market than say 3 years ago when you basically had to buy Intel.

I find these arguments bizarre - we’ve had many, many years of Intel monopoly on the desktop and server and finally we have some competitors and people are worried about the market being closed up? Seriously?


The question I have around Google's CPU ambitions is what their end game is.

I understand Apple's end game. They sold lots of iPhones and wanted to in-house the chips for that. Makes sense. Once they had that working well (and Intel was falling behind on their chips), they decided to put in-house chips in their macOS computers. Makes sense.

AWS rents more CPU time than anyone out there. Makes sense for them to want to develop some custom chips for their stuff.

What's Google's end game? They don't actually sell a lot of chips. I'd understand them wanting to create a server chip, but they seem to be targeting phones and laptops. Is Google looking to become a major player in Android hardware? I like the Pixel phones, but they're a very small part of the market. Chromebooks are usually built by companies like HP and Acer, not Google directly.

The article says that Google's Pixel sold 7M units in 2019 (their highest year) and they're looking for 50% more than that (so 10.5M) for the Pixel 6. In 2019, Apple shipped 215M iPhones (over 30x the Pixel). Is Google looking to make the Pixel a much larger part of the Android ecosystem? Are they looking to compete directly with HP, Acer, and others in the Chromebook market?

I guess I wonder what the end-goal is here. Maybe the premium on Qualcomm's Snapdragon chips is so high that they'd rather make their own. Maybe they want to become the primary hardware vendor in the Android ecosystem - or at least a much larger one. But Google never seems to push its own hardware aggressively so it's a bit curious.


Apple didn’t decide to design their own chips because they sold a lot of phones, it was a differentiation play not a volume play. Designing their own chips means they can offer unique features it’s hard for their competition to replicate, like the machine learning acceleration stuff and hardware enabled security features.

The Android OEMs are trapped in a perpetual race to the bottom, so they can’t afford to invest in premium hardware features. Google has realised the only way for them to compete head on with Apple on hardware enabled features is to design their own hardware because nobody else is going to do it.


> Android OEMs... can’t afford to invest in premium hardware features.

I think there are a lot of android phones that have had premium features: e.g. foldables, 90-120 Hz refresh rates (Apple only recently caught up), Vivo x70 Pro Plus w/ an amazing camera that beats the best iPhones (https://www.youtube.com/watch?v=n0r2rENgvwY), etc.. I think there's just a perception that Apple hardware is better, but I feel like they put in above average hardware overall w/ good polish. These are all hardware features. Sure, the performance of Qualcomm chips lag Apple's, but that's because Apple's engineering team is better than Qualcomm's (IMO). I am sure that Samsung, etc. would buy a faster chip for their flagships if it was available. E.g. Samsung, etc. typically fill their flagships with crazy amounts of RAM and Storage.

Disclaimer: My views are my own, and not of my employers.


It’s true that Android has had some competitive devices but you’re usually getting one good feature and a bunch of sub-Apple features: great camera but a slow CPU or years shorter software support, etc.

The problem is that Apple is the hardware vendor getting recurring revenue from usage. Qualcomm needs you to buy a new phone to see anything more, only Google gets a cut of you buying stuff on the Google Play store, etc. That gives Apple both much larger revenues per customer and an incentive to keep your old phone working as long as you’re buying stuff with it. Google trying to get into that model seems healthy from the perspective of getting better competition but worrisome for the level of resources needed to compete.


acdha addressed the feature issue very well I think; it's not just about one or two features, and when Apple does execute a feature they often do it better. ProMotion actually saves battery overall by often clocking the display down for static images. Qualcomm's engineering team is just fine, the problem is their economics don't work for making truly high-end chips. It's the most expensive engineering they do, but only a tiny fraction of the Android handsets sold carry flagship chips. Meanwhile all iPhones are effectively flagship devices.

A lot (but not all for sure) of the A-series chip performance advantage comes from a massive on-die cache for example. That's not fancy engineering, just brute force transistor count. Qualcomm engineers could absolutely do that, but Android handset economics won't support it.


Variable refresh rate displays were adopted before Apple did it (e.g., the Galaxy Note 20).


Why mention your employer?


Because if I don't, people end up alleging potential bias in their replies to my comment, even though I feel there's no connection. So, I just throw it in there pre-emptively.


Samsung has their own chip division. Not everyone is trapped the way laptop manufacturers are.


I suspect their end game is building machine learning features into the silicon.

Most Chromebooks at the low price point use awful low-end Intel CPUs that seem unable to cope with opening more than 2 tabs at once. If Google can create a CPU that is competitive in performance and price with those terrible Intel-based ones, but also chuck in some hardware assistance for ML inference on-device, they're likely on to a winner.


What user-detectable effect would ML have on a Chromebook? They are just used for web browsing and word processing, aren't they?


Over 80% of internet-connected devices run Android. Apple made the M1, which is very efficient in power usage and speed; this would be a Google counter-move.

Consumers will get more power efficient devices.

Intel may get affected by this.


Google doesn't need a solid plan to do something, they have more money than investment opportunities, and since they are so "skilled" at shutting down projects it won't be a long-term cost if it doesn't work out.


> They sold lots of iPhones and wanted to in-house the chips for that. Makes sense.

The Intel situation is more obvious, but could you elaborate on this? Was it to improve margins or for CPUs optimized for phones?


Google wants everyone using Chromebooks instead of Windows laptops.

Because almost no one uses Chromebooks except Google employees, so far their strategy has involved contracting all the developers they can.


| no one uses Chromebooks except Google employees

There were ~30M Chromebooks sold in 2020[1]. I think many of those were sold to non-googlers.

[1] https://www.statista.com/statistics/749890/worldwide-chromeb...


Shipments don't mean sales.

Some of those landed here in German stores, where they usually get reduced every couple of weeks until finally someone buys them.


In the current situation I wouldn't be too surprised if prices for these Chromebooks went up tenfold with queues forming at the stores from midnight every night - everyone in the queue working for some auto maker desperate to get their hands on some (ANY!) electronics parts ;-)


If they wanted Chromebooks to succeed they should let you use one of them without giving it your telephone number.

Seriously, try using a Chromebook in anything other than the crippled "guest mode" without giving it your telephone number, a telephone-number-linked gmail account, or jailbreaking it. You can't (unless it's part of a corporate/education site bulk purchase).

Completely moronic policy decision.


They are huge in education, and I am guessing they are seeing the new hybrid work world as a huge new avenue where people might want a simple/easy/disposable laptop to give employees for working remotely.


US education system yes, everywhere else not much.

Here in Germany they tend to stay on some corner being increasingly reduced until someone finally buys them.


Chromebooks are very popular in education.


Without seeing a good reason I would expect it’s to have control of the entire vertical of ‘you’ from hardware through to surveillance ad tech engine.. yikes


I hope so. I had a Google Pixelbook, and with the Linux container support (aka Crostini) it was an ideal developer laptop, especially as the Linux support improved over time. I was worried they were going to drop it as the Pixel Slate is much more of a consumer device.

Would love to see another Pixelbook with great performance and stability for Linux apps.


I wouldn't ever pay for space-constrained Chromebooks when I can get a proper Linux laptop for around 300 euros, with a GPU not stuck at the GL ES 3.0 hardware level and plenty of SSD space.


To each their own. If I were solely looking for "a Linux laptop", correct, I'd get something else. But I find the combination of Crostini and ChromeOS ideal for me:

1. I can run any Android app on the device. I find this incredibly useful.

2. On the ChromeOS side of things, everything really "just works" for me - the Pixelbook is a really great combo of hardware and software.

3. There definitely were some Linux hiccups but they continually improved over time. The way container backups work in the UI is especially simple and easy to use.


4. Google decides what to compile into the kernel and what modules to include, e.g., no Wireguard, sorry.


Links would be appreciated, I'm in the market.


I just got an Asus netbook without an OS from Amazon Germany back in 2009, but with explicit Ubuntu support: the 1215B.

However, shops like Netbook Billiger and Tuxedo Computers would be my options for a future replacement.

No idea what would work out on your region.


And you even get an Intel Management Engine thrown in for free!

I'll keep my RK3399 laptop, thank you very much. I can program (and have programmed) every processor in the device, including the PMU.

You can't say the same thing about your IME/PSP.


Anything that helps prove Tanenbaum right, I am for it.


I'd be quite happy with ChromeOS; I just want to be able to install it on my own hardware rather than the overpriced stuff that's available ATM


You may like CloudReady: https://www.neverware.com/freedownload#intro-text

It's basically a ChromiumOS distro you can install on normal PC hardware. The company that runs it was acquired by Google some time back too.


Can you explain your workflow and what development you do? Everything is done locally?


This is the beginning of the commoditization of computer processing. Right now, the only thing that gives you a lead is what you can have fabricated... Apple and AMD using TSMC's fabs have proven this to be extremely true, taking the top CPU spots from Intel over the past ~3 years. While we will probably see a small spike in "custom dogshit CPUs", I don't think it will be long-lived (maybe ~5 years that suck, starting around the end of this decade). It certainly won't be worse than the current variety of processors in phones or random other small electronics. Not to mention there's little incentive to use hardware that has poor software support. Eventually (~25 years), the fabs will go from billions to millions to maybe a few tens of thousands of dollars. Purchasing a CPU will (eventually in like 40-50 years) simply be deciding how much power, heat, and physical space you have to budget...

The reduced cost from mastering processes will mean companies are competing on price; which, is typically good for consumers and awful for the businesses involved. If anything, as we really push physics to its limits, the largest companies will likely be forced into cooperation to make any reasonable gains via something like RISC-V because of diminishing returns.

I believe "Machine Learning" will be what leads the way for eventual commoditization of processor fabrication. Pretty much what Relativity has done with 3D printing rockets: using ML to correct, predict, and/or even utilize, what would now-a-days be considered physical flaws to produce extremely perfect parts... but instead of rocket parts, it'll be lithography and CPU fabs one day.


> Apple and AMD using TSMCs fabs have proven this to be extremely true taking the top CPU spots from Intel the past ~3 years.

Absolutely not. They proved you can beat Intel at lower transistor counts, at more or less similar frequencies.


If such efforts are not curbstomped early, we'll end up with fully closed computing device architectures belonging to different vendors who provide the whole vertical from bare metal to OS. There will be Google's processors/OS/software, Apple's, Samsung's, plus some shit the CCP will come up with, like TikTok's, and all of them will be mutually incompatible, with their own programming languages and data formats. The future looks bleak.


> all of them will be mutually incompatible, with own programming languages and data formats. The future looks bleak.

Server hardware had (and somewhere still has) SPARC from Sun, PowerPC from IBM, HPPA from HP, etc. See for example the list of supported architectures for Debian 4: https://www.debian.org/releases/etch/hppa/ch02s01.html.en

But it feels like amd64 is winning today. Don't you think that eventually one of these architectures you're talking about (Google's, Apple's, Samsung's, CCP's) will win over others, one way or another?


Only if they have an incentive to sell it as a commodity.

x86_64 didn’t win on technical superiority alone.


amd64 is going to lose to arm64. arm64 is significantly cheaper in a server context.


Is it though? I can't find any new ARM server CPUs for sale anywhere. I can assure you that "request a quote" pricing is anything but cheap.


i'd guess there are about an order of magnitude more arm devices on the planet than intel/amd64.

it's called "ARMada" for a reason


If you're a developer, it's guaranteed that these companies will attempt to capture 15% to 30% of all of your revenue on these platforms, too. They already do it with phones and tablets.


How bad is that, though? The success and impressive performance, battery life, temperature of the Apple M1 suggests that this kind of integration can have outstanding results. So there are tradeoffs, at least.


Apple M1 didn't come out of nowhere, they learned from everything that came before. Your comment sort of implies that the innovations came solely from being developed behind closed doors (or in some kind of vacuum), which denies credit to workers who made the thousands of incremental breakthroughs that landed in the commons and which has helped us to arrive in the place we are now.


No, GP made a pretty specific point: that vertical integration can clearly provide some serious benefits in the realm of computer hardware.

Now, the fact that vertical integration usually implies closed doors is relevant, yeah; but I don't think anyone is denying credit anywhere, and I think it's important to acknowledge upsides.


I think there's a stronger argument that breaking backwards compatibility allows for big increases in performance


> impressive performance, battery life, temperature of the Apple M1

From my perspective, Apple just made it so non-Apple hardware couldn't get fast CPUs for another year.

M1 is a 5nm CPU that is faster than 7nm low-power CPUs, and uses less power than 7nm high-power CPUs, but to me it isn't obvious that it will perform significantly better than others' 5nm low-power CPUs when those hit the market. Others' 5nm chips will hit the market once Apple's contracts to use all the 5nm fabs run out, but before then we are stuck with only Apple having that tech.

You might think that this situation sounds great, but it doesn't look that great to me. I'd prefer if Apple didn't work to lock in hardware components and tech as they do.


I feel you're missing the fact that Apple funds TSMC to build 5nm and now 3nm at scale.

Yes without Apple’s money TSMC would have still built 5nm, but likely a few years later.

ex. https://wccftech.com/apple-secures-3nm-tsmc-chip-production/


Ok, but I still don't see how M1 performance is evidence that this kind of integration is a great thing? Rather, M1 is evidence that excessive money from one area can be used to expand your influence in other areas. The "tight integration" here has nothing to do with technical bits but is instead economic: Apple can guarantee funds for them since they have money from other parts of the stack. Funds of that scale don't exist for many other companies, except Google. Google could also do something like that and compete with Apple, which might be why Google is now entering this space?


> Ok, but I still don't see how M1 performance is evidence that this kind of integration is a great thing?

Because humanity and every company now has access to 5nm infrastructure? Because Apple created the greatest marketing campaign TSMC ever benefited from? Because Apple proved that custom silicon design is no longer fool's gold and can be profitable and competitive? Because Apple showcased that more instruction parallelism, heterogeneous core types, co-processors, etc. are all beneficial design choices and every competing company can take advantage of them? Because Intel is now competing with TSMC? Because AMD and Intel might lose the monopoly on x86 and need to compete? Because Rosetta 2 and universal binaries pave the way for Apple to invest in and eventually transition to RISC-V or …?

Do you not see any of these points as great? If not, why not? I’ll use your reasons to try to come up with better examples.


The problem is if they make them vendor-locked and not following specifications.

I guess that spare parts will be harder to get and third-party repair will be almost non-existent. It'll also make it harder for new competitors with good concepts to enter the industry (Steam Deck, Framework laptop).

Imagine if every car vendor (BMW, Mercedes, Toyota, etc.) started making a very specific tire that can only be used on their own models.


What good are performance and battery life if you don't get to decide which apps you are permitted to run on this device?

Does the recent removal of apps [1] from russian appstore ring some bells?

[1]: https://www.nytimes.com/2021/09/17/world/europe/russia-naval...


> What good are performance and battery life

For the average consumer — everything.

I'm kind of torn between the two opinions. I wish my iPhone would be able to run anything I want to run. I also recognize that nobody forced me to buy this iPhone.

Mobile phones are mandated to have emergency calling functionality. Maybe in the future we'll have some "general purpose computing as a human right" law.


> For the average consumer — everything.

> I'm kind of torn between the two opinions.

I am not torn. Because of us two, apparently only one lives in an authoritarian country. When your computing device blocks you from accessing anything not praising the current regime, it kinda devalues the user-friendly interface and manufacturing quality. Shiny chains are still chains.


Well, that’s a different kind of issue. Should politics have anything to do with mandating which technologies have the right to exist? The Internet itself enables global surveillance by sufficiently funded actors, but it also makes this conversation possible. I like my internet, I don’t want it to go.

I also welcome tech companies trying to extract every tiny bit of performance from the silicon that powers computing today. It may lead to developing something incompatible with existing solutions, but surely no regime on this planet is to blame here.


Assuming we are talking about the desktop, you can run whatever you’d like. The App Store is just one avenue for software distribution on the Mac. The situation is of course different on iOS.


> Assuming we are talking about the desktop, you can run whatever you’d like.

Only sometimes. Have you ever tried unlocking a Pixelbook to run a different OS?

It needs a custom cable and significant RE efforts by the community.


Sorry, I thought the question was a critique of macOS, not the Pixelbook, and I offered a clarification that macOS is actually pretty open and you can run whatever app you like, in most cases.


For the moment. You also can't run any of the iOS apps (an advertised feature of macOS on M1) if you disable SIP, so this isn't really a true statement logically speaking.


For what it's worth, another option for disabling the write-protect is to temporarily remove the battery.


The people reverse engineering the M1 found that it uses an architecture that will likely make it easier (not harder) for Linux to support new generations of Apple hardware as it is released: https://news.ycombinator.com/item?id=28183176


This is literally what competition looks like, the opposite of a monopoly. Isn't that what we all want?


Yes, it's certainly better than a Qualcomm monoculture would be. But ideally competition on CPUs would happen in the CPU market, instead of in the target product market with lots of vertical integration.

In a world where all device manufacturers design (or even fab) their own CPUs and make their own software/cloud services (we're already there in some ways, sadly), any newcomer would have to start with transistors and work their way up to a long-term cloud ecosystem. That probably won't happen any time soon, standalone chip designers will continue to be competitive or at least good enough, but you can see how proprietary chips can actually make competition harder in some ways.


If it's vendor-locked, then it'll be a vendor-scoped monopoly rather than a global-scoped monopoly. If Microsoft started manufacturing their own CPU and locking the OS to their own models only, it could be said that Microsoft monopolizes the Windows hardware market.


I'd prefer a handful of architectures driven by industry groups. A competition free-for-all optimizes for architecture advancements and better software from vertically integrated players, but limits software from smaller players.


I was watching a podcast interview of one of the founders of Wikipedia. He was describing the effects of creeping control over Wikipedia and the definitive ideological narrative being set on the internet through Wikipedia. It is a very interesting interview but the thing that most surprised me is that he talks about how he has a personal server to host his website.

Of all the things he said, I found that simple idea to be the most surprising. The internet is already being dominated by a handful of cloud providers, few domain registers and select social media.

It is damn difficult to go against the tide, even though we are falling off a cliff.


Link to the video: https://youtu.be/l0P4Cf0UCwU


Sounds like an interesting interview. Do you have a link?



> all of them will be mutually incompatible, with own programming languages and data formats. The future looks bleak.

Isn't it already like this? We already have to make our apps in Obj C / Swift for iOS and Java/Kotlin for Android. You can use C/C++ libraries but you typically need to recompile them anyways. So if iOS or Android switch to different CPU architectures what's the problem? We will have to change our build processes, but the platforms are already fragmented.


> and all of them will be mutually incompatible, with own programming languages and data formats

All of them will be 100% compatible with SaaS over the internet offerings, no end user will notice.


Unless the internet will be fractured into 'google's net', 'Apple space', 'TikTok realm', 'Microsoft fief'. If you think that this will never ever happen, think again. Not so long ago some SaaS internet offerings worked only on one operating system and on one infamous browser.

Once you are deep in a vendor lock-in, you are trapped to use whatever he allows you to.


It might happen, but so far most of the tech giants have not enforced their position despite being able to do so. I can browse porn from iPhone Safari, what a scandal. Surely they had an idea of building a curated whitelist which would satisfy a majority of users, but they have not followed that idea so far.


Yeah? How are progressive web apps working on iOS?


DEC, Sun, IBM, and many others did this. The world still survived.


And for the most part, they did not.


The new corps learned from the old corps. I'm not sure <<we>>'re going to survive them.


Maybe we should start with universal cable format first. Seems easier to implement.


This is what computing used to be. It wasn't bleak, and I would say it was a lot more interesting. Most of the retro computers that people remember with great fondness were like this.


Will it be possible to one day make custom orders to the fabs for custom designs at very low volume?


Chromebooks are pretty open, although totally non-standard.


What's non-standard about them, and compared to what exactly?


They don't use UEFI/ACPI like every other PC.


They use ACPI. Not using UEFI was a perfectly reasonable choice. The bootloader is fully open source and led to improvements in boot speed and security. Booting non-ChromeOS images is possible and reasonably well documented. A standard bootloader may seem nice from an interoperability standpoint, but I'm not sure UEFI is a great place to land.


[flagged]


How about no. How about people use curbstomp whenever they damn please. I'm very well aware of what you're implying and what you're trying to do here. Poison the well, police the conversation. Just like the OK sign just means "ok", curbstomp just means curbstomp. The fact that someone somewhere used these things in a racist context means nothing, as these words/signs are not racist at all. Racists also use toilet paper, are we going to ban it? How about air or water. No, all of these attempts to police and cancel speech need to go to hell.


It literally means a horrible violent act which has no place in a conversation about computer chips in any way shape or form.


GP didn't even call it racist, just mentioned a movie. The racism rant just came out of nowhere...


There was a clear implication, let's be honest here. Extreme violence is a normal part of our culture - young kids watch hulk smash violently in marvel movies and no one bats an eye. The reason strong offense is being taken here is because of the racist context of the violence in the movie.


It's a metaphor, is it not? Akin to "kill it with fire"


Yeah. I haven't seen the movie, but I recoiled just reading the sentence. It was a very graphically violent term to use here.


Thank you for your concern. I have seen that scene, and similar scene from Sopranos. This is the exact meaning that I think is appropriate in this context (a horrible violent act aimed at a victim to stop him once and for all from having certain ideas), because the danger to the society from such initiatives is that severe.


That’s really horrible that you think that a company commissioning custom computer chips somehow deserves violence.

I don’t know why anyone would possibly think that’s ever ok. Or why you would think that’s possibly ever appropriate to invoke in a discussion about computer chips or anything other than the act itself.

If your intent was to invoke a visceral reaction in the reader that causes the rest of your point to be completely ignored, then well done. Otherwise, if you want to actually communicate your ideas in a civilized fashion, please reconsider your choices.


You should probably look up things called metaphors and hyperbole. Rhetorical devices are used by people in order to make their speech less boring and bland.

Computer chips have no literal teeth that can be taken out during curbstomping. However, companies DO have figurative poisonous teeth that harm society, and those CAN be taken out. And they should.


Is it that much of a problem as long as alternate OSes can be installed by the user? For example, the M1 chip can run Linux. I don't see how Apple and Google producing their own chips is much different as a consumer than the Windows/x86 monoculture that's existed for a long time.


The M1 can run Linux after an extensive reverse-engineering effort. And Apple didn't take particular active measures to prevent running your own OS on it; they just didn't offer any help. Many Android device manufacturers actively try to prevent that. Apple actively tries to prevent it on iOS devices. Some early Chromebooks actively tried to prevent it, too. (Though not Google's own flagship developer devices.)


> For example, the M1 chip can run Linux.

No, not in any useful form it can’t.

And any architectural changes could require another lengthy reverse engineering process rendering these devices unbootable with non-Apple OSes until then.

I would not be surprised if Apple completely closes off the Mac ARM64 platform for “security” in the next few years. The option to boot third-party OSes seems like a short-term gimme to keep the pitchforks and torches at bay.

I make this distinction because this is precisely the issue being discussed.


There was too much engineering work put in to make the M1 be still secure by default while allowing you to run other OSes. If Apple wanted to make it so you could only run signed kernels then the best time to make that breaking change would’ve been when the first Apple Silicon macs were released, not three or four years down the road when suddenly they say “throw away that bespoke firmware and all that special security work we did, just load the iPhone bits on there”.


>> For example, the M1 chip can run Linux.

> No, not in any useful form it can’t.

Yes, it can? Ex. https://www.tomshardware.com/news/apple-m1-debian-linux - you're welcome to point out that drivers for the rest of the system are a WIP, but the CPU is fine and the rest is coming along.


Try it on a physical Mac Mini machine and tell me it’s useful.

Progress is being made by REs such as marcan and others but it’s not useful, yet. And I speculate that by the time it is solid, the M1X/M2 machines will be out and a bunch of additional REing will be done.


In other news, Nvidia's support for Linux is "coming along™" and Lenovo says drivers for fingerprint auth for thinkpads is just-around-the-corner®


Nvidia actively prevents open source support through firmware signing.


If I remember correctly that's similar to what the PS3 did, but that's a different issue and comes with cool discussions

This talk is about the PS4, similar in concept https://youtube.com/watch?v=QMiubC6LdTA


> Is it that much of a problem as long as alternate OSes can be installed by the user? For example, the M1 chip can run Linux.

And yet you can't run alternate OSes on iPhones and iPads because of Apple's efforts to make sure that you can't.

We're relying on the goodwill of companies that have all the financial incentives in the world to lock users out of the hardware they own, and those are the same companies that already lock users out of their mobile devices.


That's precisely the issue.

Alternate OSes can't access all hardware functions.

We already have that problem with Linux support for battery life and hibernate.

It doesn't work in the vast majority of laptops, while with Windows it does.

It also happens with Smartphones, to the point that alternate OSes for Smartphones have trouble doing the most basic functions, like making calls and using GPS.

Just because it can be installed, it doesn't mean that everything will magically work.


> We already have that problem with Linux support for battery life and hibernate.

> It doesn't work in the vast majority of laptops, while with Windows it does.

That's kind of my point. The status quo is already not ideal, this doesn't seem necessarily worse than the status quo, as long as Apple and Google aren't overly restrictive in preventing alternate OSes (which of course is a big if).


> For example, the M1 chip can run Linux.

Can it run native Windows?


Apple said they would do drivers for a new version of Bootcamp if Microsoft had a non OEM version consumers could buy. So the ball is in Microsoft court.


But will it contain a Google Management Engine?

Interestingly:

> As of 2017, Google was attempting to eliminate proprietary firmware from its servers and found that the ME was a hurdle to that.

Source: https://en.wikipedia.org/wiki/Intel_Management_Engine


Chromebooks (and virtually all laptops) have an embedded controller (EC) but fortunately the source is open: https://chromium.googlesource.com/chromiumos/platform/ec/+/H...

If Google is building their own SoC they could move the EC on-chip. I don't know if it's worth the development effort.


All Chromebooks run coreboot, which is open source, as far as I understand.

As much as Google gets hate, Pixel phones and Chromebooks are made to be hacked, when contrasted with Apple.


Pretty much any SoC powerful enough to use for this necessitates having an ME style core. The only question is if end users can change the code out.


They already have a Google Management Engine in the chromebooks. It's called an "H1" chip:

http://www.loper-os.org/?p=2433

It's connected to the auxiliary lines on one of the USB-C connectors. Tickle it the right way and it will happily overwrite the boot firmware -- so long as the image you give it is signed by Google. It will do this without any intrusion into the chassis. Very convenient for, say, airport security inspectors, who would arouse suspicion if they took the time to open the case. Plug in the magic USB dongle, press the power button, the deed is done.

There were a precious few aarch64 chromebook models without it, but they've all been discontinued.


Cool, the more options the better.

I almost uniformly support diversity: companies are better with many different types of people (and more fun to work at), different CPUs and other components with differing designs and supply chains, every country having their own culture and local economies, avoiding social media and connecting directly via email and texts, less power in a few huge corporations, etc.

re: Chromebooks: I always have one for casual web surfing, especially when clicking links on places like Reddit that might take me to an infectious part of the web via a mislabelled URI.


But will they all be fabbed by TSMC?


true! I would like to see more independent fabs in the USA and Europe, but not sure when that can happen.


What are the most prominent bottlenecks with lithography?


ASML's monopoly on EUV tools, and their high cost, long lead times and low number of units shipped per year are probably the most significant limitation for leading-edge logic fabrication nodes. But even once you have one or two NXE:3400 tools, you still need at least a few years of R&D before you can get to commercially viable manufacturing of sub-7nm chips. (For example, TSMC developed their own pellicles for EUV rather than relying on ASML for that component.)


I think it's just kinda impossible to start one: it's prohibitively expensive to get the factories and equipment running, and there aren't enough people in the world with the know-how to build and operate these things. Not sure if I understood your question, but I hope so


I doubt this will be the helpful kind of diversity.


yeah i was hoping to see more skepticism


I went through multiple iterations of Chromebooks as a developer. I really wanted to love them, but they're just not powerful enough for it. Even opening multiple tabs slowed down the Pixelbook, for example. I think at this point, unless Google indeed drastically revamps the architecture to make it much more powerful for the buck, it's a dead product in this category. Education seems like the only market where it shines.


I don't think the Chromebook was ever aimed at developers. It's the console you use for your marketing and customer support team.


Not exactly - they've marketed it to developers with Crostini for a number of years now. But it's just not there, unfortunately.


I’ve never really considered a chromebook for development, and I’m almost certainly going to order a new MacBook pro next month, but we’ve been using Gitpod lately and it (and presumably codespaces) seem like they are really freeing developers from needing beefy machines.

I know you could have gotten a lot of these benefits already by maintaining a bespoke dev server, but having a common set up for a team with prebuilds makes a big difference. Gitpod’s recent open sourcing of their VS Code fork, along with the growth of the open-vsx extension marketplace, seems like it will really open things up further.


They're not powerful enough to do all the work all the time but if you have a lot of web based work they're definitely perfect for a lightweight terminal.


"Lightweight on the go" I would say, especially if editing something remotely with good connectivity - sure, and might be a sweet spot. Though it's less convenient than having a general-purpose machine.. I've worked from coffee shops etc., and found it to be much more versatile to bring an extra pound of a Macbook instead, even for SaaS work with a remote IDE.


Web can be the most demanding of applications in terms of memory and cpu (for what you get)


Chromebooks with fans are substantially faster than the Pixelbook. Pixelbook was a fanless low power CPU design, and the Intel low power CPUs just aren't quite up to powering video calls and a bunch of heavy browser apps.

Chromebooks with an Intel chip that have a cooling system with a fan can use much more powerful chips and seem to be fine. Typing this from an HP c1030 which I've never seen slow down.


Interesting, I remember when Nokia exited the chip business in 2007. Up until then they had a lot of in-house expertise for that. A few of my colleagues in Nokia Research were pretty upset about that at the time. In retrospect, this became a contributing factor to their demise. Their reasoning was that chips were a commodity, which at the time was correct. However, they then backed the wrong partners (e.g. Intel, MS) and got cornered in a market they used to own and dominate. Windows Phone and MS get a lot of blame for this. Also betting on Intel's failed mobile strategy and going for WiMAX instead of LTE definitely did not help.

But the key thing was outsourcing their core technology bit by bit. And that started in 2007. By 2012, a lot of Nokia factories no longer existed and production had moved to China and they had changed software strategy so often that it all got very confusing.

In the space of about five years they went from being the largest smart phone manufacturer to selling out to MS who then unceremoniously pulled the plug on the whole thing because there was very little left in terms of tech and skills worth salvaging. Windows Phone was a dud. Nokia as an Android phone manufacturer did not make a whole lot of sense (they actually launched one Android phone just before MS took ownership). Around the same time they also launched what could have been a Meego flagship device but was effectively a dead end when they killed off the entire team before putting that in the market.

Already in 2010, Apple felt they needed to do the exact opposite and have more control over their hardware. Fast forward 11 years and now world + dog is doing their own chips. And Apple of course dominates the phone market with a strategy of basically doing everything in house. Just like Nokia used to do.


> Their reasoning was that chips were a commodity, which at the time was correct.

I think the real reasoning is to cut costs once a company achieves domination. If there is no political will in the executive ranks to continue spending on R&D, the executives will choose to cut costs today, coast for a few years and cash out on their “increased” performance.

It is one of the reasons I like Apple, since as far as I can tell, Microsoft is just planning on collecting rent from Office, and Google execs like to toy around with a project for their own prestige and then kill it once a new boss comes around who cannot claim the initiative as their own.


The US phone market.

Typing this from a Nokia phone.


Yeah, really great news: more vertical integration for higher walls around those gardens. And, as usual, the majority of hackernews is celebrating this as a win and will happily shovel more money into the pockets of tax evasion and anti-competitions champions, privacy invaders, and polluters.


Honestly, some people legit scare me when they're talking about ARM/RISC-V. They usually have very little relevant knowledge/experience in the field (some don't even know what an ISA really is) yet worship these architectures like they're the second coming of Jesus. It really is easy to brainwash people en-masse.


It is also easy if you deliver a machine that can do what they want it to do with superior performance, long battery life, no noise, and no heat.


CPU market fragmentation in the pro-sumer space is not necessarily a bad thing and might pave the way for RISC-V adoption in the coming years.

More competition and a more diverse product landscape usually end up benefiting the customers.


I wonder if we will get more hardware optimizations for JavaScript
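For what it's worth, this already exists in a small way: ARMv8.3 added the FJCVTZS instruction largely so JavaScript engines can do their double-to-int32 conversion in a single instruction, and Apple's recent cores implement it. Below is a minimal sketch (my own illustration, not from the thread; the js_to_int32 name is hypothetical) of the ToInt32 semantics that instruction accelerates:

    #include <math.h>
    #include <stdint.h>

    /* Sketch of JavaScript's ToInt32: truncate toward zero, then wrap modulo
     * 2^32 into the signed 32-bit range. This is the conversion that ARMv8.3's
     * FJCVTZS instruction performs in hardware for JS engines. */
    int32_t js_to_int32(double d)
    {
        if (!isfinite(d))
            return 0;                        /* NaN and +/-Infinity map to 0 */
        double t = trunc(d);                 /* round toward zero */
        double m = fmod(t, 4294967296.0);    /* wrap modulo 2^32, keeps t's sign */
        /* Reinterpret as signed 32-bit (two's-complement wrap in practice). */
        return (int32_t)(uint32_t)(int64_t)m;
    }

A Google chip could bake in similar JS- or ML-oriented helpers, though whether it will is pure speculation.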


Please Google make a fab. You guys have smart enough people to figure out the chemistry and physics, and the world definitely needs more 5nm fabs.


From everything I have seen, Google most certainly does not have the talent to compete with TSMC. The talent does not even exist outside of Taiwan and China. And creating a new cutting-edge fab is the most complicated task known to man, and it becomes obsolete in a few years.


> The talent does not even exist outside of Taiwan and China.

Intel definitely has the talent and they’re in USA.

They just haven’t had the right management for a while. Maybe now they do.

Full disclosure—I own Intel stock.


This is interesting, 'cos Chromebooks started ARM then largely went x86. Current Chromebooks are more-or-less PCs, with slightly different boot loading; you can put x86 GNU/Linux on them if you want to. (Even Windows, if you're willing to mess with the BIOS.)

I expect these ARM chips will be pretty conventional - because the other thing Chromebooks do is run Android apps.


Something with arm-like cheapness yet M1 like performance would be a pretty neat trick for the chromebook segment.

I do worry about the Google part though. They clearly think in terms of openness technically, but not really; there's a gigantic catch. See Android being open yet Huawei being utterly screwed.


I know nothing about how processors work so tell me if this even makes sense. But other than the time and money is there any reason that say a Google can’t create their 100% from scratch CPU and instruction set? Same for apple and all the others?


I don't see a clear value for them doing so. That's the almost purely time and money-unrelated reason I see.

What's more, as far as I know, it's not their area of expertise; they outsource this kind of work (architecture and CPU design), so if they had to develop their own architecture or CPU, relying on work from other companies with this expertise would probably lead to better results (and for cheaper), for instance for a very specialized kind of workload that would be more efficient with a specialized architecture.

And yes, that would probably be very time-consuming and cost a lot of money, and that alone is probably a big reason for not doing it, especially if there is no clear need for it.

They probably would if they both had the expertise to do so and they were in a situation where developing a new architecture and CPU from scratch would allow them to save a lot of money.

For general workloads, they are probably just better off using a widespread architecture to leverage existing tools and ecosystems.

And maybe they actually do for internal stuff without us knowing, but that does not seem very likely.


Hello Intel!

Note that Apple is slowly replacing your chips with the M1. MS has similar machines with Snapdragons. Amazon is using some non-x86 on its cloud. And now Google.

Please, consider that getting rid of IME may gain some goodwill and attractiveness to your platform.


In the long run, for the environment and the consumer, closed hardware should be outlawed. Hardware should be open and built with a repairable, upgradable, recyclable design; everything else is waste.


How many people are employed by Google as of 2021? This company has got to be huge, yet we always just say "Google" as if it's a few people.


Another win for ARM! Glad to see the industry finally pivoting to RISC. I also wonder if this will integrate with Fuchsia?


They pivoted 25 years ago with Pentium Pro.


Still has a phat decoder on the front, consuming power all the time + I don't want a disassembler to be a PhD project like it is for x86 with all its extensions and prefixes (and suffix?)


In designs with a L0 uOP cache, they clock gate the decoder when just running out of L0. But it's really not that much power compared to a giant ROB and bypass network in these newer designs.

And they don't really have suffixes in x86.


Hence the question mark, wasn't it 3DNow!

The fact that it has to have a big streaming cache just for the decoding puts it at a loss. Intel have only just gone 6-wide.

Is it the end of the world? No. Do I want a RISC machine? Yes


3Dnow used an immediate field for an extended opcode, but you still figured out the length of the instruction from looking at the first opcode bytes, so it's not really a suffix. As far as the decoder was concerned, it was just another fixed length, required imm field for the execution unit.

It doesn't have to have a L0 cache; a lot of designs don't.

And the decoder is wider than it seems. x86 instructions have more uOPs mainly from RMW instructions that would be three separate instructions in RISC. It'll be fun to compare Zen 4 (even at Ultrabook TDP) and M1 apples to apples once everyone has access to 5nm.
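As a rough illustration of that RMW point (a sketch of my own, not from the comment above): the same C statement typically becomes one memory-destination instruction on x86-64 but three separate instructions on a load/store ISA like AArch64.

    #include <stdint.h>

    /* A read-modify-write on memory. Compilers typically emit something like:
     *   x86-64:   add dword ptr [rdi], esi   ; one RMW instruction, cracked
     *                                        ; into load/add/store uOPs inside
     *   AArch64:  ldr w8, [x0]               ; three architectural instructions
     *             add w8, w8, w1
     *             str w8, [x0]
     * Exact registers and scheduling depend on the compiler; the assembly is
     * illustrative, not authoritative. */
    void bump(uint32_t *counter, uint32_t delta)
    {
        *counter += delta;
    }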


The decoder is wider than it seems, but isn't the throughput lower for those instructions anyway? i.e. different decoders for different instruction complexities.

The designs M1 is competing with all have L0s, surely? I guess it could be masked off for a low power design but I would've thought not.

Either way, it's old and ugly, I don't miss it at all even if ARM and friends peripheral story probably isn't as good.


> The decoder is wider than it seems but isn't the throughput lower for those instructions anyway? i.e. different decoders for different instructions complexities.

The short decoders don't typically have issues with mod/rm memory destination fields. It's a little confusing because there are typically at least two uOP formats inside the cores. The decoders generally discussed spit out uOPs that still look fairly x86 in the semantics they encode (2 address, can be RMW, etc.), just wide, but fixed width. But by the time they've made their way to the functional units they've been cracked into more uOPs, so AGU, LD, ALU, ST can all go to different ports that can handle that work.

> The designs M1 is competing with all have L0s, surely? I guess it could be masked off for a low power design but I would've thought not.

In most designs I've seen, they're actually most important as a power-saving feature. They shine in microbenchmarks for perf, but not as much for general code. The real win is clock gating all the ifetch and decode, most of which is shared with RISC too (think I-TLBs, and ifetch before decode).


ARM Cortex chips have a decoder on the front. It's no different.


ARM Cortex has a pistol, X86 has a howitzer comparatively.

The observation that ARM also uses a decoder is not a useful one.



Interesting to see that when Intel stumbles, all the world is developing CPUs.


I have found the arm chrome books to be way better than the intel ones.


Hey, if you are Simon J Glass (sjg), I want to thank you for all your work on chromebook coreboot and u-boot.

And hope that someday you get the gru-kevin u-boot branch working :)


Not him, but he sure does have an impressive resume!


Someone should make amd64 royalty free and make this a real fight


Seems like Chromebooks break their brand promise by not being slow enough.

With a new slow custom chip, Google will kill Chromebooks the way they killed Google Pay and Wave and Loon and all the rest.


> the way they killed Google Pay

But...they didn't kill Google Pay.


See

https://arstechnica.com/gadgets/2021/10/google-pays-disastro...

a case of reverse imperialism if you ask me.


If they're killing them in 3 years anyway, I don't care :)


This story is a month old, by the way.


G1 ?


dec re-born


More fragmentation and more control over their own hardware. Having general purpose PCs that you fully own was great while it lasted.


You can get full control over any Chromebook by opening the device up and removing a single firmware-write-protect screw. Try that with an iOS device. It's the difference between a useless toy and something you can actually customize to fit your needs.


Do newer chromebooks still actively ask the user to wipe them every boot once unlocked? Depending on vendor, maybe?


They do that when you enable the stock "developer mode". Removing the firmware write protect allows you to override that behavior.


That is not true, there are plenty of boot locked Chromebooks on ebay. If the org didn't unlock them when they surplussed them, they will never run Linux.


They can be "deprovisioned" afterwards by the enterprise that was managing them, and these blocks will be removed. Just wait until they get around to doing it, they will because keeping devices enrolled costs them money.


I sent mine back to the vendor as defective, they had no way to allow Linux to boot on them. They couldn't deprovision them. I would be extremely reluctant to tell someone to "just wait" to have the boot unlock just magically work.

And how would the boot unlock get communicated back to these chromebooks?

My warning is, don't expect an Ebay chromebook that is coming off of a big school contract to be able to boot Linux. It may be locked, and will only run Chrome.


> I sent mine back to the vendor as defective, they had no way to allow Linux to boot on them.

It must be done remotely via GSuite, the stock Google firmware uses the serial number to check whether the device is enterprise-managed. And yes, unenrolling a device is very much a possibility.


Not anymore; that ended when they switched from Arm devices to Intel chips with the Intel Management Engine. You no longer have full control.


That's only true as long as Google wants it to be true.


Unless you have a fab in your closet, your ability to get customizable computers will always be contingent on someone else making them for you.


Sure, but there are huge players in the x86 space, so no one company could lock it down to only run company X's software. Also, the suppliers and consumers are separate: Intel and AMD make the CPU, other parties like Microsoft and the Linux distributions make the software, and then still other companies sell the hardware and software together (Amazon, Dell, HP, etc.)

If custom ARM chips make x86 irrelevant and are designed 100% in-house, then we could have a situation where the only option is running that locked-down/walled OS or using old, outdated hardware. Sure, you can run your own code in the cloud, but that's still under the eyes of the cloud provider.

Luckily this probably won't happen. Computation as a product is a large enough market that there should be sufficient demand for an open platform. Even if the market didn't provide this demand, a large number of governments would raise concerns & intervene.


Fragmentation isn't good. Open standards, not open standards from one company, are best. The shiny new computer will hold the title of 'fastest in its class' initially, but its longevity will be at the mercy of the company building it. Linux will get harder and harder to port over without jailbreaking.


Back to the 80's.

The PC only happened because Compaq managed to screw IBM with their clean room reverse engineering.


I do wonder what this will bring about in the hacker/maker community. When Sony had closed off architectures like PS2 there was first nobody able to modify it, but then workarounds were discovered and then people started repurposing superior hardware for new uses. There was a wave of PS2 fueled super-computing for a while. I expect new usages will come about for stuff like M1 that Apple never intended to allow.


For many, their primary computing device has been fully specialized for years.


It’s cool to see 25+ yrs of x86 dominance schism off into all these Arm variants.

I just hope we don’t see problems with portability like it’s the 70’s again.


"Apple cargo cult"


so is google docs working yet?


Google processors, for the advertising hive mind.


Schools are giving out Chromebooks to teachers to live-stream classes. It integrates with Google Classroom well. However, it's terribly slow if you have more than a few tabs open.


Following what your competitor is doing has not historically been a winning strategy in tech, I hope it works out for Google.

Samsung with its bespoke ARM chips and Apple with its chips make better, more capable devices than Google can possibly make with off-the-shelf parts. Combine that with having to pay the third-party margin on the chips and you don't achieve leadership. But the challenge is that "organically" developing a core competency in semiconductor design is unlikely to succeed. Samsung was a chip company to begin with and Apple bought PA Semi for the expertise.

Not that buying a company would necessarily help, after all their track record there is spotty at best.


> Following what your competitor is doing has not historically been a winning strategy in tech

There is a long history of tech companies copying competitors and taking over markets by executing better


You had me until you said "executing better" :-)

Somewhat more seriously, I'm wondering if there has been a single adjacent market that Google has tried to enter and executed better than any existing player, much less the leader in the space.


But other than browsers, mobile phone operating systems, ads, email, search, collaboration software, machine translation systems, mapping, ML hardware, internet video, identity providing, web analytics, autonomous driving, and virtual assistants, what products have the Romans ever executed well on?


If I remember correctly, Google did some work on machine learning for optimizing chip layouts.

Maybe they think we're at the stage where ML can significantly help with cpu design?


This isn't coming out of nowhere, they have been doing custom ASICs for a while already. The article mentions TPUs as an example. A more recent example are YouTube's in-house transcoding chips: https://news.ycombinator.com/item?id=27034627



