Hacker News
AMD Grabs over 30% CPU Market Share as Intel Continues to Decline (wccftech.com)
533 points by oumua_don17 on Feb 15, 2023 | 305 comments



As a CPU buyer, I'd like AMD and Intel to get stuck at 50-50 market share and fight tooth and nail by introducing cheaper and faster models every year. And maybe discover low-power computing, although that's wishful thinking.


The upcoming Phoenix APUs from AMD are a game-changer for portable and handheld devices. Between 15 and 45W TDP, Zen4/RDNA3... They're slowly trickling out to thin and light laptops, but I can't wait to get them in the next Steam Deck killer.


I still want them to release a 6W chip. Intel still owns that part of the x86 market, and actually has decent chips.


Agreed! The N5105/6005 have been remarkable, and maybe they’ll replicate that with the Alder Lake N chips.

Chinese ODMs seem to have already started making mini-pcs with the new chips, and I imagine routers will follow quickly.

Until AMD’s media engine improves, the Intel chips will still be valued by the Plex/Emby crowd.


> I imagine routers will follow quickly.

I'd love a new router board with a 6 W CPU and some storage ports. But all I can find is media player boards with at most one ethernet.

Dream board atm would have said 6 W CPU, DC power, two ethernets, one 2.5 Gbps, and some M.2 and SATA slots. For a combined routing/storage box.


Have you seen this? It’s a 10W CPU but mostly seems to hit everything you’re looking for.

I imagine an Alder lake N version of this will appear sooner or later, and the N100/N200 should give you the 6W part you’re looking for.

https://m.aliexpress.us/item/3256804762512339.html?gatewayAd...


Oh that's nice-ish, thanks! Bookmarked.

Not as nice as my old Atom router because of the active cooling on the CPU and the ATX power connector. The latter I can fix with a PicoPSU but the former...

I'd rather have a passive radiator on the CPU and add some airflow via a huge case fan - which is what i'm doing now for my router box. That way it's a lot more silent.


This forum post has some real world stories of its use. https://forums.servethehome.com/index.php?threads/topton-nas...

It seems like the CPU fan stays off almost all the time, and is apparently very quiet even when on.

> Oh, and I forgot to mention, I have set the fan to come on at 36 degrees and go off at 34 degrees. It's off most of the time (pseudo-passive) ;-)

> I'll use the stock one first because the heatsink fan is dead silent, as @Camprr23 has mentioned.


I have been impressed by what my GPDWIN2's m3-8100Y processor can do at 7 watts.


Exciting indeed. And if it still runs Steam I doubt Valve will bat an eye.


A hardware refresh is not on Valve's list unless there is a significant performance gain.


I could totally believe that in another year or two there will be sufficient performance uplift. A Steam Deck with 50% more performance at the same battery life would be great.


I think I read somewhere that extending battery life is Valve's main priority for the next Steam Deck.


I hope a dynamic 120Hz screen (like in the iPhone but obviously not as screamingly expensive) is also in the pipeline.

With a 120Hz screen, you can run the UI and overlays at 120FPS, run cinematics at 24FPS or 30FPS and the game at 40FPS or 60FPS. All fits neatly into 120Hz.
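
For illustration, the frame-pacing argument here is just divisibility: each of those frame rates divides 120 evenly, so every frame is held for a whole number of refresh cycles. A quick sketch, only checking the arithmetic:

  # Check which common frame rates pace evenly on a 120 Hz panel:
  # a rate fits if each frame lasts a whole number of refresh cycles.
  PANEL_HZ = 120
  for fps in (24, 30, 40, 60, 120):
      cycles_per_frame, remainder = divmod(PANEL_HZ, fps)
      print(f"{fps:3d} fps -> {cycles_per_frame} refresh cycle(s) per frame, even pacing: {remainder == 0}")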


> you can run the UI and overlays at 120FPS

How does a 99% static UI or overlay benefit from 120 fps?


That would be the right priority. Double the battery would make it even more competitive with Nintendo.


They are going to need more performance by then too though. They are tied to the PC gaming universe and the games people want to play will keep getting more demanding.

They have it a lot harder than something like Switch.


The basic system requirements for most PC games have barely budged in years, mostly due to high GPU costs and a lack of killer features that would demand higher requirements. Neither factor seems to be changing soon.


I don't see anyone demanding desktop graphics on a handheld.


Especially because if you halve the TDP of the CPU, you can cut down the cooling a bunch, which reduces cost and noise.


If it makes the battery last 30% longer, you can bet it is.


> They're slowly trickling out to thin and light laptops, but I can't wait to get them in the next Steam Deck killer

You mean the Steam Deck 2 or a new device that will "kill" the Steam Deck?


The Steam Deck's killer feature is Steam, so it'll be an uphill battle to kill it.

That said, no one has streaming down perfectly yet. Steam Link, AMD Link, Moonlight all suck.


Parsec is pretty close, at least on a LAN. https://parsec.app/


It's not really a game changer. AMD will advertise 15W-45W chips, but they will boost well over their marketed TDP.


It's not a game changer because it will boost opportunistically like every other CPU? What?


We already had a game changer, Apple Silicon. AMD chips are a bit better than Intel's on mobile but not by much. Mobile Zen4 and RDNA3 SoC would not be a game changer. We already know how they perform on desktop and how much power they use per performance.


I was wondering how far I'd have to scroll before someone trotted out the Apple horse.

Farther than I thought, it turns out.

You know, for all the stuff about Apple silicon being a game changer it doesn't really seem like the game has changed that much...


Yeah, it hasn't. x86 is still crap.


Let me know when Apple decides to sell their chips to 3rd party manufacturers, when they end up in a handheld gaming device, and when most games run natively on ARM with competitive performance.

Apple silicon was only a game changer for productivity tasks, and only for people willing to jump into the Apple ecosystem. In all other cases, especially for gaming, an APU with the performance of Zen4/RDNA3 at the announced TDP doesn't exist yet. So, yes, it's a game changer.


> when most games run natively on ARM with competitive performance.

They just need to run natively on ARM to get performance that's more than competitive on the CPU side. The GPU is no slouch but is not top of the line (yet). Seems trivial to pair the ARM CPU with an external GPU.


So why would you phrase your argument as "it's not a game changer because it boosts if there's available power/cooling", rather than "it's not a game changer because Apple's CPUs are better"?


How upcoming? Will we see them in laptops this year?


The Zen 4 8 core APUs should be in laptops before the end of February, while the 16 core ones are expected in March.


You can set lower power limits for both AMD and Intel's chips to improve perf/watt. AMD benefits more than Intel when doing so.

Here's some more reading if you're interested: https://www.anandtech.com/show/17641/lighter-touch-cpu-power...


My summary is AMD is hurt MUCH less than Intel by lowering the power. One example from the article:

Cinebench R23 multithread:

  Ryzen 7950X 230 -> 125 watts = 95%
  Ryzen 7950X 230 -> 105 watts = 93%
  Ryzen 7950X 230 ->  65 watts = 81%

  Intel I9-13900k 253 watts -> 125 watts = 78%
  Intel I9-13900k 253 watts -> 105 watts = 72%
  Intel I9-13900k 253 watts ->  65 watts = 56%
Other benchmarks like C-ray show ZERO slowdown going from 230 -> 125 and even 105 watts. Intel drops by 21% for 125 watts and 30% for 105 watts.

Personally I'd rather get the 7900X (or non-X); you'd keep an even larger fraction of the performance within a given power limit. It's pretty clear who has the worse core and has to push harder into the high-clock, high-power part of the curve for not much gain.
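
To put rough numbers on that, here's a small sketch turning the quoted Cinebench percentages into relative perf/watt (baseline = each chip at its stock limit; the percentages come from the figures above, not new measurements):

  # Relative perf/watt from the quoted Cinebench R23 scaling figures.
  # Baseline = each chip at its stock power limit.
  chips = {
      "Ryzen 9 7950X (230 W stock)": {230: 1.00, 125: 0.95, 105: 0.93, 65: 0.81},
      "Core i9-13900K (253 W stock)": {253: 1.00, 125: 0.78, 105: 0.72, 65: 0.56},
  }
  for name, points in chips.items():
      stock = max(points)
      base_eff = points[stock] / stock
      print(name)
      for watts in sorted(points, reverse=True):
          eff = (points[watts] / watts) / base_eff
          print(f"  {watts:3d} W: {points[watts]:.0%} of stock perf, {eff:.2f}x perf/watt vs stock")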


I'm a little dubious of benchmarks that compare only one pair of chips when it comes to details like power efficiency and tweaking. The silicon lottery can make a big difference.


Sure. Various forums corroborate that reducing the Ryzen 7000 series to a 65W TDP often costs less than 10% in performance.

Also keep in mind that "performance mode" on Intel can significantly increase the default 253 watt TDP. Anandtech's review of the i9-13900K hit 380 watts, I believe.

So sure there's chip to chip variation, but generally the AMD chips are more power efficient and have much lower penalties for reducing the TDP.


I'm not sure whether I've won the lottery or Intel's software is unreliable.

I've been playing around with the 13900K's settings and so far I've managed to get it down to a stable 210W TDP (according to XTU) under stress testing. That's with a 0.110V undervolt (likely could do more) and the P-core turbo multipliers tapering down from 57 with 1 core active to 50 with all cores active. I've yet to mess with the E-cores though; could probably shave off some more perf/W still.


I think managing a 100mV undervolt is pretty common for the 13900K. That's what I have as well, with a power limit of 200W. The penalty in cinebench is only ~5% despite the 20% reduction in power.


Do you know how to undervolt laptop Ryzens? I'm really interested if we can increase the battery life by another 30-40% or so


The easiest way is to go into the BIOS or CPU settings and set a negative offset in the Curve Optimizer (under Precision Boost Overdrive). That'll reduce the stock voltage at a given frequency. It'll also potentially speed your chip up, since this lets the chip boost to higher frequencies at a given power level. I think the lowest you can go is -30, but try -20 or so and see if your system is stable before trying lower values.
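
If you're on Linux and would rather not (or can't) touch the BIOS, a different, userspace approach (not the Curve Optimizer described above) is to cap the package power limits with the open-source ryzenadj tool. A minimal sketch, assuming ryzenadj is installed and your laptop's SMU is supported; the wattage values are arbitrary examples, not recommendations:

  # Sketch: cap a laptop Ryzen's power limits from userspace with ryzenadj.
  # This limits package power rather than undervolting via Curve Optimizer.
  # Requires root and a supported platform; the values are illustrative only.
  import subprocess

  LIMITS_MILLIWATTS = {
      "--stapm-limit": 15000,  # sustained package power
      "--fast-limit": 18000,   # short-term boost limit
      "--slow-limit": 15000,   # longer-term boost limit
  }
  args = ["sudo", "ryzenadj"] + [f"{flag}={mw}" for flag, mw in LIMITS_MILLIWATTS.items()]
  subprocess.run(args, check=True)
  subprocess.run(["sudo", "ryzenadj", "--info"], check=True)  # read back what the SMU reports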


I just tested my 7900x yesterday and it gets about 95% of the max performance at 105W TDP.

The system idles at half the wattage, too—so it's saving a ton of energy compared to running at the stock 170W TDP. Not sure why AMD didn't go for the efficiency crown, especially for non-halo SKUs like the 7950.


Have you tested draw from the outlet? I assumed that modern chips could scale down their power usage significantly when idle. Which is to say I thought consumers would rarely ever hit maximum power draw, so I would expect really modest real-world savings.

Happy to be proven wrong, because it is a brilliant idea. My machine is wildly oversized for my typical usage, and I could easily take a 10%+ performance haircut and likely would not even notice.


That's part of the reason they're given such a high TDP by default. The cost of squeezing out the last 5-15% may be worth it for short periods of time.

But if you use these chips for tasks that max out all cores for several hours per day, it may be better to simply run a chip with more cores in eco mode. A 7950X at 65W may perform as well as a 7900X at 200W for such compute tasks, and is probably cheaper over time.

And if you use it in your house, it generates less noise, can use a cheaper cooler, and doesn't dump as much heat into your room.


Thanks for the thoughts. My untested suspicion is that my CPU hits max load for <5% of total use. Meaning lowering max TDP is unlikely to noticeably impact my electrical bill, but it’s such a cheap optimization to enable, why not?


I can and I have. There's a ryzen limited to 70 W in the bios under my desk :)

99.99% of the population doesn't know this option exists.

And 0.01% want BIGGER NUMBERS OMG 720FPS!!! and ruin power consumption worldwide for everyone else.


I blame Intel for that. They pushed the clock speed high for silly small perf gains. If AMD took the high road, the media would brag about how fast Intel is and AMD would lose market share.


>I blame Intel for that.

As you should. I worked at Intel back in the 00s when they were in the "megahertz wars". All they talked about internally was "we're winning the MHz wars!!! Yay!". When anyone mentioned power consumption, or how fucking loud the cooling fans were, or MIPS/Watt, they didn't care; all that mattered was MHz. Hitching their wagon to RAMBUS memory was part of this. Then they got pissed and indignant when consumers didn't want to spend a fortune on RAMBUS memory and bought AMD CPUs with cheap SDRAM instead.


That fits. The P4 had a long pipeline, high clock speed, and poor perf per MHz. AMD pushed a general performance score instead (forget the name), and it seems like they succeeded. Doubly so since they shipped x86-64 first, which people did care about.

I wrote a few microbenchmarks to explore the performance promises of rambus, and didn't find anything.


>AMD could take the high road, then media would brag about how fast Intel is and AMD would lose market share.

This already happened in previous generations (especially Zen2).

AMD has to play this stupid game, unfortunately, because the press is what it is.


Indeed. The game Intel used to play is now the game everyone plays -- Intel, AMD, and NVIDIA, for both consumer CPUs and GPUs. At least on the desktop. They tend to run them to redline by default, because numbers. It's all about getting that big splash at launch, to ride it for higher average sale price as long as possible.

Which is unfortunate, because when it comes to benchmarks, understanding the context of a test setup is extraordinarily important. Are you buying that very high-end memory the test setup used and then manually adjusting timings? No? Then you're not getting those numbers. Heck, you may be losing 20% or more performance compared to the test rig on just the difference in memory. Never mind adjusting other things, ensuring the system's thermals are entirely kept in check to prevent throttling, etc.


What is this option? I would like to know more about it.


You can set various power settings that suggest they are limits, but at least on the Skylake NUC I have, they really don't do much and certainly don't limit the maximum power the system uses. The article doesn't talk enough about actual power use vs. the settings, although it sounds like, while both vendors go over their limits, there might actually be some limiting in the recent chips.


That Anandtech article did include some power measurements on page three, but given those measurements show the limits are applied inconsistently between AMD and Intel it would have been nice to have power measurements on other graphs too.


Indeed, ideally 3 companies, to help ensure there's not some agreement to raise prices. Apple's largely not in the same market, but is pushing the edge when it comes to performance per watt, even shipping products, gasp, without fans.


> 3 companies

My sweet summer child. You should read about tacit collusion.


Does it happen? Sure.

Seems like Apple's increasingly happy to hit lower price points and disrupt the price structure for AMD and Intel based product lines.

Similarly, Intel seems to finally be getting its act together and setting up to disrupt the AMD and Nvidia GPU market. Here's hoping. I saw a pretty decent GPU (RTX 3080) for $420 recently!


Until I can slap an Apple chip into my random Linux build, those numbers are meaningless to me. Would require absolutely enormous performance improvements to justify making such a huge leap.


Seriously. I wish I could buy Apple silicon on a desktop board with a bunch of PCIe slots and stuff.


Like say going from 80GB/sec memory bandwidth to 400GB/sec?

Or maybe having the iGPU performance go up 4-8x so most won't need a $500+ GPU?

I'm considering a Mac Studio, once refreshed with the M2 Max, even for Linux. Sure, the GPU driver isn't quite there, but it's improving quickly.


Yeah, don't really need a $500 GPU on a platform which barely supports applications which really need that $500 GPU.


Like ML training and inference?


The M1 sits near the bottom of NVIDIA's GPU lineup in terms of ML performance, putting it just slightly ahead of the $200 1660 Ti.

When doing a quick Google search on the topic, it turned out that several of the top results were blatantly misleading in that they limited the competing hardware to match the M1's limitations. For example, the top result, a wandb article [1] claims that the M1 is competitive with the V100, yet their own data shows that they aren't even fully utilizing the V100 and that when properly utilized it obviously totally outperforms the M1.

https://wandb.ai/vanpelt/m1-benchmark/reports/Can-Apple-s-M1...

Similarly, Apple in its marketing for the M1 Ultra was extremely manipulative, bordering on outright lying, when it compared the chip to the 3090. It presented them on a "relative performance vs power" graph making it look like the chip matched the 3090 while consuming less than half the power, when what it was saying was that it's more efficient than the 3090 when you underutilize the 3090 to the point of matching the M1 Ultra's performance.


Agreed on deceptive marketing.

I dug around and found some M2 updates.

Seems that, generally, Nvidia laptops drop to a small fraction of their performance when on battery.

Examples: 3DMark Wild Life: M2 Max = 150 fps, 3080 Ti plugged = 120 fps, 3080 Ti unplugged = 27 fps.

Blender 3D rendering: M2 Max = 1:04, 3080 Ti plugged = 0:29, 3080 Ti unplugged = 3:25.

Lightroom: M2 Max = 0:39, 3080 Ti plugged = 0:51, 3080 Ti unplugged = 1:29.

DR18 6K BRAW editing: M2 Max = 0:55, 3080 Ti plugged = 2:40, 3080 Ti unplugged = 15 minutes.

For ML, comparing the 3070 vs. the M2 Max at 3 batch sizes:

  CIFAR-10  
  m2 max   = 242, 280, 327   
  RTX 3070 = 240, 283, 328
However, for AI/ML duty the Apple Neural Engine (ANE) looks pretty promising. On DenseNet-121 the ANE on the M2 Max is almost 7 times faster than the M2 Max's GPU.

Seems like the M2 Max does pretty well compared to a plugged-in RTX 3070 to 3080 Ti. The big bonuses are that you can use all system RAM (not limited to 10-16GB of VRAM) and you get the same performance even on battery.
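
For anyone who wants to run that kind of comparison themselves, here's a minimal sketch of how such a benchmark is typically set up in PyTorch: pick the "mps" backend on Apple silicon or "cuda" on an NVIDIA box and time identical training steps. The toy model and batch size are placeholders, not what the figures above used:

  # Minimal sketch: time identical training steps on MPS (Apple silicon)
  # or CUDA, whichever is available. Toy model/batch size are placeholders.
  import time
  import torch
  import torch.nn as nn

  if torch.backends.mps.is_available():
      device = torch.device("mps")    # Apple silicon GPU
  elif torch.cuda.is_available():
      device = torch.device("cuda")   # NVIDIA GPU
  else:
      device = torch.device("cpu")

  model = nn.Sequential(
      nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10),
  ).to(device)
  opt = torch.optim.SGD(model.parameters(), lr=0.01)
  loss_fn = nn.CrossEntropyLoss()
  x = torch.randn(128, 3, 32, 32, device=device)   # CIFAR-10-shaped batch
  y = torch.randint(0, 10, (128,), device=device)

  def train_step():
      opt.zero_grad()
      loss_fn(model(x), y).backward()
      opt.step()

  for _ in range(5):                  # warm-up
      train_step()
  if device.type == "cuda":
      torch.cuda.synchronize()
  start = time.time()
  for _ in range(50):
      train_step()
  if device.type == "cuda":           # without an explicit sync, MPS timing is approximate
      torch.cuda.synchronize()
  print(f"{device.type}: {50 * 128 / (time.time() - start):.0f} images/sec")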


Sorry bro, that iGPU is not able to run anything, either because of the limitations of the OS or the limitations of the architecture. It's just marketing.


I found a bunch of ML benchmarks; only one had a comparison to an RTX 3070. Seems quite a bit better than "not able to run anything". In particular you can use up to 96GB of RAM, 4x the 4090's. Granted, the M2 Max is approximately 3070 speed for such workloads, but at least it doesn't decrease when unplugged.

  ResNet50:
  M1 Max   = 155.3
  M2 max   = 188.8
  M1 ultra = 242

  mobilenetv2
  M1 Max   = 415.7
  M2 max   = 467.9
  M1 ultra = 634.4

  DistilBERT
  M1 Max   = 134
  M2 max   = 164
  M1 ultra = 215

  BERTLarge
  M1 Max   = 20
  M2 max   = 25
  M1 ultra = 30
  
  CIFAR-10: 3 batch sizes
  RTX3070  240, 283, 328
  M1 Max   297, 358, 443
  M2 Max   242, 280, 327
  M1 Ultra 207, 308, 453

  Densenet 121: 3 batch sizes
  M1 Max    57  104  704
  M2 Max    66  124  831
  M1 Ultra  49   99  614


You mean like PyTorch or TensorFlow?


I would prefer many competitors instead of a duopoly.

Do not forget ARM.


I'd rather RISC-V. Lots of players there.


So would I, but this is already progress from the days when Intel had 90% market share.


I agree with you, although I'd prefer to see (at least) a three way duke out between Intel, AMD, and ARM. I really don't think living in a world where there's (basically) only one CPU architecture and instruction set is going to be as beneficial as one in which there is competition on more than simply price and power consumption.


No, it is better for AMD to pull far ahead so that the next challenger is forced to come up with even greater innovation to steal market share. This is better than just having two large vendors locked in a back-and-forth game of one-upmanship and incremental innovation.


Screw that; I want them to dump x86 altogether and move to ARM, or maybe come up with something even better, to compete with Apple's M2. Why are we stuck with this shitty old ISA from the 1970s?


ARM came out the same year as 32 bit x86. Both architectures are very old.

I very much doubt that architecture is all that relevant for their advancements in power usage. Apple's chips contain a significant x86 feature without ruining battery life. Meanwhile, Qualcomm is struggling to compete with Apple in both performance and efficiency despite being in the ARM space for much longer.

I'm sure if Apple could've gotten an x64 license ten years ago, they would've made their own x64 chips instead of switching to ARM. When Apple's plans started coming together, there simply were no competing architectures they could base their chips on. MIPS was practically dead already, x64 was extremely closed off, and RISC-V hadn't even been announced when Apple started work on their own chips (and it struggles to keep up even today).

Maybe they could've licensed POWER6 or an early version of POWER7? The POWER architecture isn't exactly widely used or designed to be power efficient; power management wasn't introduced until 2017 and even then it was optional.

There simply weren't any serious alternatives to licensing ARM and Apple would be stupid to develop an entirely separate CPU architecture for their desktop/laptop/tablet form factors.


It would have been pretty cool if they'd gone back to the Power platform though. Full circle.


The PC platform is standardised and open. Everything else is a fragmentary shitshow. It will clearly be superseded at some point, but I pray it takes its time.


> Why are we stuck with this shitty old ISA from the 1970s?

ARM is from the 1980s, let's just jump right to RISC-V instead. :)


These days, don't instruction sets just map to "real" instructions on the chip? Something about microcode. Clearly, I'm not a chip engineer.


Yes and no. The (very simplified) answer is that yes, some ISA (front-end) instructions are decoded into simpler back-end operations, but the design of the ISA still imposes constraints on the implementation of that decoder [1] and on the design of the back-end.

Then there are concerns like register pressure. x86 has so few general-purpose registers that values need to be stored on the stack and reloaded when needed. Some of the performance impact can be reduced by complex decoder logic, but making complex logic fast nearly always leads to high power consumption.

[1] E.g., the highly variable length of x86/x86-64 instructions puts a limit on the number of instructions that can be decoded per cycle.


> x86 has so few general purpose registers,

It's got plenty in 64-bit mode (15 + RSP) + you can spill to SSE registers instead of the stack.

It also needs fewer registers than (most) RISCs because it has a more flexible way of specifying memory addresses (base, index, scale, offset + PC-relative) and it also has proper immediates.

> the highly variable length of x86/x86-64 instructions puts a limit on the number of instructions that can be decoded per cycle.

Not that much of a limit, actually. Yes, a parallel decoder that takes an arbitrary byte sequence and decodes it is hard to scale up. The instruction lengths can be cached, though. In fact, they used to be cached as extra bits in L1 back when a wide decoder was a significant share of the CPU transistor budget. It should be possible to use that idea again to go wider.

The newer x86 CPUs also have a µop cache, so no decoding is even needed for tight loops.


Decades of binary compatibility without concern for emulation bugs?


... because we're stuck with the shitty programming languages and development paradigms from the 1970s.


This is missing something. Looking at just the Q4 2022 numbers:

AMD Desktop CPU Market Share 18.6%

AMD Mobility CPU Market Share 16.4%

AMD Server CPU Market Share 17.6%

AMD Overall x86 CPU Market Share 31.3%

What's causing the bump from the 16-18% segment shares to 31% overall?

Are they including game consoles in the x86 market share?


I reached out to Mercury Research (the company whose research is being quoted) and the articles are missing some clarifying information.

The overall CPU market share number also includes IoT and SoC shipments, which are not included in the other numbers, and in which Intel's shipments have declined substantially while AMD's SoC products (including game consoles) make up a large number. This is what's responsible for the disparity.

They also added that the data was distributed with a helpful clarifying note the news articles are mostly omitting:

* Please keep in mind that due to the inventory corrections taking place, that the statistics and share movements reported here in the past few quarters -- and likely for the first half of 2023 -- are more reflective of the suppliers differing in the depth and timing of their inventory corrections, rather than indicating sales-out share of the PC market, which is something we probably won't know with any accuracy until late in 2023. *
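
To see how the blended number can sit above every per-segment share, here's a toy weighted-average sketch. The desktop/mobility/server shares are the quoted Q4 2022 figures; the SoC/IoT share and all unit weights are made up purely for illustration (Mercury doesn't publish them here). The point is just that a large SoC/IoT segment where AMD ships nearly everything (consoles, Steam Deck) pulls the overall figure well above the other three:

  # Toy illustration: a blended share above each reported segment share.
  # Desktop/mobility/server shares are the quoted Q4 2022 numbers; the
  # SoC/IoT share and all unit weights are hypothetical.
  segments = {
      # name: (AMD share, hypothetical fraction of total units shipped)
      "Desktop":  (0.186, 0.25),
      "Mobility": (0.164, 0.40),
      "Server":   (0.176, 0.10),
      "SoC/IoT":  (0.70,  0.25),  # consoles, handhelds, etc.
  }
  blended = sum(share * weight for share, weight in segments.values())
  total_weight = sum(weight for _, weight in segments.values())
  print(f"blended AMD share: {blended / total_weight:.1%}")  # lands near the reported ~31%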


I’m guessing ‘SoC and IoT’ includes things like the CPUs in NAS appliances. I have noticed recently a huge proliferation of AMD Ryzen CPUs in these devices when a couple of years ago it was Celerons/Pentiums.


The Steam Deck's APU (and others using the same tech) are becoming more common, not sure if those count as 'desktop' or 'console' though...


Valve is the only user of the AMD "Van Gogh" APU. Other handhelds are using regular laptop parts.

I think AMD originally pitched it for laptop makers since it was leaked in 2020: https://videocardz.com/newz/amd-mobile-apus-for-2021-2022-de...

But I suspect they rejected it because they don't care about IGP performance, just as they rejected Intel's "Iris" Broadwell CPUs and the hybrid amd/intel chip.


> NAS appliances

I love my Synology and love that they are going AMD. While some argue that having video transcoding on a NAS is wrong, it’s also very handy, but the AMD options I’ve seen aren’t a patch on quicksync.

That tech is unbelievably good and might be the only bit of Intel I like.


It helps that AMD is very generous with unlocking advanced features in even lower cost SKUs; low end Intel chips tend to be crippled in several ways.


I’m surprised those aren’t being shifted to much cheaper ARM chips.


I bet they will in due time. They have to recompile their whole stack to ARM, which probably isn't easy, but ARM is eating the world. It's only a matter of time.


That's what they said about Itanium too. /s


I think that's clearly not an apt analogy.


The 80-core ARM servers Hetzner provides are great. A nice midpoint between a slower general-purpose x86_64 server and an expensive and brittle GPU server.

I can easily see datacenters with these things in the future.


Probably a question of performance. NAS and IoT appliances have performance requirements too.


Low-end NASes have been using ARM for a long time.

The higher end x86 NASes are advertised closer to "home datacentre in a box" solutions that offer VM/container runtimes and third party software markets.

So far, x86 is significantly more user friendly and performant for these use cases: the "desktop-ish performance for desktop-ish prices" range doesn't really have many hardware options (Apple certainly won't sell theirs to OEMs, and the Snapdragon 8c is a bit on the low end, and the real data centre ARM many-core monsters are too big); and the software offerings aren't quite as user friendly either.


Perhaps they're not cheaper.


The majority of consumer NASes are running the cheapest, oldest ARM and MIPS CPUs they can find.


I understand that the first 3 statistics include non-x86 CPUs


These numbers are really bad for both AMD and Intel, honestly. It suggests that the x86 server/desktop monopoly is getting crushed in a major way.


Less than 10% market share = crushed? These ARM plans were initiated either before Zen or when Zen was in its infancy. Amazon and others are going to have a real tough time competing with Genoa/Bergamo in servers. ISA is unimportant (just ask Jim Keller or Mike Clark); it's all about execution, and I'm sorry, but you're not going to be able to compete with a team like AMD's. Apple's consumer chips are alright, but they are never getting more than 20% of PC market share and are about to be embarrassed by Phoenix APUs. Oh, and to the guy here claiming smartphones will replace PCs: go try to type an essay, play a AAA game, and browse the internet with multiple tabs, music, and Discord all running at once on an iPhone.


Amazon doesn't have to compete with AMD on performance, only on TCO. Not having to pay Jim Keller's salary is a good way to reduce your TCO. Graviton generally does win on TCO for most applications today.

Also, if 30% of x86 market share corresponds to ~20% of "big processor" market share, that means that non-x86 cores have about 30% market share.

European companies are discovering the same thing, too. They have embraced ARM and RISC-V very strongly.



I'm not so sure. The PC landscape is very diffuse; just imagine trying to get every Windows app on board with moving to ARM. And until they do, the PC manufacturers aren't going to bother making hardware. Game consoles moved to x86 to make porting easier, even though they have full control of the stack.

Unless Google Chromebooks up their game, or the PC disappears completely, I don't really see ARM taking over.


Where are the non-X86 desktops??


Smartphones.

No, seriously. Most people's computing needs and desires are satisfied with a phone; they don't actually need a laptop, let alone a desktop.

It's not that x86 is losing market share, it's that Personal Computers are changing from the desktop/laptop form factor to the smartphone form factor. It's mere coincidence ARM and not x86 powers all those smartphones.


Smartphones aren't counted as desktops in these statistics.


Mac Studio, Mac Mini, iMac for starters.


Since the M1 release I have really soured on x86. I had no interest in anything else for the last 30 years... and certainly had never purchased a $3000 laptop before.

I purchased a MacBook Pro last year...zero remorse.


Well...when you're spending that much money of course you're not going to feel remorse XD


Some Chromebooks too.

(edit: maybe not "desktop" though)


Is a Chromebook a "desktop", or "mobility"? I have a hard time thinking of a Chromebook as a "desktop" PC.

Honestly, I think the research firm's divisions are crap. WTF is "mobility" anyway? Why would this not include laptops?


> I have a hard time thinking of a Chromebook as a "desktop" PC.

A Chromebox is a desktop PC. And ChromeOS devices can run a full Debian Linux distro in a VM. What's not desktop about that?


Smartphones can run a full Debian Linux distro in a VM too, but that doesn't make them desktop PCs.


Ok. It sits on a desktop, has an external monitor and keyboard and mouse, and lets you run general purpose computing tasks, like compiling a C++ program. You can even plug in a USB printer and scanner. You can get them with 16GB of RAM and i7 CPUs. You can put them in dev mode, wipe the OS, and install any Linux distro of your liking. Is that a desktop computer?

Or is the definition of desktop computer that it can, say, run Adobe Photoshop natively?


Phones can do most of the things you listed.

So a "desktop PC" isn't about its feature set or where you might put it, instead it is a form factor.


NUCs, Mac Studios and Chromeboxes all have the same form factor. I believe there are All-in-one desktop Chromeboxes as well. The exclusion of desktops running ChromeOS from the desktop category is weird - especially considering they are capable of running Debian Bullseye VMs out of the box (x64 or ARM)


>Or is the definition of desktop computer that it can, say, run Adobe Photoshop natively?

My point is that the Chromebook is a mobile device: it's tiny and easy to carry around. So why is it not in the "mobility" category? And for that matter, why is every laptop computer not in this category? When did "mobile" somehow become equal to "something you can carry in one hand while also operating it"? The entire point of laptops was that they're mobile. These days, not that many people use "desktop PCs", outside of corporate offices, they use "mobile devices" (i.e. laptops).

And where do tablets fall? You can prop them up on a desk and attach a keyboard, and they act just like a laptop (you can also connect external screens etc.).

As I said before, their categories are stupid.


Here is a ChromeOS computer that, by your definition, is a "desktop":

https://www.hp.com/us-en/chrome/chromebase-all-in-one-deskto...

To the original point, it sleeps perfectly.


Oops, the bit about it sleeping perfectly was context leakage from another thread ;)


Chromebooks and other laptops would be in the "mobility" category. I'm pretty sure smartphones aren't counted anywhere, although things like the ipad may count in "mobility" as well.


Most Chromebooks run x86 processors. You would be hard pressed to find one that didn't.


> Most Chromebooks run x86 processors.

Objectively true.

> You would be hard pressed to find one that didn't

Eh, that's less true; I own 2 ARM Chromebooks, because I wanted some non-x86 in my life, and neither was hard to find, it just took me actually looking for it.


> You would be hard pressed to find one that didn't

Search Chromebook on bestbuy, exclude out of stock and sort by price, and you'll see a mix of arm and x86.


The Samsung Chromebook Plus came out years ago and is ARM. There's also the ASUS C201 which is ARM and has Libreboot support.


I assume what the numbers actually mean is that desktop market share is shrinking to insignificance (confirmed by a sibling comment), and servers actually have significant competition from ARM now.

Edit: Also Macs.


An iPad with a keyboard


I find that difficult to believe, unless they did something weird like lumping smartphones into mobility.

I thought ARM represented like ~5% of server install base, and maybe like 10% of 2022 server sales. Similarly, I understood Chromebooks + Macbooks (the two major laptop ARM products) to be on the order of 25% of laptop sales.

These are both real and meaningful chunks of market, but I don't think they're enough to make up for the discrepancy in numbers. Perhaps they are including game consoles.

[Edit] - I found reporting on 2021 numbers that clearly seems to indicate that the first three sets of numbers are x86 only (https://venturebeat.com/consumer/mercury-research-amd-closes...)

The two big extra categories would seem to be game consoles (as suggested by GP), and IoT.


Intel not making chips for any gaming console, I'd imagine.


I assumed it's due to their server chips being way more efficient, powerful, and cheaper than the equivalent parts from Intel.


That is not an answer to the (math) question they were asking.


Yeah this must be it. In my experience new AMD server chips are better in pretty much any way, for most workloads. Everybody I know seems to agree. It just doesn't make economic sense to go Intel for new servers.


I found this bit from networkworld.com: "In the server market, AMD's total market share grew from 10.7% at the start of 2022 to 17.6% at the end of the year, while Intel fell from 89.3% at the start of the year to 82.4%. Interestingly, the server chips that are selling the most aren't the newest and greatest models" Hard to tell what calculations are being used to derive the percentages. Maybe they are using dollar amounts per total spending on all chips, and server chips cost more, therefore driving the market share numbers?


To give you some context: before Zen, their high was 25.3% in the fourth quarter of 2006. That was the year Intel shipped the first Core CPUs, and in 2007 AMD shipped the K10 architecture. Their pre-Zen low came in at 11.6% in the first quarter of 2016, but even with Zen the turnaround was not immediate, as you can see from the 10.6% in Q3 2018.

And while their discrete GPU market share is in decline (and since they bought ATI they never had the lead; indeed it never went above 40%: https://overclock3d.net/gfx/articles/2022/11/29160939924l.jp...), it seems they still gain significant benefits from having GPUs, by selling a lot of APUs, especially in consoles.


Not sure how long that'll play out. The 13th gen has barely been integrated into products yet, and it seems to, at least on single thread, smash AMD in the price/performance ranking. The 13700K is a beast of a CPU for not much money. Also, the next two gens of Intel stuff are very different. At least AMD are giving them the ass kicking they deserve.

My PC has a 12400 in it, which realistically kicks my M1 Pro MBP in the teeth on some things, which is embarrassing as it's a junk low-end Lenovo desktop machine that cost <$500.

We live in interesting and exciting times.


Keep in mind the prices have been changing quickly as of late. AMD brought out the 7700, 7900, and 7950 (non-X), which are cheaper, generate less heat, and include a cooler. There have also been pretty heavy discounts on the 7700X/7900X/7950X chips. DDR5 and motherboard prices for the AM5 chips are dropping as well. Microcenter is even including 8GB free with a CPU purchase. The price/perf of the AM5 platform has changed quite a bit in the last month.

Intel does win some benchmarks, but often at substantially higher power consumption. The AMD chips run cooler, and even reducing the TDP significantly, down to 65 watts, often means only a single-digit percentage slowdown.


The cost with the 7000 series is the motherboards at the moment. They're still extortionately priced: 2x the price of an equivalent Intel one, and that hasn't changed. It's improving but nowhere near reasonable. The AM4/B550 combo is still pretty good, however.

99% of what I do is single thread bound so I really want to see some single thread power benchmarks but no one seems to bother with those.

I'm in the market for a PC with the best single thread performance out there at the moment so I've got several machines configured in pcpartpicker and spreadsheets at the moment :(


Have you looked recently?

I've been tracking DDR5 motherboards (I think the perf is worth it) in MicroATX (don't need full size). I don't need blinking lights, watercooling, WiFi, or exotic sound, so I go for relatively barebones motherboards. I do want something small and quiet, with 2.5G Ethernet and dual M.2 slots.

On Newegg I found an Intel ASRock Riptide for $159 [0].

On Newegg I found an AMD ASRock Riptide for $169 [1].

I'll happily take the AMD and end up with a quieter and cooler system for the $10 premium. Debating between a 7700 (non-X), 7800X3D, or 7900 (non-X). Hopefully it lasts as long as my current (2015) desktop.

Do you consider a $10 premium "extortionately priced"?

[0] https://www.newegg.com/p/N82E16813162061 [1] https://www.newegg.com/p/N82E16813162083


UK here. Based on my needs, an Asus Prime H610 board that'll take a 13700K is £85. The cheapest AM5 board is £175.

(I'm checking one vendor here who I know has a decent returns policy when your system integration goes to hell, another common AMD problem from much experience!)


Ah, wasn't looking that low. That gets you half the memory channels, half the memory bandwidth, half the PCIe, and 40% of the ethernet bandwidth. They go for $100-$110 here, doesn't seem worth saving $50, at least if you plan to keep the system for 5-10 years.

BTW, AM5 added the ability to upgrade the BIOS without a CPU. So no more nightmares of having to beg for a CPU, borrow a CPU from AMD, or RMA the board to get a compatible CPU<->BIOS setup.


Yeah if I throw a B660 in there the price is closer. I would probably do that.

Good to hear that about the CPU issue. I found that one out the hard way and had to scoot around to a friend's house and borrow his CPU for a few minutes. I bought MSI board after that as they had that built in anyway.

AMD scares me on the reliability front. The 3700X and B450 combo I had before was a dick. It would come up 50% of the time with the fans cranking and have to be powered down and back up a couple of times before it would boot. My Intel machines have never done anything like that in the last 25 years or so, but the last Athlon XP I had did the same damn thing.


Heh, I've been tracking desktops for quite some time and haven't noticed any significant reliability differences between AMD and Intel.

My current desktop is a 2015 Xeon E3-1230, which was CHEAPER than the similar i7 but allowed ECC. I do like that the AMD motherboards allow ECC; it's a bit of a mess, but better than nothing. AMD doesn't guarantee/certify it until you upgrade to Epyc, so you end up digging through motherboard docs, certification lists, and forums. My previous desktop was some early amd64; had no reliability problems with it. Before that I think it might have been a Celeron 300A overclocked to 450 MHz (the same silicon as their non-Celeron 450 MHz CPU, so overclocking was very common then). So far I've been switching between AMD and Intel every gen, and will likely do so again for my Xeon replacement.


Are you consistently buying bargain basement boards? I have a 3700X + X570 and I've never had the fans cranking and it fail to boot.


All of them are bargain basement. Some just have shiny things stuck to them and fancy lights.

I tend to go for Asus Prime or MSI Pro boards these days.


No, some of them have better components too. It's not just shiny things and fancy lights.


I was an EE for years. There isn't much in it between the economy boards, corp shitbox boards and really expensive ones despite the marketing garbage.


What do you think of the sabertooth/TUF series boards (if they still have them around)? I always purchased those because they seemed to advertise better components specifically, but maybe I was being bamboozled.

Are there even quality grades for boards at all? Let's say I go into the super premium high price server space and start buying POWER10s or something; would you say that the grade on those components (the basic stuff, not talking about anything design-wise or the socket or whatever, just the base component quality) is also pretty similar to budget x86 boards?

I'm basically asking if there is any reason I should care at all about motherboards outside of whatever IO/cpu socket/memory its got on it, and if that also applies outside of the consumer desktop space.


All the major branded ones seem about equal. The only lesser ones are the weird Wish and Aliexpress ones.


Sure, but AM5 is a platform where you'll still have CPUs to upgrade to a few gens from now, while Intel keeps the same socket for no more than 2 generations of CPUs.


A620 motherboards should be coming out soon. Hopefully they're cheap and don't suck.


What are you running that is pushing single thread performance but doesn't benefit from more cores?

Asking because IMHO it's so easy to use multiple cores on some parts of an app I feel like everyone should do it.


Compilation mostly (with shit compilers) and single image manipulation.


Which compilers? Always interested in unusual build-system issues.


These are custom written DSL and rule engine compilers. Mostly written or ruined by me.


I have an old i5 750 with 4 cores running a Plex server, file server, Minecraft servers for the kids, some home apps, etc. Minecraft and transcoding have been pushing it a bit, so I've been tempted to upgrade. I wonder if it's finally time to jump over to the AMD camp!

Any time I tried AMD in the past (I stopped trying 10 years ago) the AMD CPUs were cheaper but there was some problem that was just enough to keep me with Intel. Weird compatibility issues when compiling packages. Huge power draw. Weird motherboards. Has this improved nowadays?

Otherwise I'm thinking one of the budget i9 parts.


With Plex, if you're using the iGPU on the Intel chip, it's best to stay with it, since the Ryzen iGPU still isn't supported. Then again, if you're otherwise planning to go i9, an equivalent Ryzen CPU would be more than capable of handling a few transcodes, although obviously it wouldn't be as efficient as hardware acceleration.


> Huge power draw. Weird motherboards. Has this improved nowadays?

AMD never fully fixed the USB dropout issues under load; AMD CPU/mobo combos are still extremely problematic for VR setups. There are also some weird issues around stuttering with the onboard fTPM enabled (which Windows 11 wants you to use).

Better than the early days where Ryzen 1000 chips had cache defects that led to segfaults under compiling workloads.

The AGESA-BIOS model foists responsibility for distributing microcode updates onto partners, and AMD tends to release buggy microcode that partners then have to fix; they churn the microcode so fast that the fixes often go stale and break other things, but partners have to do them or people accuse them of a "buggy BIOS". It's sort of the inverse of the problem in GPUs, where AMD has to fix other people's game bugs in the drivers... here, partners have to fix AMD's microcode problems in their BIOSes. It commonly manifests as sensor glitches/offset problems/etc., and sometimes this has led to real fun problems when those sensors are controlling core/SOC voltage or things like that.

AMD GPUs still suck for VR, the 5700XT series had huge crashing problems under the windows drivers that were never really resolved for a lot of people, and the 7000 series seems to be on course to replicate those sorts of issues. They did finally bring the 7900XTX's idle power down to only about 45W though.

The fans will vehemently deny it but yeah, you're pretty much right. AMD has gotten better but their software/drivers and user-experience still aren't as polished, either CPU or GPU. It's usually good enough it's not a problem, but you're going off the beaten path and they're not always flawless. On the other hand who knows what's going to happen with Intel with an exodus of staff already happening and likely to continue.

7900X, 7800X3D, 7900X3D, and 7950X3D are great looking parts at the right price but Intel is willing to make dealz right now to keep the cash flowing, they've got DDR4 boards to bring the platform cost down, etc. You have to evaluate the whole build and see what you get... and AMD does worse with large 128GB builds right now too. Although Intel is not flawless either to be fair.

https://www.youtube.com/watch?v=P58VqVvDjxo

Around here microcenter is willing to throw in a free 32GB RAM kit of decent quality if you buy a 7000X (not the base tier and unknown whether this will include X3D) and Newegg was offering a similar promotion at one point. Which neutralizes a lot of the platform cost difference - the DDR5 + PCIe 5.0 capable Intel boards aren't cheap either, but, a bit cheaper than AMD right now. Again, gotta look at the whole cost of the build for your own local costs.

Personally I think the DDR4 compatibility makes Intel a no-brainer for very cheap builds. You can get a PCIe 4.0 + DDR4 Intel board for the usual, expected price ranges... PCPP says starting from $79. If you're a fit for a Pentium/Celeron or a 12100/12100F that's a system cost AM5 can't compete with at the low end, you have to go back to AM4 which are slower systems.

The 12600K, 13500, and 13600K are also very good gaming-focused builds. There's nothing wrong with 7800X for gaming either, especially with the free RAM, but, Intel 13th gen does outperform it. And the e-cores are "free" at the MSRP, so Intel is simply giving you more, but, AMD is also dropping prices below MSRP lately (ymmv depending on your location).

For productivity and "server"-ish workloads, the AMD 7900X is great, particularly since AMD supports AVX-512 while Intel has disabled it on 12th/13th gen. And the 7800X3D, 7900X3D, and 7950X3D should all be very attractive high-end gaming+productivity "crossover" components if you are willing to splash out.


My daughter has USB dropout issues on her 5600 desktop interestingly.


Do you have any information on the VR issues? I'm getting really weird issues if I really push MSFS2020 on my Ryzen setup.


official acknowledgement: https://www.reddit.com/r/Amd/comments/lnmet0/an_update_on_us...

later update: https://www.reddit.com/r/Amd/comments/m2wqkf/updated_agesa_c...

general discussion, a couple of which are post-update:

this one is interesting because the user has traced it to specific USB controllers and notes that one controller is perfect while the other 2 have problems: https://www.reddit.com/r/Amd/comments/lrcp8q/x570_usb_dropou...

https://www.reddit.com/r/Amd/comments/p5ygve/anyone_consider...

https://www.reddit.com/r/Amd/comments/wud1xo/was_the_usb_dro...

https://www.reddit.com/r/Amd/comments/twph8r/if_youve_build_...

this is all my personal summary/analysis and not fact, but: it does not seem to be tied to a particular generation of CPU or model of chipset (and this is significant because X570 is actually completely different from every other chipset which are all OEM'd by Asmedia). The symptom is that the USB controller stops responding under load (and I think probably drops frames at that point). Since VR is a very heavy USB load (especially Oculus) this is a semi-reliable reproducer but there's nothing inherent to VR that causes the problem, I think it just seems to be a "happens under load" issue.

AMD put out an AGESA patch which seems to significantly reduce the problem. And again, I think the idea that microcode fixes the problem, and that the problem exists across both AMD OEM and Asmedia OEM chipsets, suggests it's not a hardware problem with the chipset. Actually it may be the opposite, and the problem exists with the SOC implementation of USB and the chipsets themselves are fine. Some boards use the onboard SOC to drive the USB and that sounds like maybe the problem - others drive it off the chipset's USB and maybe that works better. It's possible the CPU themselves have a hardware defect or it may be a problem with the USB software stack inside the SOC. The AMD PSP is like the Intel ME, it gets in underneath the user experience and while you'd hope that USB is not implemented there maybe it is, or maybe it's interfering (mailbox overflow?) or something like that.

Again it's hard to know whether the residual crashes are something the mobo companies themselves might be doing to try and fix AMD's problems... maybe a fix that worked before is now crashing something else. It's hard to say. That's the problem with shipping glitchy microcode to partners. A lot of partners literally have one or two core BIOS devs for the whole company. This came up during the crisis around Zen2/Zen3 BIOS support, and after elmor left ASUS, that basically ASUS really only had a couple guys for the whole damn company and they're not the only ones. And AMD in particular pushes a lot of workload onto their partners. Great that they're moving fast but they break stuff which ends up being someone else's problem, and they're churning code so fast they break the fixes other people are doing for them. They don't publish source, only binaries, and they make the partner's lives very difficult.

https://youtu.be/JluNkjdpxFo?t=1276

https://youtu.be/36kCBQt7YBk?t=237

https://www.overclock.net/search/1039852/?q=agesa&t=post&c[u...

If I was personally in this situation I would try a standalone USB card, and pick from the list curated for the Oculus rift even if you are not using a rift, I think they are just better USB controllers. Put it in a CPU-direct slot and not one attached to the chipset. See if that helps, go from there. Maybe the card works in a chipset lane, maybe the chipset lanes work natively, etc, once it's working try to pin down the delta that causes crashing.

But like... there is this whole gaslighting cycle with the AMD fans. You saw the same thing with the 5700XT. "I have one, there are no crashes, it's 100% stable for me, you're making it up" -> lots of users report it and the gaslighting intensifies (HUB finally ran a poll on 5700XT after months of denial and it turned out over 1/3 of their very technically adept and very pro-AMD userbase reported issues) -> AMD acknowledges there is actually an issue and they're working on it -> patch released and it doesn't fix the issue for everybody -> "OK so there was an issue but it's all better now, if you're still having problems it's your fault!" -> everybody moves on to the next product and if you still have problems welp sucks to be you, my 5800X3D is amazing!

Same thing happened with defective Zen2 early silicon... GamersNexus had their 3950X miss its advertised boost clocks by up to 450 MHz (they actually understated it by 100 MHz, the advertised clock was 4.7 not 4.6), basically 10% of its advertised speed. People did the "it says up to", some people went as far as making programs that just run NOPs in a loop to show that a program could actually hit the advertised clocks so it wasn't technically false advertising. AMD acknowledged an issue, released a patch that didn't help (GamersNexus' review was after the patches, they go into F10c later in the video), the fans moved back to gaslighting, and everyone moved on and forgot about it. Later silicon was higher quality I think and just didn't have a problem but AMD wanted to ship some marginal silicon that didn't quite meet spec and didn't think anybody would call them on it. And indeed they were pretty much right, the fans shrugged and moved on.

https://www.youtube.com/watch?v=M3sNUFjV7p4&t=254s

https://www.reddit.com/r/Amd/comments/cfli2n/i_discovered_ho...

Issues with Intel or NVIDIA are front-page news, everyone freaked about POSCAP-vs-MLCC back at 30-series launch, the connector at the 40-series launch, some soldering defects with EVGA 1080s, the whole New Worlds thing (even though that killed some AMD cards too, everyone treated it as an NVIDIA issue), etc, and the AMD stuff is completely downplayed and shoved under the rug even though it's pretty often not fixed, or not fixed 100%, and it's pretty aggravating.

But AMD really stokes the reddit set, they were pioneers in going direct-to-consumer and stoking the fires. GN talks about it earlier in one of those videos, and LTT and Der8auer have said the same thing. Like, it's calmed down a lot over the last couple years but during 2017-2021 it was bad.

https://youtu.be/JluNkjdpxFo?t=259

https://www.youtube.com/watch?v=x03FyPQ3a3E

https://www.youtube.com/watch?v=rscDxMrZmGU&t=1304s (this is the LTT video that got him set up with a bribe from AMD to bury his Vega FE review, the threadripper preview and the holocube, if anyone remembers that little bit of pocket-sand!) https://youtu.be/oVxVIkw0xd8?t=261

And honestly the gaslighting aspect of that really has not gone away, which I still find incredibly annoying as far as internet discourse. Say there's a problem and people get defensive and will happily generalize their own experiences to tell you yours aren't real. Just say "AMD has driver problems" and someone will pop out of the woodwork to tell you they've been good for 10 years now, as if 5700XT and Vega and Fury X just all didn't happen. Sometimes the drivers/firmware/BIOS are great, sometimes they're not, and if they're not they don't always get fixed, or don't get fixed fully for everyone.


>The 13th gen have barely been integrated in stuff and they seem to, at least on single thread, smash AMD in the price/performance ranking.

It costs more to run, costs more to make, and runs hotter and louder; by "smash" we're usually talking single-digit performance gulfs, and performance is worse in a fairly broad range of applications; where Intel's performance IS better, it's often in an application where you would never be able to tell the difference without measurement tools.

The 12400 "kicking the m1 in the teeth" is likely due more to software optimisation than the 12400 actually being meaningfully better.

Your like of Intel's products as a consumer isn't really a reflection of their competitiveness. The only reason you're willing to buy their products at all is that Intel is cutting their margins to the bone whereas AMD is making stacks on every sale. Intel has had the best single core performance for eons and that hasn't saved them from their decline.


In synthetic benchmarks, indeed. In my workloads there's about a 20% lead, and yes, that might be due to software optimisation or something like that, but that is a real factor in the outcome. If it saves me 5% of my time, that gives me 10 days a year back.

To be clear I'm no loyalist. I went from an AMD 3700X to a 2020 M1 Mac Mini originally because that spanked it. Then to an M1 Pro which was regrettably no real improvement. Then to a 12400 because I had one as a comparison point and it was faster. Now likely a 13700K. Next time, it might be AMD, might be an M3. Who knows?!

I'm mostly just glad I can jump around and pick whatever is on the market rather than being locked into one product or platform now.


Personally, I went 4950, 1700x, 5800x, 12700k. I considered giving intel another try after the 5800x, so I went with the 12700k. I needed a computer quickly, and my previous machine was unavailable.

The next chance I am giving intel is relatively far into the future. Likely in 5-6 years at best given how many stability issues I have had.

Do note that this is pure anecdata, n=1 and likely suffering from a degree of confirmation bias.


What OS are you running on the Intel? I had problems with Linux on it where it'd slow down and then freeze entirely. It runs Windows 11 now, begrudgingly, and I've had zero issues.


I am dual booting PopOS and Windows 11.

PopOS has had a surprisingly high number of instability issues and random freezes.

I considered the possibility that the build is the problem, but it's the same NVIDIA LTS 22.04 build that I used with the 5800x.

Windows runs fine.

If I didn’t rely on cuda, I’d probably use github’s code spaces for programming and just stay in windows, or run a VM.


> PopOS has had a surprisingly high number of instability issues and random freezes.

Is by any chance your disk encrypted? That's the only (major) reason nowadays why I hear about freezes in Linux.


No, my disk was unencrypted, it was a routine installation with distro defaults. I tried Manjaro that was a horrific experience from the get-go.

It's possible that a lot of this in Pop!_OS is related to Wayland and GNOME. The previous machine ran a 5800x with a 1080 Ti; this one runs a 12700k with a 3090 Ti, same distro otherwise.


Was the same for me, then. It's clearly not the year of the Linux desktop yet.


AMD has been great on Linux, both GPU and CPU drivers, and the Linux user base has been increasing at a rate sufficient to cut into Windows' segment.


My 12700k has been rock-solid stable. It's a fantastic chip.


I don't jump hardware this often. I would consider AM5 100% of the time, because the upgrade from a Ryzen 7000 would be a used high-end AM5 CPU years after AM6 is out.


I feel the need to interject that in order to fully utilize a chip like the 13900KF as done in benchmark scenarios by e.g. youtubers, one needs to spend a lot of money on cooling and control the ambient temperature.

The wattage difference between AMD and Intel cpus is not just a matter of feeding it more electricity. Heat dissipation, air flow if air cooling, or reservoir capacity if water cooling, and control of ambient temperatures matters a lot.

Then, one also needs adequate power delivery to deal with stability issues.


MBP is the only computer I can run intense workloads at high performance while it remains completely silent. I’ve been waiting for such a machine for a long time. When intel can match that they’ll have my attention.


My Tuxedo Computers Pulse Gen 2 was specifically built to be quiet. I also have a MacBook Pro (2019) here that runs a similar workload for testing, and I can tell you that the Pulse is far quieter.

I have 16 cores at 100% right now as I type this and there's no fan noise at all. I'm parsing 10,000+ php files and modifying them for an upgrade. I've done this before and the Mac makes a lot of noise.

MBP is good but it's not all there is. This is an AMD Ryzen 7 5700U with a dual-fan setup. It's got performance and efficiency cores similar to the newer Apple chips.

Just throwing it out there! I hope more people support great laptops like this one.


Big complaint about your comparison: that (2019) is the last Intel MBP. The 2020 and up MBP are so much quieter under load it's not even funny.

And the Macbook Air doesn't even have a fan.


Sure.. I just meant that Macbooks aren't the only quiet ones. I have them and use them but AMD has made progress, too.

I also spent about $2,500 less than the option with 64GB of RAM from Apple.


My coworker's M1 MBP sometimes sounds like a jet engine.


Does this MBP have Apple silicon?


No, 2019 is Intel.


damn that looks really slick. i had no idea anyone cared about fan noise in the windows/linux laptop space. how is the keyboard?


The keyboard is great, normal sized keys and nice to type on. I don't use it a lot but I have big fingers and no trouble with it.


I've found that simply lowering maximum fan speed on my laptop works well. The fan on what I'm using is nearly inaudible at 30%, yet I still get 90% of the performance on long-running tasks. Much as with desktop CPUs, you pay a lot for that last 10%.

On that note, a lot of laptops have goofy fan curves. In theory, you want the fan to move just enough air that the CPU doesn't throttle, since the rate of heat transfer is proportional to the heatsink-to-air temperature difference.
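
To spell out the relationship being referenced (a rough convection model, not something stated above; h is the convective transfer coefficient, which rises with airflow, and A is the effective heatsink area):

    \dot{Q} \approx h \, A \, (T_{\text{heatsink}} - T_{\text{air}})

So a slower fan can still shed the same wattage if you let the heatsink (and thus the CPU) run hotter, which is why capping fan speed mostly costs thermal headroom rather than sustained throughput.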


My 14" MBP is not silent under load. It's not loud but not silent.

You can build a suitably quiet PC. Be Quiet power supply, case, fans and cooler. Job done. They run very quiet. I had a 125W TDP CPU in one of them and you could barely hear it under load.


I’ve tried some pretty intense undervolts and power throttling but my PCs have always been annoyingly loud. I think I just need to get a case full of noctuas in my next build. But the dgpu will always be loud as shit regardless.


You can get the GPU to be quiet if you go watercooling combined with quiet fans attached to the radiator. Much more expensive, much more complicated, but also much more efficient when it comes to cooling your equipment.


probably not in their form factor and quality


Can't disagree with the form factor but the quality is equivalent if you ask me. They both are likely to last 5 years+ and stay out of my way.


Who runs intense workloads at high performance on a laptop? There are cloud servers for that.


Apples to oranges. The 12400 has a TDP of 65W while the M1 has 13.8W. What can a 15W Intel do? Not much.


That's not true at all; M1s in MacBooks absolutely draw more than 15W - they can go all the way up to 90W for the whole package.

M1 are silly efficient but not 10x magical: https://www.anandtech.com/show/17024/apple-m1-max-performanc...


The equivalent Intel laptop is at 220W for both CPU and GPU in the same scenario.

So no 10x, but clearly 2.5x.


Yeah, that doesn't really change the fact that the 15W TDP claim is far from the truth.


There is no such claim, though.

Apple has published the power figures for some of their systems, and nowhere does it say that they draw or dissipate 15W [0]

[0] https://support.apple.com/en-us/HT201897


Well, the (nominally) 15W 1260P is ahead of the M1 and the 1280P is faster than the M2. Of course Apple still has better GPUs and is ahead in power efficiency.


The 1260P is rated at 28W, not 15W. It also has 4 more cores than the M2, and it could be considered way more expensive at $480 (the Mac Mini is selling at $600 in its base configuration, for example).

Hardly comparable.


Don't really care. I use my MBP as a desktop machine and the TCO over 5 years including the electricity of the PC is much much lower.


Using which equation? I run an 8 year old system76 laptop with a desktop grade CPU and my yearly electricity usage is about 100kwh (I measure pretty much all of our electrical usage), or $35.

In my experience, the biggest electricity hog isn't gonna be your CPU, it's gonna be your display(s), if you use any. I don't use any external monitors for my work setup.


The displays are a big part. I did a full breakdown here of costs if you are interested: https://news.ycombinator.com/item?id=34812247


I use my $150 hand-crafted hammer to screw in screws, but it gets beaten by my $5 screwdriver at screwing performance. Shameful.


To get an equivalent metaphor here, you'd have to include "the $5 screwdriver also destroys $300/year worth of projects more than the hammer-screw"

And then yeah. Shamefully short-sighted to use the $5 tool.


> My PC has a 12400 in it which actually realistically kicks my M1 Pro MBP in the teeth on some things which is embarrassing as it's a junk ass end Lenovo desktop machine that cost <$500.

And if you factor in power consumption at EU prices?


Yes measured with an average power meter. 12400 Lenovo shitbox is £0.26 per day including the monitor (Iiyama 27" 4K). MBP is £0.19 a day including the monitor (Apple Studio Display). From spreadsheet:

TCO over 5 years for Apple is £3838 (outlay) + £247 (electricity) = £68.08/month

TCO over 5 years for PC is £1045 (outlay) + £405 (electricity) = £24.17/month

That's small change really, but the cost difference is immense.

Also the PC value has gone up since I bought it as I got it on a one day offer. I can shift it for £120 more than I paid for it. The mac, not so much.

The Mac depreciates in the first 18 months more than the entire PC set up and electricity costs.
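
In case anyone wants to check the arithmetic, here's a minimal sketch of the TCO sums above (the outlays and 5-year electricity totals are the figures from my spreadsheet; resale value is ignored, as noted):

    def monthly_tco(outlay: float, electricity_5yr: float, years: int = 5) -> float:
        """Cost per month: hardware outlay plus electricity over the ownership period."""
        return (outlay + electricity_5yr) / (years * 12)

    # Figures quoted above, in GBP
    mac = monthly_tco(outlay=3838, electricity_5yr=247)  # ~68.08/month
    pc = monthly_tco(outlay=1045, electricity_5yr=405)   # ~24.17/month

    print(f"Mac: £{mac:.2f}/month, PC: £{pc:.2f}/month, ratio: {mac / pc:.1f}x")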


> including the monitor (Apple Studio Display)

honestly it's impressive that you gratuitously threw in a 1700GBP display on one side of the scale and the cost difference was still only 36% lol.

Also just because the retail price has gone up doesn't mean you can actually sell the PC for the retail price. Resale value of x86 laptops is garbage... it's better for desktops of course especially if you can retain pieces of it, but this is still a super loaded comparison and even then Apple only came out 36% more expensive according to your numbers?


Not sure how you managed to get 36% more expensive. It’s 2.8x the cost.

I’m just reporting where I am not how I got there (which was mostly me going “ooh shiny”)


Those are your numbers, you said GBP 0.26 per day vs GBP 0.19 per day. You didn't show your math on residual value/etc so there's no way for me to go deeper than that, I'm just taking your numbers at face value.


I assume the hardware is written off immediately, as it's a sunk cost and there is no guarantee of a return. So I'm comparing the monthly TCO (over expected lifespan), not just the energy usage.


All you said is relatively true but ignores better power performance for AMD.

"Smashing" ST performance while consuming 150w is not an ideal situation for anyone but gamers and benchmarkers.


And most gamers probably lack the cooling needed for those chips.


AMD has the edge on performance per watt though.


I have a 12400 and a M1 Air. The Air is faster on pretty much anything I've tried, even most benchmarks.


Not when you hit swap it’s not.


Most stuff is multi threaded now, even games.


When they come down in price they should be a damn good buy since it looks like they're much more efficient than AMD chips


We bought a $9000 single socket AMD (2.4 GHz) 40 core server from Dell and a $27,000 dual socket Intel (2.8 GHz) 100 core server from Dell.

The AMD server is at full load, using around 150W whereas the Intel server idles at 450W doing absolutely nothing and pushes up to 700W if we load it up.

In hindsight, I regret not just buying 3 of the AMD servers for the same price. I had this fallacy that the Intel server would be magically more energy efficient, but that has turned out to be FUD and baloney.

NB Power usage measured at the wall socket with voltage and current monitored PDUs.


Sounds like there is something else going on there, since the Xeon CPU idle power consumption is nowhere near 200W [0]

[0] https://www.anandtech.com/show/16594/intel-3rd-gen-xeon-scal...


You have to remember that it's not just the CPU but also the IO (memory and pcie) and the chipset... collectively it's often called "platform power". Even if you don't have anything plugged into it, a big powerful chipset and lots of on-chip PCIe expansion still eats a decent amount of power.

Also everything you plug into it will eat power, some 10gbe stuff and SAS controllers/etc are not lightweights in idle because it's not expected they'll be idle. Every HDD generally pulls about 5W when idle+spinning, even SSDs are a couple watts apiece.

That said, yeah, that sounds way high for idle. ServeTheHome is a good place for stuff like this:

> motherboard: X11SPM-F

> memory: 6x 8GB RDIMM 2666Mt

> kill-a-watt power on the wall at idle: 30..35Watts.

https://forums.servethehome.com/index.php?threads/xeon-scala...

Single-socket though, but, another CPU probably doesn't add more than another 30 watts or so.

Epyc actually tends to be worse on idle power because it has to run all its infinity fabric links at all times, if it wants to access that memory, and data movement is comparatively expensive. There were some early Milan/Zen3 epyc numbers that need to be disregarded due to problems with AMD's reference implementation motherboard, but, you can see from that thread it's not uncommon for Epyc to idle at 100W for a single-socket board. You can see from that thread that the consensus is Skylake-SP generally idles better.

My own personal anecdote: I am tinkering with a dual-socket 2011-3 server right now (X10DRC-T4+) with a pair of 2697v3 and 2x4x32GB RDIMMs and quad 10gbase-t NICs, at idle I am around 75W at the wall... obviously Skylake-SP has a larger platform (it's the socket after mine) but it's not 350W bigger, guy has something wrong with his setup.

It is definitely fair though that under load the Epycs are a lot more energy-efficient. Skylake-SP is still 14nm and Intel lingered there for far too long, 7nm is more than a full node better and Milan has higher IPC, higher clocks, and more cores. Ice Lake-SP was better than Skylake, but nobody bought it because it was years late to market and AMD was already better. So you are effectively comparing 2016-ish Intel stuff against AMD stuff that is 4+ years newer... but on the other hand that's the Intel stuff that is deployed right now, and people generally want to minimize the amount of generations in their clusters to make it easier to operate VMWare vMotion etc. /shruggy


Not to mention the fact that "pushing up to 700W under load" is... absolutely nothing, since that means, ignoring everything else and just assuming the CPUs draw all of it, the cores are only using 7W PER CORE.

And this AMD system is apparently using a peak of 3.75W per core? Really?
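
Back-of-the-envelope, that's just wall power divided by core count (which lumps in platform power, drives, fans and PSU losses, so treat it as an upper bound rather than a CPU measurement):

    # Wall-power numbers quoted above
    intel_watts, intel_cores = 700, 100
    amd_watts, amd_cores = 150, 40

    print(f"Intel: ~{intel_watts / intel_cores:.1f} W/core")  # ~7.0
    print(f"AMD:   ~{amd_watts / amd_cores:.2f} W/core")      # ~3.75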


This comment made me realize I've never seen power per core specified. That seems like a great metric. Would calculating this be as simple as dividing peak usage by the number of cores? (Ignoring that modern CPUs have multiple core types)


Power per core is pretty variable though. If you've got a lightly threaded workload, you shouldn't be getting 40-52 cores per socket anyway; many of the cores could sit near 0W while some cores use more power than they would under an all-core workload.


It wouldn't, really. I only used that because the parent poster made it sound like total power draw with all the cores pegged. And if they are actually doing work, and not just running full bore through a whole lot of NOPs, then that would be a rough average of the power on each core.

But as the other commenter said, the actual per-core usage will vary /wildly/ even under high load. You would have to run something fairly specialized, such as Prime95, which can run entirely from cache, to get more accurate per-core wattage values under the same maximum load.

As it stands, my i9-13900K seems to have an idle "package" draw of around 10-12W and can peak at 360W under Prime95 load. And that's stock. I can make it ramp way past 360W with very little effort.

But under "normal" desktop usage it happily sits well under 100W.

So… shrugs. I would like to know more on this parent’s setup instead of “My server with literally 40% of the core count uses 1/6 of the power!”

But we may not be so lucky.


The load case is not 1:1 and the experience is only anecdotal, but I put a few more details about the load in another comment below.

My commentary is only really about what we believed about AMD before making our purchase (internet forums like this one warn about energy use), how that wasn't really the case when we measured it on a real load, and why we 'feel' we would prefer to go the smaller and cheaper AMD server route next time.


Right? That is why I want to know more details instead of this vague garbage.


Some people would consider this comment "FUD and baloney" because of the number of confounding factors you're jumping over.

You're measuring wall-socket power on two different servers to make a commentary on CPU power usage, but that includes storage, power-delivery efficiency, cooling solutions, etc.

You didn't describe what "we load it up" means for the Intel server, or how that compares to the AMD server, or what load is for that even (you're saying you keep the AMD server pegged at 100% CPU, or you have 100% of your expected load?). Presumably you didn't get a dual socket CPU for giggles, so the load isn't trivially comparable between both systems by either measure.

If you're buying servers because of fallacies about entire brands, please leave the procurement to someone more experienced. That kind of stuff is more for building a gaming PC, server hardware is way too varied for "brand X is Y". You need to examine things at the SKU level.


* dual socket is power hungry for both platforms

* still 450W at idle isn't only due to dual CPU

* 150W for full load, really?


Our use case is running around 30 GitHub Actions runners on VMs that are just building code in MATLAB. 'Full load' means all the runners are occupied, and CPU shows around 50% on the host.

The idle case on the Intel server is RHEL OS running several Windows VMs that are doing nothing, and the loaded case is when we spin up a large VM with around 40 cores on that host and run something in MATLAB (essentially a similar job to what the runners would otherwise be running).

The comparison is not completely like-for-like, and our experience is only anecdotal.


I had a similar experience in another job at a big corporation. It was obvious that the AMD chips were way better and more cost efficient. But management only wanted to pay for Intel because it's "Enterprise grade".


Which AMD chip is 40 cores?


There aren't any. There are some 48-core EPYC CPUs with a TDP between 225W and 360W.

None of them are rated at 2.4 GHz. The EPYC 7351P at 2.40 GHz has a mere 16 cores (32 threads) and is rated at 170W TDP.


Looking at wattage alone doesn't tell us very much; idle at 450W seems pretty broken, sure... but how much more traffic does the Intel handle at 700W than the AMD at 150W? With 2.5x the cores, I'd expect at least 2.5x the traffic.


I've never seen linear scaling from a dual CPU setup. IMO you're doing well to get 70% from the second CPU under full load.


Depends, sometimes you get better scaling from having 2x the cpu and 2x the memory (maybe 2x the disk) all in one box than if you have to split it up and have more cross-server connectivity.

You wouldn't buy a dual socket box for 3x the money if you didn't think you'd get 3x the performance out of it, unless you really wanted to keep your server count down.


For us, we only allocate e.g. 8 runners to a VM with 8 cores, which we pin to 8 cores on the actual server, so our focus is getting as many cores as possible to get as many runners as possible.

We kind of expected that scaling HW with more CPUs instead of more entire rack systems would be more energy efficient, but I don’t think it makes too much of a difference compared to other factors like CPU/vendor architecture.

FWIW our energy bill is $1000+ per month just from servers, and then several $k for various licensing on each host per year, so there was some sweet spot we were optimising for (low power, but less licensing cost via less racks). Our cost in the cloud would be $5k per month or more, so either way we still saved money this year.


My point was simply expecting linear scaling is a recipe for failure. Spending 3x as much for a 50% improved performance can be the best option.


While I don't doubt the AMD server with less than half the core count draws a lot less power, I would /like/ to know more details. Like what model/SKU servers? What are the actual model chips? How are they configured? blahblah

Going "I got X and Y and with no further information X is so much better than Y!" online gets so fucking old.

And I am genuinely curious. Not just being antagonistic. I am looking at some newer servers soon and have been debating Intel or AMD and I like seeing actual reallife results people have had.


From an investor perspective, AMD was a fantastic story due to Intel's 10nm process slip. AMD also got the jump on MCM which is arguably the most disruptive innovation in the mix.

I think the pendulum has begun to swing the other way. Intel is catching up and surpassing in some important areas already.

I've held positions both in AMD and Intel over the years. Right now I am long INTC.


What would be even better than this duopoly would be three or more players. I was disappointed when Windows Mobile folded; look what we are stuck with now. It sucks that markets like this devolve into only two major players: consoles, mobile OSes, CPUs, GPUs, commercial aircraft, rail freight... sigh.


Apple design their own silicon too, it’s a shame they aren’t interested in the data centre given what an advantage they could have in the coming AI wars. They need to wake up on that one and get an ecosystem for AI that could potentially lead to them winning.


Intel stumbled badly on their new semiconductor fab development. Their next-generation fabrication plants have been delayed for years and they are losing badly to TSMC, which makes AMD's chips. It would be interesting for someone to tell the story of why they failed, where before they had cranked out new process technology like clockwork.


I recommend watching the YouTube channel Asianometry. He's covering this topic and more.


As an Intel fanboy, I am sad to see my favorite team beaten by the competitor. I am holding out hope that their next processor will return them to the crown of price and price/performance. I am currently rocking an i7 3770k iMac 27” and I am ready to upgrade.


Could you elaborate on why you are an Intel fanboy? I'm not particularly versed in computer chip markets, but isn't Intel perceived as an awfully incompetent mega corporation that is sustained by the fact that it's very big, not because of its merits? What is there to be a fan of?


I'm not a fanboy, but the idea that you can fire up 20 year old software on the Intel and run it with insane single-thread performance (substantially more than M1 or AMD) is why they are still top dog. It takes a certain level of technological sophistication that is more impressive to me than bolting on more cores.

On a practical level, there are a ridiculous amount of games, simulation, scientific and engineering software that falls into that category and asking people to re-write tens of thousands of lines of code to take advantage of a new architecture isn't happening.


> awfully incompetent mega corporation

That overstates how bad it is. Over time, it feels like they slipped to being a year or two behind in both fab and chip design, and it didn't become apparent until ~3 years ago. The reason it's not that bad is Intel rushed out some chips that are competitive with AMD chips, and despite their process difficulties, there are still only a handful of companies that can make these chips.

What's actually bad is that they lost Apple. It motivated a lot of places (including Microsoft) to seriously look into ARM.


I'm an Intel fanboi.

Objective factors: Much more reliable overall with significantly less jank compared to AMD, especially on the software side of things. Much better power draw at idle and low use. Onboard GPU for practically every CPU, with only very explicit exceptions; great for video encoding (QuickSync!) or when shit hits the fan. Better single thread performance compared to AMD for no personally-practical loss in multithread performance.

Geopolitical factors: Less reliance on the Taiwan Singular Egg Basket(tm) problem.

Personal factors: Sentimental value, I grew up with Intel CPUs and I owe a lot of who I am today to them; I have an Intel Inside. The Intel chime is up there with the Windows 95 and 98 boot chimes for defining my 20th century.


I don't understand being a fanboy at all of any of these companies. I recommended AMD since the first time they released Ryzen. Recommended Intel since the Core series was introduced. What's the point of being a fanboy?


What's the point of being "fans" of hardware?


Air cooling.


Hey Intel, try this:

1) pay more for talent than anyone else.

2) treat your engineering staff amazingly well

3) watch your business completely turn around in two years


> 2) treat your engineering staff amazingly well

You can lean hard into a certain flavor of this. But it means orienting internal incentives away from excellence of outcome. So it gets you modern Google (allegedly):

https://medium.com/@pravse/the-maze-is-in-the-mouse-980c57cf...

Quickly, proper implementation of simple principles becomes complex.


There's a wide canyon between Google and how awful Intel has been treating its engineers in the past decade.


Sure, and I hope their treatment improves. My point though is that the two steps outlined above are likely insufficiently specified for turn-around success. At best.


What is your suggestion for improving things at Intel?


Why would you expect me to have suggestions for a specific case when I'm saying that the general problem is hard, not simple?


2.1) Listen to them.


How does AMD/Intel stack up against ARM with something like Amazon's Graviton instances?

It seems /a lot/ of the people I know are moving from Intel/AMD to ARM, and I suspect the move is bigger than I expect. So I'm wondering if that's chipping away at Intel/AMD's market share.


> How does AMD/Intel stack up against ARM with something like Amazon's Graviton instances?

From what I've seen, poorly - Gravitons are significantly cheaper (10-30%) for similar performance, workload dependent*.


AMD also has chipsets that are making their way onto motherboards. Video, PCI, USB, etc. We had a problem with an HID device of ours that would work fine with the ubiquitous Intel USB chipsets for the last 10 years, but didn't work properly with the AMD USB chipset on a new laptop.

So Intel is losing chipset share, too.


Great job AMD, now do RISC-V so Apple's M1 can get some competition.


If they did, what OS would run on it? The only options I know of are Linux or one of the BSDs, so you're either looking at the server market (where the M1 is not a competitor anyway) or the Linux desktop market, which is small - nothing close to 30% of desktop/laptop PCs.


I can see the server market being a thing, as ARM is just now starting to make inroads there, so there might be room for a new emergent RISC-V market, assuming it could be price/watt competitive. But I agree, it would be a tiny market.

Another option is if AMD decided to get into the mobile / embedded SoC market for real, and didn't want to pay an ARM license to do it.


Actually RISC-V is worse than x64 when it comes to performance per watt. It is designed to be very simple, not to be efficient. It does not natively have common instructions like add multiply. To make an efficient modern processor, you want to larger more specialized instructions that do more per clock cycle and have dedicated hardware for common tasks.


You might want to check your sources on that. Everything you said is wrong.


RISC-V is designed to be efficient; when compared on the same litho processes and gate counts it's very competitive wrt perf/watt.


"It does not natively have common instructions like add multiply"; this is is misinformation. Almost anybody implementing RISC-V -- esp if they are targeting an MPU instead of MCU -- is implementing instruction variants which include signed and unsigned multiply and divide, and my understanding is the P and probably V extensions include various add-multiply instructions.

https://github.com/riscv/riscv-p-spec/blob/master/P-ext-prop...

See section 3.2.

The instruction set is specified in a 'modular' fashion and has standards for various extensions, including compact instructions, vector extensions, etc. Compilers like GCC and LLVM support targeting these extensions.

I'm not saying AMD should make a RISC-V CPU, but if they were to hypothetically do that, they'd clearly be including a pile of the ratified extensions.


> To make an efficient modern processor, you want to larger more specialized instructions that do more per clock cycle

No, not really. The large specialized instructions in x86 run across many, many clock cycles. And IIRC, compilers will often generate a sequence of simple instructions rather than one complex instruction, because the CPUs are so much better at executing sequences of simple instructions.

(And yeah, as others have pointed out, of course high end RISC-V chips have multiply and divide instructions with dedicated multiply and divide hardware.)


This seems like a weird metric; CPU market share seems to only include actual drop-in CPUs that go with motherboards, right? If that's the case, wouldn't the overall CPU market be shrinking as we get things like the M-series MacBooks and Macs coming out?

To me, the more interesting question is what does the overall market of compute devices that can be used in a fashion similar to what AMD and Intel sell (being charitable and leaving out mobile) look like and what percentage of that market do AMD and Intel make up? Has the overall market grown? Is it growing faster overall than the laptop/desktop/server/hpc space that intel and AMD are catering to?


Watching an Asianometry episode, I was surprised to see that Intel contributed significantly to ASML's development of EUV.

But I also heard somewhere that Intel, until recently, refused to buy ASML's EUV machines. Trying to go it mostly alone on EUV lithography.

Are both of these facts really true?

Seems insane that Intel would help ASML develop EUV and then watch competitors buy the machines and reap the benefits.


AMD should give desktop CPUs more PCIE lanes and memory channels.


I always bought intel because whenever I bought AMD I regretted it.

Now however Intel has low power cpu cores and high power cpu cores and I feel cheated. AMD has all full power cores.


I haven't used Intel since Zen 1 came out. Never regretted it.


If AMD sold SoCs on standard motherboard form factors, they'd definitely take the gaming market. They're getting impeded by motherboard manufacturers' price gouging.


I wish this was the case on the GPU side too. Nvidia is playing by their rules and everybody follows.


Every possible forum on the Internet is biased towards AMD. Apparently HN is no exception.


This shows how dangerous it is when a real tech expert becomes the CEO of a large company.


Aren’t the CEOs of both Intel and AMD technical experts? Not sure what your point is.


You call someone who started as CFO a chip expert? Obviously you have no idea what you are talking about.


holy cow that's a big chunk of marketshare.


When are AMD's 3nm parts expected? Mid 2024?


That's still up in the air. Probably with Zen5, but if the price isn't right they're apparently taping it out for N4 (ie. N5++++) too to hedge their bets/negotiating position with TSMC for wafer prices.


This is good for competition sake, that's for sure. But the PC market will continue to suffer under these missed opportunities:

- DRAM price fixing and limiting: we should probably have 4x more memory or more for the price we pay these days

- SSD price fixing and limiting: we should probably have 2x more SSD capacity per cost

- ARM in PC is still being foot-dragged. I have no faith that Intel can make an M1/M2 competitor or killer, but AMD, Samsung, and Qualcomm should be able to do it. I mean, Amazon/Aws have their own ARM chips.

- compact low power PCs like raspberry-pi type form factors (or the PC stick, or similar) are ripe for the taking. I have tower case PCs from 5-10 years ago that should be replaceable with a 3"x3" cube design, or imagine a base power supply where you take some sort of FrameworkPC-style headless notebook blades and they sit vertically on the base. The ports all face you. If it was a great multiple-4k display super-KVM, heck yes.

- AMD, Qualcomm, and Samsung should be pushing a linux desktop initiative. Microsoft and Windows are really holding back PC units with their awful OS releases. The disaster of Windows 8 has not really been fixed, and they frankly don't care. A better-than-OSX linux is out there and it is low hanging fruit.

The PC market is so ripe for major innovation. It is stuck with crappy motherboard and case form factors.

Intel has ALWAYS sucked at platform leadership. AMD (and/or Samsung and/or Qualcomm) don't have to be. I really hope if AMD hits 50% at some point they can push more platforms to create new markets and segments for their processors and graphics chips. Intel is beyond saving, they have never understood platforms, software, and their management is moribund in penny pinching and monopoly rent extraction.

Apple still treats the PC market as an afterthought. They have the moxie and hardware/software to do something, but they aren't open enough.

Qualcomm might simply be too OEM-focused to show this kind of leadership, and they'd need a major ARM PC CPU, and that is five years off.

Samsung is too samsung ecosystem focused. They are not open enough.

AMD is, for better or worse, the one with the market position and innovation ability to create new PC opportunities. They don't have the subservient history to Microsoft. They have more EU/world focus, and those markets should be interested/open to non-Microsoft computing.

And that is the path to long term market relevance. Intel may still come back, and they have pseudomonopoly powers and relationships with big business to relegate AMD. AMD needs to continue execution in x86, but do what I described: ARM/Apple M1/M2 competitor, Linux/Steamdeck desktop OS initiatives, and push more revolutionary small form factor computing platforms. It will hedge against Intel's monopoly powers and open new markets that Intel has shown utter inability to compete in historically.

And frankly, with the "Moores Law" family of macro-trends stalling more and more, the path to more CPU and GPU sales isn't just a faster better chip. It's getting chips into more packages, places, and uses that people currently have them. Form factors and packaging are going to make MORE of a difference for upgrading. Right now, if I could get some sort of fancy framework-style compact notebook-that-is-a-thin-but-upgradable-desktop computer in a neat package, that's a much more compelling buy than a machine that has, what, maybe real world 10% PERCEPTIBLE performance improvement over something from 5 years ago?

Realistically this should have been in full swing with the COVID boom. Not sure if the penny pinchers will allow such "radical" investment (the non-ARM initiatives might take, what, a billion a year from a $26 billion a year company?)


Such a misleading title.

> [...] AMD now has a market share of 31.3% (up from 28.5% in Q4 2021) versus Intel's 68.7% (down from 71.5%)

That's not "grabbing over 30%" in my book. It's grabbing 2.8%.


I don't think the word "Grabs" specifically implies a change (delta) instead of the overall state. The word isn't that specific, it's really a colloquialism. This seems like a perfectly understandable headline to me.

"I grabbed four slices of pizza"

"dangus grabs 50% of the pizza"


28.5% -> 31.3% is a 9.8% relative increase in ~1 year.


The result of subtraction of percentages as you have done is typically given units in "percentage points" or "basis points" (with a scalar offset) in order to avoid ambiguity. The "percent change" or "percent increase" is calculated as is typical with any numerical change, (x1 - x0) / x0.

https://www.mathsisfun.com/percentage-points.html
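
Concretely, with the numbers from the article (just a sketch of the two calculations):

    old, new = 28.5, 31.3  # market share, percent

    points = new - old            # 2.8 percentage points gained
    relative = (new - old) / old  # ~0.098, i.e. a 9.8% relative increase

    print(f"{points:.1f} pp gained; {relative:.1%} relative growth")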


I’m confused. Are you saying he straight up subtracted the larger number from the smaller?


Oh huh. I must not have been wearing my glasses. I thought that said 37.3. Which still is only 8.8 percentage points.

But even doing the percentage difference, I calculate a 10.2 percent increase.

So I'll just shut up and go face the corner with the parent poster.


https://xkcd.com/1102/

We're always having this same rehashed argument over and over again on HN. "My [CPU manufacturer] grew by 100% this year." Market share is an absolute percentage, so you should talk about it using absolutes. It doesn't make sense to talk about relative increases unless we're talking only about AMD revenue without respect to the overall market.


You might want to revisit your elementary school math class material...


I really need to upgrade my personal media server but both companies' CPUs just use way too much power. It's kind of ridiculous. Yes, they're fast but power hungry as heck. Makes my dual xeons I run now look energy efficient.


They're not, though? They both offer incredibly fine control over how much power they consume, and will happily run at way lower power. They seem to accept undervolting pretty well too, which is a little riskier (instability-wise) and fancier.

There is some new inefficiency on AMD builds now that the Core Chiplet Die (CCD) and IO Die (IOD) are separate units, not monolithic. DDR5 also takes more power. The fancier X670 chipset (really, two chipset dies) also takes power and could very much use some tuning, but it's also doing a beastly amount of IO work, in some cases for good cause (more low-power capability/better idle would be appreciated though! One of AMD's first in-house chipsets in a long time).

We've barely started exploring how well behaved we can make the just-launched 65W Ryzen parts... but it wouldn't be shocking to me if there's not really any advantage, if they literally end up being nearly the same part, rebadged and preconfigured to lower power. Is there even really going to be any binning to have selected these? You know, maybe not!

Out of the box most chips come tuned for speed, but ask them to be a little nicer (turn on eco mode or lower the power limit) and they are stunningly efficient. Alas, idle could use some tuning; staying under 40W idle is harder than it ought to be.


I need the performance, that's the problem everyone here seems to have missed. Yes, I know there are lower-powered chips. This server has 30 HDDs. I use it for a lot of processing on photos and videos I take.


Modern CPUs scale quite non-linearly, and the default set-point is where it is for marketing reasons. My i7-7800 can do 1x the work at 70 watts, 1.25x at 150 watts, or 0.9x at 38 watts.

Just underclock/undervolt, problem solved.
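
On Linux, one way to do the power-limit part without touching the BIOS is the powercap/RAPL sysfs interface. A minimal sketch, assuming an Intel CPU with the intel_rapl driver loaded and root access (AMD support for writing these limits varies by kernel, and the exact paths may differ on your box):

    from pathlib import Path

    # RAPL package power limit exposed by the Linux powercap framework.
    # constraint_0 is usually the long-term (PL1) limit, in microwatts.
    PKG = Path("/sys/class/powercap/intel-rapl:0")

    def get_limit_watts() -> float:
        return int((PKG / "constraint_0_power_limit_uw").read_text()) / 1e6

    def set_limit_watts(watts: float) -> None:
        # Requires root; not persistent across reboots.
        (PKG / "constraint_0_power_limit_uw").write_text(str(int(watts * 1e6)))

    if __name__ == "__main__":
        print(f"Current long-term package limit: {get_limit_watts():.0f} W")
        # e.g. set_limit_watts(45)  # cap the package at ~45 W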


It is sort of incredible that HN can't figure out how to set the power limits on CPUs. It's almost like they deserve the walled garden Apple gives us without any knobs or controls.


There's a reason they eat up all the marketing around the M1 as being magic.


The 7700/7900/7950 have a 65 watt TDP, idle much lower, and can be underclocked if you want lower. I certainly wouldn't consider 65 watts as "power hungry as heck".

Did you by chance read an Intel i9 review? Yes, Intel's going crazy on the power front trying to match AMD's performance, which of course led to AMD pushing it a bit as well. But the new non-X AMD CPUs are quite power efficient for the performance.

If that's still too much, the M2 or M2 Pro in a Mac mini is rather efficient on power and still performs quite well.


> The 7700/7900/7950 have a 65 watt TDP

only if you disable boost. 88W PPT for AMD's "65W" CPUs at stock.

not a fan of the "PPT" terminology, it's pretty deliberately designed to mislead people and give AMD a mental "cheat factor" in these comparisons.

And no this is not something that "everyone did", in the 5960X/7700K days Intel's TDP values were expected to cover boost clocks and AVX as well, it's only 8700K/9900K/etc where Intel started to play fast and loose with their TDP figures. Which was after the introduction of Ryzen and the "PPT" terminology... in those days AMD pulled more than Intel's chips and they needed a way to hide it. Then they took the lead on efficiency.

kinda like TSMC naming their node "7nm" and that resulting in huge mindshare problems for Intel even though 10ESF/Intel 7 (third-gen 10nm) was actually better than N7. Or Samsung 8nm not even being as good as first-gen Intel 10nm, lol. So Intel had to rename their nodes too. A few bad actors ruin it for everyone.

88W is the number that matters for your load power measurement comparison - again, unless you disable boost.

The "X" chips are also clocked higher... 7950X would be 170W with boost disabled and runs 220W at stock.


Note that a typical 27" monitor uses 80 Watts.

So yeah, 100W or 65W TDP is not much at all.


From what year? Maybe 2010? The type that would feel pretty warm when touched, especially on the top edge?

I have a 27" Dell U2719DC from 2 years ago, pretty average monitor, 2560x1440, includes a USB hub, and has a USB-c ports for charging other widgets.

On a bright screen, like full screen tab of HN I get 22 watts. If I have a darker screen like anything full screen in dark/night mode (white text on black background) I get 20 watts. When I touch the screen or top edge it's pretty close to room temperature.

To be fair my work configuration uses two of the 27" monitors.

I'm measuring with a kill-a-watt, which has a decent reputation for accuracy.


Go back another five years.

https://everymac.com/monitors/apple/studio_cinema/specs/appl... the 30" Cinema Display from 2004 consumed 150W. The IBM "Big Bertha" T220 in 2001 at 20" consumed 111W. So that's the era when I'd expect a 27" to consume 80W or even more.

By 2010 some 27" displays like the Sceptre X270W consumed less than 45W. https://images.anandtech.com/graphs/apple27inchcinemadisplay...


The UP2817Q actually runs at 90W (but it's a 600 full / 1000 area nits HDR display).


Have you looked at the power draw of the non-X variants of AMD's cpus? It might be closer to what you are looking for.


A dual Xeon is the worst for idle power consumption, which is where a personal media server spends most of its time. Just buy an old Intel micro PC, which is good on idle power.


How much power do you need? You can buy ARM: either an SBC if you don't need performance at all, or an Apple mini if you do.


There are mini NUC-like computers that use mobile chips, and those tend to go down to 15W.



