Ryzen has been more power friendly than Intel Core for a while now. I don't think consumers care much about eco friendliness, though, which is a shame. I suppose it could always be sold as "doesn't heat up the room as much", but anyone with a dedicated GPU will have a computational space heater sitting next to them regardless these days.
Gaining popularity in the ever-shrinking market of PCs is nothing to brag about. I'd wager it's still not as popular as you make it out to be. Everyone just buys laptops nowadays, and those who do buy PCs get full-sized towers with discrete GPUs for gaming, not small form factors with APUs. That's more of a niche of a niche.
That's why AMD hasn't focused too much on APUs for desktop PCs. It's not a big market. Their biggest sellers are APUs for notebooks and consoles, CPUs for datacenters and desktops, discrete GPUs for ... everything.
It's been an "ever shrinking market" since the 2000s, if you believe the news.
One thing to consider: in the consumer market, machines have gotten good enough that most people don't need a new one for 5+ years in terms of performance. Especially for casual household tasks.
No, you are right. I think this APU is basically a spun-off mobile processor. The technology comparison would still translate, though, since with laptops battery life and weight matter.
However, here we're talking about the desktop market, aren't we? And not every desktop PC has to be a gaming-capable machine. Power savings will probably be even more relevant for organizations running hundreds of office machines and terminals.
>And not every desktop PC has to be a gaming-capable machine.
Yes, but desktop PCs not meant for gaming aren't really selling in volume nowadays; they're a tiny niche being eaten by notebooks and NUCs (also a niche), a space Intel has dominated, having had CPUs with iGPUs (basically APUs without the marketing) since ... forever.
>Power savings will probably be even more relevant for organizations running hundreds of office machines and terminals.
When was the last time you saw organizations buy PCs for their workforce? In volume, I mean, not just one PC for Bob who needed a CAD workstation, because then Bob most likely isn't getting an APU for CAD but some workstation with a discrete Quadro GPU.
Organizations mostly transitioned their office machines to notebooks a long time ago, even more so after WFH became more popular and there was less need for PCs tied to a fixed location. PCs that aren't for gaming or heavy-compute workstations aren't big sellers anymore.
Yeah, I'm sure governments do buy some amount of desktop PCs because they don't intend for their workforce to work anywhere other than a specific office, but you also can't tell me with a straight face that selling PCs to the DMV, FBI and other bureaucrats represents a big, lucrative market that was waiting for the APU revolution.
PCs are a low selling niche now. APUs won't move the needle on that, that's why AMD ignored it for so long. End of story.
Economics matter. People can value a lot of things, but if there's too few of them, then the market won't bother catering to them. See small smartphones.
For mobile processors, it's one of the first things I look at, though. And like you and others said, I don't just look at the TDP but also rely on the big testing sites for load heat tests.
Or, at least, I did until the M series processors came out and you could get a passively cooled laptop for a decent price.
Maybe not everyone, but I definitely care. Lower consumption means smaller PSU and cooling requirements, and fewer issues during heatwaves (no AC). Furthermore, my computer corner with a fair bit of electronics is served by only one electrical socket with a bunch of extension cords that I really would rather not overload.
I think that yes, they're more power friendly when doing work, but consume more power at idle than Intel CPUs. At least that used to be the case; I don't know if it still is.
As for eco-friendliness... it's something I now consider when buying electronics but more from an electricity bill point of view.
Most of AMD's desktop processors for the past several generations have had poor idle power because they're built using multiple chiplets with a cheap, power-hungry interconnect. The 8000G series (and earlier -G parts) are monolithic laptop processors re-packaged for the desktop socket, so they are unaffected by the main cause of poor idle power in AMD's main desktop processor product line.
Also, most CPUs spend most of their time idling or at very low load, not belching hellfire upon the scorched earth. Efficiency at idle/low load is far more significant than efficiency at flame-belching where power consumption over time is concerned.
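If you want to sanity-check idle draw yourself on Linux, the kernel's powercap framework exposes the RAPL energy counters on both recent Intel and AMD Zen parts. A minimal sketch in Python, assuming the package domain shows up at the usual path (check /sys/class/powercap on your machine; usually needs root):

    # Estimate average package power draw from the RAPL energy counter.
    # Note: the counter wraps at max_energy_range_uj; ignored here for brevity.
    import time

    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package domain

    def read_uj():
        with open(RAPL) as f:
            return int(f.read())

    start = read_uj()
    time.sleep(30)                       # leave the machine idle meanwhile
    joules = (read_uj() - start) / 1e6   # microjoules -> joules
    print(f"average package draw: {joules / 30:.2f} W")

Run it while the machine idles and you get a number you can actually compare across boxes, unlike TDP.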
I recall this blog post[1], discussed here[2] at the time, about how a SATA chip driver prevented the CPU from reaching sleep state C6, instead keeping it at C3, leading to higher power consumption.
Switching the SATA card to one with a different chip fixed that.
There were other surprising factors which affected power draw.
My takeaway was that as in real life, sleeping can be hard.
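Related: you don't need the full blog-post setup to check this on your own machine. Linux exposes per-core idle-state residency through cpuidle sysfs; a deep state (C6 etc.) stuck near zero time hints that something is blocking it, like the SATA controller in the linked post. A rough Python sketch (state names and numbering vary by platform):

    # Print how long cpu0 has spent in each idle state since boot.
    import glob, os

    for state in sorted(glob.glob("/sys/devices/system/cpu/cpu0/cpuidle/state*")):
        name = open(os.path.join(state, "name")).read().strip()
        usec = int(open(os.path.join(state, "time")).read())
        print(f"{name:>10}: {usec / 1e6:10.1f} s")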
Oh lol. Opened the links in a new tab but didn't read them before commenting, classic n00b fail. The "visited" look didn't faze me since I had, as mentioned, read it here before.
How big is the difference on a regular setup? And is it the same kind of CPU, with integrated graphics? One of the examples you linked says 7 watts, which is impressive, but I think you could achieve similar optimizations on an AMD system.
Let's say I install Windows or some Linux desktop and don't do any optimizations - how big is the difference?
Be very careful about comparing TDPs, especially across manufacturers. It doesn't really say anything about real power consumption. The TDP difference is so big it probably represents some real difference in power consumption, but actual measurements are necessary to say anything concrete.
While you can't compare TDPs directly, basically all tests of Ryzen processors with 3D cache show them using about half as much power or less than their Intel counterparts.
So this wouldn't surprise me.
Also, chiplet-based Ryzens are notorious for higher idle power consumption than Intel's monolithic dies, and idle is where CPUs spend most of their time in consumer use (web browsing, Netflix, shitposting on reddit, etc.) instead of running at full load all the time.
Synthetic tests that stress everything at full bore are not realistic representations of your average consumer PC usage. Even video games don't cause as much power draw as synthetic benchmarks, which are more akin to power viruses.
In real-life daily driving, Ryzen might still be more efficient than Intel, but with a much smaller margin than such tests might lead you to believe.
I'm currently typing this on an i7-1165G7 running Windows 11, with a few background tabs in Edge, plus Teams and Outlook. CPU usage as reported by Task Manager seems to hover around 7-10%. The CPU clock hovers around 2.2 GHz with regular peaks above 3 GHz.
Is that idle? Not sure I could get this box to any lower usage than this, even though I understand this is different from having all cores pegged at 100%.
I agree that we should be comparing real scenarios, but I also notice that my desktop Linux systems tend to idle at <=1% CPU use if I don't have a browser open, so there's probably some genuine room to consider what counts as idle even with an OS running.
Reporting the clock frequency of a core requires running a bit of code on it which necessarily means the core is in an active state while you take the reading, so you will always read something like "2.5 GHz" even if the core does not execute anywhere near 2.5 billion cycles while you do that. Some tools can report metrics like "average effective clock" which count time spent in inactive power states as zero clock.
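If you want a number closer to that on Linux, the APERF MSR only ticks while a core is awake (C0), so delta-APERF over wall time gives an "average effective clock" that counts sleep as zero. A sketch, assuming root, an x86 CPU and the msr kernel module loaded:

    # Average effective clock of cpu0 over one second.
    # APERF (MSR 0xE8) counts actual cycles only while the core is in C0,
    # so time spent in sleep states shows up as zero clock.
    import os, time

    APERF = 0xE8

    def rdmsr(cpu, reg):
        fd = os.open(f"/dev/cpu/{cpu}/msr", os.O_RDONLY)
        try:
            return int.from_bytes(os.pread(fd, 8, reg), "little")
        finally:
            os.close(fd)

    a0, t0 = rdmsr(0, APERF), time.time()
    time.sleep(1)
    a1, t1 = rdmsr(0, APERF), time.time()
    print(f"cpu0 effective clock: {(a1 - a0) / (t1 - t0) / 1e9:.2f} GHz")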
Along with what others have said about comparing TDPs between vendors (or sometimes even between product lines), TDP is about multi-core draw, so you'd also need to normalize for the multithreaded performance difference between the two, not just look at single-thread performance being near equal (that'll be significantly less than 65 watts for each).
I think the 8700G is probably more efficient, but it won't be anything like what 65 vs 181 watts would lead one to believe.
- PL1/PL2 from Intel can be compared to TDP from AMD
- CPU Mark scores alone are the best indicator of what full utilization looks like vs another test
You'd be accurate if both of those held, but you can't really assume either of them. The better comparison to AMD's TDP is probably Intel's 125 W "base power" rather than the 181 W PL1/PL2. And other benchmarks, like encoding that uses all cores efficiently with matrix extensions, show a 25%-30% performance difference. Combine these two facts and it's probably closer to 1.5x as efficient (instead of the 2.5x-3x the raw numbers above suggest), but you'd have to actually measure active power doing the same amount of total work to know for sure. Even ignoring one of the facts, the encoding benchmark differences alone guarantee it'd be less than twice as efficient, not at least twice as efficient.
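To make that back-of-envelope explicit (all four inputs are the rough assumptions above, not measurements):

    # Perf-per-watt from the figures discussed above; purely illustrative.
    amd_power,   amd_perf   = 65.0, 1.0    # watts, normalized MT throughput
    intel_power, intel_perf = 125.0, 1.3   # "base power", ~30% faster MT

    ratio = (amd_perf / amd_power) / (intel_perf / intel_power)
    print(f"AMD perf/W advantage: {ratio:.2f}x")   # ~1.48x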
The higher power consumption on the Intel side is mostly a reflection of the futility of feeding it more power. A better way to compare these would have been to dial back the power levels on the Intel CPU until it was only just as fast as the AMD part, and compare that power level.
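On Linux you can do exactly that from userspace via the powercap sysfs interface, no BIOS trip needed. A sketch, assuming the usual intel-rapl package domain path, where constraint 0 is the long-term (PL1-like) limit and constraint 1 the short-term (PL2-like) one (check the constraint_*_name files to be sure):

    # Cap the package power limits, then rerun the benchmark and compare.
    # Requires root; values are in microwatts.
    PKG = "/sys/class/powercap/intel-rapl:0"

    def set_limit_w(constraint, watts):
        with open(f"{PKG}/constraint_{constraint}_power_limit_uw", "w") as f:
            f.write(str(int(watts * 1e6)))

    set_limit_w(0, 65)   # long-term limit: 65 W, matching the AMD part's TDP
    set_limit_w(1, 65)   # short-term limit: a flat 65 W cap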
> But when integrated graphics push forward, it can open up possibilities for people who want to play games but can only afford a cheap desktop
Or those who play on recent handhelds. The Steam Deck with its integrated RDNA2 GPU is very capable even with modern games. Sure, you can't crank it up to ultimate quality with hundreds of fps - but it's enough, and above the "cheap" level of performance.
Also, looking at Geekbench results, it basically matches the M2 Max in a MacBook Pro.
There's either a mistake, or perhaps this is just a bad metric.
Currently the most powerful AMD iGPU is the Radeon 780M (found in 7840U/HS and 8700G CPUs). Judging by Notebookcheck's results, the M2 Max GPU has up to 2× the fps in Borderlands, 2.5× the fps in Witcher 3, and 3× the fps in Shadow of the Tomb Raider.
As for synthetic benchmarks, the M2 Max GPU has 4–6× the fps of the Radeon 780M in GFXBench. And the RDNA3-based 780M already has twice the raw compute performance of the Steam Deck's RDNA2 GPU.
Unfortunately, GPUs in handhelds are always severely underpowered.
At least historically, handhelds can't be too expensive if they want to achieve wide success.
The M2 Max isn't available on the open market (as a standalone part), but if it were, it alone would likely cost noticeably more than what a whole handheld can afford to cost.
Sometimes even lower with upscaling. But yes, one "advantage" of handhelds is that, thanks to their tiny screens, they can run at lower resolution without too much visible degradation.
Can these integrated graphics units (APUs) work similarly to Apple's M series for generative AI inference, combining the GPU and system memory, and give AMD a competitive advantage?
Anything without CUDA is not an existential threat, but anything that supports PyTorch and provides lots of RAM at a low price is a monopoly-profits threat.
How does the memory bandwidth of the M2 Max compare to that of the Ryzen 8700G? 400 GB/s vs 80 GB/s? Exactly my point. 400 GB/s approaches lower-end graphics card territory (e.g. a 3060).
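That bandwidth gap translates almost directly into tokens per second for single-stream LLM inference, since generating each token streams the entire weight set through memory. A crude ceiling estimate (the model size here is an illustrative assumption):

    # Memory-bound LLM inference: rate <= bandwidth / model size,
    # because every generated token reads all weights once.
    def max_tokens_per_s(bandwidth_gb_s, model_gb):
        return bandwidth_gb_s / model_gb

    model_gb = 3.9  # e.g. a 7B-parameter model at ~4.5 bits per weight
    print(f"~80 GB/s (dual-channel DDR5): {max_tokens_per_s(80, model_gb):.0f} tok/s")
    print(f"~400 GB/s (M2 Max class):     {max_tokens_per_s(400, model_gb):.0f} tok/s")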
Would AMD be able to produce a design with unified memory like that of Apple's M series, i.e. a Ryzen with integrated GPU cores _and_ unified memory?
If they do, it will open up a lot of interesting doors.
As good as Apple's M series CPUs are, one cannot (reasonably) simply build server rigs with them and exploit all that power...
Now that's annoying. That specs page for the 8700G used to say "ECC Support: Yes (Requires platform support)" three weeks ago. They've since removed it.
Even weirder is that AMD's specification pages stated for about a year and a half that both Rembrandt (Ryzen 6000 series) and Phoenix (Ryzen 7040 series) support ECC.
Then suddenly, some time in Q3 2023, they removed ECC from the specifications.
Because Ryzen 8000G is just Phoenix or Hawk Point (AMD's information about this, i.e. about the speed of the NPU, has been contradictory) in another package, I was initially surprised that they had decided to support ECC in it, but now I see that they have reversed the decision again.
Because people get boards that do not support it then complain to AMD.
Easier to just not claim it's there, like AMD used to.
That said, thanks to AMD UEFI modules always exporting their settings databases even when the vendor didn't provide options to configure things, you can always try to enable ECC (and various other things - beware of possibly damaging the hardware!).
The PRO parts are "certified" to have ECC working.
Essentially it's binning - AMD takes a portion of the bins and runs them through more verification steps, including ECC tests. The rest don't get run through ECC testing but are the exact same parts, and - unlike Intel - AMD does not lock ECC out on them.
The CPUs are physically the same except for branding details.
This is (slightly) wrong. ALL Socket AM4 Ryzen CPUs without an iGPU have working ECC support.
ONLY AM4 Ryzen APUs (i.e., CPUs with an iGPU) with the "PRO" infix in their name have working ECC support.
However, ALL AM4 platform ECC support also depends on the motherboard vendor/firmware. There are boards on which ECC simply will not work, no matter if your CPU and DIMM combination actually supports it. You have to check the mobo spec sheet to make sure it does - ASUS and ASRock, for example, support ECC on most of their boards with a suitable selection of CPU/APU and DIMMs. (A quick way to check that ECC actually ended up active is sketched after the examples below.)
To illustrate the CPU situation with a few examples:
- Ryzen 5 3600 - proper, working ECC support possible.
- Ryzen 7 5800X - proper, working ECC support possible.
- Ryzen 9 3900X - proper, working ECC support possible.
- Ryzen 5 5600G - no ECC support possible.
- Ryzen 7 5700G - no ECC support possible.
- Ryzen 5 PRO 5650G - proper, working ECC support possible.
- Ryzen 7 PRO 4750G - proper, working ECC support possible.
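And on a combination where everything lines up, you can confirm under Linux that ECC actually ended up active via the EDAC subsystem. A quick sketch (assumes the relevant EDAC driver, e.g. amd64_edac, is loaded):

    # List EDAC memory controllers; an empty result usually means ECC is
    # not active (or the driver isn't loaded), regardless of what the
    # CPU/DIMM combination theoretically supports.
    import glob, os

    mcs = glob.glob("/sys/devices/system/edac/mc/mc*")
    if not mcs:
        print("no EDAC memory controllers found - ECC likely inactive")
    for mc in mcs:
        ce = open(os.path.join(mc, "ce_count")).read().strip()
        ue = open(os.path.join(mc, "ue_count")).read().strip()
        print(f"{os.path.basename(mc)}: corrected={ce} uncorrected={ue}")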
Apparently this might be specific to AM4 iGPU models?
I've seen suggestions that for the PRO parts, the memory controllers were redone. It wouldn't surprise me if the story is that Cezanne had a design error resulting in inoperative ECC, especially given that, unlike some other models in the series, they are not chiplet-based.
With APUs, you must buy the PRO SKUs for ECC support. I specifically had to get the PRO 5650G (as opposed to the regular 5600G) for my home server for this exact reason.
Just curious, but why opt for an APU in a server config? They tend to be marketed towards gamers on a budget, so not including ECC support isn't that big of a deal.
Still need video output to set up the system and for occasional maintenance. Additionally, I host a media server that needs to transcode videos. The GPU in the APU is perfect for these tasks since it both costs less and draws less power than a discrete card.
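For what it's worth, the usual transcode path on these iGPUs under Linux is VAAPI through Mesa; media servers like Jellyfin wire it up for you, but a bare-bones hardware H.264 transcode looks roughly like this (render node path and filenames are assumptions for illustration):

    # Hardware H.264 transcode on an AMD iGPU via VAAPI.
    # /dev/dri/renderD128 is the usual render node; check your system.
    import subprocess

    subprocess.run([
        "ffmpeg",
        "-vaapi_device", "/dev/dri/renderD128",
        "-i", "input.mkv",
        "-vf", "format=nv12,hwupload",   # upload decoded frames to the GPU
        "-c:v", "h264_vaapi",            # encode on the iGPU
        "output.mp4",
    ], check=True)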
Not being able to beat an RX 570 isn't something to brag about. Those cards go for under $50 used and can be undervolted pretty easily.
Just having a hard time figuring out why someone wouldn't just go with a 7700.
It would be interesting to revisit this review for the forthcoming Strix Halo, particularly for ML tasks on relatively large memory sizes, which are prohibitively expensive for GPUs.
Not sure how well this will work with Linux. I had an AM4 "5XXXG" CPU and there were constant freezes. The Linux kernel does eventually catch up, but it takes time.
I still have that CPU and I remember the freezes. They affected Windows too; the cause was the fTPM, and you needed to update your firmware or disable fTPM.
I just installed Ubuntu on a Ryzen 7950X, which has an RDNA2 iGPU. Since Ubuntu delivers both Chrome and Firefox through snap, both browsers are completely broken with hardware acceleration enabled.
With Debian, which delivers Firefox from deb packages, everything works great...
https://bugs.launchpad.net/ubuntu/+source/firefox/+bug/20045...
I have a variety of AMD desktops with XXXXG chips and laptops with AMD integrated graphics.
Hardware video decoding doesn't work in most cases, neither in Chrome nor in Firefox.
My next desktop and laptop will have Intel integrated graphics - the only option that works OK with Linux + Wayland Plasma + Chrome.
> The Linux kernel does eventually catch up, but it takes time.
> 65 vs 181 watt

Now that is impressive!