Inside the Steam Deck's APU (boilingsteam.com)
321 points by ekianjo 7 months ago | 227 comments



The Steam Deck works so well because Valve spent a LOT of effort fixing AMD, Wayland, and PipeWire issues for their handheld. Some of that then trickles down to other machines; other parts don't (like sleep and audio). For example, they have a proper filter chain for their speakers and microphone.

Since recently buying an AMD-based laptop, I've come to realize how much better Intel's software support is, both on Linux AND on Windows. And that pattern holds all across AMD's product lines.

For example, they pressured all vendors to drop S3, dropped it from Phoenix and went all in with Microsoft's s2idle without any clear way to support it. As a result you have multiple vendors with half working idle implementations, overheating and other bugs.

Intel has also vastly surpassed AMD in their ML stack, even though their GPUs are less powerful.


> For example, they pressured all vendors to drop S3, dropped it from Phoenix and went all in with Microsoft's s2idle without any clear way to support it. As a result you have multiple vendors with half working idle implementations, overheating and other bugs.

Are there still Intel parts with working S3? Linux seems to think that my 11th gen Intel laptop supports it, but it doesn't work (it hangs on going to sleep). I haven't figured out how to coax Windows into using that instead of s2idle. This particular model doesn't offer a BIOS toggle for it, as I hear was the case on some ThinkPads.
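
On the Linux side, a quick sanity check is to look at which suspend variants the kernel thinks the firmware offers; the bracketed entry is the one that will be used. This only shows what is advertised, not whether it actually works:

    cat /sys/power/mem_sleep                 # e.g. "s2idle [deep]" - "deep" is S3
    # select S3 for subsequent suspends, if "deep" is listed at all (needs root)
    echo deep | sudo tee /sys/power/mem_sleep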

I'm also not convinced the s2idle situation is that much better on the Intel side. I sometimes use Windows on my work laptop and let it hang around suspended when I'm done for the day. Yesterday evening (Sunday), after two days of doing nothing, it figured it would be as good a time as any to turn into a jet engine. I also sometimes find it is pretty warm coming out of my backpack after a 45-60 minute commute with a long portion of walking, even when it's close to freezing outside (we've had a few weeks of 0-2º days where I live). This doesn't seem like an isolated thing: see all the people complaining about other manufacturers' laptops not going to sleep properly while being carried around closed in bags.

---

edit: found a way to enable S3 on windows. It goes to sleep but doesn't wake up. It actually seems to mess up the PC so much that after a forced reboot the fan goes crazy for a few minutes before showing the UEFI logo.


Sleep is so broken on all laptops these days that I'm seriously considering buying an old laptop, because all the nice things you get with a modern powerful laptop aren't worth it if the battery is randomly empty and you can't turn it off.

It's astonishingly, inexplicably broken, and the industry completely ignores it.

I have 0 idea why.


I can't decide if I'm incredibly lucky or what, but I've never had broken sleep in the last decade on Linux.


It is very hardware dependent. So you have to get ~lucky with your machine and then you're set, basically. (Though it's a weighted distribution; ex. ThinkPads have better than average odds)


> It is very hardware dependent.

NVIDIA GPUs definitely make things more difficult, at least in the suspend-then-hibernate case. Here's where I reported a hacky workaround:

https://forums.developer.nvidia.com/t/systemds-suspend-then-...

See also:

https://github.com/systemd/systemd/issues/27559

NVIDIA kinda sorta just doesn't give a shit, unfortunately.


For me, that has technically been the case on Linux, too, including on the laptop I was referring to. On Windows, it happens that the screen will be garbled on wake, but I don't use Windows enough to care.

The only issue is that s2idle drains the battery like crazy compared to S3. But I guess "it's not a bug / works as intended".


> The only issue is that s2idle drains the battery like crazy compared to S3.

Properly working s2idle should use as little power as S3, but it seems to be much more common with s2idle that some piece of hardware is left enabled while it should have been disabled to save power.


> Properly working s2idle

Does anyone actually have that? I'd really love to hear with which hardware/software!


Well, it seems that in theory it should exist; there's an amd_s2idle script that tests various things. I can get the machine into what seems to be sleep, but if you listen carefully to the fan you will hear that it's constantly restarting. The script then reports that most of the time is actually spent in user space, when it should really spend over 90% in actual idle. Unfortunately no one seems to have a good answer for how I can find out what causes the user-space sleep inhibitions. It's not a wakeup.

AMD has this https://gitlab.freedesktop.org/drm/amd/-/blob/master/scripts...
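
A rough sketch of checking, after resume, whether suspend actually reached a low-power state; the paths depend on kernel version and platform, and the amd_pmc debugfs node in particular may not exist on every machine:

    # counters the kernel keeps for suspend attempts
    cat /sys/power/suspend_stats/success /sys/power/suspend_stats/fail
    # on AMD laptops with the amd_pmc driver, time spent in the deepest state (root + debugfs)
    sudo cat /sys/kernel/debug/amd_pmc/s0ix_stats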


This is interesting. On my HPs, both Intel and AMD, the fan-on-while-it-should-be-sleeping thing only happens with Windows. Under Linux, the fan will turn off even if it was spinning, say if I put it to sleep during a compile and the laptop is hot.

On Linux, the fan never turns on and the pc never gets warm while asleep. Windows sometimes does something that requires the fan to spin like crazy, and the PC is usually somewhat warm to the touch.


Sleep works across all of my homebrew PCs, AMD and Intel, but most have broken login managers after sleep.

It's probably trivially solvable, but I haven't cared enough to fix it when I can just kill/restart the service.


I see you didn't have one of the modern laptops that only support s0idle (aka. Modern Standby) and have completely removed S3.


You can just hibernate though. With fast SSD/NVMe storage it's nearly as quick as S3 standby was. (The one pitfall is that hibernation might require you to disable Secure Boot if you're on a recent version of Linux, due to lockdown-mode shenanigans.)
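
A minimal sketch of checking that hibernation is even viable before relying on it; this assumes a swap partition or swap file at least as large as RAM and a kernel built with hibernation support:

    free -h                 # how much RAM has to fit into swap
    swapon --show           # is there a large enough swap area?
    cat /sys/power/state    # "disk" should be listed if the kernel supports hibernation
    sudo systemctl hibernate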


It depends on other things. For me, waking from S3 has always been nearly instant. Booting from hibernation is nowhere near that. Just the freaking UEFI takes ages to initialize.


If I'm putting my laptop in a laptop bag hibernation is just fine. I couldn't care less that it will have to go through UEFI and OS boot again, I just want it to be off. And for short pauses the CPU idling suspend is okay.


In practice, that's what I also ended up doing.

But there's no denying that my quality of life took a hit by this "improvement": I now have to go out of my way to choose hibernation or standby, depending on how long I expect to need it to sleep, instead of just closing the lid.

Under Windows, there's also the fact that you have to go out of your way to enable hibernation.
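
For Windows, if I remember correctly it's a single command from an elevated prompt (after which hibernate can be added back to the power menu):

    powercfg /hibernate on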


Shouldn't any Chromebook come with good sleep/hibernate support, since Google has some quality control and its own Linux kernel?


I have a similar issue with my Dell XPS: if I leave it suspended without being plugged in for a few days, the battery drains, but what is more frustrating is that plugging it in to charge from that state seems to boot it to a BIOS screen and turn it into a space heater.

It's caught me out a few times where I haven't noticed, and frankly it seems like a safety hazard, as it'll get very hot.

(If anyone has tips to prevent this, that would be great; running Fedora.)


I have an Intel ThinkPad X1 Carbon, and S3 sleep is catastrophically broken:

Putting the laptop into S3 sleep puts the M.2 SSD into some kind of sleep mode, and when the system wakes back up it fails to wake the M.2 SSD. That sleeping M.2 SSD mode is persistent across reboots, so upon reboot the laptop fails to boot because it can't find its main storage. The only way I've found to fix this is to pull the SSD and put it in another working machine.

S0ix is also completely broken under linux with it draining the battery 30% in around 4 hours.
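
For what it's worth, a crude way to put a number on the drain is to snapshot the battery's reported energy right before suspending and again after waking; BAT0 is just whatever your battery shows up as, and some batteries expose charge_now instead of energy_now:

    cat /sys/class/power_supply/BAT0/energy_now   # or charge_now, depending on the battery
    # suspend for a few hours, wake up, read the same file again and compare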


> S0ix is also completely broken under linux with it draining the battery 30% in around 4 hours.

Does it work better under Windows? On both my machines, one Zen 3 and one Intel 11th gen (but otherwise almost identical HP laptops), I don't see any difference in battery drain between the two.


It may; the laptop came pre-installed with Linux, so I don't have any license to test with.


In my brief experience and understanding, you may find the S3 sleep issues are tied to TPM security, not so much an "AMD" or "Intel" fault.


Interesting, can you elaborate on that? The behaviour I observed is that going into S3 behaves exactly the same as if you try to unload and reload the amdgpu driver.


Alder/Raptor Lake (-P/-U) definitely supports it, firmware willing (e.g. LG).


I'm really confused by this post, as AMD chipsets are the de facto recommendation in any Linux laptop discussion forum. AMD graphics cards are also much better on Linux, so it almost seems like you got things in reverse?


I personally find AMD GPU support under Linux to be absolutely excellent. Only thing I don't like is the "secure processor" which requires signed and sometimes encrypted firmware to run. The older cards don't have this.


I have an RX 6600 XT and it was a PITA in many instances. Sometimes a black screen at boot which resolves after GDM is started. Freezes when waking up, an issue that comes and goes on Arch with each update cycle. Bizarre frame drops in games out of nowhere: one day I had a stable >60fps, but the next day, after a reboot with no change to the system, it stayed at ~15fps. This was a huge pain for almost a year after launch, but I think it is stable enough by now.


At least on iGPU business ThinkPads, Intel systems still seem to be more robust with Linux. The difference isn't necessarily in graphics, just fewer bugs overall, ranging from wireless (BT/WiFi) to monitor/dock compatibility, etc.


And in contrast, I find Intel support to be very buggy, certainly on Linux.


That's quite interesting to know. Which laptop/AMD cpu did you purchase?

I may have to buy a new laptop in the near future. Linux is my default OS. I've been happy with my existing Ryzen 7 3000 series laptop for 5 years or so. I want to know if Intel is doing better these days.


Take a look at the 'Ubuntu Certified' laptops. Most Dell, HP and Lenovo models should be supported. Here is the full list: https://ubuntu.com/certified/laptops?q=%C2%B4&limit=228&cate...


I just got a ThinkPad T14s Gen 4 with an AMD chipset and it's brilliant! The chipset works out of the box with great performance and battery life (tested on Arch and NixOS), and the 16:10 OLED semi-matte screen is a must-have upgrade (better than any other laptop screen on the market imo). My only gripe is the fingerprint-magnet body material, which is annoying to care for.

I'm running NixOS and Plasma Wayland without any issues. Even fractional scaling works out of the box.


Interesting.. How is the OLED burn in so far? Also does the fingerprint sensor work?


No burn-in that I could notice in 2 months of use so far. I don't think that's a major problem for laptops, tbh. I don't use fingerprint sensors on laptops and ordered mine without one, but AFAIK it does work on Linux.


Not sure. That's the one thing that horrifies me about OLED laptops. The taskbar always being there is a recipe for burn-in.


Take a look at this amdgpu issue tracker[1]; you will find a lot of devices across vendors having the same issue. And even if you get s2idle to work, which people with my machine do by disabling wakeups with something like the following, it only looks like the system is sleeping: it will often still spend most of the time in userspace, which leads to heat issues if you are foolish enough to throw it in your backpack at that point:

    # disable every ACPI wakeup source that is currently enabled, except the
    # sleep button (SLPB) and the USB controller (XHCI); writing a device's
    # name back to /proc/acpi/wakeup toggles its state (needs root)
    for i in $(grep enabled /proc/acpi/wakeup | awk '{print $1}'); do
        case $i in
            SLPB|XHCI) ;;
            *) echo $i | tee /proc/acpi/wakeup ;;
        esac
    done
[1] https://gitlab.freedesktop.org/drm/amd/-/issues/?sort=create...
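
To see what you are starting from, and to re-arm a device later (writes to this file just toggle the named device's state), you can inspect the table first:

    cat /proc/acpi/wakeup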


Framework is very well supported on Linux, and has both Intel and AMD builds. Highly recommended.


How is Framework power management on Linux? Does it need a lot of tweaking in mainstream distributions?


Not as good as when running Windows (when has there ever been a laptop with Linux power management as good as Windows?), but a better out-of-the-box experience than most laptops I've tried.


If you're interested in ASUS laptops (they have some very good and powerful models), you can check out this website: https://asus-linux.org/


Bought one beefy Asus ROG model (M16, 2021) and it was such poor quality!

Windows had BSODs daily (Reddit suggested replacing the WiFi card, which helped). Sometimes the audio output became muffled.

After a few months the body coating started to peel off in places.

I wish I had bought something else!


Wasn't ACPI always a shitshow? I remember https://www.linuxjournal.com/article/7279 from back then. Even S3 was shitty, and s2idle is just a "software wrapper" around an already broken thing. I think the only way to fix this kind of power management issue would be to replace the whole system with a more modern interface, which would probably be extremely hard to do, since every hardware vendor and OS would need to change a lot of things.

Heck, we actually have problems with Modern Standby on Windows on Dell Precision tower workstations; we even tried Windows 11. Sometimes it will just fail: it tries to go to standby, fails, and resumes, over and over in a cycle, or it won't resume at all and you just hear the power cycling. It's the biggest hibernation shitshow I've seen so far; the first thing I do on these machines is disable it. It's the same thing as https://learn.microsoft.com/en-us/windows-hardware/test/weg/... aka fake S5, which is just stupid on NVMe disks. Software layered on top of ACPI does not fix it, it makes it worse.


I have a Zen2 desktop and Zen3 laptop, and power save mode / suspend works flawlessly. Is there an errant service/app on your system which disables suspend?


Seems to me you bought a bad laptop. I bought a cheap generic Lenovo and the only thing that is not working is the fingerprint reader, which is probably for the best.

Even the touchscreen/stylus works great, and those aren't my words but those of the family member who gifted me the laptop (for life reasons irrelevant to tech).


Sounds like Magic Leap "overspent" on a custom chip design and fab, like they overspent on everything else, and ended up with a fraction of the sales they expected.

And then Valve "underspent" (although perhaps more of an MVP) by taking an existing good-enough chip, perhaps able to get very good yield out of fab (at a time when fabs were costing a fortune), and found yet another corner to cut on a device that is overall pretty low build quality, to hit an aggressive price point. I love my Steam Deck, and I loved the low price, but the build quality is "good enough" at best.

Valve have reversed course with the new generation, having a custom chip now that they've proved out the demand, and I expect the device will improve across the board as they take advantage of higher volumes, and perhaps an ability to price a little higher too.

I wouldn't be surprised if Magic Leap also reverse course (or already have done? are they still around?) by moving to much cheaper off the shelf or pre-existing hardware.


How amazing it is to live in a world that has reached such a level of quality that the Deck is labelled as "good enough".

We are really swimming in astonishing objects; even the simplest glass or a ballpoint pen is a marvel.

So much so that it's now the baseline.


The Steam Deck is really the most delightful electronic device I've ever owned. It's incredible.


Right? I don’t consider it merely “good enough”, it’s actually really well-executed and of a surprisingly high quality. Probably the most meaningful tech purchase I’ve made since my MacBook Pro in 2014, in terms of direct positive effect on my life and my time spent with it. Maybe the Fujifilm X-T2 could be a contender there too, but I don’t do a lot of photography, so… But anyways, the Steam Deck hardware, design, and well-integrated OS, is really impressive to me.


> pretty low build quality

At the same time it has very high repairability. I feel like these two metrics are complementary, and I am happy that they went for repairability instead.


> At the same time it has very high repairability

If you exclude the awful glued battery with the audio cable taped on it.


More recent LCD models have the cable on top of the battery, and it isn't glued.


I hear the second Steam Deck is repairable, but the first one is not.

The main things that break, the buttons, need to be factory-replaced and calibrated because the trackpads are on the same boards.


> taking an existing good-enough chip

As Nintendo did with the Switch, which has been an overwhelming success. The Steam Deck is Valve's Switch, both technologically and from a business/marketing point of view.


In other words, Valve pulled a Raspberry Pi.


I don't know much about the Raspberry Pi situation. Are they using existing designs? Are they doing things like fabbing extra cores to increase yields? I assume they're on an older process anyway. I don't necessarily feel Raspberry Pis are low quality – they're low spec, but I believe the ones I've had in the past seemed good, although I guess there's less stuff to be low quality (vs Steam Decks having a large amount of not-great plastic, control surfaces that fail in very noticeable ways, etc).


Raspberry Pi’s original SoC is designed by Broadcom for the set-top box market. Subsequent SoCs up until Pi 4 are of the same architecture design but with the Arm cores swapped out and upgraded.

https://raspberrypi.stackexchange.com/a/563


I believe the story was (at one point at least) that RPi used a Broadcom SoC that had already been developed for some other application, keeping the cost down by using a chip that was already in volume production. I don't believe this is still the case but then again I haven't been following RPi development very closely.

I don't think it indicates any use of inferior materials or build quality per se, just that they designed around existing components to avoid the high initial cost of a bespoke SoC.


Like Valve, they initially repurposed an old SoC that was designed for something else, either set-top boxes or phones, don't remember which. After their success they started doing custom work on the follow-up SoCs.


Yes, the lead of the Raspberry Pi foundation used to work at Broadcom and used a pretty old design that was meant for set top boxes because Broadcom sold it for little money.


Note these chips weren't fabbed with extra cores to increase yields


Nevermind what Nintendo has been getting up to since 1989...


I’ve had the OG Steam Deck and now have the OLED.

Genuinely curious, what do you consider low build quality on them?


Not OP, but the buttons are a bit mushy, the shell a bit creaky and, worst of all, the Steam and Quick Access buttons feel terrible. It's all much more apparent when I pick up my Switch Lite, which itself isn't something I'd consider high build quality. That said, I'm ok with the compromises considering the price point.


Am OP, and yeah this is pretty much it. For me it's the fact that it's a huge chunk of not-great-quality plastic, when all my other devices are aluminium or better plastic. I'm lucky to have had no issues with the inputs, but they are known for having problems.

I'm also absolutely fine with these trade-offs for the price point, and in fact if pushed, I'd probably rather have a £400 device like this compared to a £500+ device with the same spec made of aluminium with slightly better inputs.


Funny that AMD would design a bespoke chip for Magic Leap that sold a tiny number of units, and Valve would ride on their coattails for a game console that undoubtedly sold way more. I wonder how much Magic Leap paid AMD for this design? Guess that explains where some of their billions went.


One possible explanation is that Magic Leap may have significantly overestimated how many chips they needed, and wondered if someone might buy excess inventory… or at least, might buy out their contract.

Consider the first Magic Leap headset. They thought it would do over 100,000 units. It only did 6,000. https://www.engadget.com/2019-12-06-magic-leap-6000-headsets...

I don’t know how the Leap 2 is doing. But in the above example, let’s say AMD wanted a minimum order of 100,000. If you were Magic Leap, you’re running to find a buyer - any buyer. You’re also going to cut the price for your batch as far as necessary to cover for possible increased production costs in the future. Even if you recover just half of your costs, it’s better than being forced to buy chips nobody wants that will quickly go stale.


Thinking about modern chips in terms of units and inventory isn't the best way to do it. Cutting-edge logic isn't quite priced like software, but it's close enough: the first unit costs $10m in R&D and NRE (Non-Recurring Expenses, like mask sets and test tooling) and then the second unit costs $1 or so.

My guess is that Magic Leap, a company infamous for spending profligately, paid for the NRE on this design, and then didn't buy very many of them or pay for exclusivity on it (or let any exclusivity go). AMD is notorious for shopping around designs they've already got ready to ship (probably related to sibling comment's observations about the console graphics market), and Valve runs lean despite not having to, and so this story adds up quite nicely. There may or may not have been some inventory lying around, but the NRE is usually the real story on these things. (It's also possible that the custom Magic Leap sections of this chip have terrible yield -- actually that wouldn't surprise me at all -- and so these were great candidates for die harvest. Which of course is also the sort of thing AMD loves to sell.)
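
To make the "NRE dominates" point concrete with the numbers above ($10M up front, roughly $1 marginal cost per die; the volumes are purely illustrative):

    for units in 10000 100000 1000000 10000000; do
        echo "$units units -> roughly \$$(( 10000000 / units + 1 )) per chip"
    done
    # 10,000 units -> ~$1001 per chip; 10,000,000 units -> ~$2 per chip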


It seems smart to go with the only successful handheld strategy: Nintendo always shops around for the cheapest chip that can get the job done, even if it's many years old, as long as it's cheap, highly available and can run what they need.


The difference is Nintendo builds their own killer apps exclusively for the hardware that they choose, so they can guarantee a good experience regardless of the performance level. Steam Deck doesn't have any native games. It's all games written for other platforms first and if the chip's performance doesn't meet a minimum bar then it just won't work.


It's all games written for Windows, but at this point Valve bankrolls Proton and Wine development so heavily it might as well be an in-house product. Valve would like all games to be written for Linux (case in point: Steam Machines), but the reality is that game devs are slow to adapt and desktop gaming is still largely done on Windows.

With that in mind, proton _is_ the killer app.


Nit: it does have Valve's onboarding game, but that doesn't really change your point.

Also Valve do some steamdeck optimizations for every game via their shared shader cache. That enables it to run a lot smoother than competing portable x86 systems.


> Also Valve do some steamdeck optimizations for every game via their shared shader cache

Isn’t the only purpose of this skipping shader (re)compiling?

As far as I understood it, they did it because compiling shaders both takes long on a handheld device and time is at a premium for handheld players. They don’t want to wait eight minutes for a shader compile.


There are two places where shader precompilation helps:

1. Load times if you’re compiling them on stage load

2. Dynamically on asset load during gameplay.

The latter is one of the biggest causes of stuttering in PC games, and Valve's solution can make a game feel much smoother as a result.


Do you have any idea what the lead time on a project like this would be? Given the timing, with a bad 2019 leading into the pandemic and widespread predictions of severe economic shrinkage, I'm wondering whether what we're seeing is that they had a design in flight but someone panicked that they were going to fall far short of their minimum commitments a second time. In that case, both Magic Leap and AMD would've been pretty motivated to have someone commit to things like fab capacity, and Valve's earlier work at least makes shipping in a couple of years seem plausible.


Interesting, why would one part of a chip have significantly worse yield than another?


It's just where the tightest structures (physically, electrically, or timing-wise) are found. The AMD cores and uncore are, I'm guessing, fairly well tuned to the process by skilled people. The bolt-on stuff often isn't so well done, particularly if an end-client project manager or beancounter is driving the schedule. So it's almost certain to happen if the added stuff has very complex paths, and you wouldn't be going to the effort of adding silicon if you didn't need it to do something interesting....

So, no proof, but it wouldn't be any kind of surprise if this were part of the problem.


It's where the cutting-edge work is, or where the analogue engineering is, and as such is very dependent on your simulation work being good enough. Or the designers may have chosen to put less slack there for higher performance at the cost of yield - this is something of a "bet" on the production process, which can turn out well or badly. Or it's where the chip is hottest.


If I remember right, I think Valve has implied that the steam deck arose out of prototypes for a wireless VR headset, so it could be possible there was some early collaboration to that effect, and Valve ended up taking a different path once they realized what the steam deck could be?


> the steam deck arose out of prototypes for a wireless VR headset

Do you have any source for this? I don't recall seeing this kind of info so far.


Not quite what they said, but this article talks about how lessons learned with the Steam Deck will inform future VR devices

https://www.roadtovr.com/valve-working-on-vr-steam-deckard-o...


It seems similar to the PS3 vs Xbox 360 PowerPC story.


Interesting to note that the estimated cost for the APU in the Magic Leap 2 is $136.53.[0]

[0] https://electronics360.globalspec.com/article/20179/techinsi...


So that's roughly the power of a 1995 supercomputer or a medium supercomputer (500th place) in 2007, in the palm of your hand... it's bonkers


Great article, but I wish they had mentioned that the APU isn't in the AR glasses but in Magic Leap's Compute Pack.


Will do!


So what exactly were the Magic Leap cores? They don't look like the GPU or CPU cores...


The original source[0] speculates they're computer vision DSPs, specifically Tensilica Vision DSPs from Cadence.

[0] https://www.youtube.com/watch?v=ERm1StY-4uY


It wouldn't be the first time AMD has bundled some Cadence DSP cores into their designs, they used them for TrueAudio back in the day as well.


Intel also integrates Tensilica cores as sound DSPs (IIRC into the PCH, not the CPU itself). And NXP has some kind of i.MX part that somewhat closely couples Tensilica DSP core(s) to an ARM SoC. So I would say that today probably everybody except TI uses Cadence/Tensilica DSP cores.


I don't think Analog Devices uses Tensilica, and I think NXP still has things in the 56300-derived line shipping, but Tensilica does seem to have soaked up the lion's share of DSP functionality these days.


I meant as a DSP core inside some larger chip, not as a free-standing DSP. It does not make much sense to design a chip that just contains a Tensilica core and nothing else, because you lose all of the customizability, and the market for such a thing probably is not big enough to offset the NRE and licensing costs (obviously, Espressif stuff being the exception, but the RF part in there is a pretty significant part of the design).


That's a fair clarification. The 56000 had a tiny number of SoC-like built-ins (with a 68000), but I don't think they've been sold in decades.


The specs of the Magic Leap 2 say "14 core computer vision processing engine (CVIP)" which matches the 14 cores shown in the photos.


And a follow-on question… could they be turned on and used?


Probably not, these modular bits of silicon are usually tied to eFuses that are blown at the factory to irrevocably disable them in SKUs that aren't supposed to have them. Many years ago it was sometimes possible to re-enable disabled cores by poking the right registers, but manufacturers learned their lesson and now they make sure that silicon stays dead.

Besides, even if you could enable these DSP cores you'd be hard pressed to do anything useful with them, I don't believe there's any public documentation or tooling whatsoever for Cadence DSPs.


How do eFuses work to ensure it's physically irrevocable?


On older AMD hardware it's controlled by lockdown registers which are written to by the SMU (Lattice Mico32 CPU) just after reset is deasserted.

The SMU reads the eFuses to determine what to lock down. So you can interrupt this process via JTAG and re-enable all the locked down cores. There is a window of a couple of milliseconds after de-asserting reset to halt the SMU.

I have no idea if this still works on newer chips with the AMD Secure Processor. There was some mention of a JTAG password in the leaked AMD documentation on the Web.

I'm not into that stuff anymore, I'm playing around with FPGAs now. No more locked down security processors to get in your way.


A paper: https://bunniestudios.com/blog/images/efuse.pdf

Most chip fuses are "antifuses": you pass high current through them, and instead of evaporating a fusewire like in normal fuses, it causes physical changes that reduce resistance.


E.g. if the fuse blows the line that powers up that part of the chip, then it simply can't power up after the fuse has been blown and is thus unusable.

Going in and fixing that blown trace inside of an IC is beyond the capabilities of almost everyone because decapping a chip renders it inoperable.


I think decapping doesn't necessarily render the chip inoperable, there's some research around attacking tamper resistant hardware and probing the decapped chips while powered on. But you have to be more careful about it of course. Also do all techniques require decapping? Seems in principle you could navigate by x-ray and fix traces disconnected by electromigration efuses using ion beam/implantation, possibly through the packaging.

edit: here's someone talking about running decapped chips. https://electronics.stackexchange.com/a/400899


I love when chip people pipe up on HN, because it reminds me that we absolutely do magic with silicon and most of us don't think twice about it.


Never let Cunningham's law get in your own way ;)


If you have a ion beam you can probably afford to just buy working and supported cores.


> possibly through the packaging.

Not if somebody's put a metal layer on top of it. You can only meaningfully fiddle with the top surface of a decapped chip.


> You could try throwing a bundle of chips in boiling acid and see what comes out. But my guess is the bond pads will not be usable anymore.

Decapping with extreme prejudice.


How do they take those CPU pictures? Are those "delidded"? False color xrays?


AFAIK it's the other side of the die from a "delidded" view. So, they desoldered the die, then etched/ground at it until the surface coating was gone, and just the semiconductor structure remains. Due to the sub-micrometer regular structures, you get interference patterns like these. I.e., the color is "structural".


Very carefully delidded and then with a nice metallurgical microscope. And then usually with focus stacking.


Fritzchens Fritz uses a normal mirrorless camera and macro lens most of the time for the pictures you see everywhere (he posts them under Creative Commons to Flickr; there's an automated pipeline from there to Wikimedia Commons, which is why virtually every Wikipedia article on a CPU or GPU features at least one of his works).

Only the clean head-on die-shots (he also does these) need a metallurgical microscope - those are hard to find and very expensive, they're also highly proprietary optical systems, so accessories and objectives are rare and expensive, too.


Does anyone know what the die space in the top right is used for? Somebody mentioned secure enclave as another component but I doubt it needs a lot of die space.


ACO, the AMD GPU shader compiler from Valve (unfortunately in C++ instead of plain and simple C), is full of hardware quirk mitigations.

I remember having a look at the documentation about all those quirks (it was a text file in Mesa at the time)... and I postponed my alternative to ACO, a SPIR-V to AMD GPU ISA translator... ahem (wishing for the hardware to go through some fixing first...).


Neat. I wonder if these cores are exposed to the system in any way and if there's anything interesting you can do with them?


No, they are not, and they are probably disconnected at the chip level.


My custom elf/linux takes only a few seconds to boot (not a mainstream and massive elf/linux distro which takes forever on "normal" hardware). I don't even compile the sleep/hibernate stuff. If I ever become super messy, I'll use vim session management in the worst-case scenario... :)


Is it running as a general desktop os or is it for some specialized embedded device?


desktop.


What kind of course would I need to follow to understand CPU architecture and more, like the guy in the article?


AMD has almost completely taken over the console market. The PS2 was a MIPS system and the PS3/Xbox360 were PowerPC, but, for the last ten years, Sony and Microsoft have been all AMD. Intel has been out of the game since the original XBox, and nvidia only has the switch to its name. The Steam Deck-style handhelds (like the ROG Ally and the Lenovo Legion Go) are AMD systems.

It’s kind of interesting how they have this hold on gaming outside of conventional PCs, but can’t seem to compete with nvidia on just that one market.


A console wants a competitive GPU and a competitive CPU. Nvidia has the first, Intel has the second, AMD has both. The first one is more important, hence the Switch, and AMD's ability to do it in the pre-Ryzen era. (The original Intel-CPU Xbox had an Nvidia GPU.) Console vendors are high volume institutional buyers with aggressive price targets, so being able to get both from the same place is a big advantage.

For PCs, discrete GPUs are getting bought separately so that doesn't apply. AMD does alright there but they were historically the underdog and highly budget-constrained, so without some kind of separate advantage they were struggling.

Now they're making a lot of money from Ryzen/Epyc and GPUs, and reinvesting most of it, so it's plausible they'll be more competitive going forward as the fruits of those investments come to bear.


For gaming, AMD GPUs are generally better bang for buck than Nvidia - notably, they tend to be about the same bang for less buck, at least on the high end and some of the middle tier. The notable exception is ray tracing but that's still pretty niche.

If AMD gets their act together and get the AI tooling for their GPUs to be as accessible as Nvidia's they have a good chance to become the winners there as you can get more VRAM bang for, again, less buck.

In a market where they have similar performance and everything minus ray tracing for somewhere between 50% and 70% of the competitor's prices it will be pretty easy to choose AMD GPUs.

They already have the best CPUs for gaming and really are positioning themselves to have the best GPUs overall as well.


I also think something important here is AMD's strategy with APU has been small to large. Something that really stood out to me over the last few years is that NVidia was capturing the AI market with big and powerful GPU while AMD's efforts were all going into APU research at the low end. My belief is that they were preparing for a mobile-heavy future where small, capable all-purpose chips would have a big edge.

They might even be right. One of the potential advantages of the APU approach is that if the GPU can be absorbed into the CPU with shared memory, a lot of the memory management of CUDA would be obsoleted and it becomes not that interesting any more. AMD are competent, they just have sucky crash-prone GPU drivers.


> AMD are competent, they just have sucky crash-prone GPU drivers.

I have an AMD GPU on my desktop PC and I also have a Steam Deck which uses an AMD APU. Never had a driver crash on me on either system.


When I run an LLM (llama.cpp ROCm) or stable diffusion models (Automatic1111 ROCm) on my 7900 XTX under Linux and it runs out of VRAM, it messes up the driver or hardware so badly that without a reboot all subsequent runs fail.
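
A workaround sketch rather than a fix for the underlying driver behaviour: check free VRAM first and keep the model's footprint under it, e.g. by limiting offloaded layers in llama.cpp. The binary name, model path and layer count below are placeholders, and the exact flags depend on the llama.cpp version:

    rocm-smi --showmeminfo vram        # how much VRAM is actually free right now
    # offload only part of the model so it stays within VRAM (tune the layer count)
    ./llama-cli -m model.gguf -ngl 20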


You're probably using it for graphics though; the graphics drivers are great. I refuse to buy a Nvidia card just because I don't want to put up with closed source drivers.

The issue is when using ROCm. Or, more accurately, when preparing to crash the system by attempting to use ROCm. Although, in fairness, as the other commenter notes it is probably a VRAM issue, so I've started to suspect the real culprit might be X [0]. But it presumably doesn't happen with CUDA, and it is a major blocker to using their platform for casual things like multiplying matrices.

But if CPU and GPU share a memory space or it happens automatically behind the scenes, then the problem neatly disappears. I'd imagine that was what AMD was aiming for and why they tolerated the low quality of the experience to start with in ROCm.

[0] I really don't know, there is a lot going on and I'm not sure what tools I'm supposed to be using to debug that sort of locked system. Might be the drivers, might be X responding really badly to some sort of OOM. I lean towards it being a driver bug.
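
One low-effort way to narrow it down after a forced reboot is to look at the kernel log from the previous boot for amdgpu resets or OOM reports; a sketch, assuming persistent journald logging is enabled:

    journalctl -k -b -1 | grep -iE 'amdgpu|oom'
    # or, right after the hang, over SSH if the box still responds:
    dmesg --level=err,warn | tail -n 50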


> You're probably using it for graphics though; the graphics drivers are great.

Ah yes. I'm pretty much only using it for games. It does seem that AMD's AI game is really lacking, from reading stuff on the Internet.


I have had a RX580, RX590, 6600XT, and 7900XT using Linux with minimal issues. My partner has had a RX590, 7700XT on Windows and she's had so many issues it's infuriating.


If you’re building a midlife crisis gaming PC the 4090 is the only good choice right now. 4K gaming with all the bells and whistles turned on is a reality with that GPU.


Yup, just did this. Still way cheaper than a convertible or whatever


Especially when most convertibles are slower than a Tesla.


The 4090 is also upwards of $2k. That's more than what I spent on my entire computer, which is a few years old and still very powerful. We used to rag on people buying Titans for gaming, but since Nvidia did the whole marketing switcheroo, Titans are now just *090 cards and they appear as reasonable cards.

My point is that Nvidia has the absolute highest end, but it's ridiculous to suggest that anyone with a budget less than the GDP of Botswana should consider the 4090 as an option at all. For actually reasonable offers, AMD delivers the most performance per dollar most of the time.


Someone really should market a PC directly as a "midlife crisis gaming PC." I laughed so hard at that.


PSA: GeForce Now Ultimate is a great way to check this out. You get a 4080 equivalent that can stream 4K 120Hz. If you have good Internet in the continental US or most of Europe, it’s surprisingly lag-free.


Specifically modern "normal" gaming. Once you get outside that comfort zone AMD has problems again. My 7900XTX has noticeably worse performance than the 1070 I replaced when it comes to Yuzu and Xenia.


AMD tend to have better paper specs but worse drivers/software; it's been like that for decades. At least they're now acknowledging that the state of their software is the problem, but I haven't seen anything to give me confidence that they're actually going to fix it this time.


I'm pretty much all the time on Linux. Have used Windows on my gaming computer mostly to compare performance, out of curiosity.

Generally the performance is at least as good on Linux so I stay there. Never had a driver issue.


The original Xbox was supposed to use an AMD CPU as well, and parts of that remained in the architecture (for example, it uses the HyperTransport bus); backroom deals with Intel led to a last-minute replacement of the CPU with an Intel P6 variant.


So last minute that the AMD engineers found out Microsoft went with Intel at the Xbox launch party [1]

[1] https://kotaku.com/report-xboxs-last-second-intel-switcheroo...


Also, the security system assumes that the system has a general protection fault when the program counter wraps over the 4GB boundary. That only happens on AMD, not Intel.


Wow! Good find!


This also led to a major vulnerability caused by different double-fault behavior between AMD and Intel known as “the visor bug”: https://xboxdevwiki.net/17_Mistakes_Microsoft_Made_in_the_Xb...


Also, AMD is the obvious choice for Valve over something like Nvidia due to AMD having decent upstream Linux support for both CPU and GPU features. That's something Nvidia never cared about.


NVidia has had more reliable and, at least until recently, more featureful drivers on Linux and FreeBSD for decades. They're just not open-source, which doesn't seem like it would matter for something like the Steam Deck.


Here's a secret: a lot of the featurefulness of AMD's current Linux drivers is not due to AMD putting in the work, but due to Valve paying people to work on GPU drivers and related stuff. Neither AMD nor Nvidia can be bothered to implement every random thing a customer who moves maybe a million low cost units per year wants. But with open source drivers, Valve can do it themselves.


Pierre-Loup Griffais recently re-tweeted NVK progress with enthusiasm, given that he used to work at Nvidia on the proprietary driver it's replacing I think it's a sign that Valve wouldn't particularly want Nvidia's driver even given the choice, external bureaucracy is something those within the company will avoid whenever possible.

Going open source also means their contributions can propagate through the whole Linux ecosystem. They want other vendors to use what they're making, because if they do they'll undoubtedly ship Steam.


Reliability is moot when they simply support only what they care about and have ignored the rest for those decades. Not being upstreamed and not using standard kernel interfaces only makes it worse.

So it's not something Valve wanted to deal with. They have commented on the benefits of working with upstream GPU drivers, so it clearly matters.


NVidia are the upstream for their drivers, and for a big client like Valve they would likely be willing to support the interfaces they need (and/or work with them to get them using the interfaces NVidia like). Being less coupled to the Linux kernel could go either way - yes they don't tend to support the most cutting-edge Linux features, but by the same token it's easier to get newer NVidia drivers running on old versions of Linux than it is with AMD.

(Does Valve keep the Steam Deck on a rolling/current Linux kernel? I'm honestly surprised if they do, because that seems like a lot of work and compatibility risk for minimal benefit)


The current SteamOS (what Decks run) is based on Arch but is not rolling. The original for Steam Boxes was based on Debian.

Reasoning for the switch: https://en.wikipedia.org/wiki/SteamOS#Development

> The decision to move from Debian to Arch Linux was based on the different update schedule for these distributions. Debian, geared for server configurations, has its core software update in one large release, with intermediate patches for known bugs and security fixes, while Arch uses a rolling update approach for all parts. Valve found that using Arch's rolling updates as a base would be better suited for the Steam Deck, allowing them to address issues and fixes much faster than Debian would allow. SteamOS itself is not rolling release.


Upstream is the kernel itself and standard kernel interfaces. Nvidia doing their own (non upstream) thing is the main problem here. They didn't work with libdrm for years.

Being a client or even a partner doesn't guarantee good cooperation with Nvidia (Evga has a lot to comment on that). As long as Nvidia is not a good citizen in working with upstream Linux kernel, it's just not worth investing effort in using them for someone like Valve.

Stuff like HDR support or anything the like are major examples why it all matters.


Yes, but none of that is important for a console. You're talking about integration into libraries of normal desktop distros which aren't that important when the console can just ship the compatible ones.


Valve disagree. And the fact that they made an updated version that supports HDR demonstrates that it's important.

Form factor (console or not) has no bearing on importance of this issue.


But Valve contributes directly to AMD drivers; they employ people working both on Mesa and on the DX-to-Vulkan layers. And the kernel. It's a very neatly integrated system.


I mean, Nvidia is going all out in the AI/ML space, they even rebranded their company lol

I doubt they care to work with Valve on the Steam Deck as much as on the high-end compute market.


The playstation 3 was the last "console". Everything that came after was general purpose computing crap.


The PS3 could do general purpose computing. Its built in operating system let you play games, watch movies, play music, browse the web, etc. At the beginning of the console's life you could even install Linux on it.

The hardware of consoles has been general purpose for decades.


The Pentagon even built a supercomputer using PS3s as compute cores.


The current and previous generations of xbox and playstation do use x86 CPUs with integrated AMD GPUs. But they aren't just PCs in a small box. Using a unified pool of GDDR memory* is a substantial architectural difference.

*except for the xbox one & one s, which had its own weird setup with unified DDR3 and a programmer controlled ESRAM cache.


Gaming is general purpose computing, or at least a part of it.


Also, I guess AMD is fine with having much lower margins compared to Nvidia, which makes them very competitive in this market?


The CPU really only needs to be adequate - Nvidia can pull that off well enough for Nintendo at least.

Intel is in a far worse position - because they cannot do midrange graphics in an acceptable power/thermal range.


Aren’t the midrange GPUs considered to be pretty decent price/performance wise? IIRC they significantly improved their drivers over the last few years


Intel's?

They're certainly good enough to be the integrated graphics solution for office work, school computers, media centres, casual gaming (the same as AMD's low end), and other day to day tasks.

That's most of the market, but I think it's a stretch to say that they're good enough for consoles.


The A770 seems like a solid midrange card, about on par with 4060 Ti and 6700 XT. Even not that far from 4070 in some newer games Intel’s drivers are better optimized for. You also get 16 GB of memory.

Of course, considering it's Intel, the power usage is way too high. Also, AFAIK compatibility with pre-DX11 games is poor, and it's not even clear Intel is even making any money on it.


Last year's Xbox leak revealed that Microsoft was at least considering switching to ARM for the next Xbox generation, though regardless of the CPU architecture they chose, they indicated they were sticking with AMD for the GPU. It will be interesting to see how that shakes out, going to ARM would complicate backwards compatibility so they would need a very good reason to move.


AMD holds an ARM license and uses it as part of the firmware TPM in the PRO series processors, as I understand it. It's not unreasonable to assume they could make some ARM/x86 hybrid CPU (like Apple did for Rosetta) or a mixed-arch chip we've never seen before that can run emulators natively. Who knows.


But what would be the point of that? If you're already buying it from AMD and the previous generation was x86 so that's what gives backwards compatibility, just do another x86 one. The reason to switch to ARM is to get it from someone other than Intel or AMD.


The reason to switch to ARM is to get better performance, especially per-watt. If the supplier that's making your graphics card can deliver that, then why risk onboarding someone new?


In addition to better perf per watt, it’d allow them to shrink the console, potentially to something as small as a Mac Mini or NUC. Less need for cooling means a smaller console.

This could help them make inroads to people who might not have considered a home console before due to their relatively large size. Wouldn’t be surprised if the Switch did well with this market and now MS wants a slice of that pie.


> The reason to switch to ARM is to get better performance, especially per-watt.

I am no expert, but I cannot remember ever hearing that before. Why would arm have better absolute performance than x86?

As a spectator, reading tech press about architectures gave me the impression that even in performance per watt, the advantage arm has over x86 is fairly small, just that arm chip makers have always focused on optimizing for low power performance for phones.


> I am no expert, but I cannot remember ever hearing that before. Why would arm have better absolute performance than x86?

Better general design, or easier to include more cache. All the normal reasons one architecture might perform better than another. I mean, you heard about Apple switching all their processors to ARM, right?

> As a spectator, reading tech press about architectures gave me the impression that even in performance per watt, the advantage arm has over x86 is fairly small, just that arm chip makers have always focused on optimizing for low power performance for phones.

Intel would certainly like you to believe that. But for all their talk of good low-power x86s being possible, no-one's ever actually managed to make one.


There's an anecdote from Jim Keller that Zen is so modular that it'd be possible to swap out the x86 instruction decode block with an ARM one with relatively little effort. He's also been saying for a while now that ISA has little bearing on performance and efficiency anymore.

Apple's decision to switch to ARM had many reasons, licensing being just as important as performance, perhaps more so.

The low power variants of Zen are very efficient. You're still looking at Intel, but they've been leapfrogged by AMD on most fronts over the past half decade (still not market share, but Intel still has their fabrication advantage).


I'll believe that when I see a Zen-based phone with a non-awful battery life. Yes AMD are ahead of Intel in some areas, but they've got the same vested interest in talking up how power-efficient their x86 cores can be that may not actually be based in reality.


You won't see that happen because AMD have no interest in targeting phones. Why bother? Margins are thin, one of x86's biggest advantages is binary backwards compatibility but that's mostly meaningless on Android, there's additional IP and regulatory pain points like integration of cellular modems.

Intel tried and ran into most of those same problems. Their Atom-based SoCs were pretty competitive with ARM chips of the day, it was the reliance on an external modem, friction with x86 on Android and a brutally competitive landscape that resulted in their failure.

Regardless of architectural advantages from one vendor or another, the point remains that the arguably preeminent expert in CPU architecture believes that ISA makes little difference and given their employment history it's hard to make the argument of bias.


> Intel tried and ran into most of those same problems. Their Atom-based SoCs were pretty competitive with ARM chips of the day, it was the reliance on an external modem, friction with x86 on Android and a brutally competitive landscape that resulted in their failure.

The way I remember it the performance and battery life never quite lived up to what they said it would.

> Regardless of architectural advantages from one vendor or another, the point remains that the arguably preeminent expert in CPU architecture believes that ISA makes little difference and given their employment history it's hard to make the argument of bias.

Current employer is a much heavier influence than prior employers, and someone who's moved around and designed for multiple ISAs and presumably likes doing so has a vested interest in claiming that any ISA can be used for any use case.

There's a long history of people claiming architecture X can be used effectively for purpose Y and then failing to actually deliver that. So I'm sticking to "I'll believe it when I'll see it".


My impression from the tech press/tabloids is that apple was somewhat forced to switch away from Intel because Intel had begun to stagnate. Intel was stuck doing 14nm+ iterations and could only ship slow inefficient chips, resulting in very noisy and hot apple laptops.

Maybe apple could have stayed with x86 from AMD with some sort of co-design, but apple likely preferred to design their own chip so they would control the whole process and keep more of the profits. Designing their own chip also seems like it allowed apple to leverage their investments in TSMC for making their phone chips to also include their laptop chips. I wonder whether they already had a long term plan to dump Intel, or whether the plan started as a reaction to the serious problems that began developing at Intel years ago.


AMD has previously made a variety of ARM CPUs, such as the recent Opteron “Seattle” parts.

https://en.wikichip.org/wiki/amd/cores/seattle


All Zen 1 CPUs and newer have the PSP / ASP security processor which is ARM based and runs before the x86 cores are released from reset. This applies to all Zen models, not just the PRO versions.

The fTPM does indeed run on the PSP, so on the ARM cores, among many other things like DRAM training.


Is the stuff on the PRO a Xilinx FPGA, or do they have both an ARM processor and an FPGA?


The NPUs are effectively hard IP blocks on Xilinx FPGA fabric (i.e. there are close to no LUTs there; the chip takes in a Xilinx bitstream, but the only available resources are hard IP blocks).

There are also Xtensa cores (audio coprocessor) and ARM cores included (PSP, Pluton, and I think there's an extra ARM coprocessor for optionally handling some sensor stuff).


> AMD holds an ARM license and uses it as part of the firmware TPM in the PRO series processors, as I understand it.

AMD uses ARM for its PSP as well which is more than just a TPM.

But afaik it also matters what type of license AMD has.

AMD would need an ARM architecture license (like Apple has). This license allows you to do whatever you want with the core (such as adding an x86-style memory model).

The downside of the architecture license is that you have to clean-room design the chip entirely.


> It will be interesting to see how that shakes out, going to ARM would complicate backwards compatibility so they would need a very good reason to move.

Sticking with AMD for GPUs makes sense no matter the CPU architecture. AMD is competitive on x86, and at the moment Samsung is working out the kinks to integrate Radeon GPUs with their ARM CPUs... so once Samsung has proven the concept works (and, of course, paid for the effort), maybe large consoles will make the switch.

It shouldn't be much of a problem for game studios in the end, most games run on one of the household-name engines anyway and they have supported ARM for many years for mobile games.


Apple recently demonstrated ARM is pretty capable of emulating x86


They did, but to get it running as well as they did they had to deviate from generic ARM by adding support for the x86 memory model in hardware. Apple was in a position to do that since they design their own ARM cores anyway, but the off-the-shelf ARM reference designs that most other players are relying on don't have that capability.


> They did, but to get it running as well as they did they had to deviate from generic ARM by adding support for the x86 memory model in hardware.

Note that this is actually very cheap in hardware. All ARM CPUs already support memory accesses that behave similarly to x86. It's just that they're special instructions meant for atomics. Most load and store instructions in Aarch64 don't have such variants, because it'd use a lot of instruction encoding space. The TSO bit in Apple's CPUs is really more of an encoding trick, to allow having a large number of "atomic" memory instructions without needing to actually add lots of new instructions.


You can do it with standard ARM instructions these days.


> these days

Which instructions?


Those that are part of FEAT_LRCPC.
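
To make that concrete, here's a rough sketch of what this looks like on AArch64 (my assumptions: Linux/AArch64 with GCC or Clang, a toolchain that accepts -march=armv8.3-a+rcpc, and a hypothetical file name): a plain LDR only gives you ARM's weak ordering, while LDAPR (the FEAT_LRCPC acquire load, ARMv8.3) gives roughly the ordering an x86 load already implies under TSO, so an emulator could emit it for every guest load.

    /* lrcpc_sketch.c - illustrative only.
       Build: cc -O2 -march=armv8.3-a+rcpc lrcpc_sketch.c */
    #include <stdio.h>
    #include <inttypes.h>
    #include <sys/auxv.h>    /* getauxval */
    #include <asm/hwcap.h>   /* HWCAP_LRCPC on Linux/AArch64 */

    /* Plain load: only AArch64's weak ordering, no acquire guarantee. */
    static inline uint64_t load_plain(const uint64_t *p) {
        uint64_t v;
        __asm__ volatile("ldr %0, [%1]" : "=r"(v) : "r"(p) : "memory");
        return v;
    }

    /* LDAPR: acquire load with RCpc semantics (FEAT_LRCPC), roughly the
       ordering an ordinary x86 load carries under TSO, and potentially
       cheaper than the stronger LDAR acquire. */
    static inline uint64_t load_acquire_rcpc(const uint64_t *p) {
        uint64_t v;
        __asm__ volatile("ldapr %0, [%1]" : "=r"(v) : "r"(p) : "memory");
        return v;
    }

    int main(void) {
        static uint64_t shared = 42;
        /* Only execute LDAPR if the kernel reports FEAT_LRCPC support. */
        int have_lrcpc = (getauxval(AT_HWCAP) & HWCAP_LRCPC) != 0;
        uint64_t v = have_lrcpc ? load_acquire_rcpc(&shared)
                                : load_plain(&shared);
        printf("FEAT_LRCPC: %s, value: %" PRIu64 "\n",
               have_lrcpc ? "yes" : "no", v);
        return 0;
    }

For stores, the matching trick is STLR (store-release); pairing acquire loads with release stores is how you approximate x86-TSO with standard instructions.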


You could always do it with ordinary instructions, it'd just be slow, no?


I vouched for this comment, but it looks like you're shadow banned. You might want to email support.


They haven't demonstrated it at console-level prices.


If you wanna play semantics and argue that the demonstration requires an M-something chip, then they have: see the $599 iPad Air with an M1, or I guess the Mac Mini with an M2 for the same price.

If you want to be more lenient, the $129 Apple TV has an A15, which is roughly the same design but with fewer cores.


As already mentioned by others, AMD won the console market because back then the only thing they could compete on was price.

Coincidentally, the most important thing to Sony and Microsoft with regard to consoles ("make tons and sell tons" products) is cost of materials. Getting that cost even 1 cent cheaper means a $1 difference over 100 units, $10 over 1,000 units, $100 over 10,000 units, and so on. Remember, we're talking many millions of basically identical units sold.

AMD couldn't compete on performance or efficiency, but they could absolutely compete on price, while both Intel and Nvidia couldn't, due either to their business strategy or to the logistics for Sony/Microsoft of procuring more materials from different suppliers.

So long as AMD can continue to undercut Intel, Nvidia, and any other contenders they will continue to dominate consoles.


I generally agree with this reasoning, but your example could use some scaling to realistic volumes to convince a reader.

1 cent cheaper would net Sony a total of about 500K USD across all PS5 units sold to date. That's roughly a thousand PS5s at retail price as pure profit. A company the size of Sony, for a product of the scale of the PS5, would absolutely forego that profit if the alternative offered any tangible benefits at all.
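
(Rough math, assuming ~50 million PS5s sold and a ~$500 retail price: 50,000,000 × $0.01 = $500,000, and $500,000 / $500 ≈ 1,000 consoles.)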


>if the alternative offered any tangible benefits at all.

That's the thing though: a new-generation console only needs to be better than its predecessor. It doesn't have to have groundbreaking technologies or innovations, let alone be a pioneer paving the way forward for other computing hardware products.

So cost of materials remains the chief concern.


Supposedly a big reason is that Nvidia is difficult to work with.


Is there any info to substantiate that?


There are a few examples / anecdotes:

The first is MS's trouble with the original Xbox (https://www.gamesindustry.biz/ati-to-provide-chips-for-futur... - not a great example, since 20+ year old articles are hard to find, but it mentions the issues MS had with Nvidia)

Then there's Apple's drama, which involved warranty claims for laptop parts that led to them being AMD only until the Arm move (https://blog.greggant.com/posts/2021/10/13/apple-vs-nvidia-w...)

Sony went with Nvidia only for the PS3, and the later switch may have been more about AMD's APU offerings than Nvidia's shortcomings.

Whether these are signs of a trend or just public anecdotes is in the eye of the beholder or kept away in boardrooms.


> Then there's Apple's drama, which involved warranty claims for laptop parts that led to them being AMD only until the Arm move (https://blog.greggant.com/posts/2021/10/13/apple-vs-nvidia-w...)

It’s second-hand info so take it with a grain of salt, but I read somewhere that there was a lot of friction between Apple and Nvidia because Apple likes to tweak and tailor drivers per model of Mac and generally not be wholly dependent on third parties for driver changes. That requires driver source access, which Nvidia didn’t like (even though they agreed to it for quite some time: drivers for a range of Nvidia cards shipped with OS X for many years, and those were all Apple-tweaked).


Well, EVGA withdrew from making Nvidia GPUs, despite being one of the best board partners, due to how unreasonable Nvidia was.


That's a nice little story until you find out that "unreasonable" in this case meant "Nvidia didn't buy back the units EVGA had overstocked in hopes of scalping people during the crypto craze, and refused to take the loss for EVGA".

Sure, "unreasonable".


The question is... why EVGA only?


...and why did EVGA withdraw from the GPU market altogether rather than pivoting to making AMD/Intel cards, if Nvidia was truly the problem?


I think EVGA’s withdrawal had to do with how impossible it was to compete with Nvidia’s first-party cards with the terms Nvidia was setting for AIBs. Other card makers like Asus have several other flagship product lines to be able to sustain the hit while EVGA’s other products consisted of lower-profit accessories.

They may have seen AMD selling their own first-party cards and anticipated AMD eventually following Nvidia’s footsteps. As for Intel, at that point they were probably seen as too much of a gamble to invest in (and probably still are, to a lesser extent).


Because GPUs tend to be low margin to begin with. Nvidia putting all these restrictions on them meant running at a loss.

This compares with PSUs which apparently have a massive margin.

EVGA might come back to the GPU manufacturing space with AMD eventually but Sapphire and Powercolor already fill the niche that EVGA filled for Nvidia cards (high build quality, enthusiast focused, top of the line customer support, warranty, and repairs). So it probably was just not worth picking that fight when the margins aren't really there and AMD is already often seen as "the budget option".

If AMD manages to pull off a Zen-style recovery in the GPU segment, I would expect a decent chance of EVGA joining them as a board partner.


Linus Torvalds said a couple of words about it ... https://www.google.com/search?q=linus+nvidia


The friction between Linux wanting all drivers to be open source and Nvidia not wanting to open-source their drivers isn't really relevant to any platform besides Linux. Console manufacturers have no reason to care that Nvidia's drivers aren't open source; they can get documentation and/or source code under NDA if they choose to partner with Nvidia. Secrecy comes with the territory.


It's not about open-sourcing their drivers; it's about providing ANY drivers for their hardware. Also note that this was from 2012. Nowadays they actually do provide decent closed-source drivers for Linux.

https://www.youtube.com/watch?v=xPh-5P4XH6o


> It’s kind of interesting how they have this hold on gaming outside of conventional PCs, but can’t seem to compete with nvidia on just that one market.

As for PC gaming: word of mouth plus a huge chunk of money for advertising deals. AMD (or back then rather ATI) drivers were always known for being rougher around the edges and less stable, but the cards were cheaper. Basically, you get what you pay for. On the CPU side it's just the same but without the driver stuff... until AMD turned the tide with Zen and kicked Intel's arse so hard they still haven't recovered.

The console market is a different beast: here the show isn't run by kids who have had NVIDIA sponsorships in their games since they were first playing Unreal Tournament 2004, but by professional beancounters who only look at the price. For them, the answer is clear:

- generally, the studios prefer something that has some sort of market adoption, because good luck finding someone skilled enough to work on PowerPC or a weird-ass custom GPU architecture. So the console makers want something their studios can get started on quickly without wasting too much time porting their exclusive titles.

- on the CPU side there are only two major architectures left that fulfill this requirement, ARM and x86, and the only one pulling actual console-worthy high performance out of ARM is Apple, who doesn't license their stuff to anyone. That leaves x86, where there are again just two players in town, and Intel can't compete on price, performance or efficiency => off to AMD they go.

- on the GPU side it's the same: it's either AMD or NVIDIA, and NVIDIA won't use its fab time to churn out low-margin GPUs for consoles when it can use that time to make high-margin gamer GPUs and especially the ludicrous-margin stuff, first for coin miners and now for AI hyper(scaler)s => off to AMD they go.

The exception of course is the Nintendo Switch. For Nintendo it was obvious that it had to be an ARM CPU core - x86 performance under battery constraints Just Is Not A Thing, and all the other mobile CPU archs have long since died out. Where I have zero idea is why they went for the NVIDIA Tegra, originally aimed at automotive and set-top boxes, instead of Qualcomm, Samsung or Mediatek. My guess is the alternatives either demanded unacceptable terms (Qualcomm), didn't want to sell low-margin SoCs when they could reserve their high-performance SoCs for their own Galaxy lineup (Samsung), or were too sketchy (Mediatek), so Nintendo went with Nvidia, who could actually use a large deal to showcase "we can also do mobile" to a world dominated by the three aforementioned giants.


> Where I have zero idea is why they went for NVIDIA Tegra that was originally aimed at automotive and settop boxes instead of Qualcomm, Samsung or Mediatek

I think a big part of this comes down to two things:

1. If you’re Nintendo, the Tegra X1 was the fastest mobile graphics chip available. Mali and Adreno weren’t anywhere close at the time. The alternative would’ve required shipping a Switch less powerful than the already-derided Wii U, which just was not an option.

2. Nintendo uses their own in-house operating system that fits into under 400MB and runs on a microkernel. Naturally, you want good GPU drivers. NVIDIA’s philosophy is to basically ship a binary blob with a shim for each OS. Not great, but it demonstrably shows the drivers can be ported with fairly little effort - Windows, Mac, Linux, Android, whatever. Qualcomm and MediaTek’s strategy is to make a bastard fork of the Linux kernel and, maybe, upstream some stuff later, with a particular interest in Android compatibility. It goes without saying that the implementation which isn't tied to a specific kernel is a more desirable starting point.


>until AMD turned the tide with Zen and managed to kick Intel's arse so hard they haven't recovered until today.

Let's not revise history. Zen was better than Bulldozer, but it still took until Zen 2+ or Zen 3 (I don't recall exactly) for them to reach parity with Intel Core i.

Before that, the vocal crowd was buying AMD because they were the underdog (and still are, to a point) and because they were cheaper (no longer the case).


If we're going to not revise history it's probably important to not leave out the actual meat of what made AMD's Zen processors so compelling: core count.

Zen 1 launched offering double the core count of any of Intel's competing products at the same price. Intel was ahead on single-core performance for a long time, but in any well-multithreaded benchmark or app Intel was getting absolutely demolished, with AMD offering twice Intel's performance at any price point. Intel failed to compete in multithreaded apps for 4 product generations, giving AMD enough time to close the single-threaded performance gap too.

Now they are both pretty close performance-wise, but AMD is well ahead of Intel's competing CPUs on power efficiency.


>Intel failed to compete in multithreaded apps for 4 product generations

Sure, and they got away with it because multithreaded workloads aren't relevant for the vast majority of the population. They still aren't today.

Most consumer computing workloads, gaming above all, are dependent on single-thread performance. The vast majority of people do not spend their computing hours encoding video, compiling source code, or simulating proteins. They play games, surf Facebook, watch YouTube, read Mysterious Twitter X, and chat or call friends and family on LINE/Discord/Skype/et al.

In case you are detached from reality, I ask you to realize that most people's computing needs today can be comfortably met by an Intel N100. That's a two-generations-old 4-core, 4-thread CPU from one of the lowest tiers of consumer CPUs available.

Hell, I personally can satisfy all my daily computing needs with an Intel i7-2700K Sandy Bridge CPU without feeling hindered. I surmise most people will be satisfied with far less.

Another way to put it is: For all the core counts AMD Ryzen (and now Intel) brought, most people can't actually make full use of them even today. That's another reason why AMD Ryzen took so long to become a practical competitor to Intel Core i instead of a meme spread by the vocal minority.


The big factor is that before Zen, Intel's consumer chips were dual-core at the low end and quad-core at the high end, on both desktop and laptop. The advancement in core count and multithreaded performance was clearly a direct result of Zen.


It also helps that console makers like Sony and MS are less concerned with driver quality, since they're exposing a much more barebones OS to purpose-built games and game engines, with fewer bells and whistles and much more control by Sony/MS over the software on the console.


From my understanding, AMD holds the console market because it's basically lowest-price-wins, so the margins make it business that Intel and NVIDIA don't really want.


They also more or less won the PS4/XB1 generation by default as the only supplier capable of combining a half-decent CPU and GPU into a single chip - Intel had good CPUs but terrible GPUs, and Nvidia had good GPUs but terrible CPUs. That tide is shifting now, with Intel's renewed push into GPUs and Nvidia getting access to high-performance ARM reference designs.


It's also interesting that at one point, Nvidia's chips were going to support both x86 and ARM (see Project Denver), but x86 support was scrapped due to Intel's patents.

https://semiaccurate.com/2011/08/05/what-is-project-denver-b...

Would have been interesting to have had another large x86 player.


ATi made the GameCube’s GPU if that counts


Close. They bought the company that made it (ArtX).


There is a new Steam Deck-like handheld that's Intel-based, but I don't remember the brand.


MSI. But expect very bad power consumption


Yeah, but I would be interested in the performance. I use my Steam Deck always connected to a power outlet (I have several around the home), so power consumption has never been a problem; the most I use it in handheld mode is on the subway, for a total of about an hour to an hour and a half per round trip.


They all have very bad power consumption lol


Apart from the Steam Deck, that is.


The OLED Deck in particular is great. On less demanding games, or when streaming with Moonlight where I can crank down the TDP, it’s not hard to squeeze 10+ hours of playtime out of its 50Wh battery.
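
(Rough math: 50 Wh over 10 hours works out to about 5 W average system draw, which seems plausible with the SoC TDP dialed down to a few watts plus the display and radios.)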

Linux probably helps here, with its greatly reduced background activity compared to Windows.


MSI is making an Intel-based handheld called the Claw. It's pretty sweet.


Is it? It doesn't seem to me that Intel can compete on performance per watt, which is what really matters in a handheld. From what I can tell, the APU alone can draw up to 45W in the Claw, while 45W is the maximum power draw of the entire Steam Deck (LCD model). Add to that the fact that MSI isn't selling games to recoup an aggressive price point the way Valve does with the Steam Deck, and you just get a worse machine for more money.



