
Early personal computers used an essentially passive backplane (e.g. the S-100 bus). After that, the Apple ][, IBM PC, and later Macintosh models used a motherboard with expansion cards.

Throughout this era (from, say, 1974 through 2022), the elements composed to create a personal computer were MSI, LSI, and VLSI integrated circuits mounted on PCBs and wired together with traces on the board(s) and across expansion card slots.

The M1-based Macs introduced a new era in personal computer architecture: the SoC, and in particular the chiplet-based SoC, previously used in phones and embedded devices, took over from the motherboard.

The elements composed to make a PC are now chiplets, wired up with silicon interposers and encapsulated into a single surface-mount package. The benefits in speed, bandwidth, and power usage of an SoC over VLSI-on-a-PCB are enormous, and they're reflected in the performance of the M1-based Macs.
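As a rough back-of-envelope (the figures below are approximate published specs and common rules of thumb, not measurements), here's how the bandwidth sitting between CPU and GPU compares across the two models:

    # Approximate peak bandwidths, in GB/s, used only to illustrate the scale
    # of the gap. Real-world throughput is lower in every case.
    links_gb_s = {
        "PCIe 4.0 x16 (CPU <-> discrete GPU)": 31.5,   # ~1.97 GB/s per lane x 16
        "dual-channel DDR4-3200 (typical PC)": 51.2,   # 2 x 25.6 GB/s
        "M1 Max unified memory (CPU+GPU)":     400.0,  # Apple's quoted figure
        "M1 Ultra unified memory (CPU+GPU)":   800.0,  # Apple's quoted figure
    }

    baseline = links_gb_s["PCIe 4.0 x16 (CPU <-> discrete GPU)"]
    for name, bw in links_gb_s.items():
        print(f"{name:38s} {bw:6.1f} GB/s ({bw / baseline:4.1f}x the PCIe link)")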

Where do expansion cards fit in an SoC-based model? Relative to on-package links, they're slow, narrow-bandwidth, power-hungry devices. A GPU expansion card on an SoC-based computer might as well be accessed over Ethernet.

Of course, it's disruptive. People legitimately _like_ the ability to compose their own special combination of functions in their PC, and it allows a level of _end-user_ customization that isn't (currently?) possible with SoCs.

But retaining that level of customization carries significant performance costs, and the gap is only going to grow as "3D" chiplet-stacking techniques become common.

The cost of creating an SoC from chiplets is high enough that a market of one (i.e. end-user customization) isn't viable. Right now we get base, Pro, Max, and Ultra variants from Apple. We may get more in the future, but ... it's fundamentally a mass-market technology.

The era of end-user hardware customization is very likely drawing to a close.




That the SoC exposes the same PCIe 4.0 x16 bus as a typical PC doesn't seem to be an inherent obstacle to using more powerful GPUs. After all, we stick more powerful GPUs into typical PCs and they outclass the M2 Ultra in many workloads just fine, slow bus be damned. Being closely integrated with the rest of the SoC definitely has its benefits, but it often isn't all that important unless your workload specifically needs to share data across CPU+RAM+GPU+VRAM nearly constantly. Which, for laptop stuff or media editing, sure - it fits that use case perfectly, and I see why they'd never want to start copying raw video files over a bus to the GPU just to run them through the encoder and copy them back to main memory.
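To put a rough number on that copy cost (illustrative arithmetic only; the buffer size is a made-up assumption and the bandwidths are approximate peak figures):

    # Cost of shuttling a large video buffer to a discrete GPU and back,
    # versus a GPU that reads the same unified memory in place.
    # All numbers are illustrative assumptions, not benchmarks.
    buffer_gb = 2.0                # hypothetical chunk of raw video
    pcie4_x16_gb_s = 31.5          # ~peak for PCIe 4.0 x16
    unified_gb_s = 800.0           # M2 Ultra's quoted memory bandwidth

    copy_ms = buffer_gb / pcie4_x16_gb_s * 1000
    stream_ms = buffer_gb / unified_gb_s * 1000
    print(f"over PCIe 4.0 x16: ~{copy_ms:.0f} ms each way, plus the copy back")
    print(f"unified memory:    the GPU reads it in place (~{stream_ms:.1f} ms to stream once)")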

I think the more interesting question is: what would add-on GPU support realistically provide for this type of product? Enable you to play a limited subset of games on a ~$10,000 Mac workstation, which will run them no better than a standard PC that can play anything? Enable you to train AI models or kerchunk GPGPU workloads faster, while tethered to a really expensive SoC whose only remaining job is to provide more peripheral I/O? All while you use an OS that's just extra inconvenient for the task vs. the Linux environments everyone is already using?

A Mac workstation is already a niche, and if they have something that already targets the main use case of that niche (media production) perfectly, then putting that much work into designing new products for sub-niches of their most niche use case just isn't worth it. Particularly when the only reasonably priced way to do it would be to give up the integrated solution that serves the main use case better. It's not that the technology couldn't be made to do it as well as any other system could; it just doesn't make sense to pursue, since it's so far from their markets. Really, the M series is already an extension of "well, our phone chip is fast..." as it is. It has a hard enough time adapting to being a useful laptop chip (e.g. display outputs), let alone replacing every type of system.


Have you never in your life used a dedicated GPU? If not, maybe that's why you imagine that using one leads to "performance costs," but the rest of us would like to get a proper frame rate in a graphics program, like we can get on a PC.


I've used many dedicated GPUs on an almost daily basis for over 20 years. Mostly for graphics, but sometimes for GPGPU, crypto mining, and ML training.

My point is: Apple doesn't want to support external GPUs because they've moved to a new architecture with the components of the PC integrated on the SoC, not spread across an off-chip PCIe bus. They've done that because the on-chip GPU has performance advantages along multiple dimensions.

I've no doubt that Apple would love to retain those customers who want 1.5 TB of RAM and six GPUs churning away at whatever task, but ... it's not worth building an SoC to support that niche, and it's not worth the performance compromise that an off-chip solution would imply.

The solution they've implemented in the Mac Pro, using PCIe switches to multiplex a bunch of slots across a handful of lanes, treats PCIe as a secondary, lower-performance I/O bus for things that don't need low-latency, high-bandwidth access to the core CPU, GPU, and RAM. That's fine for some things, but not really what you need for a GPU.
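To give a flavor of that multiplexing (the topology and slot widths below are invented for illustration; this is not the Mac Pro's actual lane map):

    # Toy oversubscription math for a PCIe switch: several downstream slots
    # share one upstream link to the SoC. Hypothetical topology, for illustration.
    lane_gb_s = 1.97                    # ~PCIe 4.0, per lane
    upstream_lanes = 16
    downstream_slots = [16, 16, 8, 8]   # assumed slot widths behind the switch

    upstream_bw = upstream_lanes * lane_gb_s
    downstream_bw = sum(downstream_slots) * lane_gb_s
    print(f"upstream to SoC:     ~{upstream_bw:.0f} GB/s")
    print(f"slots, if all busy:  ~{downstream_bw:.0f} GB/s demanded")
    print(f"oversubscription:    {downstream_bw / upstream_bw:.1f}x")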

I'm personally no fan of losing the expansion card model, but the tradeoff is worth it for most of their customers.



