Apple Executive Discusses New Mac Pro's Lack of Graphics Card Support (macrumors.com)
55 points by SenHeng on June 12, 2023 | 59 comments



An Apple event; they bring Kojima in to talk about gaming on Mac; they introduce a gaming mode; they're excited about gaming on Mac; and they still don't offer dedicated GPUs? Not even in their workstations, to try to appeal to game developers?

I don't know what to say. Is it because GPUs are upgradeable, and they are allergic to this, retreating from it like vampires from holy water? Is it to keep the price of GPUs from muddying the waters in their careful pricing ladder, where the CPU and GPU are bundled?

It's insulting


It's really, really weird, man. There are so many pieces in place here for Apple to take over gaming, but they seem allergic to it, or act like it's beneath them, or are embarrassed by it, or something. It's so weird.


Gaming can make them a lot of money, but not at the cost of everything else. They would want to compete with Nvidia like they are competing with Intel. High-end graphics studios are the core consumers for this product. What the guy said also makes sense: to use a third-party GPU, apps would need to be designed to use non-Apple memory/APIs, which is more stuff that could go wrong (bad experience for users and support cost), and not a whole lot of gamers will jump to Apple anyway.


no because trying to tackle gaming means they need to compete, and competing with vendors is a race to the bottom of the barrel.

apple's whole business model is completely reliant on them occupying a segment of the market that no one else can.


Core gamers are well known for not wanting to spend money, because they'd rather spend money on the games, so it is a little bit tricky for Apple to capture that market.


If you're insulted you should relax. That discussion isn't aimed at you at all.

Casual game revenue (mostly mobile) >> console game revenue >> revenue from AAA games on gamer rigs and those rigs themselves. Perhaps those greater-thans should all be ">>>>". If you look at the entire (huge) gaming ecosystem, Apple is already a major player. If you look at the segment I can tell you care about, it's tiny.

IIRC gaming got Nvidia off the ground but for a long time hasn't been their revenue pump.

Apple's gaming boost can really only get them one thing: if you're going to buy a PC anyway, perhaps you'll pay a little more and not buy a console. Maybe that already happens a bit in the PC world, I don't know, but for Apple anyway it makes sense since, unlike almost all PC makers, Apple has a differentiated product. And if they talked about AAA gaming (if they did -- I didn't bother to listen) I imagine it's like Honda fielding an F1 team: staking out a presence in a high end that makes the intermediate product look like a better deal. Like the Mac Pro: no faster than a product they sell for less than half the price, with a couple of extra features very few need. It drives real sales elsewhere.

Finally: Apple has a "problem" that a Bungie or Alienware (now Dell) doesn't: Apple's revenue is so huge that they simply can't enter into certain markets unless it's for marketing reasons (like the Mac Pro). Look at the iPad: it basically is the tablet market, and if it were a standalone product it would be one of the great business success stories of all time. But because it's in Apple's product mix it's ignored by financial analysts, which is just one step above "failed product": "at least it doesn't lose money."

BTW I suspect Nvidia had some gamers on its early team and has some in its culture. Apple cares a lot about music because Steve Jobs cared a lot about music, but hasn't had gaming DNA for decades, really not since the Apple II, IMHO.


I'm not much into gaming on a Mac — most of the games I like to play only run on Windows, and I have a gaming PC for that...

But I have tried a few games on the Mac laptop I use for work, which doesn't do anything graphics intensive so it has the slowest Apple GPU available (as in, the first one where they stopped using Intel or AMD GPUs).

For every game I've tested (not many, admittedly) I can turn the graphics settings up to 11 and they run without dropping frames at the native resolution and refresh rate of the display (3K @ 60Hz), even on battery power in "low power mode", which substantially under-clocks the GPU and CPU to improve battery life.

Obviously high end GPUs like the NVIDIA A100 are significantly faster, but I'm pretty sure everyone using those on a gaming PC is doing it for bragging rights and nothing else - and those people want to build their own gaming rig.

Pretty much nobody would upgrade a modern Mac if it was possible. The built in GPUs are perfectly fine even on the lowly M1, let alone an M2 Ultra. It's just not worth the engineering effort (by Apple or by GPU manufacturers) for the small number of customers who'd be willing to spend the money.

And honestly even the A100 would be a downgrade in some ways. NVIDIA's flagship has an order of magnitude less memory than the M2 Ultra for example.


Maybe you’re confusing an A100 for something like an RTX 4090?

No one is using an A100 for gaming. They’re > $10k. They don’t even have video out. They’re also actually slower for most gaming and related tasks - the RTX 4090 is juiced to the gills power and heat be damned all to push frames as fast as possible. The A100 is heavily optimized for dense datacenter applications where power and heat are among the top concerns.

They also have up to 80GB of RAM, and the RTX 4090 has 24GB which is a ways away from “an order of magnitude” less memory when the M2 Ultra supports a max of 192GB of unified memory.


in a way the confusion proves the point.

in terms of real usage, a sizeable population of users won’t actually benefit from upgradable graphics. for gaming or otherwise. even a lone A100. (how many users not only use an A100 but also installed it? yes, that’s silly.)

knowing an upgrade is possible, knowing an upgrade is sensible, knowing how to upgrade, and actually performing the upgrade (and successfully) are all steps down the upgrade path. would suspect the conversion rate on that is quite low considering the size of the funnel at the start. especially with the last few years of logistics.

as for these unified macs, if the system starts to show weakness, then yes, time to consider that it isn’t the right tool for the job, and maybe it is time to push that workload to a dedicated gaming pc (for now) or the cloud (for scale.)

and, yes, there is still option 3. maybe time for an upgrade… of the entire system. even with the benefit of a curated environment, that is a hefty cost. everything gets replaced or nothing gets upgraded. which, full circle, only matters when an upgrade matters. will it?


> For every game I've tested (not many, admittedly) I can turn the graphics settings up to 11 and they run without dropping frames at the native resolution and refresh rate of the display (3K @ 60Hz)

You tested retro and mobile games, or you can't tell 30 from 60 fps. Just one example: an almost 10-year-old game at medium details running at 30 fps: https://www.youtube.com/watch?v=WGTCSKMz-ec. An even older Tomb Raider, with lower quality and resolution, drops to 15 fps: https://www.youtube.com/watch?v=ZMPri2zgfak. And an M1 Max at 4K high settings gets 30 fps: https://www.youtube.com/watch?v=wZVOo9WTIsU


What games are you running and how old are they? The base M series GPUs are decent for integrated cards but they aren't powerhouses. For a modern title like Resident Evil Village they just aren't equipped to handle 2560x1600 at 60FPS locked on Ultra settings.

The M2 Ultra costs $4000+. For that money a 4090 or 7900 XTX is very much in the cards and will offer considerably better GPU performance.


It cannot be news that when it comes to user experience vs. flexibility, Apple chooses UX. They do this every single time.

Remember, every (GPU) configuration has to be supported so as not to hurt the user experience, kind of like making webpages that support everything from 1x1 pixel screens (are there any?) to 8K.

Every configuration option increases the test matrix exponentially. This is what makes Linux great (for some)!
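To make the combinatorics concrete (toy numbers of my own, not anyone's real test plan):

    import Foundation

    // Ten independent hardware/driver options with four choices each is already
    // over a million distinct configurations to (in principle) validate.
    let configurations = Int(pow(4.0, 10.0))
    print(configurations)   // 1048576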


You're telling me they can't restrict which GPUs are allowed to run on their hardware and OS?

They can test and bless a handful. Unless I'm missing something, it's not an all or nothing decision


They definitely have hardware blacklists in the OS. I could not run my Dell Thunderbolt dock on my Mac, even though it used all standard components in it. It was literally a blacklisted piece of hardware and said so in the "System Report."


They can, and then people will go and buy some gpu card that is not whitelisted and they will blame Apple and give them bad press.

The "don't let people shoot themselves in the foot" mindset has worked great for them.


Also, no official Vulkan/OpenGL support. Garbage Metal instead. They really try their hardest to drive away gamedevs from their lame walled-garden platform.



A lot of people in these comments don't seem to be really engaging with what was said. The unified memory architecture is fantastic: everything is in one place, easy and fast to access, and up to the limit of how much memory and compute you can fit on one chip it's massively advantageous. There is obviously a point where you reach the limit of how much you can pack in, but the disadvantage of having to shovel data back and forth means you need a really powerful GPU before a discrete card becomes worth having.

But let's assume you really want more compute. Apple first has to design a standalone GPU, then it has to do a PCIe or CXL core or some custom interconnect, then it has to redesign its OS for non-uniform memory, doing all that fun stuff hiding the memory communication, then it has to add the primitives to its languages for you to manage memory (because to get performance you have to actively design for it), and then you need to get all your performance-critical applications to rewrite their stacks with this in mind.

It's a massive lift! And for what? Only desktops can take advantage of it, and in Apple's product line only the Mac Pro. And that's assuming you execute well and the device ends up more capable than just using the SoC. And all of this comes on top of the massive lift Apple literally just finished: moving all their products onto Apple silicon. It's just a crazy ask for a v1 of a product.


> then it has to redesign its OS for non-uniform memory, doing all that fun stuff hiding the memory communication, then it has to add the primitives to its languages for you to manage memory (because to get performance you have to actively design for it), and then you need to get all your performance-critical applications to rewrite their stacks with this in mind.

Yeah, it's not like their OS and languages already support this model, given that everything before the switch to Arm worked this way...
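For what it's worth, here's a minimal sketch (my own illustration, with an arbitrary buffer size) of the primitives Metal has carried since the Intel/discrete-GPU era. On unified memory a buffer is one allocation visible to both CPU and GPU; with a discrete card, "managed" storage keeps two copies that you have to synchronize explicitly:

    import Foundation
    import Metal

    guard let device = MTLCreateSystemDefaultDevice() else { fatalError("no Metal device") }
    let length = 4 * 1024 * 1024   // arbitrary 4 MB buffer, just for illustration

    if device.hasUnifiedMemory {
        // Apple silicon: CPU writes are immediately visible to the GPU, no copies.
        let buffer = device.makeBuffer(length: length, options: .storageModeShared)!
        memset(buffer.contents(), 0, length)
    } else {
        // Intel Mac + discrete GPU: flag CPU-side changes so Metal copies them
        // across PCIe into VRAM before the GPU reads the buffer.
        let buffer = device.makeBuffer(length: length, options: .storageModeManaged)!
        memset(buffer.contents(), 0, length)
        buffer.didModifyRange(0..<length)
        // (reading results back on the CPU needs a MTLBlitCommandEncoder synchronize(resource:) pass)
    }

So the API surface for non-uniform memory is still there; the question is whether Apple wants to keep carrying it forward on Apple silicon.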


Before, it was just like on Windows, with a graphics card that holds the compute-relevant data. Now they would have two GPUs with different memory bandwidths, and ideally you would want to combine them somehow instead of wasting the whole M2 Ultra GPU.


Since macOS Sonoma will continue to support discrete GPUs on Intel Macs, is there a fundamental technical reason why it would be impossible to support them on Apple Silicon Macs?


> Apple has to first design a standalone GPU

Why would they ever need that? They just need to provide the usual PCIe interface which the existing cards support.


The Mac Pro has a lot of significance in the professional media industry. Final Cut changed the landscape forever: a $3500 entry-level Mac Pro could compete with a $70k Avid station after a $1000 software purchase and an Nvidia card upgrade. Now, as a college student, I could go work for myself because I could (barely) afford the tools. I still work on a Mac to this day.

But that $3500, entry-level, upgrade-into-your-career machine isn't the goal of whatever the heck these are. Aside from maybe a Hollywood editor who wants to use a Mac and gets their way, I have no idea who these are for. I think the goal is to just look really cool in a fancy office.


I've noticed dedicated Mac rendering stations have been getting rarer and rarer in favour of Windows or Linux builds. Small businesses are buying multi-core CPU systems and then upgrading the RAM and GPU every four or five years now instead of buying an entirely new Mac Pro every four to five years. A print shop I went to recently had a Windows machine running 32GB of RAM and a 3060, whereas previously that would've been entirely the domain of a Mac.


> Aside from maybe like a Hollywood editor […], I have no idea who these are for.

That’s exactly who these are for


Early personal computers used a basically passive backplane (eg. S-100 bus). After that, the Apple ][, IBM PC, and later Macintosh models used a motherboard with expansion cards.

Throughout this era (from, say, 1974 through 2022), the elements that were composed to create a personal computer were MSI, LSI, and VLSI integrated circuits mounted onto a PCB, and wired up with traces on the board(s) and expansion card slots.

The M1-based Macs introduced a new era in personal computer architecture: the SoC, and particularly the chiplet-based SoC, previously used for phones and embedded devices, took over from the motherboard.

The elements composed to make a PC now are chiplets, wired up with silicon interposers, and encapsulated into an SMT chip. The benefits in speed, bandwidth, and power usage of a SoC over VLSI-on-a-PCB are enormous, and reflected in the performance of the M1-based Macs.

Where do expansion cards fit with an SoC-based model? They're slow, narrow bandwidth, and power-hungry devices. A GPU expansion card on an SoC-based computer might as well be accessed over an Ethernet.
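To put rough numbers on that (approximate public specs, my own back-of-the-envelope): PCIe 4.0 runs at 16 GT/s per lane with 128b/130b encoding, so an x16 slot moves roughly 31.5 GB/s in each direction, while Apple quotes 800 GB/s of unified memory bandwidth for the M2 Ultra.

    // Back-of-the-envelope: how far an expansion-slot GPU sits from the SoC's own memory.
    let pcie4x16GBps = 16.0 * 16 * (128.0 / 130.0) / 8   // ~31.5 GB/s per direction
    let m2UltraMemGBps = 800.0                            // Apple's stated figure
    print(m2UltraMemGBps / pcie4x16GBps)                  // ~25x gap

And that's just bandwidth; latency across the slot is the bigger problem for the kind of fine-grained CPU/GPU sharing the unified model allows.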

Of course, it's disruptive. People legitimately _like_ the ability to compose their own special combination of functions in their PC, and it allows a level of _end-user_ customization that isn't (currently?) possible with SoCs.

But retaining that level of customization means significant performance costs, and the gap is only going to grow as further "3D" chiplet assembly techniques become common.

The cost of creating a SoC from chiplets is sufficiently high that a market of one (ie. end-user customization) isn't possible. Right now, we get base, Pro, Max, and Ultra variants from Apple. It's possible we'll get more in future, but ... it's fundamentally a mass-market technology.

The era of end-user hardware customization is very likely drawing to a close.


That the SoC has the same PCIe 4.0 x16 bus as a typical PC doesn't seem to be an inherent obstacle to utilizing more powerful GPUs in itself. After all, we stick more powerful GPUs on typical PCs and they outclass the M2 Ultra in many workloads just fine, slow bus be damned. Being closely integrated with the rest of the SoC definitely has its benefits, but it often isn't all that important unless your workload specifically needs to share across CPU+RAM+GPU+VRAM nearly constantly. Which for laptop stuff or media editing, sure: it fits that use case perfectly, and I see why they'd never want to start copying raw video files over a bus to the GPU just to run them through the encoder and copy them back to main memory.

I think the more interesting question is: what additional value would add-on GPU support realistically provide for this type of product? Enable you to play a limited subset of games on a ~$10,000 Mac workstation which will run them no better than a standard PC that can play anything? Enable you to train AI models faster or kerchunk GPGPU faster while connected to a really expensive SoC whose only other purpose is to provide more peripheral I/O? All while you use an OS that's just extra inconvenient for the task vs. the Linux environments everyone is already using?

A Mac workstation is already a niche, and if they already have something that targets the main use case of that niche (media production) perfectly, then putting that much work into designing new products for additional sub-niches of their most niche product just isn't worth it. Particularly when the only reasonably priced way would be to get rid of the integrated solution that serves the main use case better. Not that the technology couldn't be made to do it as well as any other system could; it just doesn't make sense to go after since it's so far from their markets. Really, the M series is already an expansion of "well, our phone chip is fast..." as is. It has a hard enough time adapting to being a useful laptop (e.g. display outputs), let alone replacing every type of system.


Have you never in your life used a dedicated gpu? If not, maybe that's why you imagine that using one leads to "performance costs," but the rest of us would like to get a proper frame rate in a graphics program like we can get on PC


I've used many dedicated GPUs on an almost daily basis for over 20 years. Mostly for graphics, but sometimes for GPGPU, crypto mining, and ML training.

My point is: Apple doesn't want to support external GPUs, because they've moved to a new architecture with the components of the PC integrated on the SoC, not across an off-chip PCIe bus. They've done that because the on-chip GPU has performance advantages on multiple dimensions.

I've no doubt that Apple would love to retain those customers who want 1.5TB of RAM, and 6 GPUs churning away at whatever task, but ... it's not worth building a SoC to support that niche, and it's not worth the performance compromise that an off-chip solution would imply.

The solution they've implemented in the Mac Pro, using PCIe switches to multiplex a bunch of slots across a handful of lanes, is treating PCIe as a secondary, low-performance I/O bus for things that don't need low-latency/high-bandwidth access to core CPU, GPU, and RAM. Which is fine for some stuff, but not really what you need for a GPU.

I'm personally no fan of losing the expansion card model, but the tradeoff is worth it for most of their customers.


I suspect Apple would be perfectly content if they never sold another Mac Pro. I suspect most Mac Pros manufactured are for internal use. Their main products don't have a GPU, so neither does this. It already had an F-U price. Whatever customers use them for, it's an irrelevant side hustle for Apple.

I'd say they should just kill the product off to stop all the incessant whining about what a ripoff it is, but I'm sure they'd counter that there's no such thing as bad publicity.


Their main products have integrated GPUs (well, on chip), just not discrete ones. It is an important distinction, especially since they’ve been seen as a winner for portable applications.


When pressed by Gruber, Ternus says that there are PCI cards made specifically for Avid, Pro Tools, video capture cards, networking, and even some custom hardware configurations. The Mac Pro can also be loaded up with a lot of PCIe flash storage.

So yeah, a niche machine but there is a lot more to PCI than graphics cards.


You can do most of that via USB, though. And for storage you'd be better served by M.2.


Professional AV rigs can get quite hairy. This video shows how one composer’s computer setup went from a pile of boxes and cables to a single rack mounted machine. I’ve set the playback point to where he shows the difference.

https://youtu.be/xNrG2mwt4Uo?t=1995

And in this 4 minute video you can see all of his cards installed into his Mac Pro with nary a GPU in sight.

https://youtu.be/kIQINCWMd6I

And he is just a single person. Imagine a company with 7-10 or more of those machines for both audio and video production.

Desktop “workstation” computers have become relics of a bygone age. Having one or two external graphics cards is really only relevant to gamers and hobbyist ML tinkerers. Actual model training is done on the cloud. The few people/organizations that actually use PCI cards in a desktop will be well served by the new Mac Pro.


You would, but there's a finite amount of M.2 you can put on a motherboard. NVMe is just PCIe at the end of the day, so expanding storage is a real use for these slots.


> "Fundamentally, we've built our architecture around this shared memory model and that optimization, and so it's not entirely clear to me how you'd bring in another GPU and do so in a way that is optimized for our systems,"

So... why expose PCI lanes in the first place, then? "Optimization" isn't really an excuse, as long as you can saturate the PCI bus it should run smoothly. Some support would be better than nothing, especially if you're brave enough to pay $7,000 for a Mac Pro.

This whole thing is kinda baffling. Surely they haven't gotten rid of the multi-GPU address space support, they still support Intel/AMD systems after all.


What I don't get is, why the lowered RAM limit? Wouldn't being able to run a 512GB model on the GPU be a big selling point?


I think it’s a symptom of the same system design - Apple needs the RAM on the chiplet or as close as possible, because it’s shared with the GPU and wiring the GPU up to standard DIMMs all the way out on a motherboard would yield unacceptable latency.

These are essentially phone CPUs that got super super beefy - they’re limited by what they can fit in their chiplet/system-on-chip. There’s no allowance for heterogeneous hardware access like a second tier of RAM via a motherboard northbridge or something.

I think they’ll get to those higher RAM densities by M4 or M5 once they have more die space to spend on memory controllers.


For M5 maybe


I think quite a few people would be happy to be able to put an Nvidia card in one of those Mac Pro slots and use it through CUDA only, as a compute/ML accelerator.

But Apple doesn't want to allow that either because they see CUDA as a competitor to their own APIs. So they do this whole spiel about the indivisible beauty of their own shared memory architecture.


I don't get how Apple is going to block Nvidia from releasing CUDA-only drivers.


Nvidia hasn't released a Mac driver in years due to Apple's dislike for them, so they effectively already did block them.


Apple executive 1: "Let's add last gen PCIe slots that don't do anything."

Apple executive 2: "Great! What are they for?"

Apple executive 1: "They're decorations to make them look like prosumer systems with the hope of future expansion."

Apple executive 2: "Okay. Then what will we promise?"

Apple executive 1: "That's a tomorrow problem. We're selling hope here."


Apple consistently fails to fully implement hardware protocols because they only need to implement the subset that their hardware and peripherals use. This is yet another example in a very long list for the M1/M2 series. They're cut corner hardware.


They're big phones. Why implement all that legacy stuff for a phone? I imagine they get a ton out of simplifying their hardware engineering, going from supporting both the PC-style architecture (which necessitates a bunch of interop) AND the phone-style architecture (which necessitates close integration) to only doing things phone-style.


I suspect there is a more fundamental truth here that’s practically reliable.

Apple wants you to upgrade your hardware by purchasing new devices, not components, because that sells more units.

If you think about all their insulting decisions from that lens, everything else makes more sense than any other basic assumption.


What actually stops AMD/Nvidia from slotting a card into it, writing a driver, and letting people use it?


Same thing that stopped Nvidia from releasing a driver for their modern cards for Intel Macs... Apple.


their bottom line


This is the long term result of Apple not wanting any further dependency on AMD/Nvidia, but at the same time not realizing that AI was going to hit and be very, very Nvidia centric.

But for most tasks that were known to exist 5+ years ago, when these decisions were made, their on-die GPU really is plenty fast and offers a larger memory pool than nearly every GPU known to be coming at that time.


Apple has AI hardware acceleration in their devices.

Nvidia's advantages in AI come from industrial grade hardware that lives in datacenters, and Apple hasn't been competitive in datacenter hardware for decades.


You'll take what Apple gives you, or go back to your adventures in plebeian life. Who needs an industry-leading GPU for graphics or compute.

I'm curious to see how far gaming goes with their translation + GPU vs. the likes of Nvidia. Same problems Linux has with Wine: translation against Windoze DirectX, or the lack thereof.


People that want this from a computer won’t buy a Mac.

It’s like complaining that a tractor is too slow to drive down the interstate.


Maybe Apple will release PCI expansion cards itself that you can take from Pro to Pro.


If expandability is not what Apple wants, then what is the space inside Mac Pro for?


Non-GPU PCI cards. A lot are I/O for audio and other scenarios.


Allows for lower energy consumption per cubic foot. Planet saving dontcha know?


There is no excuse to not support eGPUs though


history repeats itself.

it's 1989 all over again.



