Apple unveils new Mac Studio and brings Apple Silicon to Mac Pro (apple.com)
456 points by 0xedb on June 5, 2023 | 393 comments



The best thing about this whole event is that they didn't mention AI even once; all they're saying is ML. Which is what it is. AI is a hype word.


I agree. I was really impressed that even with regard to the Autocorrect update, they used technical terms like "transformer model" without using hype words. They very clearly labeled it as a predictive text engine rather than some magical pseudo-sentient enigma or something.


Why were you impressed by that? WWDC is a developer's conference. I bet many developers would have been offended if they were told AI is some enigma.


Although WWDC is a developers' conference, the keynote tends to be a popular event for regular Apple users as well, and over the last decade the keynote announcements have become significantly more user-facing while technical details are relegated to the Platforms State of the Union (or the specific sessions throughout the rest of the week).

So I was impressed — or maybe "pleased" would be a better word. I was pleased that they chose not to hype this feature using the common buzz words of the year, and instead just stuck to technical terminology. Some other companies would not have done the same, in my opinion.


cough. Google I/O. cough


Yeah, that's what I was thinking of too.


> I bet many developers would have been offended if they were told AI is some enigma.

You think? The "AI" enthusiasm I've seen everywhere online borders on religious fanaticism...


Apple marketing uses weasel words all the time.


"logarithmic leaps" in their Welcome IBM Ad is my favorite industry example


And they didn't in this instance when I think other companies would have, so I was impressed.


> magical pseudo-sentient enigma or

Since when does artificial intelligence imply sentience? They're independent concepts, and neither is reliant on the other.


Yes, but that’s not how “normal” people take it. People believe these are thinking machines and programs, capable of some amount of reasoning. It’s coming from somewhere between decades of terrifying sci-fi and a few months of impressive model results.


Well they are programs capable of some amount of reasoning…

Basic/moderate reasoning tasks are absolutely within the capabilities of GPT-4.


Note that reasoning is _another_ thing that doesn't require sentience. Plus, GPT-4 is capable of doing simple reasoning tasks.


The other response has my thoughts right. Although the two things are separate, laypeople (and a surprising number of others too) seem too easily duped by false claims of modern language models exhibiting "emergent properties" of potential sentience. It's absurd, to be sure, but that also means it would've been easy to tap into that to generate hype among such people.


I've noticed the hype and the doomsayers implying both.


Much better than their previous marketing 15 years ago measuring hard drive capacities in "songs".


How dare they speak in a language their users might understand!


And yet somehow it's better to say transformer models and not AI?


Sir, this is a developer conference.


The rest of the conference, sure. But the keynote has always been designed for a mainstream audience.


That wouldn't make sense. "Let's have a conference strictly for developers, but let's speak to a completely different audience in the keynote."


That is actually how it goes, though. The WWDC opening keynote is the typical Apple computer user's best look at what's coming up. It's not not technical, but it's very much deliberately made to be approachable to non-developers.

Whether you agree with this choice or not, it's the way things have been for at least as long as I've been watching (which is over a decade).


Think of it by analogy: the keynote is to the conference what the back-cover summary is to a book.

Despite technically being part of a developer conference, the intro keynote has always functioned more like a consumer-focused annual hardware and software update for Apple device users.

Hence things like the ever-present impressive-sounding "marketing benchmarks" that wouldn't pass muster on slides in a room full of developers.

And also the very high level of abstraction so that non-technical users can understand the major updates — that's not exactly a trait one would use to describe a developer conference.



"AI" is a technically meaningless term. In contrast, a transformer model is a very specific technical thing.


ML is a subset of AI. AI is inclusive of other concepts, and it's not a buzzword. It's valid to call anything that is ML "AI". Sure, there is a lot of AI hype, but it's not some made-up marketing jargon.


You're technically right that AI is a superset. But, at least in a computational context, "AI" is hardly ever being used to refer to cognitive science and other AI subsets that are not directly related to ML. So ML is usually the more precise terminology. But I've pretty much given up on that one.


Large language models are definitely AI.


> large language models are definitely AI

The term “AI” is becoming over inclusive to the point of meaninglessness. Cupertino is smart enough to pick up on that. “Statistical linguistics” is the best general term for LLMs I’ve come across.


LLMs are the apex of NLP research to date, and NLP is obviously a branch of AI. You may have some sophisticated notion of AI, AGI, human-level or human-like AI or whatever, but NLP has been considered AI for generations now.


> NLP has been considered AI for generations now

I'm arguing the term AI has become "inclusive to the point of meaninglessness." That doesn't mean it was always meaningless.


IDK, I feel like people are making it so narrow that it basically cannot exist.


Expert systems used to be considered "AI". Certainly some optimization algorithms like genetic algorithms were "AI". Pretty much any system that makes a decision was considered "AI" at some point.

The problem is the general use of the term changes in a way to often make the meaning unclear to the point of being near useless. Outside of marketing, of course.


AI is a pretty well defined field I feel like. It combines symbolists, connectionists, evolutionaries, bayesians, and analogizers. Just because those not in the field misuse the term does not make it less useful.


It is useful when you use it like those knowledgeable of the field. Marketing departments are not using it that way.


I still consider expert systems to be AI. Is there some reason why I should stop?


The best example I've seen to contest that LLMs are "AI" is to make one print the total number of lines its response will be, essentially add

"First answer with the total number of lines your total message will be, including the line with this number"

For example, GPT4 said "12" for this prompt: "First answer with the total number of lines your total message will be, including the line with this number

Make a program in Cpp that sums all prime numbers from 1 to 100"

LLMs cannot "think"; they can only make sequential predictions based on their previous answers - so they cannot formulate a response and then modify that response on-the-fly
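For anyone who wants to try the same test, here's a minimal sketch using the (pre-1.0) openai Python package; the model name and the assumption that OPENAI_API_KEY is set in the environment are mine, and any chat-capable model would do:

  # Minimal sketch of the "predict your own line count" test. Assumes the
  # pre-1.0 openai Python package and an OPENAI_API_KEY in the environment;
  # the model name is an assumption.
  import openai

  prompt = (
      "First answer with the total number of lines your total message will be, "
      "including the line with this number\n\n"
      "Make a program in Cpp that sums all prime numbers from 1 to 100"
  )

  response = openai.ChatCompletion.create(
      model="gpt-4",
      messages=[{"role": "user", "content": prompt}],
  )

  text = response["choices"][0]["message"]["content"]
  claimed = text.splitlines()[0].strip()   # what the model predicted up front
  actual = len(text.splitlines())          # how many lines it actually produced
  print(f"claimed: {claimed}, actual: {actual}")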


GPT very much can, in a limited way. It responds to feedback.

It's not exactly a huge leap of imagination to suggest that it won't be long before it can create an internal feedback loop by comparing its own abstractions with its memories and live experiences of external feedback.


That's more an inherent limitation of autoregressive prediction than of LLMs in general. Maybe LLMs can be trained or fine-tuned in other ways that allow them to think before answering.


Thinking before answering would just be like writing its text output into a hidden buffer that only it includes in the context of the conversation. So it's like it's always also having a private conversation with itself. I can't prove this isn't more or less what I am.
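Roughly something like this, as a toy sketch (generate() here is a hypothetical stand-in for whatever model call you'd use, not a real API):

  # Toy sketch of a "think privately, then answer" loop. generate() is a
  # hypothetical stand-in for a chat-model call; the scratchpad stays in the
  # model's context but is never shown to the user.
  def answer_with_scratchpad(generate, question):
      # Private pass: the model reasons into a hidden buffer.
      scratchpad = generate(
          f"Question: {question}\n"
          "Think step by step in a private scratchpad. Do not address the user."
      )
      # Public pass: the scratchpad is part of the context, but only the
      # final reply gets surfaced.
      reply = generate(
          f"Question: {question}\n"
          f"(Private notes, not shown to the user): {scratchpad}\n"
          "Now write the final answer for the user."
      )
      return reply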


Try asking an engineer how long a task will take


It’s a chameleon of a term that can mean anything you want. That’s not a good thing.


See also: "Best practices."


Even better: "digital quality"


Not anymore in common usage. ML is roughly "learning from data". AI has had some historical meanings, but in the last 10 years became first a marketing term for Deep Learning (a subset of ML) and now a term for LLMs and Diffusion Models (and the like). Which are subsets of deep learning.

There still exist people who refer to AI as the general study of computerizing intelligence, just like somebody somewhere is still telling people that "begs the question" means dodging it. But the most applicable definition of AI as it's commonly used right now is as the brand under which OpenAI and friends are releasing generative neural network models.


I'm not really following what your argument for "ML is not a subset of AI in common usage" here is exactly.

AI as an umbrella term has been the usage in the whole CS field for decades upon decades.

A few marketing people trying to hijack and misuse the term over the past 5–10 years didn't just magically change the meaning that has been very well established for a long time.


> marketing term for Deep Learning

Deep learning is just neural networks, with multiple layers ("deep"), because we've only recently built hardware that can handle "deep" neural networks fast enough.

They were already defined back in the 1970s.


The general public assumes AGI when they hear AI and that’s a problem worth fighting against.


The goalpost keeps moving on what is sufficiently "general" to meet a hypothetical AGI definition. At one point the distinction was more meaningful: AIs were always highly domain-specific (eg, playing chess). Now the same transformer model can write a string-parsing JS function, concoct a recipe that uses six arbitrary ingredients, pass a biology exam, sort unstructured data, and give tax advice, but somehow that still isn't general enough to qualify.


The lines are fuzzy for sure, but the point is more about public perception instead of actual definition. The issue is really more about the public worrying about an AI being 'conscious' enough to be a threat to humanity (AKA skynet) and LLMs/Diffusion models just can't be that.


I agree there's a strong element of misplaced anthropomorphism. But the issue isn't even that LLMs are (themselves) a threat to humanity, or that we have any reason to think they are "conscious" (in a human, or even animal, sense). The reason for concern is that the emergent behavior we're seeing from a mere token-predictor should give us serious qualms about blindly pursuing the creation of alien intelligences, which are highly capable of unintended consequences precisely because they are neither human, nor conscious.


IMHO it's AGI when it can do everything humans can, and do it at least as well.

It's a pretty high standard, but I feel it's an "irrefutable" one - if you can find an information-processing task that humans can do but the AI can't, or does poorly compared to a human, then it has failed the AGI test.

It's also a useful one in that no one will have a problem with AGI taking over a task if its capabilities match or exceed any human's.

Otherwise, where should the goalpost be? Your claim that LLMs have reached AGI is about as valid as someone claiming ELIZA is AGI - in both cases the standards are completely arbitrary.


I think human-level intelligence is a useful metric in its own right, but it's not as though humans necessarily hit some arbitrary "general" threshold either (none of us can do tasks that are trivial for computers, like remembering one million digits with perfect fidelity or calculating the millionth digit of pi).

I think any goalposts are only useful for answering specific questions:

- What practical problems can open-ended intelligence-mimicry solve?

- Will solving those problems potentially put humans out of work (and if so is that good, and either way what should be done about that, if anything?)

- Might this technology (or its perception) kick off a military arms race?

- Can these alien intelligences become clever enough to pursue goals in contradiction of human well-being, or even the intent of their creators?

The intrinsic "isness" of intelligence categories is about as interesting as "whether a submarine can swim".


So is AGI AI? Why use such a broad term that puts AGI, a term defined by science fiction, with Transformer, a practical next token predictor based on gradient descent and attention mechanism, in the same basket?


AI as a term is poorly-defined, and conjures associations that range from sorta-accurate to totally fictional. ML is a specific set of concrete technologies, so it's a better and more precise term in most contexts


AI is a general term with very fuzzy meaning, much like intelligence. This is on purpose as we don't understand these things well and their definitions/usage reflect that.


Right, but we (humanity) understand the purpose-made ML that Apple talked about today just fine, which is why it's better not to use a fuzzy term


Totally. Different terms for different use cases. They wanted to be more specific here and ML suited that better. They could have been even more specific (which would have been cool) but it would have overshot their target audience's context.


AI has been used in a lot of ways. From a philosophy standpoint, I'd like to insist it be used only when a meaningful definition of intelligence is applicable, and that other usages be considered incorrect going forward.


Why would they mention their weakness? Have you heard of Siri lately? What do they have to offer in the AI space? Nothing. At best they can contribute to ML through hardware. With that said, ML != AI. ML is a means to AI. Google and Facebook can talk about AI. They have Bard and Llama to show off. I'm sure Apple is furiously working behind the scenes trying to catch up, and if they do, they will be sure to be noisy about it.


Games have talked about "enemy AI" for 25+ years. Hell, Halo 1 was considered revolutionary for its advanced enemy AI. Was that a hype word? Was that a misnomer?


Different fields use the same term to mean unrelated things. "AI" in gaming just means "non-player-controlled entity behavior" and has meant that all the way back to the first chess computer games (when they did think what they were doing was "AI").

"Theory" in law, versus in science is another example.


Scripted enemies are "non-player-controlled entity behavior", yet they are not AI. For NPC behaviours to be considered game AI, there needs to be some calculation depending on the current state of the game and objectives of the character. What makes a game behaviour "AI" is its feedback with the player actions and/or events evolving in the game.

Super Mario mushrooms and turtles are not AI controlled; Pac-Man ghosts are. (Possibly the earliest and simplest form of game AI, but quite effective for its purpose).
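The difference is easy to show in code. A toy sketch (the classes and game-state fields here are made up for illustration, not from any real engine):

  # Scripted vs. state-reactive enemy behaviour, roughly. Everything here is
  # illustrative, not from an actual game.
  class ScriptedEnemy:
      """Plays back a fixed patrol path and ignores the player entirely."""
      def __init__(self, waypoints):
          self.waypoints, self.i = waypoints, 0
      def update(self, game_state):
          self.i = (self.i + 1) % len(self.waypoints)
          return self.waypoints[self.i]      # same output every run

  class ChaserGhost:
      """Pac-Man-style: the next move depends on where the player is right now."""
      def update(self, game_state):
          px, py = game_state.player_pos
          gx, gy = game_state.ghost_pos
          dx = (px > gx) - (px < gx)         # step toward the player on each axis
          dy = (py > gy) - (py < gy)
          return (gx + dx, gy + dy)          # decision depends on game state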


> Scripted enemies are "non-player-controlled entity behavior", yet they are not AI.

I've worked on and seen games and game engines where scripted NPC behavior was lumped under the "AI" umbrella and implemented by the same people and systems that support the non-scripted behavior.

Architecturally, it doesn't make that much sense to separate out scripted behavior from non-scripted, because both have most of the same needs: playing animations, triggering audio, interacting with physics, etc.

Scripted behavior is just an "AI" that happens to not read any inputs before deciding its outputs.


Super Mario mushrooms have an AI component.

When they hit walls, they change direction.


The enemies didn’t learn though, so it definitely wasn’t machine learning.


The AI was pre-trained before the game shipped. Kinda like how you can have a conversation with ChatGPT and it will eventually forget what you've "taught" it


Pre-programmed.


Which leads us to the question: in the biological world, are actions that stem from instinct, but that look intelligent to the observer, also intelligence?


Yes


No. In games AI is a jargon term for the behavior of an NPC. It has a long history in this use. When people talk about a game’s AI it is often clear what is being discussed regardless of the specific technology used. It is therefore useful for communicating an idea and that’s usually all that really matters.

There’s even a distinct Wikipedia article on this use: https://en.wikipedia.org/wiki/Artificial_intelligence_in_vid...


If it's written in Python, it's probably ML

If it's written in PowerPoint, it's probably AI


They also mentioned transformers a few times, without saying AI.


I don't understand the distinction. Why is it okay to claim that a machine can learn, but not that the result is an artificial form of intelligence?

Most AI systems we're interacting with today don't do any learning. They've been trained, and now they are being used to generate content or classify things, but they aren't doing 'machine learning' any more.

They are being used to do tasks that require intelligence. But they are accomplishing them via artifice.

Like a sort of artificial form of intelligence.


They only mentioned "transformer model" several times and one other type of model for generating our avatar (forgot which type that is).


Indeed, BUILD 2023 was impossible; if we had been drinking shots every time AI was mentioned, I wouldn't have survived the keynote.


Yes, and it's shameful when people describe a wrong or bogus answer as "hallucination". They are just stimulating unknowledgeable people's fears and/or fanboiism.


Hallucination is a far better word than the recent verbiage I've seen around LLMs lying to you.


BS and travesty generation are well attested.


"Hallucination" comes from vision models, I think. "Confabulation" would be a better term for text.


They need to leave some hype for the next year as well.


Yet all of their other marketing is convoluted nonsense. It just shows them being more tactical with marketing to knock some competition down a notch.


This is a very specific type of virtue signaling that I find very funny


$7k starting price. High, but compared to what? Just glancing at the new HP Z6 G5, which may be a fair comparison: with a 16-core CPU, 8x16GB of memory (the lowest configuration that populates all 8 channels of that CPU), minimal storage, and a parts-bin GPU that nobody wants, it's $6k. To get 8 Thunderbolt ports like the Mac Pro you'd have to fill each and every one of its add-in card slots with an HP dual TB4 card.

Edit: The HP 340L1AA TBT4 card is only compatible with one expansion slot in that machine, so what I suggested is not even possible. Perhaps the Mac Pro is the only workstation you can get with 8 Thunderbolt 4 ports.


If you actually want 8 Thunderbolt 4 ports that probably puts you towards that price range anyways because nobody has really seen the demand to make a quad Thunderbolt 4 card yet (as far as I know) so you're eating up a lot of slots which were probably designed to have way more than PCIe 3.0 x8 plugged into them. Same with the memory bandwidth, if you actually need 800 GB/s of RAM bandwidth then there isn't really a traditional option to compete.

If you compare with what most people actually need out of a workstation instead of what this can do as a workstation you run into a lot of opposites though, and just as easily. 192 GB as a maximum cap is honestly pretty low for a workstation these days, as is a max CPU configuration of 2x10+2.

Overall I don't think it's horrendously priced as some of the previous Mac workstation components could get, but at the same time, unless you have a very specific use case or specifically need macOS, it's not exactly compelling. It is "good enough" to finally round out the lineup though.


> Overall I don't think it's horrendously priced as some of the previous Mac workstation components could get, but at the same time, unless you have a very specific use case or specifically need macOS, it's not exactly compelling. It is "good enough" to finally round out the lineup though.

It's such a specific use case that I'm not entirely sure what the use case even is. Capturing off an SDI camera? Great! Why do we need so many PCIe cards and so little memory? These things aren't even set up to hold that much storage. It appears not to work with PCIe GPUs, so that's out. You probably don't need additional Thunderbolt ports since it already has those. Maybe additional USB, but probably not that many cards' worth? Most audio equipment is external?

I get it. Apple is saying "This machine is for a very specific type of video editor" lol.


> It's such a specific use case that I'm not entirely sure what the use case even is.

Live TV production, I think. Mostly in the rackmount form-factor. A plethora of "IO breakout boards" is what turns a regular computer into a "video production system" head-unit.

Though also, at least three of the PCI-e cards shown on the slide were for fibre-optic networking. So, presumably, this would be the Mac to get if you're trying to Beowulf the M2-Ultras together for some kind of NUMA-friendly ML model training. Or just for a render farm. Insofar as Apple dogfoods things, I would guess this is what they use them for themselves.


I liked how one of the cards on the screen was a sound card, as if a PCIe slot wasn't 1000x overkill for that amount of I/O. We had USB ports that could handle that when Bill Clinton was still president of the United States.


There was also what I believe to be an SDR card (the one with all the antenna-inputs) — which is pretty interesting in its implications, but perhaps not the brightest one in this context. Aren't RF antennas also lightning rods? :)

There was also something there that had DB9 and DB25 connectors, but both female. (I would think this was a weird SuperIO card, but the computer side of a serial port is usually male.) There was also a lot of stuff on that card. Anyone know what that one was?


That's the sound card. It's a Lynx E44. It has two giant breakout cables.


Plus for audio tasks that USB isn't good enough for, there is Thunderbolt (Avid will do 256 channels via TB3) and Ethernet (256+ channels via AVB possible).


I thought that too, but the existence of highly affordable SDI-to-Thunderbolt adapters means the Mac Studio should still suffice unless you need a very large number of inputs... and I don't know how many multi-SDI PCIe cards there are.


AJA makes a couple of Thunderbolt 3 external SDI capture/output devices with 4x bidirectional SDI I/Os. If you want more than that you need multiple devices or a PCIe card. AJA and Blackmagic both make 8x SDI cards, as do a couple other vendors.


I don't even see the use case for SDI capture. You can buy SDI-to-Thunderbolt converters starting at $150.


SDI->Thunderbolt “converters” are capture cards, they just use TB as the physical connector. Also, not all capture cards have the same feature set. Some of these cards have SDI capture and/or output, sometimes simultaneously and some with up to 8 SDI IO connectors. Some cards also do HDMI capture and/or output of the same or different feeds as the SDI signal(s).

The landscape of capture cards (not just SDI) is pretty diverse. It all depends on the kind of workflow you’re dealing with. Live productions in particular will often demand many connections, to be able to capture multiple camera angles, as well as to output to confidence monitors and/or backup recording devices, frame syncs, switchers, SDI routers and more.


Thunderbolt falls short at 4k resolutions. It is OK for 4k24 through 4k30, but for 4k60 or higher (HDR, high frame rates and not to mention 8k) you would probably want an internal PCIe card. 40gbit thunderbolt sounds like it should be fast enough, but it turns out that for PCIe traffic it tops out at 32gbit, which just isn't quite enough.


I am probably the minimal target market for the mac pro m2 ultra. I am an artist and I do a lot of 3D rendering. I think it's a great price and I would love to own one, but I wouldn't even consider it unless it had support for Nvidia GPUs. Good GPU-based 3D rendering engines need CUDA. Even with the ones that don't, GPU rendering on a 4090 is 4x-5x more performant than on an M2 Max, and building my own PC allows me to have multiple of them. Also, Octane, the rendering engine in their demo, is trash. Specifically, it's fine for fancy titles and cartoons but terrible for realistic renderings.

Also, I still have a chip on my shoulder about Apple failing to update Mac Pros for about a decade and then rubbing salt in the wound with their pathetic trash can. It would take A LOT to get me back after that BS. Moving to Windows was a horrible experience and they gave me no choice.

Lastly, VFX software is heartily embracing Linux these days and I'm loving it, but I did have to invest in a KVM switch system and 10Gbe network so I can comfortably run Photoshop and Substance on a separate Windows machine.


> I am probably the minimal target market for the mac pro m2 ultra. I am an artist and I do a lot of 3D rendering. I think it's a great price and I would love to own one, but I wouldn't even consider it unless it had support for Nvidia GPUs.

Please don't get me wrong, but when I said that Apple seemed to have narrowed their target audience to the point where I was confused who the target audience even was - I was taking stuff like your use case into account when I said that!

For my own personal use cases? Ehh. I don't need a $7000 Mac Pro. A $2000 Mac studio would suffice, but it doesn't do the one thing I wish any of these machines did - and that's accept a bunch of m.2 NVME SSD cards!

Honestly I always thought a "more pro" laptop would be one where you could open a door on the bottom and have a couple of m.2 slots.

Anyway, the last Mac Pro that I really would've fit in the target audience for was the original. Those were great. I have a trash can because I always loved that design and got an incredible deal on one. I'll probably pick up one of the last intel ones if/when I find a killer deal on one.

I ended up just switching to building my own PC desktops and using the Mac Pro for things that I prefer using a Mac for.


No idea if the operating system will support nvidia graphics cards on arm.

However, even if it does, the mac pro only has one 8 pin power connector and two 6 pin connectors.

The 4090 needs a 16 pin input, and I've only ever seen adapters that convert 2x8pin to 16. Maybe you could get one card running, but any more than that and you will need some kind of hacky external power supply setup.


Seriously - they do not even talk about GPU in the PCIe slot - then again they do not expressly rule it out either.


I think it's ruled out by the fact that Nvidia would need to release macOS drivers, and there is no realistic chance of them doing so without Apple's support.


At this point, for Mac Pros which have expansion slots, why wouldn't Apple do that, for both Nvidia and AMD cards?


Why did they discontinue support for eGPUs? They have some legacy beef with Nvidia and now they are increasingly competitors.


FWIW Ampere Altra dev kit is $4k for 128 cores and supports 768GB of RAM. Bring your own memory, storage, PSU, GPU

https://www.ipi.wiki/products/com-hpc-ampere-altra?variant=4...


Curious if you can feed all those Ampere CPUs or if it is memory bandwidth constrained.

Disclaimer: this is not a "zomg apple grate; others must suck" comment. Apple claims that their integrated design balances things out to get the best performance. It will be interesting once there are some real benchmarks to see how well that claim continues to stack up.

The M transition has been amazing, but not every iteration can be a winner.


It has 8 x DDR4 3200, so about 204GB/s of memory bandwidth, 1/4th of the M2 Ultra.


Unless something's dramatically changed with the M2, Apple's chips can't leverage their full memory bandwidth in CPU-only workloads, so comparing their numbers to those of a discrete CPU isn't especially meaningful. When Anandtech tested the "400GB/sec" M1 Max, the most throughput they could coax out of it via the CPU was 224GB/sec.

https://www.anandtech.com/show/17024/apple-m1-max-performanc...

The huge bandwidth numbers on the M-series are mainly for the benefit of the GPU, less so the CPU.


Sure, but that's a real number, not peak. The top Ryzen (7950X) or top Intel (i9-13K) manages only 83GB/sec peak, and somewhere around 60GB/sec real. Generally the ARM memory model is looser than x86-64's, which allows a greater fraction of peak bandwidth. So Apple chips that fit in thin laptops have 4x the memory bandwidth of the 150-watt-and-up high-end desktops from Intel and AMD.

Sure the AMD Epyc (12 channel) or Intel Xeon (8 channel) compete, but at even higher power ratings and large physical sizes.

Not to mention that 224GB/sec leaves a fair bit of bandwidth for compression, matrix multiply, ML acceleration, GPU, video encode/decode, and related.


Which means that the M1's (previous gen) REAL numbers beat the maximal theoretical numbers of Ampere.


With the M2 transition, I feel we're back in the days of the PowerMac G5.

It was a beast on its own, but the main issue for comparison was that you couldn't just do the same things. If you're working on video editing or Photoshop, perhaps, but if you're developing games, for instance, you'll want an x86 computer, and however good the M2 specs are on benchmarks, you can't compare real-world usage anymore.

For ML it seems there are efforts to be ARM-compatible; then VR is a weird bit where the current ecosystem is x86-only, but the introduction of the Apple headset could probably help?

For scientific research, I was under the impression that Nvidia/AMD GPUs were a given, and Metal had very little support.

What would be the other serious uses of a high-power machine that could help make comparisons with other actual workstations?


High compared to the Mac Studio, I'd say. $3,000 extra for the exact same specs and an extra 6 PCI slots. I guess if you need them, that's the cost of entry, but $500/slot seems like a tough sell.


Agreed, and the small max memory is disappointing. However, maybe the thermals of the Pro give it a significant performance edge over the Studio?


No, because the Mac Studio doesn't throttle to begin with. (Hilariously, LTT water-cooled one and it made no difference.)


M1 Ultra Mac Studio almost never ran the fans above 1300rpm, I doubt the M2 version is thermally constrained either.


The HP is usually at least 30% off and has ECC memory - so depending on your use case it may still make sense.


I think the lust for ML performance has made people a lot more likely to arbitrage these costs by building their own machines so the usual like for like comparison doesn't necessarily hold the same way it does for Apple Silicon laptops.


How many simultaneous 4K input video streams can that HP handle (input and encoding)?


I haven't the slightest idea. I assume people with such requirements know how to specify and buy machines. That said, these guys who specifically target the video production market sell machines with the latest Xeons and e.g. an RTX 4080 (which I suspect is the more relevant part). https://www.pugetsystems.com/workstations/xeon/w790-e/


You could probably configure it for more with those Alveo MA35D cards, bifurcation, and risers if you really cared and you're after volume of AV1 instead of higher-quality ProRes. It'd also be much more expensive at that point and look like a Frankenstein machine. You'd need a proper rack-mount server.

But this is an extremely niche use case anyways


This comparison comes with a major caveat about the platform you're buying. Using workstation-grade hardware as the contrast is not quite fair IMO, unless you really want 100s of GB of RAM for the CPU alone.

You can easily go with faster cores and something like 128 GB of RAM on the latest Ryzen platform, buy a consumer-tier GPU, and save much more. However, it might make more sense if you just want to buy in bulk and not worry about all that. Then you might as well throw in the $1k monitor stand in your shopping cart.


Genuinely curious: folks with desktops or workstations with 1TB or more of RAM, what do you use it for?


I'd been wondering how they were going to handle expandable memory with the M chip, since the integrated memory seemed pretty central to the design - seems like the answer is, "they're not." I'll be interested to see if PCI expansion is sufficient to satisfy the Mac Pro market.


Mac Pro = Mac Studio + Expansion Cards

I have to imagine this will be a huge disappointment to some, because 192GB of shared memory is way less than the 1.5TB of RAM available on the "old" Mac Pro.


Perhaps someone will come out with a new PCIe card with a load of RAM slots on it, and then write a kernel driver to map the PCIe card pages to appear as regular pages.


Someone already has :) It's called CXL. Both Intel and AMD are diving head first into it for servers. Apple would just need to support it...


That's an interesting idea. But do you actually need to go all the way to making the extra memory appear as a contiguous part of the system memory? I am thinking about CUDA unified memory and perhaps some parallels to your idea.

The number of applications that are likely to use the extra memory is probably pretty small. So if you have some sort of framework that those developers can integrate into their software, you've probably done everything you need to do.


You could use them as some kind of swap or RAM disk, but I don't believe using them as normal RAM would be possible, due to how CPUs work.


Everything seems slightly parallel-universe in Apple world, looking at it from over here in x86/linux land.

I can see why they’d decide to go in-package with their memory, it is really very fast. And 192GB of memory is not a huge amount of memory in server/HPC land, but it is still a decent chunk of space. You could load up a Mac Pro with a bunch of PCIe nvme drives or something, I wonder if it would really be that hard to adapt to that.

I certainly wouldn’t turn down the chance to try, haha.


NVMe wouldn't give you the best latency, but a 16x PCIe card loaded with DRAM and addressable as a scratch disk doesn't sound bananas. I wonder why Apple didn't market something like that as a first party solution.


A RAM disk formatted as swap? Probably would work. Funny concept.

Let’s lie to OS and tell it that this memory is actually a disk. Then, it will lie to the programs, and tell them that that disk is actually memory.


I expect that expansion cards from Apple are going to be the way they differentiate the Mac Pro line. I'm betting that Apple is going to come out with discrete GPUs based on their integrated graphics and sell them as add-ons for the Pros for ML/AI workloads, and likely to help with normal stuff like rendering in Autodesk/Adobe/Final Cut type apps.

That said, it isn't the bet I'd make if I was tasked with purchasing for a company. I'd get the Studios in a reasonably high config and then sell them every 2-3 years and replace them with the newer model. You'd pass the Pros in perf pretty readily from CPU bumps alone and likely recoup a lot of your investment.


I've got a PCIe RAID card loaded with NVMe SSDs on my 2019 MP. It's really great. Unfortunately, the latest macOS versions are not amenable to using other drives as a boot drive (IIRC may be related to secure enclave stuff?). Lots of (third party) macOS apps don't play real well with additional storage devices either, and the OS itself has some (relatively minor) hurdles to deal with, too. For project/data/"working directory"/general filesystem usage, it's great.

That said, up to now it's probably provided me with ~ fastest possible drive speeds available for macOS. For business/dev purposes, I really can't complain (except about price & lack of modern Nvidia support - but alas), even though I'm already using M1 Max these days - the single-core speed is just too good.


The approach they took is certainly enough to satisfy the video and related markets. It won't help those who need truly staggering amounts of RAM for their workloads. It may not be a big use case for the Intel Mac Pro, but it is a use case nonetheless.

For video editing, color grading, audio editing, 3D animation, etc. this new machine seems really strong. I am not sure if there is anything beyond that, however.


I think that was deliberate. I am kinda amused at how they keep on narrowing the scope of what the mac pro is intended to be used for lol.


They probably have very good data on how their machines are used, and are optimizing for the fat part of the market.

This may not be a great machine for training models, which is what I happen to care about (I couldn't care less about video). I wonder how big the model-generation market actually is, though.


I do wonder in these conversations how many people are actual users of these things - I'm not, I got a Studio last year to manage my photography addiction better, and the Pro is pointless for that - but you'd assume that Apple have talked to major Pro users, and they've said things like "oh, no, for training a model I just spin up terabyte instances on AWS, why would I do that on a workstation?", or whatever.


I assume the audience is the Disneys/Pixars of the world, and the people making ads, etc in 8K. They presumably need that I/O. For them the absurdly expensive monitor is a rounding error, and the extra local disk is worth it over running over a SAN. Perhaps also some computationally adjacent applications like CFD or weather prediction.

I struggle to think of anyone else with such an application. There are applications that could benefit from the features but it wouldn’t be cost effective (e.g. something more systolic where it’s better to rent cloud service, even if it were nominally slower, because once you’re done you don’t have the hardware lying around).


>3D animation

Disagree. All the good GPU-based rendering engines need CUDA, and none of them are optimized for Apple silicon. Octane (the one in the demo) is trash, only good for fancy titles and that sort of thing.


Octane, Redshift, and eevee/Cycles all work on Metal, and Renderman and Mantra are CPU-only for now -- RM/Mantra have beta-versions that are GPU enabled, and Karma is nvidia-only and RM XPU doesn't work on macs at all yet, but I don't think many people are using either in production very much yet. I think the only GPU renderer people are using in production pipelines that's really locked to NVIDIA chips is Arnold, right?


I can't speak for cycles, but Redshift is very slow on OpenCL. I've never heard of using it on Metal. I know it was written for CUDA and didn't even support OpenCL for a long time. Do you know how it runs on Metal?

I don't think V-Ray GPU will run on anything but Nvidia, and if it will then it's definitely slow.


Ah yeah forgot about V-Ray. I haven't tried Redshift on Metal, nope; all my work has been Renderman and Cycles recently. Curious to hear how well it works, people online seem to say it's pretty fast but hard to tell marketing copy from actual performance when it comes to renderers.

FWIW Cycles is definitely slower on my M2 than it was on my 3080 but it's not a huge difference -- maybe 20% slower? I still have to let the render run overnight either way haha.


Interesting. Which M2 machine are you using?


Mac Mini with M2 Pro


192GB ram does seem like enough for a large fraction of the potential market, especially @ 800GB/sec which makes it much more usable than similar amounts of ram on Intel/AMD desktops at 1/8th the bandwidth.


> much more usable than similar amounts of ram

People keep repeating this but how does higher bandwidth (probably not 8x higher though) compensate for a lower amount of RAM?

It's not quite as silly as the people saying that 8GB in the base config 'feels' much faster than 8GB on a PC cause the drive/swap are "so fast" but still..


The DDR5 bandwidth is ~51.2 GB/s per "channel".

With consumer desktop CPUs having 2 "channels" (~102.4 GB/s).

And prosumer desktop/workstation CPUs (e.g. Threadripper) having 4 "channels" (~204.8 GB/s).

While Apple is claiming 819.2 GB/s.

That is, _max throughput is 4x/8x more_ (depending on whether you compare it to prosumer (fair comparison) or consumer (unfair comparison for a $7k system) hardware).

Now the _max_ part is important: Apple mainly reaches this by having more channels, i.e. parallelism.

Mainly (oversimplified), the per-"CPU memory channel" bandwidth for x86 is 64 bit (for 51.2 GB/s), so the M1 Ultra is roughly comparable to having 16 "CPU memory channels" instead of 8 (prosumer) or 4 (consumer).

Some applications can take advantage of this nicely and will scale potentially even to 4x/8x the speed; most probably will not, and some might even see negligible improvements. But applications which use a lot of RAM (as much as they can get), where most of the RAM they use is "warm" (i.e. it doesn't just lie around with little access, but isn't super "hot", i.e. highly contended, either), will profit quite a lot.

On the other hand, applications which use little RAM but hit the same small RAM region very heavily will hardly profit at all and will likely run faster on overclocked RAM on consumer systems.

Luckily for Apple, most of the typical use cases they sell their pro desktop models for belong mostly to the first category.

Additionally, if you can run Linux on this system it might become _very_ interesting for some scientific applications for some users. I mean, even e.g. Zen 4 EPYC CPUs only have 12 "CPU memory channels", not 16, and it's much easier to put a desktop box "somewhere" than a server unit.

Side note: I say "CPU channel" in quotes because while it tends to be the marketing term, things are more complicated in practice. E.g. DDR5 generally splits the 64-bit channels into 32-bit sub-channels, and just listing the channel width and throughput is still not painting the whole picture at all; e.g. latency also matters for some applications (hence why OC can make sense), etc.

EDIT: Correction: The Threadripper PRO models have 8 "CPU memory channels", so it's just 2x on a fair comparison.
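If it helps, the back-of-the-envelope arithmetic behind those numbers (DDR5-6400 on the x86 side is my assumption; actual kits vary):

  # Peak bandwidth ~= transfer rate (MT/s) x bus width (bytes) x channels.
  # DDR5-6400 for the x86 examples is an assumption; decimal GB/s throughout.
  def peak_gb_s(mt_per_s, bus_bytes, channels):
      return mt_per_s * bus_bytes * channels / 1000

  print(peak_gb_s(6400, 8, 1))    # one 64-bit DDR5 channel   ->  51.2 GB/s
  print(peak_gb_s(6400, 8, 2))    # consumer desktop, 2 ch    -> 102.4 GB/s
  print(peak_gb_s(6400, 8, 8))    # Threadripper PRO, 8 ch    -> 409.6 GB/s
  print(peak_gb_s(6400, 8, 16))   # M2 Ultra's 1024-bit bus   -> 819.2 GB/s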


AMD's Strix Halo is an APU targeted for next year that supposedly has 256-bit-wide LPDDR5X memory, on laptops I think, which should be nice.

I do hope folks other than Apple can get fast memory! The new CAMM dual-channel module will hopefully help reduce footprint, but there are plenty of soldered-down systems so it's not really required. It also surprises me there's not a GDDR-based APU, except I guess too many games need both a bunch of system RAM and video RAM, and 32GB of VRAM is expensive and draws significant power. Apple going wide is really the obvious move, & doing it on package was the best way to do it, it seems.


> not a GDDR based APU

AFAIK Apple currently doesn't have a foothold on gaming outside of phone/tablet games (where they are strong, but those games are used to LPDDR performance).

And while many of the graphics applications people do use would profit from it, I'm not sure it's that big of a benefit.

In the end a lot of users just do daily tasks on Apple laptops, and for that battery life absolutely trumps any speed benefits GDDR gives.

I guess the desktop versions could/should use GDDR, but there are two issues: 1. heat, 2. more design/variance in the CPU production supply chain.

The 2nd can drive up cost by quite a surprisingly high amount.

The 1st point I'm not super sure about. But AFAIK one big problem with things which are stacked on-die is heat, as all the heat of the CPU needs to go through whatever is stacked onto it to reach the cooler. And while there are probably all kinds of tricks to improve this, I could imagine that using GDDR, which has a higher power draw and produces more heat itself, could make this more of an issue. But that is purely speculative.

I guess we'll have to wait until around 2025+ to know whether such stacked chips are more prone to die an early (i.e. <5 years) heat death. Especially some of the Air models without heat pipes could be at risk if used in a less climate-controlled environment. Or it could all be perfectly fine. I'm looking forward to finding out.


Apple already outclasses everyone in throughput & throughput/watt especially. In case it wasn't already crystal clear, I was talking about everyone else finding ways to get their throughput up to more competitive levels, such as by supporting GDDR.


But a huge part of their throughput numbers is them having the equivalent of 16 memory channels.

AFAIK it's not that e.g. AMD couldn't do that and use on-die LPDDR5; I mean, that is a bit different, but not substantially harder than using their chiplet + X3D tech.

The problem is that for AMD it's not a good business decision. First, they need to be price competitive (a problem Apple doesn't have). Most applications outside some areas scale much more with the speed/latency of RAM than with bandwidth increases (beyond some basic level). In turn, for a lot of desktop Ryzen processor use cases which are not media processing, there is not enough value in adding many more channels. Especially if you consider that for large parts of the prosumer/server space, IT admins will be really unhappy with on-die RAM; Apple can afford to force it, AMD cannot. The reason is that while for desktop systems RAM death is rarely an issue, in the server/heavily-used-workstation spectrum RAM death is not uncommon. Even for media processing or multi-VM servers, going beyond a certain number of channels (less than 16) is unlikely to be a good financial decision. I would go as far as arguing that if the M2 Ultra weren't based on tightly gluing 2 processors together - processors that have 8 channels because they are sold with a focus on media processing - it wouldn't have anywhere close to 16 channels (but for Apple the cost of having fewer than 16 channels with their design is higher than the cost of having them, partially because they also don't sell servers they'd accidentally compete with, and similar).

Where I'm going with this is that outside of some media-processing-targeted products they don't need to "get their throughput up to a more competitive level", and getting some additional 8-channel options (e.g. for some high-end laptops) would likely be good enough in practice. And in turn we are unlikely to see more. And in turn they have no reason to try tricks like using GDDR memory.

The way Apple can push hardware vendors here is less because of a need for many more channels than because of a want.


I don't know but I could see the possibility that maybe AMD really can use more throughput for mobile chips.

To spitball some figures, the RX 7600 is 13B transistors, 165 watts, and runs off 128-bit, 288GB/s GDDR6 memory. Or take 2017's RX 580, which was 256-bit GDDR5 good for 224GB/s, at similar power.

An APU is going to be considerably lower power than either of these discrete GPUs. I think I somewhat overestimated AMD's ability to scale up their APUs to the point of being throughput limited, in most cases. The 256GB/s LPDDR5X memory they're planning should offer a nice bump over where we are.

I'm a bit surprised to see the memory bandwidth not being as constraining a factor as I had first guessed. It's also seemingly bizarre how overbuilt it makes Apple's memory setup look.


I mean, it is a constraining factor for live, on-the-fly, real-time, multi-track 8K video editing and similar ;=)

It's just that most people don't do that on a daily basis.

Maybe not an "I need 800GB/s" constraining factor, but definitely an "I don't want just 200GB/s" constraining factor AFAIK.


> The problem is that for AMD it's not a good business decision.

The PS5 and XboxX have an AMD APU (CPU+IGPU) with a wide memory interface. Seems like a fine decision. What surprises me is they haven't brought it to low/medium range desktops, until Strix Halo in 2024.


> People keep repeating this but how does higher bandwidth (probably not 8x higher though) compensate for a lower amount of RAM?

I guess it depends on your use case, but back when part of my day job was debugging performance problems with JVM-hosted applications, one of the things that was most noticeable was the degree to which latency blows up memory use - whether the latency was GC, disk, network, DB queries, whatever. It all turns into holding items in memory longer before they get processed (which in the case of older JVMs, turns into a death spiral of GC thrashing, which blows up processing times further, until your application is staggering along).

Increasing memory can alleviate this, but only up to a point - and it can make things worse, because you've now got significant overhead managing the in-flight workloads.

It's also possible that this is simply a decision driven by what Apple can produce with their M2 chips at this point, that they would have wanted to offer 384 or 640 GB as the maximum, and that everyone is making excuses for them.


Maybe it adds another step to the memory mountain? Folks who use these kinds of workstations might think of in-package RAM as just the next level of cache, if someone goes ahead and makes a card with comparatively slower memory card slots.


This is competing with servers though, and Epyc 9004 series has 460 GB/sec (and up to 6TB of ram per socket). Apple still gives you faster connection, but I feel like servers below 256GB ram are pretty rare these days.


Except that my fairly modest upper midrange desktop has ~1.2TB/sec of memory bandwidth...

Cause, I just added the GPU and system RAM bandwidth numbers together. Which is what needs to be kept in mind with much of this. Yes that is a lot of memory bandwidth and its hella useful for some subset of users, but its shared, and largely pointless for a lot of CPU bound tasks. But OTOH, may not be enough for many GPU bound ones.

It also assumes that pretty much every other CPU manufacturer on the planet is an idiot for optimizing for latency and putting in large caches to compensate (a.k.a. the desktop parts from AMD/Intel have only _two_ channels, vs the 8+ in the server/workstation parts) and for price-discriminating on the parts that have more CPU bandwidth. A.k.a. you can get AMD machines in the same ballpark (or possibly faster, depending on how fast you can get 24 channels of DDR5 to run).

So, I'm not saying which is better because it's likely workload dependent, but to claim it's a blanket insurmountable advantage is questionable. Particularly since in the price ranges we are talking about, a similar machine is probably a 64-core Threadripper plus a fat Nvidia GPU or four, and the sheer core count and raw GPU compute is probably a win in most workloads.


Sure, but what if a normal C code needs more bandwidth?

Or if a GPU code needs more than 12-16GB of memory (normal cards) or 24GB (if you get a 4090)?

What I like about the apple approach is that low end laptops/desktops get 100GB/sec. Pay another $500 get 200GB/sec. Pay another $500 get 400GB/sec. Pay another $1000 get 800GB/sec and still fits in a small desktop. On the PC side with AMD/Intel you get the same memory bandwidth for the low, medium, and high end chips. Until you upgrade to a threadripper, which is a 280 watt chip, on an expensive motherboard, usually in a rather large PC case and makes the mac studio look cheap.


> Sure, but what if a normal C code needs more bandwidth?

https://www.anandtech.com/show/17024/apple-m1-max-performanc...


Putting things in context, the 3-thread memory bandwidth measured there on the M1 Max is approximately equal to the maximum theoretical memory bandwidth available across all channels and chiplets of a current Ryzen Threadripper PRO. If the M1 Ultra doubles the achievable memory bandwidth by virtue of having twice as many CPU clusters, then an 8-thread test should still be able to match AMD's latest EPYC processors with 12-channel DDR5.

(I don't know how well AMD's current processors do with utilizing the socket's full DRAM bandwidth from a limited number of chiplets, but I wouldn't be surprised if it's a more severe limitation than what M1 Max/Ultra show with their CPU clusters. It looks like only the 12-chiplet EPYC processors actually use all the links from the IO die to the CPU chiplets.)

So the inability to use all the DRAM bandwidth from the CPU cores, while perhaps disappointing, isn't exactly a weakness for Apple's processors compared to the competition.


I believe the chiplet links are pretty generous; they handle L3 <-> chiplet traffic and chiplet <-> chiplet traffic. On the smaller configs they use 2 links per chiplet.

Things like McCalpin do not seem to show much difference between Epycs with different numbers of chiplets, although I've not personally tested the newest Genoa chips.


I just built a workstation with a 32 core Threadripper Pro, 128 GB ECC RAM, Thunderbolt 4, and an RTX 4090 for $6,500.


Likely better on any workloads that are GPU heavy and fit in 24GB of vram.

Two data points from Apple:

  a) M2 Ultra, 24 cores, 128GB RAM, 2TB storage = $5,200
  b) M2 Ultra, 24 cores, 192GB RAM, 4TB storage = $6,600
Likely 1/4th the size, 1/4th the power consumption, and 4x the ram bandwidth. Have you by chance played with any LLMs? Just saw a post that someone managed 5 tokens/sec with the llama 65B model.


I don't care about playing with LLMs, so this is simply meaningless to me.


But for many people training or running inference on large models is important and the apple approach actually makes that feasible in a desktop solution which is pretty awesome.


The integrated memory design doesn't prevent doing it over a DIMM slot. It seems more like they just didn't want to deal with DDR5 or some bespoke connector.


Also, allowing users to upgrade RAM themselves would lower Apple's margins, and their computers would remain usable for much longer, which would result in even less profit... Now Apple can release a 512 GB version in a year or two, then a 1TB one, etc.

The technical issues are totally insignificant compared to this.

Edit: having said this extra memory for Mac Pro seems cheap as f** by Apple standards. Just $800 for 64 -> 128GB. 8 -> 24GB for Mac mini/Air is $400 and you only get 48GB for $800 in a MBP.


Congrats to Apple on completing another ISA migration! It really is an incredible accomplishment to get such a massive base of customers, partners, and developers to run on a new architecture so quickly and relatively seamlessly. I remember the PPC to Intel move, which was also well done, and they've improved even on that ... with what must be many many more users. Awesome!

P.S. Hopefully this transition frees someone to make a Pro Display with a webcam!


And before that was the 68k -> PPC transition, which was even smoother: 68k and PPC code could co-exist and call each other in the same address space.

(Well there was no memory protection in those days, so everything was in one address space. Still, impressive!)


You're both forgetting the ppc32 to ppc64 migration, i.e. G4 to G5 processors.

It was a lot easier than the rest, but it still came with its own migration and compatibility issues.


I own an M1 Ultra Mac Studio that I primarily run Asahi Linux on. Prior to that, I ran a trashcan 2013 Mac Pro 6-core Xeon that I primarily ran Ubuntu Linux on. Thus, I buy Apple primarily for the hardware.

After watching today's WWDC product announcements regarding the Mac Studio and Mac Pro updates, I really don't see myself ever buying a Mac Pro in the future. While I can understand how very large studios may value the additional expandability, a massive case with ability for expensive upgrades just isn't something I would need or pay extra money for.

It looks like Apple has targeted the Mac Studio for the largest number of professionals, while reserving the Mac Pro for a niche high-end market - and in these regards, the Mac Pro is a continuation of the 2019 Mac Pro, whereas the Mac Studio is a continuation of the trashcan 2013 Mac Pro.


Would you mind expanding a bit on your experiences with running linux on mac hardware? Especially the M1, what is your daily experience like? Any pain points or gotchas?

Reason for my question is that I used to run linux on the mac as well (10 years ago), and I love the hardware. I don't think there is anything that even comes close hardware-wise. But currently I am on mac os, well, because it works basically ;) But I would be curious to know if switching over again would make sense now, without too much hassle.


Of course - I have a blog post detailing that here: https://jasoneckert.github.io/myblog/ultimate-linux-arm64-wo...


Thanks, that was a very nice read, and also great news. Maybe I will give it a try this year.


Damn, the Asahi team has done God's work


Was it hard to get Linux to run well on the trashcan? Mine is still my main machine, but there are no more MacOS upgrades for it.


It was trivially simple - boot the Mac from a USB thumb drive that has Ubuntu 20 or later installation media. No need to search for any drivers afterwards either.


You might need to install 'bcmwl-kernel-source' for the WIFI drivers, but other than that I've installed Ubuntu Server on all sorts of Intel Macs and they work perfectly.


Thanks


I have a 2013 Mac Pro running Alpine Linux and a bunch of Docker containers.

Installing any Linux distro is trivial, just boot the installer for your distro of choice off a USB stick. Hold down the Option key when the Mac turns on and it'll appear as a boot option alongside the internal drive and internet recovery.


Thanks


192GB of memory on the Mac Studio is enough to run Llama 65B in full FP16.

And at 800GB/s of bandwidth, it will do so pretty quickly. I think my M1 Pro's memory bandwidth is 200GB/s, and I was running quantized 13B Alpaca relatively quickly (I'd say usable for a personal chatbot), and I think it was swapping every now and then, causing pauses.

So having 4x the memory bandwidth should allow large models to run pretty damn fast. Maybe not H100 GPGPU speeds but enough for people to do some development on.
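In concrete terms (simple arithmetic only, ignoring activations and KV-cache overhead, which add more on top):

  # Does a 65B-parameter model's weights fit in 192GB of unified memory?
  params = 65e9
  for name, bytes_per_param in [("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
      weights_gb = params * bytes_per_param / 1e9
      fits = "fits" if weights_gb < 192 else "does not fit"
      print(f"{name}: ~{weights_gb:.0f} GB of weights ({fits} in 192 GB)")

FP16 weights alone come to roughly 130 GB, which no consumer discrete GPU can hold but the 192GB machine can, at least on paper.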


> I think it was swapping every now and then causing pauses.

What you're seeing is probably "context swapping", not swapping memory to disk. The model can't keep the entire history of its output in context at all times, so LLaMA periodically resets the context and re-prompts it with a portion of its recent output.

https://github.com/ggerganov/llama.cpp/blob/f4c55d3bd7e124b1...
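The idea, as a simplified sketch (this is an illustration of the approach, not llama.cpp's actual code; the model interface here is hypothetical):

  # When the token window fills up, keep the original prompt plus the
  # newest half of the generated text and re-evaluate from there.
  # The re-evaluation is what shows up as a periodic pause.
  def generate(model, prompt_tokens, n_ctx=2048, max_new=10_000):
      n_keep = len(prompt_tokens)
      ctx = list(prompt_tokens)
      produced = []
      while len(produced) < max_new:
          if len(ctx) >= n_ctx:
              recent = ctx[n_keep:]
              ctx = ctx[:n_keep] + recent[len(recent) // 2:]
          tok = model.sample_next(ctx)  # hypothetical model interface
          ctx.append(tok)
          produced.append(tok)
      return produced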


How many tokens per second are you getting from Alpaca?


When llama.cpp came out, I was running 13B at 100ms/token (about 10 tokens/sec) on a base model MacBook Pro 14".

Edit: apparently llama.cpp supports running on GPU, so I imagine it's gonna be a bit faster. Maybe a fun evening project for me to get going.
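If you end up trying it from Python, my understanding is that the llama-cpp-python bindings expose GPU offload through an n_gpu_layers argument (with a Metal-enabled build on Apple Silicon); the model path below is just a placeholder:

  # Hedged sketch: offload some layers to the GPU via llama-cpp-python.
  from llama_cpp import Llama

  llm = Llama(
      model_path="./models/13B/ggml-model-q4_0.bin",  # placeholder path
      n_ctx=2048,
      n_gpu_layers=32,  # how many layers to offload; 0 = CPU only
  )

  out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
  print(out["choices"][0]["text"])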


What do you think the odds are we can get H100s or equivalent in Mac Studio?


Nvidia gpus haven't been supported by Macs for a long time. Apple and Nvidia relations are not good for some reason.


https://blog.greggant.com/posts/2021/10/13/apple-vs-nvidia-w...

Nvidia had a knack for putting Apple into difficult situations.


I think Apple is afraid of the AI people switching to CUDA and neglecting Apple's own accelerators. Whatever the case is, it's entirely petty - Nvidia still ships FreeBSD drivers and would probably gladly pick up Apple support again (at least for CUDA drivers).

If it was as simple as "Nvidia was an unreliable hardware partner", they wouldn't go out of their way to arbitrarily limit the drivers you can run.


Apple’s icy relationship with Nvidia started right around when CUDA came into existence and before Metal ever existed. Nvidia created tensions between Apple and Intel and later handed Apple the headache of a class action lawsuit by knowingly shipping faulty GPUs.

I think Apple Silicon and Metal vs CUDA might have been a strategic reason for not repairing that relationship but that came years later.


  > knowingly shipping faulty GPUs.
wow, how did i miss this...

https://blog.greggant.com/posts/2021/10/13/apple-vs-nvidia-w...


> […] Nvidia still ships FreeBSD drivers and would probably gladly pick up Apple support again (at least for CUDA drivers).

Heck, Nvidia still ships Solaris drivers:

* https://www.nvidia.com/en-us/drivers/unix/

I guess now that they have the infrastructure for these more niche OSes, it's not too hard to just keep going (especially if the kernel APIs are relatively stable).


Apple doesn't arbitrarily limit the drivers you can run.

And up until now there hasn't been a Mac Nvidia could even sell graphics cards for.

Even now Mac Pro users will be such a tiny segment it's probably not worth the effort.


You can connect external graphics cards via Thunderbolt and external enclosures.


The best part to me is that this looks like a "platform" that can be updated year over year. They can just keep putting the updated M-whatever chip in it (and hopefully eventually figure out how to quadruple it vs. just having the Ultra). Ideally they can bump it up to PCIe 5 and Thunderbolt 5 "easily" too. In other words, the fact that this is so similar to the Mac Studio means it hopefully won't suffer the same fate as the previous "one-hit wonder" Mac Pros. An M3 (3nm) Mac Pro with PCIe5 and Thunderbolt 5 would be a very good machine I think.


I’ve been draggggggiiinnngggg my feet on a new desktop workstation. Waiting for the store to come back to truly make a decision but I think that I’m gonna go for a Studio. The hacker in me wants to build a beefy Linux workstation but the pragmatist in me wants a machine that just works. I think the Apple tax is worth it here.


I'm pretty certain you can buy yourself a Linux workstation that just works. ThreadRipper, ECC, NVidia proprietary drivers. Put Fedora on it. The trouble is leaving it alone and not messing with it after you get it going.


Curious: why do you suggest Fedora? I'm familiar with Ubuntu/Debian and Arch, but have never delved into Fedora/Red Hat/CentOS/Rocky.


Ubuntu would probably work fine too, as any other well supported "big" distro. The reason for Fedora is more psychological, by choosing a "boring" distro one is less likely to be tempted to futz around with the config, etc. Which in my experience is now the primary source of Linux problems (as an over-experimenting user)


lol, very true


Tons of people run Red Hat and derivatives in production for various reasons. It's also owned by IBM. You know how the old saying goes...


stable & works, at least in my experience with rhel


One option is to use macOS as a Linux VM launcher. You won't lose much computing power, you'll have a perfect driver situation, you'll have x86 compatibility with Rosetta, and you'll have VM versatility with disk snapshots.


That's an interesting option. Can you get direct GPU passthrough similar to what is offered with CUDA & WSL(g) on Windows?


I bought the previous generation for that kind of purpose, about 3 months ago. To get a 2TB disk/128GB RAM cost over $5k. Curious about the new prices. The perf seemed good running 65B models, but not at 16-bit. You need the RAM and disk space, but the cost was astronomical.


$5200 for an M2 Ultra Studio with 128GB RAM and 2TB SSD. Still costs $200 to upgrade a MBA with a 512GB SSD.


At the rate that Asahi Linux is making the desktop models work on Arch, you might be able to have both if you wait a bit


Well, there were three interesting questions about the Apple Silicon Mac Pro going into the keynote:

- how will they provide more RAM than the Mac Studio?

- how will they provide more GPU than the Mac Studio?

- how will they provide more CPU than the Mac Studio?

And the answer is "let's not!"

I'm disappointed there was no surprise on that front.

I'm sad they mentioned gaming and created a "gaming mode" and then the Mac Pro has no GPU story to speak of. So all the 3D artists will keep stacking green team or red team GPUs in their Intel or AMD boxes. This is not a good sign for 3D authoring software.


It's bizarre that you'd pay $3000 more over the Mac Studio for zero extra performance, upgradability only for storage, and a bunch of useless PCI slots that don't support GPUs.


There are a lot of professionals who travel with their computers e.g. movie industry who would appreciate being able to fit everything in one box.

Likewise if you're in a studio being able to reduce the amount of equipment in your rack is always a win.


Yeah, it’s a bit baffling. Maybe the Pro has better thermals and can run the chips at higher frequencies/at peak frequency all the time compared to the Studio?


Yeah, the Pro feels like a miss. I figured it would have the same specs as the Studio for its SoC but would come with a bunch of options for co-processor cards akin to the Afterburner card. So, a Studio you could kit out for additional specialized performance.

Maybe those cards will trickle out over time, but not having them ready at launch makes the Pro feel like an afterthought right now.


It might have been a good idea to put the M2 itself on a replaceable board. Other than adding some professional video out or capture cards, which you could probably just do via Thunderbolt, I'm just baffled by this product.


I would love if Apple used something like COM-HPC or even the new daughtercard approach Intel and Nvidia are using in their high end server CPU+GPU chips.


The GPU situation is getting really dire, and showing the obvious limitations of putting everything on one die at-scale. The Blender benchmarks tell the entire story now that Metal is supported: https://opendata.blender.org/benchmarks/query

The M2 Max renders on-par with a 90 watt, 12nm RTX 2070 laptop card. From 2018.


So that Blender release contains an alpha release of the Metal backend for macOS: https://code.blender.org/2023/01/introducing-the-blender-met...

Would be good to at least see it stabilise and optimised a bit before jumping to conclusions.

And yes, Apple could make a better GPU by building a 450W discrete card, but given that they make most of their sales from lower-end devices, that's probably not a sensible strategy.


> Would be good to at least see it stabilise and optimised a bit before jumping to conclusions.

How much further do you think they can optimize? Nvidia cards have hardware-accelerated BVH traversal and have been designed with ray tracing in mind since 2009. Apple Silicon in its current incarnation more closely resembles a phone GPU with dedicated media engines. Maybe Metal will improve in time, but it's not going to close the performance gap. It probably won't even close the performance-per-watt gap, at least in Blender demos.

> And yes Apple could make a better GPU by building a 450W discrete card

They're already halfway there with the M1 Ultra's 200W TDP. It's concerning that their desktop-focused SoC is being outperformed by last-gen 90W laptop GPUs, at least in my book. It signals that scaling the Mac beyond mobile SoCs will require a unique approach.


Is there really no GPU passthrough? This seems insane.


I think this new Mac Pro is more geared for PCIe developers so they can start testing drivers etc and the big launch will be with the M3.

It really doesn't offer any huge benefits over the Mac Studio.


Yes presumably a decent % of previous Mac Pro customers are ok with a Studio


I know (of, or first hand) a bunch of musicians who buy the top-of-the-line one every generation because they "need" it. For video work you do need as much raw power as possible, but for just about anything else you can honestly get by with a MacBook Air these days (especially given that a lot of the top-of-the-line customers are probably using external inputs rather than software synths!)


Yeah. I know a bunch of media composers that switched to the M1 Studio from a Mac Pro and are very happy.


How many PCIe developers are there anyway?

Apple had a renaissance with the old pre-2013 Mac Pro, and mostly people used PCIe to add graphics cards to their systems. The 2013 Mac Pro sort of offhandedly killed that with its mediocrity.

But nowadays it seems like graphics cards would need non-trivial OS support, so who would put together PCIe cards for macOS?


Incoming: “Not going to upgrade, I’m fine with my 1996 toaster, thank you!”

It’d be actually interesting to read from people who buy a top config and how they use it.


I'm using an M1 pro macbook for work and it's fast as fuck. Seriously considering getting a studio or a macbook pro for some home game programming and asset creation work.


I have an M1 Air and an iMac Pro (3.2 GHz 8-Core Intel Xeon W) and quite often I feel as though the M1 is faster. Seems to be jumping ahead remarkably!


I'd really love to hear from somebody currently using a 1.5TB Intel Mac Pro


My ex-boyfriend had a 128gb Mac he would regularly max out with nothing more than Spotify and Firefox tabs. I still don't understand it.


I do not understand why people make those comments. All we know about their systems is that they can load this website. You can be perfectly happy as a dev running vim on an ancient netbook…


It's a little interesting that they are going to a video-first/training-second model and abandoning the HPC market, where they can't compete with high-end, multi-TB workstations.

But I guess it's playing to the strength that video decode/encode has right now with the M-series chips.

I wish they would offer tiered memory expansion, e.g. a 192GB fast tier, expandable to 1.5TB of slower DDR5.


I imagine they did the research and found that most people with HPC needs are just renting it, and those that aren’t renting aren’t filling a data center with fucking apples.


Interesting that they added a rackmount option though. At a $500 premium too, I guess it’s mostly for iOS developer build servers.


It's for audio/video people who have their gear in racks in the studio, or have a flight case with all their audio gear pre-configured that they haul from gig to gig.


With the Mac Pro and Mac Studio spec'd to the same maximum Ultra CPU, 192GB RAM, and 1TB SSD, the Mac Pro is $9,600 while the Mac Studio is $6,600. How many people really need the Mac Pro's PCIe expandability (which probably no third-party GPUs can use) to justify the $3,000 premium, in an arguably worse form factor?


The number of customers who really need PCIe slots is so low that the number of sales to amortize the R&D over is way lower, hence the premium.

The people buying the Mac Pro are probably all going to be high-end video and audio professionals for whom the price difference isn't as noticeable.

Edit: one of the PCIe cards they showed was the Avid Pro Tools HDX card, which costs $5,000 on its own. People who need that card are the target market for a $3,000 PCIe chassis.


The live video encode use-case seems like one that makes sense, and likely requires more bandwidth than TB4 can support.


Anyone else look at the motherboard & think, wow, heck yeah? It was barren. Flat, hugely unpopulated, painted black.

Seeing such a stark & severely empty slab of pcb is something I've been looking forward to. With more and more on chip, we don't need all this extra componentry all over our systems.

PCB might well be cheaper than cables.. but I can perhaps envision MCIO (Mini Cool-Edge IO)/SFF-TA-1002 taking over some day, disaggregating peripheral cards off the motherboard.


Kinda unimpressive, since it seems that it's just a Mac Studio with PCIe expansion.


Config/pricing pages are open now, and it looks like the Pro does match the higher-end Studio configuration. The upgrade choices and prices match too, so they top out at the same spec. Minus the tiny detail of PCIe expansion of course. Pro has a few more ports (extra HDMI and 10GbE, couple more TB4s) as well.


More so given the drivers issue... I expect OWC and a few companies like Avid will develop proper tools for their hardware, but all in all it's not going to be nearly as approachable. Also, the 192GB limit on unified memory might still be an issue.

We'll see what happens at release, but as someone with two Intel Mac Pros in use, I'm not quite sure I see a reason to switch yet (though my laptop is the M1 Air released 2 years ago).


Since they've now completed the transition to Apple Silicon, I wonder if that starts the timer on deprecating macOS Intel compatibility. Though plenty of media-industry users especially stay on older OSes for stability.


It's bound to happen soon enough. I noticed on the dev portal (as of yesterday) that the only macOS Sonoma beta was for M-series processors.


It also doesn't have ECC, which was a staple of the previous Mac Pro line.


I wonder if on-die RAM is less susceptible to memory errors?

I suspect that it is. Feels like less can go wrong. You have physically shorter interconnects, and the RAM is perhaps more of a known quantity relative to $SOME_RANDOM_MANUFACTURERS_DIMMS. But that is only a guess.

However, I don't know if that's true. I guess it's not necessarily more resistant to random cosmic rays or whatever.


Given that the Mac Pro = Mac Studio + Expansion Slots

It seems clear that Apple never wanted to launch the Intel Mac Pro (cheese grater), but they saw a timing gap between the trash-can Mac Pro and the Mac Studio that needed to be filled.


With fairly good support for Apple Silicon, the $4K Mac Studio might be a reasonable choice for a home deep learning rig. 64GB of memory shared between the GPU, Neural Engine, and CPU sounds good.


As a developer I'd be terrified if Apple was showing my app in one of these events.


I'm pretty sure they reach out to the developers in advance if it's actually used directly (and not just in the background, like on the Dock or something). But I'll bet it's a real opportunity to gain new users, so it's probably more exciting than terrifying!


Developer of Apollo was completely surprised that they mentioned his app today[0]

[0] https://old.reddit.com/r/apple/comments/141kfmi/wwdc_2023_ev...


Oh wow, I didn't realize! Well never mind then, my mistake. Thanks for the correction — and with a citation, even!


He also says he was invited to the event, so he's likely saying that he had his mind blown by being offered the opportunity beforehand.


I'm thinking more along the lines of: getting this kind of attention is a strong indicator of getting sherlocked [1] by Apple in the future. For example, the hydration app.

[1] https://www.howtogeek.com/297651/what-does-it-mean-when-a-co...


Which hydration app was sherlocked?


It's all pre-recorded now, so what would the worry be?


If Apple likes your macOS app, that’s the first step to being sherlocked.


Does that mean acquired? 'cause that's more likely


No, it means that Apple rewrites your app and releases their version for free with macOS, destroying your business.

https://en.wikipedia.org/w/index.php?title=Sherlock_(softwar...


Apple has also acquired successful apps. Around the time Apple "sherlocked" the app Watson, it acquired SoundJam MP and turned it into an app affectionately known as iTunes.

https://en.wikipedia.org/wiki/SoundJam_MP


I guess I've never been blessed enough to work at a place that will spend $7k on a base model edit station.


What do you do for a living?

I'm just curious which kinds of workplaces/industries are splashing out for $7K workstations. Would love to hear from people whose workplaces do provide such things.

I wouldn't expect many software engineers to be answering in the affirmative but I suspect it may be fairly common in other realms...


Anyone working on a large iOS or macOS codebase needs a serious machine due to compile times. Apple/Xcode tooling isn't really set up to let you run the compiler and IDE brains on another machine.

Any company with one of these mega-monolithic apps (Facebook, Uber, Airbnb), with 16 companies' worth of functionality inside one app, used to buy Mac Pros for those developers. Now I imagine they've mostly switched to Mac Studios.


Would the Mac Pro help me at all for my computing needs? I write code all day and have several IDEs and DataGrip running, use Docker, etc. I currently use an MBP with the Apple Chip. Would a beefier machine actually do anything for me, in the form of faster compilation or anything...or nah?


Does anybody have the specs on the M2 Ultra chip? It looks like it supports up to 192GB of unified RAM, which is twice the 96GB of the M2 Max, so is this just two silicon dies jammed up against each other? (Apple's website hasn't been updated yet with this info, but I'm very curious!)

Edit: Ah, looks like they made a separate press release with that info here: https://news.ycombinator.com/item?id=36199637


The fact Apple didn't announce that their product line was 'now fully ARMed' during today's WWDC keynote was a lost opportunity.


They seemingly avoid calling it ARM whenever they can do it. The only place where I myself saw Apple spell out that Apple Silicon is ARM was somewhere deep in developer docs.


The Mac Pro link in this page shows the Intel Xeon-based system. I was very confused


The wild thing about that 192GB of memory: it's all potentially VRAM.


Two questions I'm interested in:

1) Are these machines still limited to running a maximum of two macOS VMs?

2) Can they drive more than a single 8k display?


Yes to 2): the Max can run three 8K60 displays and six 6K60 displays. Which are pretty crazy specs, considering my Air can only push a single external display of any resolution.


Interesting, only a single CPU. I was thinking that for the Mac Pro they would somehow go with multiple processors and some shared-memory magic solved in the OS. I'm also curious what external GPU support will look like if you have PCIe, and whether it will be extended to TB4 as well.


> I was thinking, that for Mac Pro they will go somehow with multiple processors and some magic with shared memory access...

That is what they did. Read what they wrote about their interconnect. It's just all inside a single package. Look up "chiplets".


Well, no, they didn't. I understand the idea behind the M1/M2 Ultra. However, it's not exactly the same. Can they put four of these dies together? Eight? Currently it maxes out at 24 cores per machine, in a rack-mounted chassis. That is nowhere near what I can get with EPYC-based servers. And there I can have multiple processors in a machine.


What exactly do you mean when you say "multiple processors"? The way I use the term, we are talking about the same thing, except that Apple can only deploy fewer cores.


Well, multiple processors means multiple CPUs, not multiple dies. For example, the Dell R840 series (4x 28-core Xeons) or the R7625 (2x 96-core EPYCs). I absolutely understand that these are specialised machines for specialised workloads. OTOH, the rack-mount Mac Pro is not necessarily a mainstream system either, and with the current approach it maxes out at 24 cores / 192GB RAM, or, if they somehow double it in a year or two, at 48 cores / 512GB if you're lucky.


Yup, I was waiting for eGPU support to come back to the mac this year but it looks like no dice :/


I didn’t see a reason to upgrade and I feel I am their audience here, I own the last one.


TBF, I had the MacBook from 2015 before I felt like upgrading to the M1 in 2021. Do you usually buy Macs every year?


Well, the previous Pro came out in 2019 (and the model before that 2013). Every four years is not unreasonable.

https://everymac.com/ultimate-mac-lookup/?search_keywords=A1...


The first (and, until now, only) Mac Studio was released in 2022, which is what I thought we were talking about.

https://en.wikipedia.org/wiki/Mac_Studio


I did get an Apple Silicon MacBook 18 months or so ago but my 2015 MacBook Pro is still fine for pretty much everything except ML and video/image processing.


Or anything that requires a quieter, fan-noise-less environment that doesn't burn your lap on direct skin contact. But yeah, the 2015 MBP was a truly great model that precedes soldered RAM and USB-C-only port selection.


Yeah, it predated the infamous butterfly keyboard and touch bar. Mine did have to have its screen replaced (which Apple extended the warranty on because of a manufacturing issue) and I've also had a new battery installed but it still works pretty well on my dining room table for day to day purposes (which are mostly web-based use of some sort).

But it's got a lot of miles on it. No complaints.


> 2015 MBP was a truly great model that precedes soldered RAM

The last MacBook with upgradeable RAM was the non-retina 2012 Pro.


Yep. My mistake. I meant to say soldered storage.


Almost no one does; however, there is a bunch who buy new cars and flip them all the time!


Don't you think their audience might be people who do have a reason to upgrade?


Their audience hasn’t had a 6 slotted mac since the 9600.


I'm vaguely considering it because it does 8K video, and it'd be nice to replace three 4K monitors with an 8K screen.

But that's quite the price bump. The M1 Ultra studio handles my workload pretty well, so I'll maybe save up my pennies for the Vision Pro.


People with the immediately previous generation are not usually the audience. This was only the case for iPhone between about 2011-2019.


Specs look cool, but I haven't bought into Apple Silicon because I will probably always need to debug x86/x64 binaries.

The one workload that would make me consider Apple Silicon is hashcat and password cracking. I am sure that's much faster compared to Intel, but how does the latest Nvidia 4090 compare to a Mac Studio? I don't know how the unified memory affects GPU workloads, but I do know a lot of graphics people only use Macs. If I have to buy a bunch of 4090s anyway, Macs don't make sense unless I were a millionaire and this were a hobby.


In this discussion: people who know little about the Apple Silicon architecture ("no discrete GPU, not buying"), who are not the target audience for this ("$77k for a computer!?!?!"), and who have no idea what video creatives need (see: discrete GPU), raging.

These systems (especially the Pro) are for people who spend all day working on 4k and up video.

Also, guys: do you really think that any of you are smarter than Apple? That Apple doesn't spend a lot of time talking to top creative professionals?

These systems aren't developed in a vacuum, especially at these price points.


$3,000 just for slots certainly sends the message that Apple views their customers as completely captive though.


The press release mentions a bunch of use cases for the slots, but does anyone actually sell Apple Silicon-compatible cards yet? It's a brand new processor architecture after all. I checked the store page and it doesn't show any. (The old page would let you configure a Pro with more GPUs or afterburner cards)


Rosetta works for drivers as well (as long as they use the user-mode DriverKit interface rather than kernel extensions)


People will need to get the new Mac Pro in their hands to develop and test drivers, although any card that works in a Thunderbolt enclosure should also work when plugged in directly.


They're just ridiculous with the Mac Pro pricing. $3k for a pretty chassis, and $3.5k for the chassis with wheels. It's a joke. They didn't even match the specs of the previous Intel Mac Pro when it comes to RAM.

IMO this announcement is just a funeral for this product.

The Mac Studio is fine, I guess... I hate small computers, so I would prefer a huge empty box with lots of air inside, which is likely to be silent. But not at this kind of markup.


mac studio is silent


>Today, Apple is carbon neutral for global corporate operations and is focused on its Apple 2030 goal to make every product carbon neutral. This means every Mac Apple creates, from design to manufacturing to customer use, will have net-zero climate impact.

I love my bullshit greenwashing of hunks of metal produced by the millions too. Buying carbon credits from I-Promise-I-Will-Plant-Trees Inc. is still lying, Apple.


>This means every Mac Apple creates, from design to manufacturing to customer use, will have net-zero climate impact.

My guess is that the largest contributor to carbon emissions is Apple's employees living their lives: Apple pays an employee a salary, then the employee uses that salary in a way that results in heavy carbon emissions, unless that employee is one of the very few who seriously rearrange their lives to intentionally minimize their climate impact.

I doubt Apple is counting that.


> I doubt Apple is counting that.

Actually Apple hires private investigators to spy on the activities of their employees in order to determine how much carbon to offset.

Common knowledge.


After Apple stopped including chargers with their phones, the marketing department sold it as a green initiative. Yet with the new rackmount Mac Pro, they include a keyboard and mouse with every purchase; how many customers are actually gonna use those?


Just 1/5 of their total carbon neutral claim is from purchasing credits. I am sure it is even lower in 2023.


Only 24 CPU cores? Doesn't that seem skimpy compared to, say, a Threadripper or a Sapphire Rapids Xeon?


What do you use the CPU cores for on a Threadripper? Is it work that any of the other compute units available on this chip can do?


Compiling and running tests


For all the accolades about the Apple cpus, market share remains within historical ranges (5-10% per my recollection going back to the late 80s).

For Q1 per IDC: The top five PC manufacturers by market share were Lenovo (23.9%), HP (21.5%), Dell (16.0%), Apple (7.5%), and Acer (6.4%).


Glad the Studio is sticking around.

For many tasks the M1 Max base Mac Studio became an incredible value.

For anything other than 3D rendering, the performance bump between the M1 Max and M2 Max isn't that huge, judging from the graphic on Apple's screen.


Feels like a missed opportunity they didn’t design the Mac Studio part of the Mac Pro as a replaceable module you could upgrade every other year. Or buy the non-Studio part to upgrade your Mac Studio.


The M1 Mac Studio has just disappeared from the Apple website. Maybe that disappearance indicates what a great deal it would have been had it remained on sale at a lower price.


I would keep an eye out at Costco, M1 Pro MBPs still pop up on sale there regularly, they might get some Studios.


I wouldn't buy a $7000 computer without a discrete GPU.


I would consider that for many people in your position, the desire for a discrete GPU is a proxy for the real desire, which could be one of several things:

- Performance, which Apple's GPUs may compete sufficiently with

- Upgradeability, which Apple's GPUs may not compete sufficiently with

If all you care about is the performance, does it really matter whether that perf is achieved via a discrete or integrated GPU?


Apple's GPU does not compete sufficiently with the discrete GPUs one would put in a $7000 PC.


Out of curiosity, what is your benchmark here? I have a $2,000 RTX card that is great for games but pretty poor for LLMs. For LLM development, I'd be much happier with a Studio and an M2 Ultra. How much would it cost me to get 192GB in discrete cards, I wonder?


I think the statement "I'd be much happier with a Studio" is a little hypothetical? Sorry if that's not true, but everywhere I've looked, it seems like these are not ML training chips, and people are just hoping they will handle LLMs well.

You can absolutely build (with real support from the PyTorch folks) a 4x3090 deep learning workstation that has 96GB of VRAM for roughly $7k. Or, more likely, you'll rent an A100 from AWS for ~$0.15/hr.
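For what it's worth, tallying usable VRAM on a multi-GPU box like that is easy to sanity-check (PyTorch sketch, assuming CUDA is available):

  # On a 4x3090 machine this should report roughly 4 x 24 GB = 96 GB total.
  import torch

  total = 0.0
  for i in range(torch.cuda.device_count()):
      props = torch.cuda.get_device_properties(i)
      gb = props.total_memory / 1024**3
      total += gb
      print(f"cuda:{i} {props.name}: {gb:.1f} GiB")
  print(f"Total: {total:.1f} GiB")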


Depends. The "integrated" GPU shares a memory address space with the CPU. Depending on the workload that can compensate quite a bit.


Bad news everyone, modeless isn't buying one. On the other hand I look forward to the steep discounts to be had at Apple's going out of business sale...


The PCIe slots look easy enough to get at.


Apple showed a lineup of compatible PCIe cards, with a variety of accelerators and I/O hardware but conspicuously no GPUs.

https://i.imgur.com/28J1KfN.jpg

The Apple Silicon transition ended support for external GPUs, so I think it's safe to assume they won't support internal ones either.


> conspicuously no GPUs

That is interesting. I wonder how hard it would be to do PCI passthrough to enable GPUs to work with Windows 11 ARM running in a VM?

I wonder if it is even possible to write a driver for an external GPU for macOS on Apple Silicon? It seems that Metal on macOS Sonoma intel still supports external GPUs.


I guess (although, I don’t actually know, low level stuff is confusing) this is an OS thing, right? Rather than hardware. Of course since it is Apple, the concept is bundled together anyway. But I wonder if Asahi Linux could bring support?


Man I'm so out of what's going on with desktop computing lol, feeling old

Could somebody explain what these are?


left to right, had to search a few of them. You're probably not out of it, it's just relatively niche professional stuff. Half of them are only relevant for media professionals for example.

Sonnet card - adds storage via a couple SATA SSDs

OWC 8M2 - adds storage via up to 8 NVME drives

Avid HDX card, runs DSP for ProTools (audio)

Kona 5, video capture and I/O

Lynx E44, high-quality audio I/O

Blackmagic decklink SDI 4k - SDI video capture

ATTO high-speed ethernet card, maybe 50GbE

ATTO Celerity Fibre Channel Adapter - Storage HBA

edit to add linebreaks


Nice, thank you!


Expansion cards; they were supported by the original PC in the '80s, and even before that.


I'm not that old lol, was wondering specifically what they were


As you can see they're still a thing, so you could be any age from 0-60.


The product page specifically touts the Radeon Pro W6800X Duo.


The product page hasn't been updated yet, it's still describing the Intel model.

If it were current then they'd have something newer than the years-old W6800X Duo.


Sorry, you're right. I thought I saw a mention of an M2 spec, but it must have been something else.


Anything you install is a brick unless hardware vendors port their drivers.


Isn’t that true of any PCIe device? Is Apple supposed to develop their drivers too?


Apple is supposed to maintain good relationships with hardware vendors and support them in porting their drivers. Apple has done a poor job of this. They are practically enemies with Nvidia due to legal disputes and as a result I don't expect to see an Nvidia driver for Apple Silicon in the foreseeable future. Maybe AMD or Intel will write one but at least one should have happened before launch.


Are they? I think you want a PC. Which is totally fine.

Microsoft goes around playing nice with every Tom, Dick and Harry with a hardware device and a dream. Apple in recent memory has never been that company.


What if you needed its other features?


It's a computer. I'm sure the other features are available in other packages in one form or another.

That said, for business-to-business I bet these are great machines.


If one of your requirements is running macOS, I guess it will be hard to get elsewhere.


You’re not the target audience.


With all those thunderbolt ports you could add eight eGPUs.


No you couldn't because eGPUs are not supported.


Not yet.


Yeah, and when it is, it will be for next-generation hardware.


I’m confused: where are the Afterburner expansion cards when you go to configure a new Mac Pro?

They’re missing, and it’s basically an overpriced Mac Studio without them.


It sounds like they're leaning on the internal video encode/decode of the M2 ("media engine"), which they claim can decode 22 streams of 8K ProRes on the Ultra. It looks like this replaces dedicated Afterburner cards? And would apply to both the Studio and Pro. But I am not a video production professional.


Then why does the Mac Pro exist?


Video capture and Pro Tools dedicated cards.


Why didn’t they release GPUs for those PCIe slots? I just don’t get why they couldn’t do a simple thing instead of AR/VR.


Because it's not that easy due to Apple Silicon architecture display controller limitations with the current chips. Note that none of the PCI-e cards in the demonstration were GPUs. They were all network/storage/accelerators/etc.

In the short term, I could see shoving an Nvidia GPU in a slot for offloading CUDA and GPU compute, but it wouldn't be really suitable for video gaming and such.


Will the 192GB of RAM be properly addressable by the GPU for AI stuff, or are there NUMA-style constraints?


Is this event entirely AI-generated? Backgrounds seem too perfect, speeches seem too tight.


We've been discussing this as well; the virtual production value is incredible. Are they actually standing on a stage outside, and is this all being composited live, with foreground passes, etc., without a green screen?


Surely it's all pre-recorded and edited together


You're right. I thought this was a live event, but the people watching it live are just watching a video; I just saw a live photo from Twitter. Makes a lot more sense!


Ever since COVID, many companies have basically switched to these "fake live" announcements.


Tim may be live but everything else is prerecorded


Yep, you're right. Still they put a TON of work into this. Incredible.


agreed, it's very impressive


Apple spends at minimum a month prior to an event rehearsing. During a live presentation they have people following along with alternate presentations that can be switched to immediately. These are full AV productions that would make producers of Super Bowl halftime shows jealous.


All the Apple presentations like this are very polished


Does anyone else find the specs of apple hardware really hard to understand?


The Mac Pro is honestly underwhelming. It's entirely for those who really need the macOS + PCIe combo. Other than that, with no expandable RAM (beyond the top 192GB) and no external GPU support (I assume), there's no reason to pick it over the Mac Studio (when choosing between the two).


Amazing how often they directly dogged Intel, PCs and Intel based Macs.


Damn, the annotated voicemail feature seems awesome.


Can one install a regular AMD gpu in this?


I was asking myself the same, but I am assuming there is no way. The specs list only 300W of extra power budget (with only 150W on the single 8-pin PCIe connector), and the x16 slots are shown as single-height in the images. Also, I don't think there are any drivers for Apple Silicon, and using AMD GPUs purely as accelerator cards seems pointless when you have the M2 Max.


They finally did it - I assume you can use an Nvidia card with the new Mac Pro?

Edit:

Apparently not, actually - they only list I/O cards and the like as compatible. No mention of GPUs.


Apple doesn't sign Nvidia drivers anymore. It's unlikely they'll be compatible with Nvidia cards specifically until they reverse that particular policy.


How is 192GB of RAM impressive? Why not 32TB?

Especially given the unified RAM, you cannot upgrade it later on either. (I think?)


Example of a desktop computer that comes with 32 TB ram?


Last gen threadripper pro can "only" do 2TB, so 10x as much memory.

The upcoming models should allow 6TB, which you can also get today with a server chip.

I can't find much using the newest workstation Xeons but they supposedly will do 4TB.


Ignoring the fact that this comment sets a completely different bar than the OP, I'm not asking about chips, I'm asking about desktops. Is there a desktop for sale at any price, from any manufacturer or custom builder, that goes to 2TB of RAM?


> Ignoring the fact that this comment has a completely different bar than OP,

The OP didn't say anything else did that much; they asked why Apple didn't.

Any valid answer to that needs to also explain why apple didn't do intermediate amounts, or even as much as the old mac pro.

So I thought it would be helpful to give you examples that have much more memory than a mac pro, even if they're not 32TB.

Though I didn't even realize the old model could do a full 1.5TB, so I didn't even need to bother linking other systems. 1.5TB is plenty to highlight the weakness of a 192GB maximum.

> I’m asking about desktops. Is there a desktop

If you're being picky about desktop versus workstation, mac pro is the latter and workstations are completely valid here.

If you just want a link, here: https://boxx.com/threadripper-pro


Unified with the GPU

You aren’t getting that much vram in a single product.


What the hell is that cheese grater looking thing? I can't make out what that is.


That’s the Mac Pro design introduced with the last Intel model. It’s a similar design to the original Mac Pro (before the “trash can” tube thing), which was very close to the design of the PowerMac G5.


It's the front of a desktop tower, except it looks exactly like a cheese grater and triggers some people's visual phobias. While it's claimed to be functional for quiet airflow, it's also possibly Apple's worst visual design ever.


The return of the rack-mount Mac? Nice one, Apple.

But do I have this right: a professional machine with zero ways to upgrade the system? Come on.


Did you miss the six expansion slots? The eight Thunderbolt ports?


I was talking about RAM and the CPU.


They've sold rack-mount Mac Pros since 2019.


For the modest price of 77k dollars!!!

I will use vectorization and multithreading instead, thanks!


Are you off by one order of magnitude? Or does this cost as much as a couple quite nice cars.


I was looking at a six figure workstation last week, including GPUs. The goal is to replace 2-3 loud servers in my home lab with something more compact and quiet.


What do you do with your "home lab" that can't run on a consumer-level desktop? I'm genuinely curious.


Currently I'm revisiting some proteomics work I did as part of a DARPA project a while ago, as well as experiments with cinema ready geo-located media workflows.

It helps with my day job too, indirectly.


I spend a whole bunch more time at my Mac than I do in a car, and gain much more value from it too, so that doesn't seem unreasonable...


I’ve connected to plenty of machines that are worth more than my car (although it isn’t as if my car was $35000 in the first place, that’s a bit excessive!). I was just using cars as a unit of measurement, to aid intuition. Maybe I was too circuitous in my original comment.

I only see prices in the 7k range on the site.


Cars don’t usually become obsolete and borderline worthless after 5 years.


I still use my 2013 MacBook Pro, and it still works great and fast.


What’s its resale value compared to what you paid? It’s just the nature of technology.

I can buy one for $100, literally a fraction of what you paid originally for it.


I'm surprised they kept the cheese grater case for the new Pro. It is one of my least favorite case designs of any high-end Mac. I'm really surprised they didn't go with something simpler and more like a tall, scaled-up Mac Studio. It's strange that such a big architecture change on the inside isn't mirrored by a physical change on the outside.

Mostly I hate the juxtaposition of the chrome legs/handles with the aluminum case. It's very mixed-material. The chrome reminds me of the early iPhones with the chrome bezels.

Meanwhile the Mac Studio design is clean and monolithic in comparison.


I love the cheese grater but I don't care for the chrome handles and feet either. They remind me of bed frames and office chairs.

I think the G5 case was peak design for a tower that's hard to top but Apple's surprised me before.



