Apple M1 Max Geekbench Score (geekbench.com)
488 points by mv9 on Oct 20, 2021 | 880 comments



I just can't figure out what I'm missing on the "M1 is so fast" side of things. For years I worked* on an Ubuntu desktop machine I built myself. Early this year I switched to a brand new M1 mini and this thing is slower and less reliable than the thing I built myself that runs Ubuntu. My Ubuntu machine had a few little issues every now and then. My Mini has weird bugs all the time, e.g. green-screen crashes when I have a thumbdrive plugged in, won't wake from sleep, loses Bluetooth randomly. Not at all what I'd expect from something built by the company with unlimited funds. I would expect those issues from the Ubuntu box, but the problems were small on that thing.

*Work... Docker, Ansible, Rails apps, nothing that requires amazing superpowers. Everything just runs slower.
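One common culprit for that kind of across-the-board slowdown (not something the parent confirms, just a frequent cause) is x86-64 tooling or Docker images running under emulation on Apple Silicon rather than natively. A minimal sketch for checking this, with the exact commands being assumptions to verify on your own setup:

    # Sketch: check whether your toolchain runs natively on Apple Silicon
    # or under Rosetta 2 / qemu emulation (a common source of
    # "everything is slower" reports on M1 machines).
    import platform
    import subprocess

    print("interpreter architecture:", platform.machine())  # 'arm64' if native

    # On macOS, sysctl.proc_translated is 1 when the current process runs
    # under Rosetta 2 translation and 0 when native; the key may not exist
    # at all on Intel Macs or other OSes, in which case the call fails.
    try:
        translated = subprocess.run(
            ["sysctl", "-n", "sysctl.proc_translated"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        print("running under Rosetta 2:", translated == "1")
    except (subprocess.CalledProcessError, FileNotFoundError):
        print("sysctl.proc_translated not available on this machine")

    # For containers, something like
    #   docker image inspect <image> --format '{{.Architecture}}'
    # reporting "amd64" on an M1 host means that image runs under qemu
    # emulation, which is dramatically slower than a native arm64 build.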


> I just can't figure out what I'm missing on the "M1 is so fast" side of things.

Two reasons:

1. M1 is a super fast laptop chip. It provides mid-range desktop performance in a laptop form factor with mostly fanless operation. No matter how you look at it, that's impressive.

2. Apple really dragged their feet on updating the old Intel Macs before the transition. People in the Mac world (excluding hackintosh) were stuck on relatively outdated x86-64 CPUs. Compared to those older CPUs, the M1 Max is a huge leap forward. Compared to modern AMD mobile parts, it's still faster but not by leaps and bounds.

But I agree that the M1 hype may be getting a little out of hand. It's fast and super power efficient, but it's mostly on par with mid-range 8-core AMD desktop CPUs from 2020. Even AMD's top mobile CPU isn't that far behind the M1 Max in Geekbench scores.

I'm very excited to get my M1 Max in a few weeks. But if these early Geekbench results are accurate, it's going to be about half as fast as my AMD desktop in code compilation (see Clang results in the detailed score breakdown). That's still mightily impressive from a low-power laptop! But I think some of the rhetoric about the M1 Max blowing away desktop CPUs is getting a little ahead of the reality.


You're missing the fact that Apple didn't release a 12980HK or 5980HX competitor. These are 30 watt chips that trounce the competition's 65 watt chips (e.g. the 12980HK and 5980HX) and beyond.

Hell, this Geekbench result beats a desktop 125 watt 11900K. It's faster than a desktop 105 watt 5800X.

Apple intentionally paced themselves against the competition here. They knew AMD/Intel would reach some performance level X and released CPUs that perform no better than about 1.2X. They know they are in the lead since they are paying TSMC for first dibs on 5nm, but they didn't use everything they had on their first-generation products.

Intel will release Alder Lake and catch up, AMD will reach Zen 4 and catch up, and Apple will just reach into their pocket and pull out an "oh, here's a 45 watt 4nm CPU with two years of microarch upgrades," and the 2022 MBP 16 will have Geekbench scores of ~2200 and ~17000.

There's a de facto industry leader in process technology today: TSMC. Apple is the only one willing to pay the premium. They also have a much newer microarch design (circa 2006ish) vs AMD's and Intel's early-90s designs. That's a 10-20% advantage (very rough ballpark estimate). They're also on ARM, which is another 10-20% advantage for the frontend.

The big deal here is that this isn't going to change until Intel's process technology catches up. And, hell, I bet at that point Apple will be willing to pay enough to take first dibs there as well.

AMD will never catch up, since we know they don't care to compete against Apple laptops and thus won't pay the premium for TSMC's latest and greatest. Intel might not even care enough, and might let Apple have first dibs on their latest node for the mobile/laptop market if Apple is willing not to touch the server market. Whether they'd come to terms over the workstation market (Mac Pro vs. 2-socket Xeon workstations) would be interesting.

It might be a long time before it makes sense to buy a non-Apple laptop.


> It might be a long time before it makes sense to buy a non-Apple laptop.

...if you only care about the things that Apple laptops are good at. Almost nobody needs a top-of-the-line laptop for their tasks. Most things that people want to do with computers can be done on a machine that is five to ten years old without any trouble at all. For example, I use a ThinkPad T460p, and while the Geekbench scores for its processor are maybe half of what Apple achieves here (even worse for multicore!), it does everything I need faster than I need it.

Battery life, screen quality, and ergonomics are the only things most consumers need to care about. And while Macs are certainly doing well in those categories, there are much cheaper options available that are also good enough.


This is a really useless comparison. A 10 year old laptop will be extremely slow compared to any modern laptop and the battery will have degraded.

The T460 has knock-off battery replacements floating around, but that's not exactly reassuring.

Granted, it works for you (and me, actually; I'm one of those people who likes to use an old ThinkPad, an X201s in my case, though I mostly use a Dell Precision these days), but people will buy new laptops; that's a thing. The ergonomics of a Mac are pretty decent and the support side of it is excellent.

If you don't need all that power, that's what the MacBook Air is for, which is basically uncontested at its combination of performance, battery life, and weight.

If you need the grunt, most of what the M1 Pro and Max offer is GPU.

You’re going to think it’s Apple shills downvoting you: it’s not likely to be that. The argument against good low wattage cpus is just an inane and boring one.


Not everyone needs 15+ hours of battery life. I'd argue that most people don't. Laptops were extremely popular when the battery life was two hours. Now even older laptops get 5h+.


So what is the trade off you think you’re making?

This is weird. I feel like I'm talking to someone who has a fixed opinion against something. It's good for _everyone_ that these chips are as fast as the best chips on the market, have crazy low power consumption, and cost about the same when new.

Intel has been overcharging for more than a decade while innovation stagnated.

Honestly, I'm not so hot on Apple (FD: I am sending this from an iPhone). I prefer to run Linux on my machines, but I would not advise everyone to do that, just like I wouldn't advise people to buy old shoes because it's cheaper. These machines are compelling even for me, a person who relishes the flexibility of a more open platform; I cannot imagine myself not recommending them to someone who just uses office suites or communication software. The M1 is basically the best thing you can buy right now for the consumer, and the cost is equivalent to other business machines such as HP's EliteBooks or Dell's Latitude or XPS lineups.

And for power users: the only good argument you can make is that your tools don't work on it or you don't like macOS.

If you're arguing for the worse system, you've lost.


The tradeoff I'm making is money vs capability. My argument is that most people don't need the capabilities offered by brand new, top of the line models. A used laptop that is a couple of years old is, I think, the best choice for most people.


A new M1 laptop is likely to remain a well-specified machine for 4-5 years.

A second-hand laptop is much less likely to do that.

I think this is a false economy.

“The poor man pays twice”

But regardless: the cost isn’t outrageous when compared to the Dell XPS/latitude or HP Elitebook lines (which are the only laptops I know of designed to last a 5y support cycle).

If you’re buying a new laptop, I don’t think I could recommend anything other than an M1 unless you don’t like Apple or MacOS. Which is fair.


> I think this is a false economy.

> “The poor man pays twice”

I'm still using an X-series Thinkpad I bought used in 2011. I had another laptop in-between but it was one of these fancy modern machines with no replaceable parts and it turned out 4 GB RAM is not enough for basic tasks.


Also, the M1 runs near-silent, or in the case of the Air, actually silent. I would pay an extra 1000 just for that alone. It turned out the Air was barely more than that in total. Which other laptop does that?


The trade-in value for my 6 year old MacBook Air is 150 dollars. Old computers depreciate so fast that you can afford to buy ten of them for the price of one new computer.


Looks like they're selling for more than twice that on ebay.co.uk though. And considering MacBook Airs are ~$1000 devices, that's really high.

6 years is also beyond the service life of a(ny) machine.

If I look at 3 year old MacBook Airs, they're selling for £600 on eBay, which is, what, half of the full cost. Not great for an already old machine with only a few good years left.

I guess you might save a bit of money using extremely old hardware and keeping it for a while. But this is a really poor argument against an objectively good, evolutionarily improved CPU in my opinion.


> 6 years is also beyond the service life of a(ny) machine.

That was the case for many decades, but I don't think it's nearly true anymore. I've got a USB/DP KVM switch on my desk and regularly switch between my work laptop (2019 i9 MBPro) and my personal computer (2014 Dell i7-4790, added SSD and 32GB).

Same 4K screens, peripherals, everything else. I find the Dell every bit as usable and expect to be using it 3 years from now. I wouldn’t be surprised if I retire the MacBook before the Dell.

https://www.cpubenchmark.net/compare/Intel-i9-9980HK-vs-Inte... shows the Mac to have only a slight edge, and that's a 2 year old, literally top-of-the-line Mac laptop vs a mid-range commodity office desktop from 6 years ago bought with < $200 in added parts. (Much of what users do is waiting on the network or the human; when waiting on the CPU, you're often waiting on something single-threaded. The Mac is < 20% faster single-threaded.)


Yeah, desktops vs laptops.

20W parts vs 84W parts.

Honestly, I'm not sure what we're discussing anymore. If you don't need (or want) an all round better experience then that's on you.

But don't go saying that these things are too expensive or that the performance isn't there. Because it is.

If Apple had released something mediocre I'd understand this thread, but this is a legitimately large improvement in laptop performance, from GPU, to memory, to storage, to IO, to single threaded CPU performance.

Everyone kept bashing AMD for not beating Intel in single thread.

Everyone bashed Apple for using previous-gen low-TDP Intel chips.

Now Apple has beaten both AMD and Intel in a very good form factor, and people still have a bone to pick.

Please understand that your preference is yours; these are legitimately good machines, and every complaint that anyone had about MacBooks has been addressed. Some people will just never be happy.


I was commenting only on whether “6 years is beyond the service life of any machine”, which I tried to make clear with my quoting and reply.

I’ve got no bone to pick with Apple and am not making any broad anti-Apple or anti-M1 case. (I decided to [have work] buy the very first MBPro after they un-broke the keyboard and am happy with it.)

Of the five to eight topics you raise after your first two sentences, I said exactly zero of them.


Oh. Sorry, in a very broad sense the support contract on any machine is only 5 years. After which you're basically living on borrowed time.

That's why companies aren't giving out 5+year old laptops/desktops.

(well, I suppose some do, but big companies simply wouldn't)

I assumed the whole context of the thread and that you were defending the parent.


As the parent, I'd like to say... my entire argument isn't that top-of-the-line laptops are too expensive for what they do but rather that

(1) older MacBooks are identical to mediocre new laptops in performance & price

(2) mediocre laptops are very cheap for what they do

(3) desktops are far more economical when you need power.

If you spec out a laptop to be powerful, lightweight, and as beautiful as an MBP, then you're going to pay a real premium. Paying for premium things is not the default.


> 6 years is also beyond the service life of a(ny) machine.

I'm still on a 2013 MBP which doesn't show any signs of deterioration (except battery life). It's got a Retina display, a fast SSD, and an ok-ish GPU (it can play Civ V just fine).

I'd gladly pay for a guarantee that the machine will not break for the next 10 years; I think it will still be a perfectly usable laptop 10 years from now.


No delamination issues with the screen?


> I guess you might save a bit of money using extremely old hardware and keeping it for a while.

If you get the best and keep it for a while, then even though it won't be bleeding edge anymore, it'll still be solidly in the middle of the pack.

When it comes to computers, mediocre is actually pretty usable. A $600 computer can do pretty much everything, including handling normal scale non-enterprise software development. I didn't really realize it until I went back to school for science, but many projects are bound by the capacity of your mind and not the speed of your CPU.

If I do need computing power, I use a desktop.


6 years? I'm afraid that's just not the case. I have a cheap 2013 Dell laptop that is all I need for Office 2017 and a few other things that just work better in Windows than Linux (Zoom/WebEx/Office/Teams). I paid about $150 for that thing and another $50 to double the RAM. It's fine for what I need, and I use it with very little lag. I'll admit I cheated a little bit and put in a 256GB SSD I had lying around.


Trade-in value on electronics is way lower than resale value. I’ve sold a couple 2014/2015 MacBook Pros this year for $700+ and probably could’ve gotten more had I held out.


Was that via ebay? I really want to sell my devices. Since work-from-home became a norm, I'm struggling to find actual reasons to use a laptop.


No, it was to coworkers. Going on eBay completed auctions, I could’ve gotten around $800-1k for similar spec / condition machines.


This is fine and people who have budget limits have options both new and used. It seems like this has been the case for quite a while although things like the pandemic probably impacted the used market (I haven't researched that.)

The thing you are denying is that people have both needs and wants. Wants are not objective, no matter how much you try to protest their existence. There is no rational consumer here.

There are inputs beyond budget which sometimes even override budget (and specific needs!) Apple has created desirable products that even include some slightly cheaper options. The result is that people will keep buying things that they don't really need, but they'll likely still get some satisfaction. I don't suggest that this is great for society, the environment, or many other factors - but, it's the reality we live in.


That's why most people buy the bottom-of-the-line models. The base MacBook Air is the most common purchase, and the best choice for most people.

People buy it brand new because it's small, lightweight, attractive, reliable, long-lasting hardware with very low depreciation and great support, and it's part of Apple's ecosystem. Cost is not the same as value, and the value of your dollar is much greater with these.


> My argument is that most people don't need the capabilities offered by brand new, top of the line models. A used laptop that is a couple of years old is, I think, the best choice for most people.

I think you're correct. But also the majority of people will buy brand new ones either way. And a lot of them will spend much more than they should too.


> Not everyone needs mobile internet. I'd argue that most people don't. Mobile phones were extremely popular when they didn't have any internet connectivity.

I am not mocking you, but the point is that people do not yet know what true mobility for a laptop means: that they can literally leave the power brick behind, not think about whether the battery will last, and use it freely throughout the day, everywhere. This has been impossible until now; there has always been the constraint of "do I really need to open my laptop, what if it dies, where is the power plug." As soon as the masses realize this is no longer a worry, everyone will want and need 15+ hours of battery life.


Unless those use cases are already covered by other devices that they have. A couple of years ago I might have wanted to use my laptop all day so that I could check the internet, listen to music, etc. But today I can just use my phone for that.


Here's a different take to that: If I refuse to use Windows as my OS on my Dell Inspiron 7559 I have to live with like 2 hours of battery life and a hot lap because power management doesn't work properly. So much as watching a YouTube video under Linux makes it loud and hot.

The same laptop could do 9+ hours consistently for me in Windows and remained quiet unless I was actually putting load on it.

The reading I've done on the Framework laptops makes it sound like this situation has not improved, or at least not anywhere near enough to compete with Windows... this has effectively ruled them out of the running as a replacement laptop for me.

An M1 based Macbook sure is looking appealing these days. I can live with macOS.

Not everyone needs decent battery life, but some of us do.


> The reading I've done on the Framework laptops makes it sound like this situation has not improved.

And yet I hear otherwise... maybe you're referencing the original review units that didn't run 5.14.


It's possible, and when actively in the market I'll always revisit such things.


> The argument against good low wattage cpus is just an inane and boring one.

> Not everyone needs 15+ hours of battery life.

Low wattage is not only about battery life. It is mostly about requiring less power for the same work. However you look at it, it is good for everyone. Now that Apple has shown that this can be done, everyone else will do the same.


For me it's the heat and fan noise. I know that's not something that bothers some people, but I thoroughly enjoy the lack of it.


Define "extremely". You get maybe a factor of 2 or so, not 10 or 100. Is that nicer? Yes, sure. Is it necessary? No, older stuff is perfectly sufficient for most people.

Also, it is "if you need that power and need it in a laptop form factor". Again, impressive, but desktops/servers work just as well for most people.


Saying a 10 year old laptop is 2x or 4x slower than a new laptop is just a tiny part of the big picture.

I would say that for a light laptop user, the main reasons to upgrade are:

- displays: make a big difference for watching youtube, reading, etc. You can't really compare a 120 Hz XDR retina display with a 10 year old display.

- webcam: makes a big difference when video conferencing with family, etc.

- battery life: makes a big difference if you are on the go a lot. My 10 year old laptop had, when new, something like 4 hours of battery life. Any new laptop has more than 15h, some over 20h.

- fanless: silent laptops that don't overheat are nice, nicer to have on your lap, etc.

- accelerators: some zoom and teams backgrounds use AI a lot, and perform very poorly on old laptops without AI accelerators. Same for webcams.

If you talk about perf, that's obviously workload dependent, but having 4x more cores, that are 2-4x faster each, can make a big difference. I/O, encryption, etc. has improved quite a bit, which can make a difference if you deal with big files.

Still, you can get most of this new stuff for $1000 in a MacBook Air with an M1. Seems like a no-brainer for light users that _need_ or _want_ to upgrade. If you don't want to upgrade, that's ok, but saying that you are only missing 2x better performance is misleading. You are missing a lot more stuff.


But that's the thing. You can upgrade all that.

I've got a T470 with a brand new 400-nit, 100% sRGB, ~80% AdobeRGB screen. You can even get 4K screens with awesome quality for the T4xx laptops with 40-pin eDP.

With 17h battery life even on performance mode.

With a new, 1080p webcam.

With 32GB of DDR4-2400 RAM

With 2TB NVMe Storage.

With 95Wh replaceable batteries, of which I can still get brand new original parts and which I can replace while using the laptop.

for a total below $500.

If I upgraded the top-of-the-line T480 accordingly, I'd still be below $800, with performance that's not that far off anymore.


2011 desktop CPUs will perform about half as well as a modern laptop one.

I don't even think Sandy Bridge (Intel, 2011) CPUs support H.264 decode, a pretty common requirement these days for Zoom, Slack, Teams, and video streaming sites such as YouTube.


>2011 desktop CPUs will perform about half as well as a modern laptop one.

Maybe, but the fat client-thin client pendulum has swung back in favor of thin clients to the point that CPU performance is generally irrelevant (it kind of has to be, since most people browse the Web with their phones). As for games, provided you throw enough GPU at the problem acceptable performance is still absolutely achievable, but that's not new either.

>a pretty common requirement these days for zoom, slack, teams and video streaming sites such as YouTube

It really isn't: from experience, the hard line for acceptable performance is "anything first-gen Core iX (Nehalem) or later"; Core 2 systems are legitimately too old for things like 4K video playback, however. The limiting factor on older machines like that is ultimately RAM (because Electron), but provided you've maxed out a system with 16GB+ and an SSD, there's no real performance difference between a first-gen Core system and an 11th-gen Core system for all thin client (read: web) applications.

That said, it's also worth noting that the average laptop really took a dive in speed with Haswell and didn't start getting the 10% year-over-year improvements again until Skylake, because ultra-low-voltage processors became the norm after that time and set the average laptop about 4 years back in speed compared to its standard-voltage counterparts. Those laptops genuinely might not be fast enough now, but the standard-voltage ones absolutely are.


Yes… only half as well as a modern laptop. Back in the era of Moore's law, a new machine would be 32x as fast as a ten year old model.

That was a real difference.

But in 2021, people still buy laptops half as fast as other models to do the same work. Heck, people go out of their way to buy iPad Pros, which are half as fast as comparable laptops.

Considering that, I think a ten year old machine is pretty competitive as an existing choice.


> iPad Pros, which are half as fast as comparable laptops.

Uh, what?

iPads use CPUs comparable to the M1 (some even use the M1) and have some of the fastest CPUs for rendering JavaScript on the planet.

I think you're right that people buy slow laptops. But I think that often comes from a place of technical illiteracy and an unwillingness to spend.

Put simply: they can’t often comprehend the true value of a faster system and opt to be more financially conservative.

Which I fully understand.


iPad Pro 2021: 1118 on Geekbench [1], priced at $2,199.00 fully specced for the 12.9-inch model with keyboard.

MacBook Air 2020: 1733 on Geekbench [2], priced at about $1,849.00 fully specced for the 13-inch model.

That's what I mean: comparable tablets are more expensive than laptops. You have to pay a lot more because of the dual form factor (like the Microsoft Surface Book).

[1]https://browser.geekbench.com/v5/cpu/10527696

[2]https://browser.geekbench.com/v5/cpu/10527696
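A quick per-dollar check of the figures quoted above; a minimal sketch using the scores and prices exactly as the commenter lists them, not independently verified:

    # Per-dollar comparison using the scores and prices quoted above.
    ipad_score, ipad_price = 1118, 2199.00   # iPad Pro 2021, fully specced
    air_score, air_price = 1733, 1849.00     # MacBook Air 2020, fully specced

    print(f"score ratio (Air / iPad): {air_score / ipad_score:.2f}x")   # ~1.55x
    print(f"price ratio (iPad / Air): {ipad_price / air_price:.2f}x")   # ~1.19x

    # Geekbench points per dollar spent:
    print(f"iPad Pro:    {ipad_score / ipad_price:.2f} pts/$")   # ~0.51
    print(f"MacBook Air: {air_score / air_price:.2f} pts/$")     # ~0.94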


Sandy Bridge actually does have hardware decoding, but this isn't really a CPU feature, since it's a separate accelerator more akin to an integrated GPU. FWIW, sites like YouTube seem to prefer the most bandwidth-efficient codec regardless of whether hardware decoding is available.


> about half

Exactly, no 100x, no 10x, just half. That is very noticeable but "extreme" sounds like much more.

> I don't even think Sandy Bridge (Intel, 2011) CPUs support H.264 decode

Correct, but that is unrelated to CPU speed; it's an additional hardware component. That is a fair argument, just like missing suitable HDMI standards, USB-C, etc. However, again, that is about features, not speed.


I'm still using my 12 (!) year old 17" MacBook Pro as my daily work machine. Yes, it's not the fastest computer, but for my usage it works. Granted, starting IntelliJ needs some time, but coding still works well (and compiling big codebases isn't done locally).

The one thing that really isn't usable anymore is Aperture/Lightroom. And missing Docker because my CPU is too old (though it still works in VMs ...) is a pain.


I'm still using my 2014 16" MacBook Pro too - the SSD has made such a dramatic difference to the performance of machines that I think in general they age much better than previously.

I'm not a heavy user, but that machine can handle Xcode ObjC/C++ programming quite handily.


The keyboard of the ThinkPad is and always was great, and the TrackPoint is a bonus on top of it for some of us. The other part of input/output is a good screen. Only after I/O does performance matter.

What I don't like about Apple's devices is the keyboard: the keys don't provide an equally good actuation point (resistance and feedback) and the keycaps aren't concave (so they don't guide the fingers). The quality problems and the questionable Touch Bar are problems, too. Lenovo tried that before Apple and immediately stopped it; they accept mistakes far quicker. I still suspect both Apple and Lenovo tried to save money with a cheap touch bar instead of more expensive keys.

But what about the performance? First, Apple only claims to be fast. What matters are comparisons of portable applications, not synthetic benchmarks. Benchmarks never mattered. Secondly, Apple uses a lot of money (from its customers) to get the best chip quality in the industry from TSMC.

What we have is the choice between all-purpose computers from vendors like Lenovo, Dell, or System76, and computing appliances from Apple. I say computing appliance and not all-purpose computer because I'm not aware of official porting documentation for BSD, Linux, or Windows. More importantly, macOS hinders free application distribution; not as badly as iOS, but it is already a pain for developers, and you need to use Homebrew for serious computing tasks.

Finally, the money.

Lenovo wants 1,000-1,700 euros for a premium laptop like a ThinkPad X13/T14, with replacement parts for five years, public maintenance manuals, and official support for Linux and Windows.

Apple wants 2,400-3,400 with no maintenance manuals, no publicly available replacement parts, and you must use macOS. Okay, they claim it is faster. Likely it is.

You buy performance with an exorbitant amount of money, but with multiple drawbacks. Currently I'm still using an eight year old ThinkPad X220 with Linux: the operating system I want and need, an excellent keyboard, a comfortable TrackPoint, and a good IPS screen. I think the money was well spent - for me :)


> Most things that people want to do with computers can be done on a machine that is five to ten years old without any trouble at all.

Please, browsing the web has always been a pain on old hardware.


You cannot win this argument because for some people Lynx in a terminal is an acceptable way of browsing the web.

Also, people argue in bad faith all the time and a lot of people for whom it isn’t an acceptable way of browsing the web would pretend it is anyway to win an argument.


> Battery life, screen quality, and ergonomics are the only things most consumers need to care about.

You, Sir, are a very utilitarian consumer. Most consumers care about being in the "In" crowd, i.e., owning the brand that others think is cool. Ideally, it comes in a shiny color. That's it. The exact details are just camouflage.


Yeah, I agree. Implicit in my statement is `for the people in the market Apple is targeting`. You aren't going to buy a $4000 computer, and you'll be happy with a clunker. I, and many people in my situation, don't feel there are other options for $4000 laptops right now.


Surprised by your experience with an older laptop. I have both a 4 year old Dell (I forget its name right now, for business use) and a 2015 MBA (which makes it 6 years old now), and they are both getting SLOW. In a vacuum they are fine, they pull through, but they are notably slow machines. (I prefer Windows but can't stand the Dell and find myself going back to the MBA, although YouTube sucks on that machine with Chrome.)

edit: oh, the MBA battery is still a miracle compared to the Dell's, btw


I bought the T460p last year. It replaced a 2009 MacBook Pro with spinning rust. That machine was fine too, but it stopped getting updates years ago, so I had to replace it. I hope that Linux will support the T460p for longer.


A lot of people don't have a desktop so a very powerful laptop that just works (tm) well for software development and can be lugged around effortlessly is ideal. I haven't owned a desktop myself in years and many engineers I know have been letting their rigs collect dust as the laptop just does it all.

For the price, there isn't something comparable in all respects.


> It might be a long time before it makes sense to buy a non-Apple laptop.

They already don't make sense, as the M1 isn't a "general purpose" CPU like Intel's or AMD's that supports multiple operating systems, and even the development of new ones. Instead, the M1 is a black box that only fully supports macOS; that's a crippling limitation for many of us.


> It might be a long time before it makes sense to buy a non-Apple laptop.

Some people need the ability to repair or upgrade, or the freedom to install any software they need. Not to mention that so far we have seen comparisons of the M1 "professional line" to a gaming laptop; professionals deal with Quadro cards since the RTXs are driver-limited for workstation duties, and speaking of gaming on macOS makes no sense.

For me it might be a long time since I can even consider buying another Apple product.


That's a lot of ifs and buts that ignores a very important fact: most of Apple's top engineers from their CPU division have already quit Apple ...

Source: Apple CPU Gains Grind To A Halt And The Future Looks Dim As The Impact From The CPU Engineer Exodus To Nuvia And Rivos Starts To Bleed In - https://semianalysis.com/apple-cpu-gains-grind-to-a-halt-and...

If they can't innovate, all they can do is keep increasing the core count ... I don't think that'll help them compete with future AMD / Intel / ARM or RISC-V CPUs in the long term.


It seems Intel has recognized that and hedged their bets by securing the remaining 3nm capacity [1]

1: https://www.techradar.com/news/intel-locks-down-all-remainin...



Frankly, I don't believe this. Who do you give the majority of leading-node capacity to?

1. Biggest long term partner.

2. Someone who competes with you as a manufacturer.

Not a hard decision!


TSMC had long said they were happy to have Intel as a larger customer, including on leading edge nodes… if there was long-term commitment from Intel.


Of course but that’s not the same as giving them the majority of the leading edge at the expense of Apple which is what the article implies.


I have an AMD laptop and my wife has an M1. Mine has twice the RAM, twice the SSD, and more ports, and it's 10% lighter and 20% cheaper.


And your AMD laptop can run many OSes, while your wife will forever be stuck on macOS on the Apple M1 laptop whose hardware is designed to be hard to repair and upgrade ...


I tried to work with the Apple laptop; it's very good in many respects, but I needed a program that can't be installed without an Apple ID. I think there are many programs like that for the Mac, so I passed.


Great post, thank you. Curious though: you mention only fabrication advantages. What about the in-house design team? Meaning, what about the talent?


>It might be a long time before it makes sense to buy a non-Apple laptop.

Can't tell if sarcastic but until you can run Linux on the M1 I don't see any reason to buy an Apple laptop.

It could be 10x faster than the competition but with OSX it would still feel like a net productivity loss, from having to deal with bugs and jank in the software.


It's not just that they're the only one willing to pay the premium: they specifically bought exclusivity to the newest node, and have for years.


> It might be a long time before it makes sense to buy a non-Apple laptop.

A laptop that can do some light gaming is not a niche requirement, and ultimately Apple decided to completely turn its back on that market with the ARM transition.


The architecture isn't to blame; it's the dropped 32-bit support that breaks a fair number of Steam games, and the lack of Vulkan/graphics engine support, both of which are... Apple being Apple.


The architecture is ultimately to blame for "you can't boot into Windows to fix Apple being Apple", however.


I'll take frame.work over Apple any day.


[flagged]


> there's really no reason to buy an apple laptop short of bragging

That seems like a simplistic view, and honestly, I figure you don't place much value on what the Mac delivers for me: mostly aesthetics and a lack of tinkering. For instance, this Mac has a nice, large trackpad, and even high end laptops from other manufacturers tend to feel a bit clunky by comparison. I don't like fan noise, and the Air performs well for a fanless machine. With a Dell laptop, I still had problems with audio drivers and bluetooth devices; I just don't want to mess with that crap, at all, ever, and even from the manufacturer, I'm dealing with driver issues and bad configuration. I don't want the ability to de-crapify my PC with a fresh install of Windows, I want to open the box and not have an OS loaded with 3rd party crap, and that's not something you get with a PC.

I could probably go on for a while if I thought about it, but I don't even consider an Apple laptop anything to brag about. Generally, I'm tech savvy, but so far over it that I want an appliance machine. I find visible logos, products, and brand identities tacky, so bragging doesn't even enter into the equation. Visible Apple logo counts against the machine for me.


>the OS is worse

Worse than what?

Windows, which even in 2021 is still clunky garbage.

Or Linux (any distro), which can't be installed on the newest hardware without trouble (my relatively recent experience).

And even then you get an OS that you have to fix yourself for it to work and feel remotely nice, yet without decent software in most cases outside of software development.


> Windows, which even in 2021 is still clunky garbage.

That's how I feel about macOS. It's for people who prioritize maximum animations over speed. For example, Finder is so slow compared to Windows Explorer. I love Windows. I love the customizability, I love the speed, and even 15 years ago, there were 10X more software packages for Windows than Mac.

I hear that macOS in recent years (after 2019?) even allows you to change the drab grey color of the menu bar? That's progress! Maybe by 2030, they will allow you to move the menu bar to a vertical orientation because that's how a lot of people prefer it; there's less vertical space than horizontal on most monitors. I honestly cannot understand how one can stand this stifling arrangement where you are told, "This is how it is, we know better".


>That's how I feel about macOS. It's for people who prioritize maximum animations over speed. For example, Finder is so slow compared to Windows Explorer. I love Windows. I love the customizability, I love the speed, and even 15 years ago, there were 10X more software packages for Windows than Mac.

Well, I don't have Windows at hand, but Finder on my 2019 MBPr opens in much less than a second. Maybe Explorer opens 3 times faster, but what's the point? Yes, I do prioritise speed over animations when it really matters (Linux is clearly the winner when it comes to animations, by the way; macOS has just a few by default).

>there were 10X more software packages for Windows than Mac.

Quality over quantity. I do agree that macOS lacks in a few departments, but most of the time this is due to its closed-garden-like nature. Windows has things like Rufus and other tools because it has to play the jack-of-all-trades role. macOS is free of this, like it or not.

>I love the customizability

This is understandable. I love this on Linux too. Now I just don't have the time, nor do I care really. I'd prefer an OS that gives me 80% of what I need/want for 10% of my time rather than an OS that requires me to spend 50% of my time on all those tweaks.


Do you mean the File/Edit/View menu bar, or the Dock?

The Dock has been configurable forEVER. I ran it on the side for many years before switching back to the bottom.


Yes, the File/Edit/View menu bar that has to be at the top.


I have never even heard of someone wanting this, let alone doing it. Are you saying you do this in Windows programs?


This very discussion shows that two people can have differing opinions and thus, the choice should be left to the user. That is my main point.

Now, on a completely different note: a little bit more about the vertical layout. Here are two articles with screenshots. It's awesome and now I can't go back to a horizontal taskbar ever.

https://www.groovypost.com/howto/howto/vertical-vs-horizonta...

https://adamhollett.com/posts/2014/02/move-your-taskbar-to-t...


Those articles are about the Windows Taskbar, not the menu I was talking about. This is what I was trying to clarify in my earlier question, when I asked "Do you mean the File/Edit/View menu bar, or the Dock?"

You insisted at the time that you meant the MENU bar, not the Dock, but now it appears that was incorrect.

This got a little long, but the tl;dr here is that you are confused on terms, and that the thing you say can't be moved on the Mac absolutely can be, and always could be.

--

One of the ways in which Win and MacOS are different is that, in MacOS, the File/Edit/View menu is always across the top of the screen, not bound to a specific window. I'm typing this in Firefox. I have several Firefox windows open, but NONE of them have a File menu. The menus for Firefox are across the top of my monitor.

In Windows, by contrast, this menu is part of each window spawned by a given application. Most modern Windows programs use the Ribbon style menu, but some older or less-updated tools (e.g., SQL Server Management Studio) use the old style. If I were doing this post in Firefox on Windows, each of my Firefox windows would have its own File/Edit/View menu.

In both Mac and Windows, this menu is always in this known position -- ie, on a Mac, it's across the top of the screen; on Windows, it's part of the top of each app window. It cannot be moved (to the best of my knowledge).

What YOU are talking about (and what those articles are talking about) is called the Taskbar in Windows. The analogous item in MacOS is the Dock. In both systems, the default location is the bottom of the screen, and in both systems it is possible to change this.

-> In Windows, the Taskbar can be placed on the bottom (default), either side, or across the top of the screen.

-> In MacOS, the Dock can be placed on either side or the bottom; you can't put it on the top of the screen because it would interfere with the aforementioned File/Edit/View menu.

--> In NEITHER OS is it possible to reposition the File/Edit/View menu.


You are right. I think the confusion arises because this menu bar at the top mixes up the File/Edit/View menu to the left and the Bluetooth and WiFi icons to the right. When you minimize all windows, what does this top bar show? How do you access the WiFi icon if all windows are minimized and this top bar has disappeared? Sorry, the last time I used macOS was in 2017.

(Also, to the extreme left of this Menu bar is the Apple icon which can be used to access the "System Preferences". I guess this mixing of system-wide and program-specific menu options doesn't bother macOS folks.)


The key distinction here is that the MacOS menu bar isn't part of any Window, so it never disappears. Under MacOS, SOME application ALWAYS has focus, and whatever has focus shows its menu options there between the Apple menu and the right-hand icons (see below).

This is different from Windows. On a Mac, if you quit all your apps entirely, then the only remaining application is Finder, which has no real equivalent under Windows and cannot really be quit (you can restart it, tho, which is sometimes required).

Finder is how you navigate to files, but it's also the "shell" that controls how you interact with the computer -- it gives you the Dock, the menus, etc.

The area to the right, as you correctly surmise, is more analogous to the System Tray area on the right side of the Taskbar under Windows. That side doesn't change with app focus.

The Apple menu is always there, yes.

As for your final snide note, no, we're not bothered, because it's not something to be bothered by (besides, the same supposed "issue" exists in the System Tray on Windows, where background app and system level options often coexist).

Honestly, I think this thread probably MOSTLY shows that people who aren't terribly familiar with an OS should avoid making negative statements about it.


Well I'm someone who hacks on Linux for a living.

What do I do now?


Well, I've been hacking on Linux for 8 years and have been a maintainer for a few packages in the AUR, so what?

Even the way you present yourself here proves my point. This is not an OS for a comfortable life, it's a system for "hacking" (including the system itself).

If you still have passion and time for this - cool, most people don't want to spend their days on this.


> If you still have passion and time for this - cool, most people don't want to spend their days on this.

Just adding another counterpoint to "macOS is awesome", an anecdotal one, because we don't have any other kind.

My wife (she is not a hacker, just a computer science teacher) hates macOS; she preferred Linux. She used the system I set up for her, and I left it alone. She loved it.

macOS is created for the average Joe who just browses the web and has no clue about anything else (like the filesystem or directories). Example: Finder, that thingy can't show you full paths. If you try to get to your home directory to get some files, you have to jump through hoops.

I'm astonished that it is pushed as a developer OS, "because you have 'Linux' there". Sorry, the choice of basic utilities is poor (a while back, the bundled bash was ages old), and everyone has to use brew to get anything useful.

The hardware is nice, but the OS is something that you just overwrite when installing Linux.


> Example: Finder, that thingy can't show you full paths

It can

> If you try to get to your home directory to get some files, you have to jump through hoops.

You don't

> Sorry, the choice of basic utilities is poor (a while back, the bundled bash was ages old), and everyone has to use brew to get anything useful.

Ah yes, unlike the great linux where everyone is using apt-get/yum to install anything useful

Disclaimer: have used Macs as my primary developer machine since 2008 across 4 industries as both frontend and backend developer in half a dozen different programming languages.


> Ah yes, unlike the great linux where everyone is using apt-get/yum to install anything useful

I wouldn't complain if macOS had those utilities built into an OS package manager/app store, but it does not; one needs to install a custom package manager.

It is as if I needed to install yum on Debian to get apps.


You are not complaining, you are just grasping at yet another straw.


I'm sorry, but this is just funny to say the least.

I'm your average software developer when it comes to building software, but I'm (and most people are) the average Joe the rest of the time.

I don't really care if some "outside of job" app is a bit outdated if it performs well.

At the same time, I do care about my time, and I prefer applications that were crafted with user experience in mind, which Linux clearly lacks.

>She used the system I set up for her, and I left it alone. She loved it.

So she doesn't really love Linux. You just built a kiosk for her. Nice and shiny. Good for her, but this has nothing to do with the topic.


I use an AMD Linux machine for work (software development).

Now I want to set up a second machine for audio production (Pro Tools and Ableton). For that I have the choice between Windows and macOS. Easy choice for me: M1 and macOS; I certainly won't dabble with Windows.


"Windows, that is even in 2021 is still a clanky garbage."

macOS is useless on the desktop; it can't scale the DPI of its UI. If that's not clunky garbage, I don't know what is.

My large 4K screen either acts as an oversized 1080p display or turns into a blurry mess.

Android and Windows can both efficiently use a display with any aspect ratio and any DPI.


It scales fine for me on 4K. Do you have the most recent Mac OS?


You misunderstand: the macOS software stack can only scale the UI to 1x, 2x, or 3x. So you can either use a 4K display at 2x scaling, which gives you full-HD worth of real estate at Retina resolution, or you can use it at 1x, which gives you tiny icons.

When you use "looks like 2560/QHD", the OS renders to a virtual surface, pretending it has a 5K screen attached. Then it downscales that image and outputs it. The result is jankier straight lines, blurrier text, etc.
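A rough back-of-the-envelope sketch of the downscaling described above, assuming a 3840x2160 (4K UHD) panel; the exact panel resolution is an assumption, not something the commenter states:

    # Sketch of the macOS "looks like 2560x1440" HiDPI mode described above,
    # assuming a 3840x2160 physical panel.
    logical = (2560, 1440)     # what the UI "looks like"
    backing_scale = 2          # macOS renders HiDPI content at 2x per axis
    physical = (3840, 2160)    # assumed 4K panel

    # The OS renders into a virtual 5K surface...
    virtual = (logical[0] * backing_scale, logical[1] * backing_scale)
    print("virtual render target:", virtual)                    # (5120, 2880)

    # ...then downscales that image to the panel's native resolution.
    print("downscale factor per axis:", physical[0] / virtual[0])   # 0.75

    # The GPU pushes ~78% more pixels than the panel can show, and the
    # non-integer 0.75 ratio is what softens text and line edges.
    rendered = virtual[0] * virtual[1]
    displayed = physical[0] * physical[1]
    print("rendered / displayed pixels: %.2f" % (rendered / displayed))  # ~1.78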

Modern Windows and Android render their UI natively at any resolution, so you don't get these issues. The caveat is that Windows has 3 UI frameworks, and the oldest one is still found in some places; that's the one that doesn't scale.


Is that why every tech company uses MacBooks?

People who can't afford Apple laptops (and gamers and certain other use cases) buy cheaper laptops. The brand appeal draws all the wealthy people towards Apple devices.


A lot of tech companies let their engineers choose between something like an X1 Carbon or a Macbook.

Lots of tech workers, in my experience, actually choose a Macbook for work because they really want to use a slick, luxury device and can't justify the cost for personal use... particularly given how many of them actually use their work laptops for personal tasks in off hours.

I think HN underestimates how many SWE's actually don't do much programming or use their personal computers for much at all. Particularly those with families.


> I think HN underestimates how many SWE's actually don't do much programming or use their personal computers for much at all. Particularly those with families.

This. I haven't owned a personal computer for 10 years now; I always use the laptop I get from work.


> This. I haven't owned a personal computer for 10 years now; I always use the laptop I get from work.

While a lot of folks do this, it’s such a terrible practice from your perspective (the company essentially owns your personal data) and the company’s (way higher risk). It’s not something I’d do, let alone brag about as if it’s a good thing, if I were you.


I've worked at many tech companies, and Apple machines are by far the minority. And that's a biased sample from what is really their largest market.


[flagged]


That is not a typical experience with a Macbook Pro. Have you considered that maybe there's something wrong with yours?


Could it be that your corporate laptop is loaded with a pile of monitoring, management, and antivirus crapware? I have one, and it annoys the hell out of me.


It sounds like you need to get your laptop fixed?


> They're expensive

What on earth are you talking about? The M1 Air is <$900 if one buys Apple refurbished ...


Are you really comparing a refurbished laptop to a new one so you can find a price that matches?


Even if you don't look at refurbished, here in France, a MacBook Air is 1029€ brand new.

A comparable ThinkPad X1 Carbon is 2499€ minimum (that's the base model, with 256GB/8GB, but with a high-res screen).

A comparable Dell XPS 13 is 1699€ (same, cheapest one with a high-res screen you can buy; though you can't have the high-res screen without buying 512/16GB).

The ThinkPad T14 Gen 2 (successor to T490, supposed to be the cheap ThinkPad) starts at an astonishing 2156€ for 1080p. That's the very cheapest T14 you can buy, period. I could buy a T450 new for $700 when I was in the US! T14s starts at 2369€, again for 1080p. They keep selling old models so they can still hit lower price points.

The cheapest T14 you can buy with a high-res screen comes with 8GB ram, 128GB SSD (WTF?) and costs 3408€ (no, not a joke) or 3548€ if you want a graphics card.

HP starts at 1599€ for high-res HP Spectre.

Even if you forget the whole "premium laptop" thing, I went on the Dell website and asked it to list all high-res 13" laptops: the cheapest is the 1699€ XPS. They don't make high-res Inspirons smaller than 15 inches. I believe the cheapest high-res laptop HP makes is that 1599€ Spectre, though their website is buggy. HP doesn't offer cheaper high-res devices except an 11" Chromebook.

These are all the official prices from the official manufacturer website.

Even on ok1.de, which has extremely cheap student prices, it's 1499€ for a high-res P14s, 2159€ for the cheapest high-res X1 Carbon, and 2,019€ for the cheapest high-res T14s. And then, we're comparing it to the 900€ student-price MacBook Air.

I've researched it for weeks, and there isn't a single premium Windows laptop that matches the MacBook Air on price.


Hm, the XPS 13 base model is a bit cheaper than the Air base model (in Austria), the screens are comparable (FHD+ is lower res than Retina, but then 4K is far more, so there is no direct comparison). I do have a 13" XPS and I can tell you, the basic screen (FHD in my case) is more than good enough.

But I was just blown away by how expensive Lenovo has become. HP has always been expensive.

I'm also looking at the fresh Linux laptops, like the TUXEDO InfinityBook Pro 14, which are in the same price range and look very compelling, if you are OK with running Linux, that is (I am)...


Interesting, I just went back on the XPS website and the base, 1080p XPS 13 with 11th gen Intel is 1000€ for 512GB. I remember it being much more when I checked this morning. That does seem like a great price.

I think I found the issue: when you select the 13" from the products page, it shows 1000€, but if you select it, then switch the configurator to the 2-in-1 13", and then switch back to the 11th gen 13", the base model with 256GB now costs 1400€. That website bug cost Dell a sale, since I would have definitely bought it instead of the M1 Air if I had seen the real price. What a shame!

Funnily enough also, when I configure it with 4K, then switch back to 1080p and switch every option back, it costs 1050€ instead of 1000€ for the exact same config.


I got my M1 Air new from BB for $750 this summer; good deals do exist.


That's a good deal. Care to elaborate on what BB means?


Best Buy I suppose


The fact that this beats AMD's top laptop CPU is actually a huge deal. And that's before considering battery life and thermals.

I'll never buy an Apple computer, but I can't help but be impressed with what they've achieved here.


Don't get me wrong: It's impressive and I have huge respect for it. I also bought one.

However, it would be surprising if Apple's new 5nm chip didn't beat AMD's older 7nm chip at this point. Apple specifically bought out all of TSMC's 5nm capacity for themselves while AMD was stuck on 7nm (for now).

It will be interesting to see how AMD's new 6000 series mobile chips perform. According to rumors they might be launched in the next few months.


This definitely is a factor. Another thing that people frequently overlook is how competitive Zen 2 is with M1: the 4800u stands toe-to-toe with the M1 in a lot of benchmarks, and consistently beats it in multicore performance.

Make no mistake, the M1 is a truly solid processor. It has seriously stiff competition though, and I get the feeling x86 won't be dead for another half decade or so. By then, Apple will be competing with RISC-V desktop processors with 10x the performance-per-watt, and once again they'll inevitably shift their success metrics to some other arbitrary number ("The 2031 Macbook Pro Max XS has the highest dollars-per-keycap ratio out of any of the competing Windows machines we could find!")


> This definitely is a factor. Another thing that people frequently overlook is how competitive Zen 2 is with M1: the 4800u stands toe-to-toe with the M1 in a lot of benchmarks, and consistently beats it in multicore performance.

It's a bit unfair to compare multicore performance of a chip with 8 cores firing full blast against another 8 core chip with half of them being efficiency cores.

The M1 Max (with 8 performance cores) multicore performance score posted on Geekbench is nearly double the top posted multicore performance scores of the 5800U and 4800U (let alone single core, which the original M1 already managed to dominate).

It'll be interesting to see how it goes in terms of performance-per-watt, which is what really matters in this product segment. The graphs Apple presented indicated that this new lineup will be less efficient than the original M1 at lower power levels, but that it will be able to hit it out of the park at higher power levels. We'll have to wait for the results from the likes of AnandTech to get the full story, though.

Personally, I'd love to see a MacBook 14 SE with an O.G. M1, 32 GB memory, a crappy 720p webcam, and no notch. I'd buy as many of those as they'd sell me.

I'm curious to see how the M1 Pro compares to the M1 Max. They are both very similar processors with the main differences being the size of the integrated GPU and the memory bandwidth available.

https://browser.geekbench.com/v5/cpu/search?utf8=%E2%9C%93&q...

https://browser.geekbench.com/v5/cpu/search?utf8=%E2%9C%93&q...


The M1 Max isn't competing with the 4800U, considering that its starting price is ~10x that of the Lenovo Ideapad most people will be benching their Ryzen chips with. I don't think it's unfair to compare it with the M1, since it's still more expensive than the aforementioned Ideapad. Oh, and the 4800U came out 18 months before the M1 Air even hit shelves. Seeing as they're both entry-level consumer laptops, what might you have preferred to compare it with? Maybe a Ryzen 9 that would be more commonplace in $1k+ laptops?


It's hard to compare CPUs based on the price of the products they're packaged in. There are obviously a lot of other bits and bobs that go into them. However, it's worth noting that a MacBook Air is $300 cheaper than the list price for the Ideapad 4800U with an equivalent RAM and storage configuration. So by your logic, is it fair to compare the two? Perhaps, a 4700U based laptop would be a fairer comparison?

The gap between the Ideapad 4800U and the base model 14 inch MacBook Pro is a bit wider, but you'll also get a better display panel and LPDDR5-6400[1] memory.

We'll have to see how the lower specced M1 Pros perform, but it's hardly clear cut.

[1] https://www.anandtech.com/show/17019/apple-announced-m1-pro-...

Edit: I just looked up the price of the cheapest MacBook Pro with an M1 Max processor and it's about 70% more expensive than the Ideapad 4800U. However, it comes with double the memory (and much better quality memory), a better display, and what seems to be roughly 70% better multithreaded performance in Geekbench workloads. Furthermore, you may get very similar performance on CPU-bound workloads with the 10 core M1 Pro, the cheapest of which is only 52% more expensive than the Ideapad 4800U.


AMD is not only already planning 20% YoY performance improvements for x86 but now also has a 30x efficiency plan for 2025. I think x86 is in it for much longer than a decade.

Intel, OTOH, depends on whether they can gut all the MBAs infesting them.


30x efficiency is specifically for 16 bit FP DGEMM operations, and it is in the context of an HPC compute node including GPUs and any other accelerators or fixed function units.

For general purpose compute, no such luck unfortunately. Performance and efficiency follow process technology to a first order approximation.

https://www.tomshardware.com/news/amd-increase-efficiency-of...


also bear in mind that AMD's standards for these "challenges" have always involved some "funny math", like their previous 25x20 goal, where they considered a 5.02x average gain in performance (10x in CB R15 and 2.5x in 3D Mark 11) at iso-power (same TDP) to be a "32x efficiency gain" because they divided it by idle power or some shit like that.

But 5x average performance gain at the same TDP doesn't mean you do 32x as much computation for the same amount of power. Except in AMD marketing world. But it sounds good!

https://www.anandtech.com/show/15881/amd-succeeds-in-its-25x...

Like, even bearing in mind that that's coming from a Bulldozer derivative on GF 32nm (which is probably more like Intel 40nm), a 5x gain in actual computation efficiency is still a lot (and it's actually even more in CPU-based workloads), but AMD marketing can't help but stretch the truth with these "challenges".
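
To put numbers on the objection: per the AnandTech piece, the raw gain was 5.02x at the same TDP, and the headline "32x" comes from also folding in a reduction in measured "typical energy use" (which is dominated by idle power). The ~6.4x factor below is just the ratio implied by those two published numbers, not something AMD states directly:

    \text{claimed gain} \approx 5.02 \times 6.4 \approx 32,
    \quad \text{where } 5.02 = \text{perf gain at iso-TDP},\ 6.4 = \text{implied drop in "typical energy use"}

A straight perf/watt number at the same TDP would just be the ~5x.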


To be fair idle power is really important for a lot of use cases.

In a compute focused cloud environment you might be able to have most of your hardware pegged by compute most of the time, but outside of that CPUs spend most of their time either very far under 100% capacity, or totally idle.

In order to actually calculate real efficiency gains you'd probably have to measure power usage under various scenarios though, not just whatever weird math they did here.


That's not really being fair, because the metric is presented to look like traditional perf/watt. And idle is not so important in supercomputers and cloud compute nodes which get optimized to keep them busy at all costs. But even in cases where it is important, averaging between the two might be reasonable but multiplying the loaded efficiency with the idle efficiency increase is ludicrous. A meaningless unit.

I can't see any possible charitable explanation for this stupidity. MBAs and marketing department run amok.


Yep 100% agree with you - see my last sentence. Just trying to clarify that the issue here isn't that idle power consumption isn't important, it's the nonsense math.


Wow that's stupid, I didn't look that closely. So it's really a 5x perf/watt improvement. I assume it will be the same deal for this, around 5-6x perf/watt improvement. Which does make more sense, FP16 should already be pretty well optimized on GPUs today so 30x would be a huge stretch or else require specific fixed function units.


it's an odd coincidence (there's no reason this number would be related, there's no idle power factor here or anything) but 5x also happens to be about the expected gain from NVIDIA's tensor core implementation in real-world code afaik. Sure they advertise a much higher number but that's a microbenchmark looking at just that specific bit of the code and not the program as a whole.

it's possible that the implication here is similar, that AMD does a tensor accelerator or something and they hit "30x" but you end up with similar speedups to NVIDIA's tensor accelerator implementation.


I've seen tensor cores really shining in... tensor operations. If your workload can be expressed as convolutions and matches the dimensions and batching needs of tensor cores, there's a world of wild performance out there...


Alder lake is looking really good in leaked benchmarks. I definitely think Intel is down, but not out.

Have to see if they can not only catch up, but keep up.


Where can I find these benchmarks?



the 4800u stands toe-to-toe with the M1 in a lot of benchmarks, and consistently beats it in multicore performance

I had a Lenovo ThinkPad with a 4750U (which is very close to the 4800U) and the M1 is definitely quite a bit faster in parallel builds. This is supported by GeekBench scores, the 4800U scores 1028/5894, while the M1 scores 1744/7600.

If AMD had access to 5nm, the CPUs would probably be more or less on par. Well, unless you look at things like matrix multiplication, where even a 3700X has trouble keeping up with the Apple AMX co-processor with all 8 of the 3700X's cores fully loaded.
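
(If anyone wants to reproduce that AMX comparison: the AMX units aren't directly programmable, but they're generally understood to be what Apple's Accelerate framework dispatches BLAS calls to, so timing a plain cblas_sgemm is a reasonable proxy. A minimal sketch follows; the matrix size and timing harness are arbitrary choices of mine. On macOS it builds with `clang matmul.c -framework Accelerate`; on the Ryzen side you'd build the same code against OpenBLAS instead.)

    /* matmul.c - single-precision GEMM via Accelerate's BLAS (a rough AMX proxy on Apple silicon) */
    #include <Accelerate/Accelerate.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        const int n = 2048;                       /* arbitrary square-matrix size */
        float *a = malloc(sizeof(float) * n * n);
        float *b = malloc(sizeof(float) * n * n);
        float *c = malloc(sizeof(float) * n * n);
        for (int i = 0; i < n * n; i++) { a[i] = 1.0f; b[i] = 2.0f; c[i] = 0.0f; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        /* C = 1.0 * A * B + 0.0 * C, row-major, no transposes */
        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0f, a, n, b, n, 0.0f, c, n);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%dx%d sgemm: %.3f s, ~%.1f GFLOP/s (c[0]=%.0f)\n",
               n, n, secs, 2.0 * n * n * n / secs / 1e9, c[0]);
        free(a); free(b); free(c);
        return 0;
    }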


During my time, the Mac line has switched CPUs three times. Why would Apple not switch to RISC-V if that one is really so much better than ARM?


Well... the M1 Max single core score is also beating a 5950x.

But tbh, it doesn't seem like the new hotness in chips is single core CPU performance; it's about how cleverly you spend the die space on custom processors, in which case the M1 will always be tailored to Apple's (and presumably Mac users') specific use-cases...


My 5950x compiles the Linux kernel 10 seconds faster than a 40 core xeon box, in like 1/10th the power envelope.

The chances of actually getting only a single core working on something are slim with multitasking. Half a decade ago I had to purpose-build stuff - hardware, kernel, etc. - for CPU mining to eliminate thermal throttling and pre-emption on single-threaded miners.

Single thread performance has been stagnant forever because, with Firefox/Chrome and whatever the new "browser app" hotness is this month, you're going to be using more than 1 core virtually 100% of the time, so why target that? Smaller die features mean less TDP, which means less throttling, which means a faster overall user experience.

I'm glad someone is calling out the M1 performance finally.


You actually get better single core performance out of a 5900x than a 5950x. The more cores the AMD CPU has, generally the more they constrain the speed it can perform at on the top end. In this case, the 5950x is 3.5GHz and the 5900x is 3.7GHz. The 5800x is even slightly faster than that, and there's some Geekbench results that show single core performance faster than the score listed here, but at that point the multi-core performance isn't winning anymore.

Also, I'm not sure what's up with Geekbench results on MacOS, but here's a 5950x in an iMac Pro that trounces all the results we've mentioned here somehow.[1]

1: https://browser.geekbench.com/v5/cpu/6034871


>that trounces all the results we've mentioned here somehow.[1]

MacOS, being unix based, has a decent thread scheduler - unlike windows 10/11, which is based on windows NT and 32bits, and never cared about anything other than single core performance until very very recently.


If that's true it puts a lot of comparisons into question. That windows multiprocessing isn't as good as MacOS doesn't matter to a lot of people that run neither. There's not a lot of point in using these benchmarks to say something about the hardware if the software above it but below the benchmark can cause it to be off by such a large amount.


Most comparisons have always been questionable. The main reason Apple gets away with charging so much more for similar hardware, and macOS still dominates the productivity market, is that it squeezes so much more performance (and stability) out of equivalent hardware. Just check the Geekbench top multithread scores: Windows starts around the 39th _page_ - and that's Windows Enterprise.


That's not necessarily true, single core boost on a 5950x is higher than the 5900x (4.9GHz vs 4.8GHz).


That's obviously a Hackintosh or a VM result.


Geekbench score for a 5900x (12 core) against the M1 Max (10 core) score already linked:

https://browser.geekbench.com/v5/cpu/compare/10517471?baseli...

They look to be on par to me. Things will be less murky if and when Apple finally scale this thing up to 100+ watt desktop class machines (Mac Pro) and AMD move to the next gen / latest TSMC process.

In my view Intel and AMD have more incentive to regain the performance crown than Apple do at maintaining it. At some point Apple will go back to focusing on other areas in their marketing.


No it doesn't, you must have misread the benchmark scores.


Geekbench.com says m1 max 1700, 5950x 2200


It wouldn't be a surprise that a CPU that can't run most of the software out there (because that software is x86) and has ditched all compatibility and started a design from scratch, can beat last-gen CPUs from competitors. For specific apps and workloads, for which it has accelerators.

But here I am, with a pretty thin and very durable laptop that has a 6 core Xeon in it. It gets hot, it has huge fans, and it completely obliterates any M1 laptop. I don't mean it's twice as fast. I mean things run at 5x or faster vs an M1.

Now, this is a new version of the M1, but it's an incremental 1-year improvement. It'll be very slightly faster than the old gen. By ditching Intel, what Apple did was make sure their pro line - which is about power, not mobility - is no longer a competitor, and never will be. Because when you want a super fast chip, you don't design up from a freaking cell phone CPU. You design down from a server CPU. You know, to get actual work done, professionally. But yeah, I do see their pro battery is 11 hours while mine usually dies at 9. Interesting how I keep my computer plugged in most of the time, though...


>Because when you want a super fast chip, you don't design up from a freaking cell phone CPU. You design down from a server CPU.

Is that really true? I don't have any intricate chip knowledge, but it rings false. Whether ARM is coming from a phone background or the Xeon from a server background, what matters in the end is the actual chip used. Maybe phone-derived chips even have an advantage because they are designed to conserve power whereas server chips are designed to harvest every little ounce of performance. IDK a lot about power states in server chips, but it would make sense if they aren't as adapted to rapidly step down power use as a phone chip.

Now, you might be happy with a hot leaf-blower and that's fine. But I would say the market is elsewhere: silent, long-running, light notebooks that can throw around performance if need be; you strike me as an outlier.

Pro laptops should have a beefy CPU, a great screen, a really fast SSD, long battery life, and lots of RAM - which (presumably) your notebook features, but seemingly the new M-somethings do as well. But in the end, people buy laptops so they can use them on their lap occasionally. And I know my HP gets uncomfortably hot; the same was said about the Intel laptops from Apple, I think.

Apple doesn't need to have the one fastest laptop out there, they need a credible claim to punching in the upper performance echelon - and I think with their M* family, they are there.


You actually have it correct. When you start with an instruction set designed to conserve power, you don't get "max power." The server chips were designed with zero power considerations in mind - the solution to "too much power" is simply "slap a house-sized heatsink on it."

>Apple doesn't need to have the one fastest laptop out there

Correct. My complaint, which I have reiterated about 50 times to shiny iPhone idiots on here who don't do any real number crunching for work, is that the industry calls "mid tier" what Apple calls "pro" - Apple is deceiving the consumer with marketing. The new laptops are competition for Dell's Latitude and XPS lines, not their pro lines. Those pro laptops weigh 7 lb and have a huge, loud fan exhaust on the back so they can clock at 5GHz for an hour. They have 128GB of RAM - ECC RAM, because if you have that much RAM w/o ECC, you have a high chance of bit errors.

There are many things you can do to speed up your stuff, if you waste electricity. The issue is not that apple doesn't make a good laptop. It's that they're lying to the consumer. As always. Do you remember when they marketed their acrylic little cube mini-desktop? It was "a supercomputer." They do this as a permanent tactic - sell overpriced underperforming things, and lie with marketing. Like using industry standard terms to describe things not up to that standard.


I'll happily take my quiet, small, and cool MacBook and number-crunch in a data center infinitely more powerful than your laptop. Guess that makes me a shiny iPhone idiot.

Relax, no one is forcing you to use Apple products.


Intel tried to "design down" their uArch.

Also, there are a TON of pro Mac users. If we define 'pro' as getting paid for work done on Macs...

Not to mention M1 emulates x86 pretty darn well..


[flagged]


The M1 (laptops) do emulate x86, and the M1 (chip) has a few x86 specific instructions to improve emulation performance


I love how you added "laptop" to make your statement... still false. There is a program running on macOS that literally recompiles x86 binaries to ARM, then the M1 executes the ARM code. The M1 does not execute x86 binaries. Period. It only runs ARM binaries.


No, parent comment isn't false, even if the wording could be more precise. It is true that M1 CPUs do not execute x86 instructions, but the machines do, in effect, execute x86 binaries. Also, M1 does have added instructions for TSO to improve performance of translated code.


Hipster graphic designers make upwards of 150,000 a year in my area. The professional in pro, never actually meant “software engineer”. It meant anyone who can hang up their signboard and work on their own: lawyers, doctors, architects, and yes… graphic designers.

Personally, I think software engineers don’t need fast laptops either. We need mainframes and fast local networks. Nothing beats compiling at the speed of 192 cores at once.

Which reminds me, laptops and render farms is exactly the technique those hipster graphic designers you talked about are using so they aren’t missing out on any power.


> 150,000 a year in my area.

which is the top of their salary ceiling, and it's not a high number, like at all. the top number for software devs is about 700k. In my field, people make 200k+. But we're not talking about "pro" people. We're talking about a "pro" laptop. It's the best thing that apple makes - that doesn't make it "pro." It's got the performance of the midrange systems from everyone else.

> I think software engineers don't need fast laptops either.

Yeah, when I run a script to read a few gig of performance data and do a bunch of calculations on it, I need a fast laptop. Until that's done, I'm sitting there, CPU maxed out, not able to do anything else. With an M1, I have to arrange my schedule to process the dataset overnight. With a Dell I run it over lunch. Case closed.

>We need mainframes and fast local networks. Nothing beats compiling at the speed of 192 cores at once.

I'm not a software engineer anymore. When I was, no, I did not usually compile on the server. I compiled on my workstation. Because you're not on a fast local network. You're at an airport for 4 hours, or on a plane for 5 hours, or on a comcast connection at your house.


> The m1 does not emulate x86. You literally don't know what you're talking about.

What is it doing when it runs x86_64 binaries then?


Rosetta 2 kicks in, performs a JIT/AOT translation of the x86 instructions to ARM instructions, executes those, and caches the resulting ARM binary for later use.

https://en.wikipedia.org/wiki/Rosetta_(software)
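
(Small aside for anyone poking at this: Apple documents a sysctl key, sysctl.proc_translated, that a process can query to find out whether it is currently running under Rosetta translation. A minimal check looks roughly like this; the file name and the main() wrapper are just mine.)

    /* rosetta_check.c - is this process running under Rosetta 2 translation? */
    #include <errno.h>
    #include <stdio.h>
    #include <sys/sysctl.h>

    static int process_is_translated(void) {
        int ret = 0;
        size_t size = sizeof(ret);
        /* 1 = translated x86_64 process, 0 = native; the key is absent on Intel Macs */
        if (sysctlbyname("sysctl.proc_translated", &ret, &size, NULL, 0) == -1)
            return (errno == ENOENT) ? 0 : -1;
        return ret;
    }

    int main(void) {
        switch (process_is_translated()) {
            case 1:  puts("running as x86_64 under Rosetta 2"); break;
            case 0:  puts("running natively"); break;
            default: puts("could not determine"); break;
        }
        return 0;
    }

Build it with `clang -arch x86_64 rosetta_check.c` and run the result on an M1 machine to watch it report the translated path.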


[flagged]


Please stop being so hostile to other users. It really doesn't add anything. You have made some factually questionable comments yourself, and I say this as someone who has worked on JIT aarch64 translation very similar to Rosetta 2.


Just say it: using a laptop on battery power is for hipsters.


Right? I'm curious what pros do "at the Indy 500". Most devs where I work use 15" Macs and probably a blend of apps from the JetBrains toolbox, mostly connected to power outlets to be fair. So we're talking local installs of Spring Boot Java app servers, a front-end webserver, and an IDE to work on one of those, because opening a second IDE on an Intel Mac will either run the dev out of RAM or the heat will cause a shutdown.

The thing is, the corporate DELL windows machines available were largely unsuitable to dev work due to the trashy interfaces (low resolution screens, bad trackpads, battery life so bad you can’t make it through a 2 hour meeting undocked). The Windows laptops available really failed hard when they needed to be laptops.


It's fine to work on battery sometimes. Except after 5 hours the marginal utility decreases, and after 8 hours it goes to zero. Why would I need more than one workday's worth?


Because you're a "dev" who makes web pages for a company that can't afford an Oracle license, and your office is a Starbucks. But you want to call your little toy a pro, because to non-programmers, you make missile guidance systems. Well, not you, but the other few people on this thread.


Yes, using a laptop for over 10 hours on battery is not for people who do any serious work needing a pro laptop - what is in professional circles called a workstation. Glad you understand. Note Apple's stated hours: 11 hours while browsing the internet, and 17 hours for watching videos. If this is your use case, you are not the target market for a workstation. Apple sells "pro" laptops like Kia sells racing cars.


> But here I am, with a pretty thin and very durable laptop that has a 6 core Xeon in it. It gets hot, it has huge fans, and it completely obliterates any M1 laptop. I don't mean it's twice as fast. I mean things run at 5x or faster vs an M1.

Probably not faster than an M1 Pro and definitely not faster than the M1 Max.

Your machine doesn't have a 512-bit wide memory interface running at over 400GB/s.

Does the Xeon processor in your laptop have 192KB of instruction cache and 24MB of L2 cache?

Every ARM instruction is the same size, which makes it practical to decode and keep many instructions in flight at once, unlike the x86-64 architecture, where instructions vary in size and you can't have nearly as many instructions in flight at once.

Apples-to-apples: at the same chip frequency, an M1 has higher throughput than a Xeon and most any other x86 chip. This is basic RISC vs. CISC stuff that's been true forever. It's especially true now that increases in clock speed have dramatically slowed and the only way to get significantly more performance is by adding more cores.

On just raw performance, I'd take the 8 high-performance cores in an M1 Pro vs. the 6 cores in your Xeon any day of the week and twice on Sunday.

And of course, when it comes to performance per watt, there's no comparison and that's really the story here.

> Now, this is a new version of the M1, but it's an incremental 1-year improvement.

If you read AnandTech [1] on this, you'll see this is not the case—there have been huge jumps in several areas.

Incremental would have resulted in the same memory bandwidth with faster cores. And 6 high-performance cores vs. the 4 in the original M1.

Except Apple didn't do that: they doubled the number to 8 high-performance cores and doubled the memory width, etc. There were 8 GPU cores on the original M1, and now you can get up to 32!

Apple stated the Pro and the Max have 1.7x of the CPU performance of Intel's 8-core Core i7-11800H with 70% lower power consumption. There's nothing incremental about that.

> By ditching Intel, what Apple did was make sure their pro line - which is about power, not mobility - is no longer a competitor, and never will be.

Pro can mean different things to different people. For professional content creators, these new laptops are super professional. Someone could take off from NYC and fly all the way to LA while working on a 16-inch MacBook Pro with a 120 Hz mini-LED, 7.7 million pixel screen that can display a billion colors in 4K or 8K video - on battery only.

If you were on the same flight working on the same content, you'd be out of power long before you crossed the Mississippi, while the Mac guy is still working - at half the weight of your laptop, but with a dramatically better display and better performance when it comes to video editing and rendering multiple streams of HDR video.

The 16-inch model has 21 hours of video playback which probably comes in handy in many use cases.

Here's a video of a person using the first generation, 8 GB RAM M1 Mac to edit 8K video; the new machines are much more capable: https://youtu.be/HxH3RabNWfE.

[1]: https://www.anandtech.com/show/17019/apple-announced-m1-pro-...


Sorry but this entire post reads (skims) like you're playing top trumps with chip specs.


[flagged]


>arm cpus are faster than server-grade processors

They are way more efficient as server cores. https://aws.amazon.com/ec2/graviton/


if you define efficiency as compute per watt. I don't give a flying crap about watts. Efficiency is measured as amount of work done per hour. Because I get paid for the work, and then I pay the two dollars a week for the watts. I don't care if it's five dollars a week for the watts or two. I do care if it's two hours of waiting time versus five.


Compute per watt is compute per dollar. If you want more compute, spawn more cores. In this case, it will be cheaper with ARM.


Lol, no. The $20/month cost of electricity is a rounding error next to my $6k laptop and the $50k of software licenses for it. It's even less of a rounding error for the datacenter, where a $500k ESX farm with several million in software on it uses $5k of electricity per month, including cooling.

Have you noticed almost no one uses ARM? There's a reason for that. Including software being licensed per core, so faster hotter cores and fewer of them win.


[flagged]


[flagged]


I also have a Xeon laptop. (45w TDP E3-1505m v6 in a dell precision).

Xeons are not magically faster than their i7/i9 counterparts (mine being not faster than a i7-7820HQ which is its contemporary flagship high performance mobile CPU). In fact they can be slower because the emphasis is on correctness and multi core, not usually single thread performance.

Xeons are also slower than a modern AMD chip which also can have more cores.

5x is a performance metric that doesn’t match up. Unless you have a desktop class 145w/165w cpu, in which case it’s not going to get 9hrs of battery unless you’re not actually touching the CPU. More like 30 minutes of heavy use on the largest battery legally allowed in laptops.

Edit: I just took a quick snoop on geek bench and found a modern xeon w in the largest dell precision laptop available:

Single core: 929 vs 1783 for M1

Multi core: 6718 vs 12693 for M1

https://browser.geekbench.com/v5/cpu/10517773

Synthetic scores aren't everything, but I'm hard pressed to see how you can get 5x the performance out of a chip that scores almost exactly half. Even with hardware acceleration like AVX-512 (which causes a CPU to get so hot it throttles below baseline, even on servers with more than adequate cooling).


My experience as well. I, too, have a Dell Precision with an 8 core Xeon part, and while it looks decent, it's heavy and not noticeably faster than the M1 I replaced it with when that came out. The Xeon would get hot and noisy when running Teams or Hangouts. It has been sitting in my drawer for the last year or so.

The M1 does not. Code compiles about as fast. The battery lasts a 3 day business trip or a hackathon without charging. I have never heard its fan. I don't care much about brands, but the lightweight, fast enough, and well built M1 is praiseworthy. I am not getting the Pro or Max, as the benefits for me as a software dev are probably not worth the extra weight and power consumption.


[flagged]


Citation please on "the Xeon dell smokes the M1 air" - Geekbench says the M1 Air can be twice as fast.

All other things being equal: your statement is simply not true.

I just checked and I can’t find a mobile Xeon with a greater TDP than 45w, so you’re stuck with that geek bench score because that’s essentially as good as it gets for a modern mobile Xeon.

Xeons, fwiw, are just higher binned i7s and i9s with features still enabled. The reason they can be slower than i7s and i9s is that the memory controller has to do more work and the way Intel does multi-core (essentially a ring bus) doesn’t scale gracefully always.


All things are not equal though. Geekbench includes many things that run on the GPU - video encoding and playback, rendering web pages - heck, even your window manager mostly uses the GPU. The Dell has a low power, low performance GPU plus a second one - an NVIDIA RTX, which is literally the fastest thing you can put in a laptop. To use the RTX, you have to explicitly tell your OS to use that GPU for a program - it defaults to the low power one.

In summary, you are full of crap if an untuned, blind Geekbench score is what you're going by - an aggregation of a whole bunch of tests, run with defaults. My statement is true: I kick off the same data processing script on my laptop and it finishes over lunch, while my coworker has to kick it off overnight.

> Xeon with a greater TDP than 45w

yes, the Xeon W-11955M in the Dell is 45W. Now add the RTX GPU - which coincidentally will be doing most of the work. Unless you're running the geekbench test you're referring to, to purposely gimp the results. That Intel integrated graphics chip uses almost no power.

Go process a large dataset and do some calculations on it. Run a bunch of VMs while you're doing it too - let's say 3. Give each one 32G of memory. It had better be ECC memory too, or your data won't be reliable. Maybe in about 5 years, when Apple catches up to the current pro laptops, you'll be able to. This is why all the M1 comparisons they do are to previous generation Intel chips in their old laptops, which have always been slow. Apple has always used outdated hardware in everything they've ever made.


I guarantee you, an M1 is about as fast as your "6 core xeon" laptop. M1 Pro/Max will steamroll it. You can look at Cinebench, LLVM compile, Java benchmarks, etc. You're completely delusional claiming your laptop is 5x faster in CPU. Mocking a "phone CPU" when the A15 is actually faster than a 5950X in single core performance shows you don't know what you're talking about.


You probably wouldn’t have gotten downvoted as much if people hadn’t (ironically?) read the part you said they wouldn’t read :p And I want to mention that I agree with you about apple not selling to pros very well over the past 6 years.

Your laptop is definitely very capable. But it’s barely a laptop. Why not build out a proper desktop? This precision would be a pain in the ass to carry around for most people, especially travel. Dell made sacrifices to get that kind of power: size, cooling, and battery life. Those are actually meaningful things when it comes to a laptop for most people, even pros.

I think the fact that you're even mentioning a MacBook *Air* in the same sentence is very good for the Air. The M1 hasn't really been marketed to the pro market until the recent release.

Also, 5x faster at what? The M1 is about the same performance as a W-11855M at single core cinebench, and only 25% slower at multicore. So comparison to the M1 pro/max is not very promising for the Xeon chip.


Engineers in the field, on oil rigs, doing CAD, simulations, etc. need huge, heavy desktop replacement laptops. The licensing cost for the software they use is usually above $100,000, and it's only certified for RHEL or Windows.

Even at $8k, the laptop is often below 5% of the budget.


It's because you're using "geekbench" for your numbers, which is a combined number from a bunch of misleading stats. It includes things like "encoding video" and "rendering HTML" - things that the M1 has specific optimizations for, which in the real world are done by the NVIDIA RTX on my Dell, with the CPU sitting at under 1% utilization. Yes, if you take these tasks, which in the real world don't use the CPU at all, and run them on a CPU with special accelerators for these useless tasks, the CPU designed specifically to game "geekbench" metrics will win. In the real world, I've got a multi-gig dataset I need to process and do calculations on. Go put a database on the M1 and see if it beats a Xeon. Or for an easier test, just load up Excel with lots of functions and lots of calculated rows and columns. Run that while running a couple of VMs for virtual appliances on your laptop too (128GB of ECC RAM helps with that).

You're literally here saying the M1 is going to replace a server chip. Newsflash - the M1 doesn't even run any of the needed code, because it only runs ARM code - a small niche.


Dell Precision with Xeon CPU … this thing doesn’t even have ECC ram, so it’s a toy and not suitable for actual pros.

Seriously though, your comment falls flat because of your very narrow and specific definition of "pro".

The vast majority of people using laptops professionally, obviously prefer a MacBook Pro over a niche Dell laptop with a Xeon CPU.

Also the Geekbench score for Xeon CPUs used in Dell Precision laptops is way below that of the M1 Pro.


[flagged]


I’ve had enough of this.

Post your part number and the model number of your ram.

Anything less and you’re trolling.

It is common for Dell Precisions to ship with Xeons and not ECC ram.


here's one with ecc ram from 2 years ago.

https://www.servethehome.com/dell-precision-7540-with-intel-...

In fact, I'm not sure you can even order a Xeon Precision w/o ECC RAM. But I'm not here to do your research for you on things you can look up in a minute. You're the one that claimed a Xeon Precision doesn't come w/ ECC RAM, without even googling it.

My laptop is a 7660. I also have an i7 5560, and a Latitude 7410 w/ an i7. It's what work gives me for work - and yes, I use all 3 since we went full remote. For "pro" work. The M1 laptop is comparable to my 7410 - which I use to play online games and Chromecast videos. Not any real work. It's a kid's toy compared to my 7660.

Since you seem to be lost here on this tech site, instead of hanging out on Reddit with your peers: all Xeon CPUs support ECC RAM. If you want, you can go on eBay, buy ECC RAM, and put it in any Xeon system. Or, you know, just order it w/ ECC RAM from Dell.

>It is common for Dell Precisions to ship with Xeons and not ECC ram.

Correct, because most come with 16 or 32GB of RAM, and are the low end of Dell's pro line. Once you get a lot of RAM, like 64 or 128GB, and you're crunching numbers and running VMs, your chances of a memory error go up dramatically. Which is why you need ECC RAM. Which has zero to do with your post that I was replying to, claiming Precisions don't have ECC RAM. Now find me an M1 laptop with 128GB of ECC RAM. Because you're right - "enough" strawman astroturfing from you.

The M1 is a Latitude competitor - not a Precision competitor. Apple's "pro" line is considered mid-tier by other vendors' standards. Their tests showing it beats Xeons are comparing against Xeons released 2 years ago, which for some reason they used in their own laptops. Because Apple has always used outdated CPUs compared to everyone else.


> since you seem to be lost here on this tech site, instead of hanging out on reddit with your peers

This kind of behavior makes you seem much more lost here on HN than the guy you're replying to. And looking at your downvoted and flagged posts all over the place, HN seems to agree.


I don’t need to google it when I own such a system.

My precision did not ship with ECC ram.

ECC also needs to be supported by the motherboard; all AMD Ryzen chips have ECC enabled, but due to limited motherboard support many are not able to effectively use it.

If you have the time, could you share the output of `sudo dmidecode --type 17`?


Edit:

That laptop (much heavier and much, much more power hungry), even at Dell's max configuration, still doesn't beat an M1 - and it certainly doesn't beat it 5x over.

For the Xeon W-11955M

1618 Single-Core Score

9266 Multi-Core Score

For the M1 Max

1770 Single-Core Score

12556 Multi-Core Score


You're looking at Geekbench scores that do a bunch of GPU-offloaded stuff, and they used the low power integrated graphics, not the NVIDIA RTX, in their tests - you have to explicitly select the discrete GPU for a process to use, as it defaults to the low power one. So, something that doesn't even look like a mistake of taking the defaults - something that looks like deliberately lying to game the numbers.


> Dell Precision. 6 core xeon, 128GB RAM, 9 hour battery life. [...] My laptop is like 5x or more faster that the Air.

Very unlikely. You seem to go out of your way not to mention the Xeon's part number. So what Xeon is it?


Yup. I, too, call BS on that. I own a Precision laptop with an 8 core Xeon and this thing is heavy, noisy, and can't work on battery for more than 2 hrs under a normal workload.


Just for fun, I looked up the most expensive Dell Precision Xeon laptop, which seems to be the Precision 7760 with a Xeon W-11955M.

With a GeekBench score of 1647 ST/9650 MT, this $5377.88 machine is just a bit faster than the passively cooled $999 MacBook Air with 1744 ST/7600 MT. The MacBook Pro 14" 10 core is better than the Dell Xeon in about every way, performance, price, performance per watt, etc.


This is the Xeon that's in the laptop. The Geekbench score is based on running tests on the low power integrated graphics instead of the discrete NVIDIA RTX GPU. The numbers you idiots keep quoting are completely bogus. Anyways, losers, enjoy your Apples, I've got real work to do.

I run about 50k worth of software licenses on the laptop and generate millions in revenue per quarter with it. That's why it's a pro laptop, and I'm sure my company paid about double your number after you add in Dell's Pro Support Plus. Pro laptop for pro work. You're a kid who wants toys, but wants to say you're using professional equipment. I've got something that's like the pro Mac laptop. Work gave that to me too, for secondary tasks. It's called a Dell Latitude. It runs the latest i7 and has no ECC memory. Great for Chromecasting porn and playing games in the browser, with 13 hours of battery unlike the Precision's 9, and much lighter. They just don't call it a pro.


You have a 5 year old Precision you're comparing to an Apple that came out yesterday. Tell me, was the election stolen?


I would prefer your machine, though too many negatives to parse all of your post. Maybe too many words? I will shop Xeon laptops, that sounds quick.


The main reason it does this is because Apple bought up all the 5nm capacity, though. AMD is still running at 7nm. So it's impressive because they could afford to do that, I guess.


You need to consider the larger target group of professionals. It's really the GPU capabilities that blow everything away. If you don't plan to use your MacBook Pro for video/photo editing or 3D modeling, then an M1 Pro with the same 10-core CPU and 16-core Neural Engine has all you need and costs less. Unless I'm missing something, I don't think there's much added benefit from the added GPU cores in your scenario, unless you want to go with the maximum configurable memory.


> GPU capabilities that blow everything away

Compared to previous Macs and iGPUs, sure - but an Nvidia GPU will still run circles around this thing.


Not so sure about that "running circles around". While the M1 Max will not beat a mobile RTX 3080 (~same chip as desktop RTX 3070), Apple is in the same ballpark of its performance [1] (or is being extremely misleading in their performance claims [2]).

Nvidia very likely has leading top end performance still, but "running circles around this thing" is probably not a fair description. Apple certainly has a credible claim to destroy Ampere in terms of performance per watt - while still limiting themselves in the power envelope. (It's worth noting that AMD's RDNA2 already edges out Ampere in performance per watt - that's not really Nvidia's strong suit in their current lineup).

[1]: https://www.apple.com/v/macbook-pro-14-and-16/a/images/overv... - which in the footnote is shown to compare the M1 Max to this laptop with mobile RTX 3080: https://us-store.msi.com/index.php?route=product/product&pro...

[2]: There's a lot of things wrong with in how vague Apple tends to be about performance, but their unmarked graphs have been okay for general ballpark estimates at least.


Definitely impressive in terms of power efficiency if Apple's benchmarks (vague as they are) come close to accurate. Comparing the few video benchmarks we are seeing from the M1 Max to leading Nvidia cards, I'm still seeing about 3-5x the performance across plenty of workloads (I'd consider anything >2x running circles).

https://browser.geekbench.com/v5/compute/3551790


> an nvidia gpu will still run circles arounnd this thing

Not for loading up models larger than 32GB it wouldn't. (They exist! That's what the "full-detail model of the starship Enterprise" thing in the keynote was about.)

Remember that on any computer without unified memory, you can only load a scene the size of the GPU's VRAM. No matter how much main memory you have to swap against, no matter how many GPUs you throw at the problem, no magic wand is going to let you render a single tile of a single frame if it has more texture-memory as inputs than one of your GPUs has VRAM.

Right now, consumer GPUs top out at 32GB of VRAM. The M1 Max has, in a sense, 64GB (minus OS baseline overhead) of VRAM for its GPU to use.

Of course, there is "an nvidia gpu" that can bench more than the M1 Max: the Nvidia A100 Tensor Core GPU, with 80GB of VRAM... which costs $149,000.

(And even then, I should point out that the leaked Mac Pro M1 variant is apparently 4x larger again — i.e. it's probably available in a configuration with 256GB of unified memory. That's getting close to "doing the training for GPT-3 — a 350GB model before optimization — on a single computer" territory.)


Memory != Speed

You could throw a TB of memory in something and it won't get any faster or be of any use for 99.99% of use cases.

Large ML architectures don't need more memory, they need distributed processing. Ignoring memory requirements, GPT-3 would take hundreds of years to train on a single high end GPU (on say a desktop 3090 which is >10x faster than m1) which is why they aren't trained that way (and why NVidia has the offerings set up the way they do).

>That's getting close to "doing the training for GPT-3 — a 350GB model before optimization — on a single computer" territory.

Not even close... not by a mile. That isn't how it works. The unified memory is cool but its utility is massively bottlenecked by the single cpu/gpu it is attached to.


I don't disagree that there are many use cases for which more memory has diminishing returns. But I would disagree that those encompass 99.99% of use cases. Not all problems are embarrassingly-parallel. In fact, most problems aren't embarrassingly parallel.

It's just that we mostly use GPUs for embarrassingly-parallel problems, because that's mostly what they're good at, and humans aren't clever enough by half to come up with every possible way to map MIMD problems (e.g. graph search) into their SIMD equivalents (e.g. matrix multiplication, ala PageRank's eigenvector calculation.)

The M1 Max isn't the absolute best GPU for doing the things GPUs already do well. But its GPU is a much better "connection machine" than e.g. the Xeon Phi ever was. It's a (weak) TPU in a laptop. (And likely the Mac Pro variant will be a true TPU.)

Having a cheap, fast-ish GPU with that much memory, opens up use-cases for which current GPUs aren't suited. In those use-cases, this chip will "run circles around" current GPUs. (Mostly because current GPUs wouldn't be able to run those workloads at any speed.)

Just one fun example of a use-case that has been obvious for years, yet has been mostly moot until now: there are database engines that run on GPUs. For parallelizable table-scan queries, they're ~100x faster still than even memory databases like memSQL. But guess where all the data needs to be loaded into, for those GPU DB engines to do their work?

You'd never waste $150k on an A100 just to host an 80GB database. For that price, you could rent 100 regular servers and set them up as memSQL shards. But if you could get a GPU-parallel-scannable 64GB DB [without a memory-bandwidth bottleneck] for $4000? Now we're talking. For the cost of one A100, you get a cluster of ~37 64GB M1 Max MBPs — that's 2.3TB of addressable VRAM. That's enough to start doing real-time OLAP aggregations on some Big-Ish Data. (And that's with the ridiculous price overhead of paying for a whole laptop just to use its SoC. If integrators could buy these chips standalone, that'd probably knock the pricing down by another order of magnitude.)


Again, there is a huge memory bandwidth bottleneck. It's DDR versus GDDR and HBM. It's not even close. The M1 will be slower.


Cerebras is a thing.


And at least an order of magnitude more expensive than an A100, if not two orders


I think you put extra zero in A100 price


> I don't disagree that there are many use cases for which more memory has diminishing returns. But I would disagree that those encompass 99.99% of use cases. Not all problems are embarrassingly-parallel. In fact, most problems aren't embarrassingly parallel.

Mindlessly throwing more memory at the problem does hit diminishing returns in 99.99% of use cases, because the extra memory will inflict a very large number of TLB misses during page fault processing or context switching, which will slow memory access down substantially, unless:

1) the TLB size in each of the L1/L2/… caches is increased; AND

2) the page size is increased, or the page size can be configured in the CPU.

Earlier versions of MIPS CPUs had a software-controlled, very small TLB and were notorious for slow memory access. Starting with the A14, Apple increased an already massive TLB, on top of the page size having been increased from 4kB to 16kB:

«The L1 TLB has been doubled from 128 pages to 256 pages, and the L2 TLB goes up from 2048 pages to 3072 pages. On today’s iPhones this is an absolutely overkill change as the page size is 16KB, which means that the L2 TLB covers 48MB which is well beyond the cache capacity of even the A14» [0].

It would be interesting to find out whether the TLB size is even larger in M1 Pro/Max CPU's.

[0] https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...
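
(For reference, the 48MB coverage figure in that quote is just entries times page size; the 4KB line is the same TLB with conventional pages:)

    3072 \text{ entries} \times 16\,\text{KB/page} = 48\,\text{MB}
    \quad\text{vs.}\quad
    3072 \times 4\,\text{KB} = 12\,\text{MB}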


I think we're losing perspective here: Apple is not in the business of selling chips, but rather in the business of selling laptops to professionals who would never even need what you describe.


"Right now, consumer GPUs top out at 32GB of VRAM. The M1 Max has, in a sense, 64GB (minus OS baseline overhead) of VRAM for its GPU to use."

We've had AMD APUs for years; you can shove 256GB of RAM in there. But no one cares, because a huge chunk of memory attached to a slow GPU is useless.


No way you could train, but if they could squeeze a bit more RAM onto that machine, you could actually do inferencing using the full 175B parameter GPT-3 model (vs. one of its smaller, e.g., 13B parameter versions [1] - if I could get my hands on the parameters for that one I could run it on my MBP 14 in a couple of weeks!).

The ML folks are finding ways to consume everything the HW folks can make and then some.

[1] https://arxiv.org/abs/2005.14165
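
(The back-of-the-envelope memory math, assuming 16-bit weights - my assumption, not a figure from the paper:)

    175 \times 10^9 \text{ params} \times 2\,\text{bytes} \approx 350\,\text{GB}
    \qquad
    13 \times 10^9 \text{ params} \times 2\,\text{bytes} \approx 26\,\text{GB}

So the 13B variant fits comfortably in 64GB of unified memory, while the full model would need something like the rumored 256GB-class machine plus 8-bit weights (~175GB), or more aggressive quantization.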


Terrible, terrible take. GPU RAM is about 10x faster than DDR in practical usage, so your workloads can finish and a new batch can be transferred over PCIe faster than the Apple would finish the first pass.


Yes, Nvidia GPUs are a major reason I switched to PC about 3 years ago. That, and I can upgrade RAM and SSDs myself on desktops and laptops. The power from professional apps like SolidWorks, Agisoft Metashape, and some Adobe products with an Nvidia card and drivers was like night and day compared to a Mac at the time I switched.

Does Apple have any ISV certified offerings? I can't find one. I suspect Apple will never win the engineering crowd with the M1 switch... so many variables go into these system builds and Apple just doesn't have that business model.

Even with these crazy M1's, I still have doubts about Apple winning the movie/creative market. LED walls, Unreal Engine, and Unity are being used for SOOOO much more than just games now. The hegemony of US-centric content creation is also dwindling... budget rigs are a heck of a lot easier to source and pay for than M1's in most parts of the world.


Anything that can run rings around it is unlikely to be running on a battery in a laptop, at least for any reasonable length of time.


Not to mention it will stretch the idea of a "mobile device" beyond reason, with unwieldy weight and thickness.


> Compared to previous macs and igpus - an nvidia gpu will still run circles arounnd this thing

True, but the point here is that M1 is able to achieve outstanding performance per watt numbers compared to Nvidia or Intel.


Are you really rendering in a cafe such that you need on-the-go GPU performance?


Editing photos or reviewing them before getting back home to know if you need to re-shoot, reviewing 8K footage on the fly, applying color grading to get an idea of what the final might look like to know if you need to re-shoot, re-light or change something...

There are absolutely use-cases where this is going to enable new ways of looking at content and give more control and ability to review stuff in the field.


Adding to all the usecases listed by other commenters.

Having higher performance per watt numbers also implies less heat on the M1's side. This means that even if someone isn't doing CPU/GPU heavy tasks, they are still getting better battery life, since power isn't being wasted on cooling by spinning up the fans.

For some perspective, my current 2019 16-inch i7 MBP gets warm even if I leave it idling for 20-30 mins, and I can barely get ~4 hrs of battery life. My wife's M1 MacBook Air stays cool despite being fanless, and lasts the whole day with similar usage.

The point is performance per watt matters a lot in a portable device, regardless of its capabilities.


Try disabling turbo - search around, there is a utility that keeps turbo off, and even after hours of use it stays at 50 degrees.

I am not associated with that developer. In fact I even bought one for my Mac mini. I got my M1 Mac mini to avoid all this hot air.

If you run Boot Camp, Windows has a registry setting to disable turbo, and also a power setting to limit the CPU to 99% (but it still seems hot); I use that for playing with VR and FS2020 using an external eGPU.


For the content creator class that needs to shoot/edit/upload daily, while minimizing staff, I can see definite advantages to having a setup which is both performant and mobile.


Honestly, half an hour on the plug every 4 hours of work sounds mobile enough to me.

I worked in the content creator business making videos, photos and music, and frankly the need for 15 hours of battery is a (very cool indeed) glamorous IG fantasy.

In reality, even when we were really on the move (I used to follow surfers on several isolated beaches in southern Europe), the main problem was the phones' batteries - using them in hotspot mode seriously reduces their battery life - and we could always turn on the van's engine and use the generator to recharge electronic devices.

Post processing was done plugged to the generator.

Because it's better to sit comfortably to watch hours of footage or hundreds of pictures.

I can't imagine many other activities that are equally challenging for a mobile setup.


I am often rendering and coding while traveling for work a few months out of the year. Even when I’m home, I prefer spending my time in the forest behind my house, so being able to use Blender or edit videos as well as code and produce music anywhere I want is pretty sweet.


GPUs are no longer special purpose components; certainly in macOS, the computation capabilities of the GPU are used for all sorts of frameworks and APIs.

It’s not just about rendering any more.


Besides what the other person commented, also consider creatives that travel. Bringing their desktop with them isn't an option.


That’s not the right way to look at it. We never did this because you couldn’t get enough performance.

Now you can get performance off a battery for your entire work day for less money than the competition (if reports are to be believed).

In this scenario, would you render things in a cafe? Why not?


> In this scenario, would you render things in a cafe? Why not?

honestly, as a traveler and sometimes digital nomad, the real question is "why yes?"

There is no real reason to work in a cafe, except because it looks cool to some social audience.

Cafes are usually very uncomfortable work places, especially if you have to sit for hours looking at the tiniest of the details as one often does when rendering.


Maybe you’re hungry but it’s crunch time.

It’s like when the iPad came out and had a camera. “Who is going to lug around an iPad to take pictures!?!?”

But that’s exactly what I started seeing people do. Pull out iPads and take snaps.


what's your point?


Kind of, yes... but sometimes, after years of working remotely, you may find it's nice to be hanging out where people are doing things, even while you work.


Why would you even buy a laptop if you don't need to be mobile?


Fewer cables is one reason.


That’s a strange reason to buy a laptop. On a desktop you set up the cables one time and you’re done.


There is this thing called all in one desktop


In a quiet laptop?


If you are trying to do hardcore video editing or modeling then 'quiet laptop' likely comes second to speed


I'm looking forward to playing BG3 on mine :)


Is it optimized for mac?


There's a native ARM binary :) You can choose the x86 or ARM binary when you launch, and they're actually separate apps. That's how they get around the Steam "thou shalt only run x86 applications" mandate.


Well, the MAX has double the memory bandwidth of a PRO, but I cannot see workloads other than the ones you mentioned where it would make a significant improvement.

Perhaps ML, but that's all locked into proprietary CUDA, so it's unlikely.

Perhaps Apple could revive OpenCL from the ashes?


The Pro only supports 2 external displays, which is why I ordered the Max.


ha, man, some people have veeeery different workstation setups than me.

I was offered a large external monitor by my employer, but I turned it down because I didn't want to get used to it, and working in different locations is too critical to my workflow. But I'd love to see how people with more than 2 external displays are actually using them enough to justify the space and cost (not being facetious, I really would).


Three displays here.

First - dedicated to personal Chrome profile, Discord, etc.

Second - dedicated to screen share - VS Code, terminal, JIRA, etc.

Third - Work Chrome Profile for email/JIRA/web browsing, note-taking (shoutout to https://obsidian.md), Slack.

I could certainly get by with fewer monitors, and do so when I am mobile, but I really enjoy the screen real estate when at my desk at home.


6+ hours of Web Conferencing a day.

1. Web Conferencing Content

2. Web Conferencing participants video

3. Screen where I multitask in parallel

4. VDI session to a customer's environment for tests


"I'd love to see how people with more than 2 external displays are actually using them enough to justify the space and cost (not being facetious, I really would)."

My friend uses 5 monitors, and I would too if I wasn't mandated to use an iJoke computer at work.

Teams, browser and IDE mandate a minimum of 3 displays.


You realize most machines won't support 5 displays, right? This isn't exclusive to "iJoke" machines.


Actually pretty much all desktops do .. you just plug in this thing called a high end graphics card ..



I disagree with this perspective. I think it's important to recognize that the M1 is a System on a Chip (SoC), not simply a CPU. Comparing the Apple M1 to "mid-range 8-core AMD desktop CPUs from 2020" is not comparing apples to apples: the M1 Max in this Geekbench result has 10 cores, whereas the AMD desktop CPUs you mention have 8 cores. Matching core counts would be more of an apples-to-apples comparison.

Where the M1 architecture really shines is the collaboration between CPU, GPU, memory, SSD, and other components on the SoC. The components all work together within the same 5nm silicon fabric, without ever having to go out to electrical interconnects on a motherboard. Thereby saving power, heat, etc.

What you lose in repairability/upgradability, you gain in performance on every front. That tradeoff is no different than what we chose in our mobile devices. If repairability and upgradability are more important to you, then definitely don't buy a device with an Apple M1; absolutely buy a Framework laptop (https://frame.work).


I really hope Qualcomm/NUVIA (or Nvidia/ARM) release a competitive ARM SoC that will eventually become part of a framework mainboard module.


It would be very interesting to see a consumer-oriented ARM SoC from one of the other main manufacturers. I doubt that will happen, however. Their entire business is based on being a component in a chain of components, not being the entire thing. Although, for example, Intel makes some motherboards, some GPUs, etc...their business isn't based on putting it all together in one fabric for their end-clients. They'd have to control/influence more of the OS for that. Apple has it all: full hardware control, full software control, and it's all designed for the mass market consumer.


M1 was 10 years in the making. So I wouldn't hold my breath for Qualcomm or anyone else.


> Apple really dragged their feet on updating the old Intel Macs before the transition

There was a Twitter post doing the rounds which I cannot locate now as my Twitter-search-foo is not strong enough. :-(

To summarise the gist of it: The post was made by someone on the product development team for the newly released MacBook Pro models, they referred to it as multiple years in the making.

So it may well be Apple were dragging their feet for good reason. They knew what was coming and did not want to invest further in Intel related R&D and did not want to end up with warehouses full of Intel-based devices and associated service parts.


I think people are missing the fact that it’s performance + energy efficiency where M1 blows regular x86 out of the water.


>Apple really dragged their feet on updating the old Intel Macs before the transition. People in the Mac world (excluding hackintosh) were stuck on relatively outdated x86-64 CPUs.

Maybe my expectations are different; but my 16" MacBook Pro has a Core i9-9880H, which is a 19Q2 released part - it's not exactly ancient.


Just because the SKU is fairly recent doesn't mean the tech inside is. That 9880H is using Skylake cores, which first hit the market in 2015, and is fabricated using a refined version of the 14 nm process which was first used for Broadwell in 2014.


But that's Intel's foot dragging, not Apple's, right? The 10th generation i9s didn't come out until 20Q2.


And the fact intel hasn't updated their CPUs is Apple's fault how again?


> 1. M1 is a super fast laptop chip. It provides mid-range desktop performance in a laptop form factor with mostly fanless operation. No matter how you look at it, that's impressive.

Get an x86-64 laptop with the fastest recent Ryzen and install Linux on it. You're gonna see better performance than your M1 Mac for most practical things, practically fanless, for half the price.

Performance-per-watt remains Apple's competitive advantage, and therefore battery life is 1.5-2x there.

I think the question remains, like it had before these new chips: do you want MacOS and the Apple ecosystem? If you do, they're obviously a good choice (even a great choice with these new chips). The less value you get from that ecosystem, the less you will get from these laptops. For nearly everything else, Linux will be the better choice.


If M1 chips have better performance per watt, then they are going to put off less heat. So however "practically fanless" the x86-64 chip you choose is, the M1 is going to be more fanless.

And for a laptop chip those are pretty much the two things that matter (not melting your lap and not being crazy loud).


That's correct. It's about the ecosystem. If you want to run Linux, you don't care about the ecosystem. That person's definition of "most practical things" is going to be very different. For me, using Reminders, Notes, Photos, Messages, and FaceTime across my iPhone and MBP is very practical. I've witnessed amazing gains in the integration just since I got into Apple machines in 2017. My work bought me an iMac in 2011 and I hated it.

And yes, it's that portability through incredible battery life that is the other advantage. I've owned many Windows laptops and the Dell M3800 that came pre-loaded with Ubuntu, and none of them came close in this regard. All other laptops I have used needed to be plugged in for most use which does severely limit portability. This does not. It also doesn't get hot. The fans never turn on. It's a game changer.


Unfortunately, laptop prices are somewhat high right now for thin 32GB+ RAM Ryzen models, probably due to the shortage. And about the iGPU: AMD hasn't released a CPU with an RDNA2 iGPU yet. The first general purpose Ryzen with an RDNA2 iGPU will probably be in the Steam Deck, but that won't be released until next year.


From what I see, the Clang difference mostly looks like it's because it is a 10 core CPU (with 2 of those being efficiency cores) vs a 16 core one. (I'm seeing 104 klines/sec vs 180.5.) The 5800X seems to hit 94-96 klines/sec.

Kinda wish they had a m1 max focused on extra cpu cores rather than gpu cores tbh


For reference, I'm looking to purchase a Framework laptop. If I'm lucky I'm going to get 8 hours of battery life. Real numbers for the M1 MBP reach 16+ hours, double that of the laptop of my choice.

That is crazy impressive and something that I wish I could get in the ultrabook form factor.


> Compared to those older CPUs, the M1 Max is a huge leap forward.

Is it, though? Its single-core score is roughly 2x that of the top CPU Apple put in the mid-2012 macbook pro. 2x after 10 years doesn't seem that great to me.

Maybe that's more of an indictment against intel


The single-core performance of Intel CPUs has only increased by about 10% per generation, so that checks out: seven or so generations of ~10% gains compound to roughly 2x (1.1^7 ≈ 1.95).


Single core performance has been stagnant for a decade from all manufacturers.


Kids these days don't know the days when 2x was within the realm of intentional product segmentation and, at most, 18 months' worth of progress.


> were stuck on relatively outdated x86-64 CPUs.

We've come to a point where x86 is faster to emulate on a more modern microarchitecture than to run natively.

One point is clear: per-transistor performance of x86 is diminishing with each new core generation, and it is already losing to the M1.

x86 makers will not be able to keep up for long. They will have to keep releasing bigger and hotter chips to maintain parity, until they cannot.


> We came to a point when X86 is faster to emulate, than to run on a more modern microarchitecture.

Since you are clearly referring to Rosetta 2:

- It is not an emulation. It is a translation. That alone is a 10-100x speed difference.

- It is not always faster; it highly depends on the specific code to run. Some code is miraculously faster, some is only at about 50% or lower.

- The comparison is "apples vs. oranges" because it's only comparing against Intel Macs, which do not use the fastest x86-64 chips on the market. Most importantly, AMD processors are not included in this comparison at all.

Constructing a general superiority of ARM vs. x86 based on this flawed premise is really far-fetched.


Per-transistor performance in the M1 isn't any better than the x86 opposition; quite the opposite, in fact. The standard M1 has 60% more transistors than a current-gen Ryzen mobile chip, while having half the large cores (and significantly worse multi-core performance).


You need to compare the cores alone, not whole SoCs with all their cache.


What are you comparing? M1 is an SOC.


Ryzen mobile chips are also SOCs.


I think in 10-15 years the majority of the market, laptop or otherwise, will still be x86.

For me what the M1 shows is not that x86 is dead and ARM is the future, or even that Intel or AMD are toast, it's that the days of the traditional socketed CPU and modular RAM are numbered.

10 years from now we'll all be using gigantic 32-core x86 SoCs with onboard RAM and GPU, and perhaps the people on HN cheering on Apple now won't be when hobbyist PC building is a thing of the past.


What’s the best Apple computer for fastest compile times nowadays?


> But I agree that the M1 hype may be getting a little out of hand.

Alder Lake will have big/little cores and be on 7 nm. I'm really curious if it or its successor will be good enough when compared to the current M1.


Alder Lake is 10nm


Really curious to see what chips will go into the Mac Pro line next year (year after?). Will they be faster than the AMD desktop/workstation chips when they come out?


> Apple really dragged their feet

... I wonder if this was deliberate theatre for dramatic contrast?


That's Docker for Mac versus native Docker. Docker only runs on Linux, so Docker for Mac spins up a linux VM to run your containers. When you mount your Ruby or Python projects into your containers, Docker for Mac marshals tons of filesystem events over the host/guest boundary which absolutely devastates your CPU.

Docker for Mac is really just bad for your use case.
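
If you're stuck on Docker for Mac, one partial mitigation (a sketch; image, path, and volume names here are just examples) is to bind-mount only the source tree and keep churny directories like the gem cache in a named volume, so those writes stay inside the Linux VM and never cross the host/guest boundary:

    # keep installed gems in a named volume that lives inside the VM
    docker volume create bundle_cache
    # bind-mount only the project source from macOS
    # (/usr/local/bundle is the default gem path in the official ruby image)
    docker run --rm -it \
      -v "$(pwd)":/app \
      -v bundle_cache:/usr/local/bundle \
      -w /app \
      ruby:3.0 bundle install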

No idea what's going on with the thumb drive, bluetooth, etc.

Beyond that, it's a little silly to compare a desktop (presumably many times larger, ~500+W power supply, cooling system, etc) with a Mac mini (tiny, laptop chip, no fans, etc).


> Beyond that, it's a little silly to compare a desktop (presumably many times larger, ~500+W power supply, cooling system, etc) with a Mac mini (tiny, laptop chip, no fans, etc).

You're totally right, but I see a lot of folks around me making absolutely bonkers claims on these M1 devices. If I believed everything I'm told, I'd be expecting them to run complex After Effects renders at 10,000 frames per second.


I think that’s disingenuous. I haven’t seen any claims that haven’t been backed up by benchmarks and specific use cases.

What most people keep missing is the M1 jumped out way ahead on performance per watt on their first attempt at making their own processor.

I’ve seen the M1 Mac for less than $700. In that price range, there’s not much competition when it comes to computation, GPU performance, etc. and comes fairly close to much higher priced x86 Macs and PCs.

That's why people are excited about the M1. You generally can't edit 4K (and 8K) video in real time on a machine in this price range, and people happily paid 5x that price to get this level of performance just a few years ago.


> on their first attempt at making their own processor.

They've been making their own chips for over a decade now. The M1 is not fundamentally different from the A series, and in fact seems to serve as a drop in replacement (in the iPad Pro line).

That being said, this is the first time they're confident enough in their own processor to rewrite their whole desktop software stack and market what are traditionally mobile-exclusive cpus as being superior to x86 in the personal computing space. And in all honesty, for good reason.


That being said, this is the first time they're confident enough in their own processor to rewrite their whole desktop

Pretty sure it wasn’t a matter of confidence, as Apple has been signaling making their own desktop processor for a while. There was an iPad launch a few years ago where Phil Schiller said that particular A-series processor had desktop-class performance like three different times.

Also, it’s the third time they’ve done a processor transition: 68K to PowerPC to Intel to ARM. There’s no doubt they’ve known for years they could do this if needed.

It wasn’t a rewrite; it was mostly a recompile, especially after Apple dropped support for 32-bit apps. iOS and macOS share many frameworks and APIs and that’s been running on ARM since 2007.

When Apple announced the switch to Intel in 2006, Jobs revealed they had Mac OS X running on Intel in the lab for years; there’s no doubt they’ve had macOS running on ARM for years as well.

Likely getting the logistics of software and hardware development to align at the scale Apple operates at is what took the time; not the lack of confidence.

This move was probably inevitable but it certainly didn’t help that Intel repeatedly missed deadlines and performance goals.


> There’s no doubt they’ve known for years they could do this if needed.

Honestly would not be surprised if it comes out at some point in the future that they launched the ARM-Mac project as soon as they'd successfully switched to Intel.


Honestly would not be surprised if it comes out at some point in the future that they launched the ARM-Mac project as soon as they'd successfully switched to Intel.

If not right after the switch to Intel, certainly when the A7 was on the drawing board, since it was the first mainstream 64-bit ARM processor.


> You're totally right, but I see a lot of folks around me making absolutely bonkers claims on these M1 devices.

What "but"? The claims are less bonkers with a perspective that acknowledges the significance of those physical differences.

Without the case, would the full internals (motherboard/CPU/GPU/PSU/ram/heatsinks/storage) of your desktop fit in your pants pockets? Because the M1 Mac Mini's fit in mine.

How much fan noise does your desktop produce? How much electricity does it consume? M1s are fast compared to almost everything, but they're bonkers fast compared to anything getting even remotely close to 20 hours on a battery at 3 lbs weight.


I get the point, but I was answering the question of "why do people compare this with desktops". Because people compare them with everything.

You're totally right that they are impressive in their own right, especially in their category. I'm not arguing that.

Also:

> How much fan noise does your desktop produce

Basically none since modern high end air cooling is essentially silent. Water cooler pumps are noisier though.


If Geekbench scores are an indication, it is well into the HEDT league [1]. Some results of previous-gen servers are equally surprising, such as this 2-way 2x64-core EPYC Rome setup [2].

1- https://browser.geekbench.com/v5/cpu/compare/10508517?baseli...

2- https://browser.geekbench.com/v5/cpu/compare/9997439?baselin...


> You're totally right, but I see a lot of folks around me making absolutely bonkers claims on these M1 devices.

I bought an M1 MBA to replace a 2017 (might have been 2018 - I forget now) top specced Intel MBP and it cut the time to run my large Java test suite in half. Of course those are generations/years apart in technology, but the MBA doesn't have fans, barely gets warm, the battery lasts days, and it's the low end of the first generation M1.

I'll fully admit that there might be other systems out there that can do exactly that in the same fan less, portable package that I'm unaware of.


> You're totally right, but I see a lot of folks around me making absolutely bonkers claims on these M1 devices.

Fake news happens on HN too. Apparently. Sadly.

Hopefully it's mostly limited to discussions about Apple.


I used to run Docker on an older Intel Mini and it was fine--a little slow but usable. I've also used it on other Intel Macs without major issues, including running giant Java apps and SQL Server.

On my M1 Mini, I found it unstable and incredibly slow (for Postgres at least, using stock images from Docker Hub) and mostly quit using it. I upgraded to the latest version recently and it was still too slow.

For everything else, though, I find that the M1 Mini is significantly faster. E.g., compiling Python versions is way faster.


What's the CPU arch of the container images you're running?


The postgres image (postgis/postgis:12-3.1-alpine) is linux/amd64.

I see now that the official postgres images have other arch options compared to the postgis images, which are amd64 only. I wonder if that could make a difference...

And now I see that Docker Desktop actually shows a warning about potentially poor performance when using an amd64 image: "Image may have poor performance, or fail, if run via emulation."

This is good news, since I'd much rather use Docker to run postgres in dev.
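
In case it helps anyone else checking this: you can see which architecture a locally pulled image targets, and which variants a tag publishes, with something like the commands below (tags are just the ones mentioned above; a reasonably recent Docker CLI is assumed):

    # architecture of the image as pulled locally
    docker image inspect --format '{{.Os}}/{{.Architecture}}' postgis/postgis:12-3.1-alpine

    # architectures published for a multi-arch tag
    docker manifest inspect postgres:13-bullseye | grep '"architecture"'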


Update: I switched to an official postgres image and it works great. I just had to create a derived image that installs PostGIS like so:

    FROM postgres:13-bullseye
    # refresh apt package lists first; the base image ships without them
    RUN apt-get update && apt-get install -y postgresql-13-postgis-3


I wonder how well ARM64 is supported in the Docker community? Presumably x86 Docker images running in emulation would be even slower than ARM64 images running in a Linux VM.

Though as you note they're completely different classes of hardware (500W vs. 26.5W), a more Apples-to-Apples (so to speak) comparison would be native Ubuntu on both systems, though as I understand it Ubuntu for M1 is not complete and may lack GPU support and other important features.


I don't think chip architecture was the main issue that the parent was alluding to. The same issues would arise if it was Docker running an x86 image on an x86 chip on a Mac.

Docker doesn't run natively on OSX, unlike Linux where it does. As a result, Docker Desktop for Mac spins up a Linux VM, and this is where penalties begin to be paid.

I do wonder if Docker Desktop is smart enough to tailor the VM's architecture to the image's architecture? Moreover, is it smart enough to tailor both the VM's and the image's architecture to the host's hardware architecture?

I'd be interested to see how these soft and hard architecture combinations actually change the performance characteristics:

    M1/ARM -> x86-VM -> ARM-image
    vs
    M1/ARM -> ARM-VM -> ARM-image

I wonder how difficult it would be to support docker natively on OSX (BSD?) or if Apple has any intention on providing that support?


> I wonder how difficult it would be to support docker natively on OSX (BSD?) or if Apple has any intention on providing that support?

I think it would be pretty difficult. Mac would effectively have to provide a fully compatible Linux interface in its kernel. It would probably be more likely to build a better fs-event bridge to guest systems, but considering MacOS still doesn’t have ext4 support, I’m not counting on it.


It’s supported pretty well. I’m running a Kubernetes cluster on Raspberry Pis and these days most popular images have ARM64 support.


Every Mac mini has had a fan. The 2018 Intel version and the M1 version use a lot of the small internal space for a large fan.


> Docker for Mac is really just bad for your use case.

Nah, not just for mac, docker is really just bad. period ;P


Docker is certainly a hammer that makes a lot of problems look like nails. Maybe too frequently. But you can’t really debate that it’s a really powerful hammer.


What’s wrong with Docker? Are you opposed to Docker specifically or containers generally? What’s the problem and what’s your preferred solution?


Disclaimer, I don't have an M1 Mac, but I do have a buggy ubuntu desktop and used Macs my whole technical life.

It seems that you're heavily in the minority with this. Even the weird bugs you mention are very unexpected. I've used a Mac for 15 years and never heard of an issue related to thumb drives. You may just have a lemon. See if you can just replace it (warranty, etc, not just spending more money).


It's hardly unheard of for a Mac to pick up weird issues. My wife's Macbook has a thing where the mouse cursor will just disappear when she wakes the thing up from sleep. Poking around on the internet finds other people with the same problem and no good solution (zapping PRAM doesn't help, neither did a full OS reinstall). It's just a live with it affair. The only fix is to close the lid and open it again, which isn't too bad but the issue crops up multiple times in a day and is quite annoying.

I manage a bunch of Ubuntu desktops at work and the most common issue seems to be that if you leave a machine alone for too long (a week or two), then when you log back in the DBUS or something seems to get hung up and the whole interface is mostly unusable until you log out and log back in. It can be so bad you can't even focus a window anymore or change the input focus. Next most common issue is DKMS randomly fucking up and installing a kernel without the nVidia or VirtualBox modules leaving the machine useless.


Doing a three finger swipe up and back down should fix it. But yeah that bug is annoying as hell


Just my N=1 anecdata, but I'm in the same boat. I got a Macbook Air from work, and I have a hard time using it compared to my Linux setup (which is saying something, since I'm using a Torvaldsforsaken Nvidia card). Here's a list of the issues I can recall off the top of my head:

- High-refresh displays cause strange green/purple artifacting

- Plugging in multiple displays just outright doesn't work

- Still a surprisingly long boot time compared to my x201 (almost a teenager now!)

- No user replaceable storage is a complete disservice when your OEM upgrades cost as much as Apple charges

- Idle temps can get a little uncomfortable when you're running several apps at once

...and the biggest one...

- A lot of software just isn't ready for ARM yet

Maybe I'm spoiled, coming from Arch Linux, but the software side of things on ARM still feels like it did in 2012 when my parents bought me a Raspberry Pi for Christmas. Sure, it works, but compatibility and stability are still major sticking points for relatively common apps. Admittedly, Apple did a decent job of not breaking things any further, but without 32-bit library support it's going to be a hard pass from me. Plus, knowing that Rosetta will eventually be unsupported gives me flashbacks to watching my games library disappear after updating to Catalina.


> A lot of software just isn't ready for ARM yet

Compatibility issues are the most painful part of architectural shifts, although Apple's pretty good at them by now and a lot of desktop software "just works."

> Plus, knowing that Rosetta will eventually be unsupported gives me flashbacks to watching my games library disappear after updating to Catalina

RIP my 32-bit macOS Steam library. Though I think some 32-bit Windows games can run under Crossover.

I wish that Apple would commit to supporting Rosetta 2 indefinitely, but realistically they'll pull the plug on x86 emulation just as they did with their 68K and PowerPC emulators. Apple is about the Next Big Thing and not so much about long-term backward compatibility.

However Windows on ARM under Parallels may get better over time - a number of x86 Windows games already run. And who knows, maybe we'll be able to boot native ARM Windows at some point...

On the up side, M1 Macs get access to the only game library Apple cares about: iOS games (including Apple Arcade and some pretty decent iPad games.)


I have both, and use both everyday.

I use an M1 MacBook Pro (16Gb Ram) for personal projects and as my standard home/travel computer. It is amazing and fast.

I use a Lenovo Carbon X1 Laptop with similar specs (i5, 16Gb Ram, m.2 ssd) for work that runs RHEL 8 (Red Hat Enterprise Linux). It's insanely fast and stable.

The overhead to run RHEL is so small it would blow your mind at the performance you get from almost nothing. Mac or Windows are crazy bloated by comparison. I know I am sparking an eternal debate by saying this, but I personally have never found Ubuntu to be as stable for a workstation (but ubuntu server is great) as RHEL is.

With that being said, I still think the M1 mac is the best computer I have ever owned. While linux is great for work, I personally enjoy the polished and more joyful experience of Mac for personal use. There are a million quality of life improvements that Mac offers that you won't get in Linux. The app ecosystem on mac is incredible.

When most people make comparisons for the M1 Mac, they are comparing windows PCs (generally Intel-based ones since Mac previously used Intel) and they compare intel-based Macs. I have never seen someone comparing it to linux performance. The speed of the M1 mac is far better than Windows and far better than old Macs. There is no question. Before my M1 mac I used a MacBook Pro with an i7, 16Gb RAM, and the upgraded dedicated graphics card. The little M1 MacBook outshines it at least 2 to 1. Best of all, the fans never turned on, and my old MacBook Pro had constant fan whine which drove me crazy.

The other incredible feat of the M1 Mac is the battery life. I run my laptop nearly exclusively on battery power now. I treat it like an iPad. You plug it in when it gets low, but I can use it for about a week between charges (I use it for 2-3 hours each day). I don't turn the screen down or modify my performance. I keep the screen fairly bright and just cruise away. I love it.

While Linux might be able to outshine on performance, it doesn't outperform with battery. My Lenovo laptop is worth about 2x my MacBook Pro. It is a premium laptop and yet running RHEL I will be lucky to get 6 hours. Compare that to ~20 hours of my MacBook.


Downside of RHEL is the package repo is anemic and out of date. Sometimes horribly out of date. It's hardly uncommon to run into some issue with an application and then look it up online and find out that the fix was applied 8 versions after the one that's in the repo.

Worse is when you start grabbing code off of Git and the configure script bombs out because it wants a library two versions ahead of the one in the repo. But you don't want to upgrade it because obviously that's going to cause an issue with whatever installed that library originally. So now you're thinking about containers but that adds more complication...

Like everything it is a double edged sword.


This use case begs for Arch. It's not bloated, it's up to date with upstream, and the AUR is very robust for the "one off" apps. RHEL on the desktop makes sense for developing against RHEL on servers; I can't really think of any other use case. At least they could try out CentOS Stream? For all the hate it gets, it's really nice for certain use cases, although people are religiously against it (RH really needs to work on that before it dies on the vine).


CentOS Stream doesn't significantly change the up-to-date-ness of the packages. It's a few months ahead of RHEL, but RHEL (minor releases) move at the same pace as always, which is to say slowly, so a few months ahead is insignificant.

Fedora is probably better if you want up-to-date packages on a platform that's relatively similar to production.


I run Arch on my homelab, and the AUR is a lifesaver. Not to mention, the basic Arch repos contain more Podman Cockpit modules than apt, to my surprise. Very nice OS for server stuff if you're brave enough to wrangle pacman.


> [Arch is a v]ery nice OS for server stuff if you're brave enough to wrangle pacman.

if you're running a homelab and regularly 'wrangling' a package manager which is broken/incomplete by design, why are you choosing distros based on package availability rather than on the quality of the tooling? surely you can package anything you need to use


I could, but the time I'd spend getting it working on a stable distro vastly outweighs the time I'd spend setting up my pacman.conf and backing it up to git. When I say 'wrangle', I'm more talking about the instability and frequency of updates. It's definitely only for homelab use, I wouldn't ever consider deploying this on a larger scale.


I guess preexisting availability is important when you don't know what you'll want to use, and you're also interested in trying a lot of things that you may not want to keep around for long


Why go all the way to Arch when Fedora is perfectly serviceable? I used it for a few years before an MBP was imposed on me.


any RHEL user who is happy switching to Arch probably never appreciated what's actually good about Red Hat's tooling


I have had luck running Debian Stable with apps from Snap when the older .deb from the repo doesn't cut it. I haven't found that setup at all complicated.


My colleague has an M1 Mac. I have a Ubuntu desktop. My colleague always asks me to transcode videos on my machine because on her MacBook it is too slow.


I finally managed to get NVENC working in HandBrake, and the stupendous frame rate jump actually had me sitting in stunned silence; going from 3-15 FPS to 120 FPS on drone 1080p60 footage was jaw-dropping. I know the M1 claims to be fast, but I'm transcoding on a GPU that's from like 2015!
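
For anyone else chasing the same jump, the invocation is roughly the following (a sketch: filenames are placeholders, the quality value is just an example, and the NVENC encoders only show up if your HandBrake build and NVIDIA driver support them):

    # transcode with the NVENC HEVC encoder instead of a software x265 encode
    HandBrakeCLI -i drone_clip.mp4 -o drone_clip_hevc.mkv \
      -e nvenc_h265 -q 28 --all-audio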


Just know that HW encoders on Nvidia cards are usually inferior in quality compared to software encoders, although the speed gain is very nice. I originally bought a 2070 with its HW HEVC encoder, and apart from a few test runs never used it again, because no matter the quality settings, it's just worse (and I was never in a hurry).

No idea about HW encoders quality/speed on M1


It's great for things like streaming, and the 3rd series is quite good quality. For best compression and quality, of course a software encoder is best.


Newer Macs have hardware accelerated transcode with Handbrake as well.


yep. if things don't work contact apple support they are actually pretty decent. i had a lemon mini, randomly would go into a bootloop after os updates - had a bad mainboard so apple replaced it and it's been fine since.


Well if you actually do ‘work’ with the device, like installing software that's not in the App Store, you might run into some trouble... I love my 2013 MacBook Air for most daily tasks, but there was never a time when I couldn't do with having a Windows and Linux device on hand. But yeah, that's just life. Happy to see sobering comments here that the M1 is a ‘mobile’ processor; my 12-year-old PC agrees. Another question that came to mind: what professional is going to edit ProRes video while commuting? Is this the ultimate precarious-labour creative-industry machine?!


> Well if you actually do ‘work’ with the device,

LOTS of people "actually do ‘work’" with their macs. Without trouble.

> I love my 2013 MacBook Air for most daily tasks, but there was never a time when I couldn't do with having a Windows and Linux device on hand

Are those devices also 8 year old low-mobile-tier hardware optimized for battery use and low thermals? I've never needed a windows or linux computer on hand.


I mostly work with obsolete industrial devices. Having obsolete devices around is handy for dealing with them; sadly, this also means keeping on ‘supporting’ their obsolescence in a way. The other half of my job and hobby consists of getting those obsolete devices running on less obsolete devices, which again requires sniffing out obsolete devices, their controllers, or the supporting hardware and software. I agree that's work done by most people, again sadly.


Plenty of professionals take business trips and present and share their work. It's not unusual to want to carry your work computer home, to a hotel, or to a conference and have enough power to comfortably continue working on it.


Easy: it is likely Docker that is making your Mac mini slower than your old Linux box.

Docker on macOS is painfully slow because it is implemented through what amounts to taking a sledgehammer to the problem: Docker depends on Linux kernel features, so on macOS it just starts a Linux virtual machine and does other high-overhead compatibility tricks to get it to work. Volumes are the biggest culprit.

If you are running docker through rosetta... (don't know the state of docker on apple silicon) then that is a double whammy of compatibility layers.

Regarding bugs, yeah probably teething issues because the M1 was/is such a large departure from the norm. They should really get those things fixed pronto.


Docker for Mac Desktop supports the M1 and uses a Linux VM that is ARM.

It is not using Rosetta 2 at all.


But if you run x86 images, it will use qemu’s software emulation inside the Linux VM (which is quite slow).


This has been the big slowdown for my org as a number of packages we rely on in docker on debian do not yet have an arm64 version, so have to emulate x86 and take the huge performance hit.
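
A quick way to tell which case you're in (image name is just an example): uname -m inside a container reports aarch64 when it runs natively in the arm64 VM and x86_64 when it fell back to qemu, and pinning the platform explicitly makes the fallback loud instead of silent:

    # aarch64 = native, x86_64 = emulated via qemu
    docker run --rm debian:bullseye uname -m

    # request the arm64 variant explicitly; errors out if none is published
    docker pull --platform linux/arm64 debian:bullseye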


Yea, you don't get the benefit of the hypervisor when constrained to x86 VMs.


May I introduce you to AWS/GCP/Azure where you can cheaply run legacy x86 systems and use them for development?


Many folks are comparing it to previous Macs with macOS, not PCs nor other distros.

If you have an Intel Mac next to it, there's a clear noticeable difference assuming macOS is on it.

If you put W10 on the Intel Mac, it is much faster than macOS on it (from my own experience). If we could run W10 or Linux on the M1, it would be much faster than the Intel Mac.

Another example, Windows 10/11 on ARM in Parallels on my MBA m1 is much faster than Surface Pro X, a MS-tuned/customized device.


For me, all the gains from W10 running faster than macOS are lost because I have to immediately go into the Control Panel to fix everything. Those fast computers spend most of their time waiting on me, so the user experience matters more. I'm personally not really splitting hairs over longer-running processes if I have to walk away anyway.


Fix everything?


I'm comparing a Windows PC (i7-6700K, NVMe) with an M1 MBA, and the MBA is noticeably faster in just everything. Compilation of JS and Java code is literally 2 times faster on the M1 MBA than on the desktop i7 CPU.


That hardware is from 2015, though. A fairer comparison would be to current gen Intel. https://en.wikipedia.org/wiki/List_of_Intel_Core_i7_processo...


To a current Intel laptop (without a cooler), maybe, but not to a desktop with a powerful cooler. I'm sure the desktop CPU should be much more powerful.


i7-6700k is 6 years old. The current equivalent i7 (i7-11700k) is almost 2x as fast when using all cores.


If the latest gen, top cpu is “almost 2x as fast” it will still lose to the parent’s “literally 2 times faster” M1. While using 4x more power.


Right but GP's main points were:

- noticeably faster in just everything

- literally 2 times faster [compiling code]

Neither of these statements is true when comparing against the latest generation, which is the relevant comparison.

The power thing is true, but irrelevant to GP's point and my objection to it.


I was comparing 2 computers I have (read the comment I replied to). But if you are saying that a passively cooled Apple notebook should be compared to a latest-gen Intel desktop CPU with a powerful cooler, Apple engineers would be honored to read it ;)


While the M1 is surely impressive I think you put too much weight on those JS compilation results. Windows is known to be really slow for this specific task since it has poor per-file IO performance - especially when Windows Defender is enabled.

I do my daily work - involving lots of JS compilation - on a i5-4690 running ubuntu server, it's faster than any of my colleagues' much newer MacBook Pros (no M1's yet though). Our Windows laptops are so far behind that it's just awful to watch (3-5 times slower).


Isn't the i7-6700k 3-4 generations older at this point than the m1?


for those who downvote the facts they don't want to know about: https://twitter.com/eugeniyoz/status/1407518570888810497?s=2...


nobody is arguing with your facts, just that your comparison is apples and oranges since the i7 6700k was released 5+ years ago.


I have a three-year old hand-built 16-core ThreadRipper 1950X machine here with 64GB 4-channel RAM. I have a 14" MBP M1 Max on order. I just checked the Geekbench scores between these two machines:

  M1 Max: 1783 single-core | 12693 multi-core
  1950X:   901 single-core |  7409 multi-core
That's a big difference that I'm looking forward to.

Also, just checked memory bandwidth. I have 80GB/s memory bandwidth [1] on my 1950X. The MBP has 400GB/s memory bandwidth. 5x(!)

[1] https://en.wikichip.org/wiki/amd/ryzen_threadripper/1950x

*EDIT adding memory bandwidth.


> I have 80GB/s memory bandwidth [1] on my 1950X. The MBP has 400GB/s memory bandwidth. 5x(!)

Your 1950x also doesn't have a GPU. For comparison the rx6700xt, and Vega 56 have ~400GB/s. The 6 year old R9 nano has 500GB/s.


The M1 Max *CPU* cores can access RAM at 400GB/s. My 1950x CPU cannot use the 448GB/s memory bandwidth on my RTX 2080 (which only has 8GB of GDDR6 RAM). This unified memory is wild.


1. M1 Max has 50+ billion transistors (only a part of that count is CPU, so adjust the number downwards), 1950X (first gen 14-nm Ryzen) has 10 billion.

2. Multicore Cinebench R23 for M1 Max: 14970; Ryzen 1950X: 18780

Geekbench tests in short bursts, not taxing the CPU thermals. Cinebench is the opposite. In short, CPUs do well on some benchmarks and not so well on others.


The Cinebench R23 results for 1950x[1] and M1Max[2], including single and multi-core:

           single     multi
  M1Max:    1562      14970
  1950X:    1035      18780
Remember that the M1 Max has 8 high performance and 2 efficiency cores vs. the 16 cores of the 1950x. Again, this is a pretty convincing victory - 1.5x single core perf in a laptop - this is not a HEDT part.

[1] https://www.cpu-monkey.com/en/cpu-amd_ryzen_threadripper_195...

[2] https://www.cpu-monkey.com/en/cpu-apple_m1_max


So you are comparing a laptop that is just coming out to something 4 years old.


a 4 year old workstation/HEDT that was widely viewed as being an out-of-band excellent performer for its time, and the laptop is outperforming it in multithreaded performance by almost a factor of 2, with half the (performance) cores, with no SMT.

yeah, that actually is fairly wild. Granted, there are good technical reasons there (lower memory bandwidth, the 1950X having a NUMA/NUCA arrangement that often hampers the ability to bring all the cores to bear on a single task, Zen1 having extremely high memory latency and cross-CCX latency in general, etc) but it still actually is a very impressive feat.


I share the same feeling, and I'm glad to learn I'm not alone: earlier this year I worked for a (really) short time with a company that gave me an M1 MacBook, which I was pretty excited about.

Over the course of the two weeks I worked there, I faced:

- frequent glitches when unplugging the HDMI adaptor (an Apple one, which cost an arm and a leg).

- non-compatible software (Docker, Sublimetext)

- and even a Kernel Panic while resuming from hibernation

It was like working with Linux in the 2000s, but at least with Linux I knew it was gonna be a bit rough around the edges. Given the hype found online, I wasn't at all prepared for such an experience.


> non-compatible software (Docker, Sublimetext)

What was the issue with Sublime? I use it on an M1 every day without issue.


I don't remember exactly, but I couldn't use it.

Keep in mind this was in Jan/Feb 2021, so I'd expect software compatibility with the new chip to have improved dramatically.


Docker is now faster than on my work Intel MBP, fwiw. It was pretty bad in winter '21 though.


I have had the opposite experience. Whenever an update to the kernel comes out for Ubuntu, my machine apparently forgets which kernel to choose and boots up incorrectly. My M1 MacBook Air drives a Pro Display XDR without any hiccups. It's completely silent when I work with it. But performance-wise the M1 Pro and Max don't seem like they would be worth it for me to upgrade to from the M1 at all. I just want a completely silent laptop, and it makes a huge difference to me.

But my workflows and workloads are slowly changing from local compilation to ones where I will probably just SSH (or possibly figure out how to configure RDP) into my Threadripper when I do actual builds, with my 16GB M1 MacBook Air as the front end; the synergy with other Apple devices is a huge plus.

Have you talked with Apple to get a replacement?


docker is a second-class citizen on anything but linux. i'm especially amazed you bothered with the m1 mac mini if you wanted to use docker, considering how little memory it has. the memory overhead of having a VM running with only 8GB available is significant.

yes, i know the docker on mac has "native" support for the m1 cpu, that doesn't mean its not running a VM.


Yeah people forget that Docker is built to run on Linux. That is why it exists. You deploy your containers onto a Linux host that runs docker for containerization, that is the point.

The reason we have Docker for Windows and Mac is so that developers can emulate the docker environment for testing and building purposes. But ultimately it is expected that you will take your container and eventually deploy it on Linux. So Docker will always be a second class citizen on these other OS's because it is just there for emulation purposes.


Seems like your computer may have issues, you shouldn't be crashing from USB devices. Sounds like you should go get it checked out tbh.


Well, it could just be a software issue; for example, if you installed the wrong CH430x driver to run your faux Arduino clones (that didn't pay their debt to FTDI), you might run into this issue.


There are an awful lot of people with USB issues on the M1. It's clearly not as mature a platform as previous ones.


Has there been any research into OS response times between macOS, Windows and Linux?

I've wanted to switch to Mac many times, most recently to the M1 Mac Mini, but can't get over the frustrating slowness. Every action, like opening Finder, feels like it takes so long compared to Windows, even with reduced animation on. I always end up returning the Mac.


Well, thanks for mentioning it, I really feel the same. Stuff just does not open instantly, and the perceived slowness on macOS is real. Just try simple stuff… Modern Windows just flies most of the time in comparison. Opening a file dialog in Finder is a common scenario, yet it is so slow, and the system seems limited by simple stuff like that. The M1 surely improves things, but the UI generally seems to prefer animation over raw speed.

I really don‘t know what the reason is but I feel that a lot of people simply ignore this. I use all 3 major systems and I cannot overlook it anymore.

Additionally, Docker is another real issue, for the reasons already mentioned. Too bad that it is so common now and its slowness is a growing pain. I feel that WSL (WSLg) on Windows is huge; there's no comparison to running a traditional VM for the most part, if you are not macOS-only. Never thought I'd have to write this…


Meh. The M1 is a node ahead and memory is placed on package. I'm trying not to be dazzled by the tech when it comes at the price of freedom.


Ok - I get that comment for iPhone - but what is less free about the Mac vs. a PC?


No component is user upgradeable or repairable. You have to buy apple care or else.


You have to use macos for now.


I went from a 2016 top-of-the-line MacBook Pro to an M1 Mac mini, and I can't believe how much faster things are–including Docker with Rails, PostgreSQL, etc. Out of curiosity, are you running native (arm64) containers? Aside from an issue now and then with my secondary HDMI display, my machine has continued to blow me away. My development environment is almost on par with our production environment, which is a nice change.


The Mac Mini isn't super impressive, because you're comparing it to desktops where TDP is far less of a concern.

The M1 is getting a lot of attention because it's Apple's first laptop chip, and it is the fastest chip in that category by a fairly significant margin. Chips from Intel and AMD only compete with it on performance when drawing considerably more power.


MacOS just seems to have a lot of stability problems related to peripherals. I don’t think it’s specific to M1. Even on my late-era Intel MBP, Bluetooth and USB often triggers a kernel panic.


When I got my first MacBook Pro, there was actually a bug in iPhone USB tethering that caused a guaranteed kernel panic if you tried to do it. Managed to crash my laptop a few times with my iPhone 3GS.


My experience is the opposite. I love Linux, but my MacOS work laptop is just so much better.

For as long as I've used Linux it freezes randomly. This happens on all hardware configurations I've ever owned. If I don't regularly save my work, I will end up losing it.

These days I also can't leave my desktop PC suspended for too long. At some point the fans start spinning at max. No reason why.

As for my 13 inch M1 laptop, I don't even know if it has fans. I've never heard them even if I'm in a zoom meeting, sharing my screen, while compiling half the world.

Also, in Linux I can't really use Zoom screen sharing. It seems to leak memory and after a while it crashes. Of course that's not Linux's fault.

On my Linux machine it takes significant effort to use my AirPod's microphone.

And that's not even talking about battery life on Linux laptops.

It's always interesting how we all have such completely different experiences with our hardware and software.


Seems like you mostly have issues with Wayland. What distro are you running?

For me Zoom screen sharing works fine with Wayland, if everything runs Wayland, including the browser. Xwayland apps can't "see" Wayland apps or the desktop.

I assume the Zoom client may be Electron/Xwayland, correct?

The X to Wayland transition will mostly be over soon, probably with Ubuntu 22.04 LTS.


People also overlook the fact that Geekbench has generally favored Apple A-series CPUs. When the original M1 came out they were faster than 4800u Ryzens on Geekbench, while being slower on Cinebench. 4800u consumes more power, but is also a node behind.

Other benchmarks too have their biases. You can find plenty of discussion on this topic.


I've never owned a Mac and I've never used an M1, so take this with a heavy grain of salt. Everyone I've seen talk about how fast the M1 is was coming from a 4-5+ year old Mac, not comparing it to Intel/AMD chips that came out last year in a similar price range.


Well I went from a 16" 32GB 6-core i7 2019 MBP to a 16GB M1 Macbook Air and it was night and day better. So much faster, silent, cool and snappier.


Well switching hardware and OS makes it hard to tell what's responsible for the difference. I also suspect Docker and what's running in the containers might well be x86 code instead of native.

I can tell you that I have a high-end 16" Intel i9 MBP with 32GB RAM, and it feels much slower than a new M1 Mac mini. My Intel i9 runs the fans hard during backups (CrashPlan), video conferencing, and sometimes for no apparent reason, even just Outlook. The M1 mini on the other hand has been silent, fast, and generally a pleasure to use.

Doubling the number of fast cores, doubling (or quadrupling) the GPU and memory bandwidth should make the new MBP 14 and 16" even better.


I have a top of the line Windows 10 i7 laptop and Outlook will create more heat than any other process I am running. Now introduce Windows Modern Standby and we get a real SH*T SHOW.


I thought native M1 Linux was still a work in progress (disclaimer: I’m not a Linux geek).

Even if it’s out there, I suspect it’s not optimized.

I think standard distros are running on Rosetta2, which is an (excellent, from all reports) X86 emulator.


linux is actually ported to the M1 at this point and you can run it normally (although of course there are likely still some bugs lurking), however the graphics drivers are still a work in progress although coming along very quickly.

https://www.theregister.com/2021/10/06/asahi_linux_m1_progre...

I'm sure you're right that there are still optimizations to be done, but as you can see from Alyssa's tweet there, it is still a very fast piece of hardware for normal developer usage, as long as you don't fall into one of the scenarios that probably requires further tuning (x86 emulation probably being an obvious one, no idea if they've got Total Store Ordering working in KVM yet but that's obviously a key feature).

mega kudos to the Asahi team, the progress has been really tremendous, it's been just about 6 months since kickoff and they've made progress that some people insisted would take multiple years/was impossible at all.


So, for the most part, the lion's share of distros still runs emulated.

In that case, running "only slightly faster" seems like a win.

I’m sure we’ll be fully native, soon enough. We’ve had native ARM Linux for years (Android). Should be fun.


Put the same thing in a fanless laptop that seems to last forever on a charge, and then you'll understand.


The Docker situation on macOS for Apple Silicon is just really, really terrible. Docker on macOS was never particularly good (compared to Linux anyway), but on Apple Silicon it is just beyond bad. A combination of how Docker spins up a VM, the state of Linux on ARM (especially on the M1) and the complexities of when you need to compile stuff that doesn’t have native ARM versions is just a mess. If Apple cared about this segment, I'd think they would contribute some resources to either improving Docker or doing native macOS containers built with Unix, but I just don’t think they do.

This is one reason I'm keeping my 2020 iMac (10-core i9, 128GB of RAM, AMD Radeon Pro 5700XT) around for a long time, even though I just ordered a 64GB 32-core GPU 14” MacBook Pro . (I have an AMD Ryzen 5900X with an Nvidia 3080 as my gaming PC too.)

If your primary interface is containers, I don’t necessarily think the M1 series machines are the best. But for lots of other tasks, it’s really just astounding.


Actually, there is nothing to figure out. If the limitations of macOS and an ARM CPU are not an issue for you and you want a lightweight laptop, then you buy an M1 MacBook. The M1 MacBook Air was the first Apple product I bought, and I also had a latest-gen Ryzen laptop at the time (which was superior to Intel). There is simply nothing to compare; it is not an alternative to the M1, it's at least 5 years behind it. Actually, it looks like it may never be possible for that architecture to catch up with ARM in low-TDP situations. If you have ever used a last-gen Windows laptop, you will know there are always bugs. And installing Ubuntu on a latest-gen machine? LOL. That won't make you happy even if it is possible. I also had an issue with a Bluetooth mouse being laggy; it seems to be patched now. Bugs really were an issue with the M1 early on, but I assume there shouldn't be many issues now.


>*Work... Docker, Ansible, Rails apps, nothing that requires amazing super power. Everything just runs slower.

Docker does require amazing super power on anything except Linux, since only on Linux does it not have to rely on emulation. The overhead is 0-1% on Linux, and considerably more on other platforms.

Besides Docker, what you're seeing is probably the fact that MacOS is way slower than Linux for almost all the things developers do in their daily lives. If the M1 port ever gets properly done, you're gonna see Linux fly on that.


M1 is fast for being a laptop CPU. This means that maybe it is not the fastest CPU in absolute terms, but if you consider performance per Watt, it is really a beast.

Although my main workstation is an Intel Ubuntu-based PC, I also have an M1 MacBook Air that I use when I travel, or when I am lying on the couch in the evening after work. That piece of hardware never gets hot, and the battery seems to last forever. Meanwhile it does whatever I throw at it; if anything, the limiting factor is the amount of RAM installed on the laptop.


My $1000 Windows laptop from two years ago kills my M1 MacBook Pro. Benchmarks aren't everything and there's more to speed than just raw calculations.


Is it 5 pounds and 2 inches thick? The fact is that my M1 MacBook Air that was $1400 would cost at least $2000 from HP, Dell or Lenovo to even approach similar performance, and it would still have a worse screen, worse battery life, AND would still be slower. I have not seen a laptop chip from AMD or Intel in a fanless, thin laptop like the MacBook Air that even gets close to the M1.


Wow, that would really be against the tide. Can you provide any more information like the specs, make and what you are doing?


Doing what?


The M1 is, compared to previous generations of MacBooks, much better.

My two year old Linux desktop is a beast compared to my M1 Macmini. But I love the Macmini compared to my 2019 MBP.


Running Docker on a Mac vs on Linux is going to be a subpar experience compared to a native x86 Linux box. It would take a LOT to magic around that.


> My Mini has weird bugs all the time. e.g. Green Screen Crashes when I have a thumbdrive plugged in. Won't wake from sleep. Loses bluetooth randomly.

Either the Mini is much buggier than the MacBook Air by design, or you need to get it replaced under warranty. My MBA is the first laptop I've used for months with absolutely no hardware interaction issue.


As someone who has an Intel NUC with Ubuntu, a ThinkPad with Windows 10, and a MacBook Pro 2015, I have a totally different experience. I spend like 5% of my total effort on OSX making things work; the remaining 95% is spent making things work on Windows and Ubuntu...


My AMD / Nvidia laptop running Ubuntu has had a number of issues from flash crashes to freeze ups to external monitor connectivity that have progressively gotten better as the drivers were updated. It is likely the drivers are buggy and will probably improve with time.


Not particularly defending the M1, but it might be that much of what you're using so far isn't available native and is going through Rosetta2? The other issues could be OS issues, but maybe it would be worth doing a RAM test?


Probably more to do with Big Sur than the hardware. Catalina is pretty solid.


I hadn’t heard that one. I had some pain, but I can laugh now.


Currently typing this on an m1 laptop that I bought 2 weeks back. This machine blows my 16 inch Macbook out of the water. My 16 inch Macbook is pretty well spec'd, but this laptop is something else.


Let’s review: “Black. Magic. Fuckery.”—https://news.ycombinator.com/item?id=25202147


> desktop

I too have a $900 "Gaming" desktop setup from 2 years ago that should have no problem exceeding the newest M1 Max mac in terms of performance and graphics.

But it consumes 250 watts on average.


The Intel macs were extremely slow. Even a core i9 16 inch ran hot and noisy. A small xps 13 felt much faster in general use. I don't know if this was deliberate by Apple or just a side effect of the extremely anemic cooling.

So all those people who had been using macbooks felt the speed when they used a decently fast laptop. Everybody else knows that M1 laptops are fast, like non-apple laptops have been for quite some time.


macOS has always been worse at animation performance, even after the Metal switch. My M1 is buttery smooth, but you always had dropped frames even on pretty recent Mac hardware. Your other issues seem like factory defects, I've never had them.


What are the specs on your Ubuntu machine?


Even Vim is super slow on my MacBook; this amazes me the most hahaha


M1 is hardware. Ubuntu is software.

Your comparison makes no sense.


This is easily explained. Linux distributions don't make money if you buy a new laptop.


You more or less described how Apple hype works.

The M1 is fast like a Ferrari: it's expensive and nice in the brochure, but you can't take it many places in real life, let alone use more than 20% of its potential.


The answer is M1 is a great chip held back by a mediocre OS. Containers and certain other applications are just going to run faster on Linux, which has had a ton of performance tuning put into it relative to MacOS.


Here's the link to the MacBookPro18,2 OpenCL benchmark:

* M1 Max OpenCL https://browser.geekbench.com/v5/compute/3551790 [60,167 OpenCL Score]

Comparing to my current MacBook Pro (16-inch Late 2019) & my Hetzner AX101 server:

* MacBook Pro (16-inch Late 2019) vs M1 Max - CPU https://browser.geekbench.com/v5/cpu/compare/10496766?baseli... [single 163.6%, multi 188.8%]

* MacBook Pro (16-inch Late 2019) vs M1 Max - OpenCL https://browser.geekbench.com/v5/compute/compare/3551790?bas... [180.8%]

* Hetzner AX101 vs M1 Max - CPU https://browser.geekbench.com/v5/cpu/compare/10496766?baseli... [single 105.0%, multi 86.4%]

* NVIDIA GeForce RTX 2060 vs M1 Max - OpenCL https://browser.geekbench.com/v5/compute/compare/3551790?bas... [80.7%]

* NVIDIA GeForce RTX 3090 vs M1 Max - OpenCL https://browser.geekbench.com/v5/compute/compare/3551790?bas... [29.0%, boo]

I'm surprised the MacBook holds its own against the Hetzner AX101's AMD Ryzen 9 5950X 16-core CPU! The multi-core SQLite performance surprises me, I would think the M1 Max's NVMe is faster than my server's SAMSUNG MZQL23T8HCLS-00A07.


> * NVIDIA GeForce RTX 3090 vs M1 Max - OpenCL https://browser.geekbench.com/v5/compute/compare/3551790?bas... [29.0%]

30% performance for ~15% of power use in laptop form factor? that's not boo, that seems like a clear win for apple.


Performance doesn't go down linearly with power. I don't have that card to try, but maybe it would do even better at 15% of power.


There is a floor below which it will go straight to 0 (ie, nonfunctional).

The M1 Max is well below that floor at max wattage.


That floor is really, really low when you try to optimize for it. But the issue is that the M1 Max is at 5nm TSMC while the 3080 is on 8nm Samsung, so this isn't actually that impressive.


i doubt it'll even initialize.


These cards can run idle (e.g. desktop 2D mode) at about 15W, so it might run just fine.

Actual power draw depends on monitor resolution, number of attached displays, and refresh rate, though.


These cards idle at 15w while m1 max is yielding like half the performance at that wattage.

Sorry, Apple is clearly lightyears ahead. All they need to do is put out a chip with 48 of their gpu cores and they will beat the most expensive top of the line best ever performance gpu on the market… with a laptop-grade SoC.


The moment of truth will be Mac Pro. There they'll have all the freedom to spend as much electricity as possible and push computing to the bounds.


Also, the 3090 costs almost as much as the whole MacBook.


the truly amazing thing is that the M1 Max Macbook Pro will have twice the transistor count of a 3090 in its SOC (not counting memory stacks). That's a laptop CPU now.

It also has sixteen-channel RAM: 16 channels of DDR5. Instead of putting the SOC on GDDR6 and gimping the CPU side with higher latency, they just stacked in DDR5 channels until they had enough. Absolute meme tier design, Apple just does not give a single fuck about cost.

(note that DDR5 channels are half the width of DDR4 - you get two channels per stick. But, the burst length is longer to compensate (so you get the same amount per burst), and you get higher MT/s. But either way, it's conceptually like octochannel DDR4, "but better", it's a server-class memory configuration and then they stacked it all on the package so you could put it in a laptop.)
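
As a rough sanity check on the headline figure (assuming the reported LPDDR5-6400 on a 512-bit-wide interface, which is what the sixteen-channel description adds up to):

    512 bits x 6400 MT/s / 8 bits per byte = 409.6 GB/s, i.e. the marketed 400GB/s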


We don't know exactly, of course, but I wouldn't be surprised if Apple doesn't even save costs with their own chips vs. buying from Intel/AMD. In any case, they wouldn't be able to reach that compute power in a laptop of that size at all without the new chips.


Hm, as far as I can tell an M1 Max MacBook Pro costs like $3k+ (the config used in the link is like $4k). How do you figure that?


Easy: you're not getting a 3090 at MSRP unless you're lucky.


Best buy has been doing physical drops on a monthly basis. I got one in the last drop @ MSRP, only waited in line for 30 minutes. It's worth checking out if you are looking for a card.

I will say though if you are in a large city this doesn't work as well, the lines are longer.


They've not done these in my area, but indeed I am in a "big city," or at least the suburbs.


Just wait until miners will start buying macbooks.


It's probably possible to build a 3090 system with a 750 watt power supply though.


A laptop?


Since we're comparing against M1 Max, it's more fair to use a fully maxed out Macbook Pro 16-inch for comparison:

* MacBook Pro (16-inch Late 2019) vs M1 Max - CPU https://browser.geekbench.com/v5/cpu/compare/10496766?baseli... [single 145.8%, multi 176.3%]

* MacBook Pro (16-inch Late 2019) vs M1 Max - OpenCL https://browser.geekbench.com/v5/compute/compare/3551790?bas... [153.0%]

* MacBook Pro (16-inch Late 2019) vs M1 Max - Metal https://browser.geekbench.com/v5/compute/compare/3557857?bas... [167.2%]

The M1 Max still wins by a large margin, but the gap is slightly smaller. The CPU difference is negligible, but the upgraded Radeon Pro 5600M graphics card narrows the gap a bit more; it's comparable to a Radeon RX 580. The default 5500M is only about 60% as powerful and surprisingly runs hotter despite being slower (85W vs 60W TDP, it seems).


Apple compared the M1 Max to the 3080 in the Razer Blade 15 Advanced, saying it outperforms Nvidia's GPU [0], but Geekbench seems to disagree [1].

[0]: https://www.apple.com/newsroom/2021/10/introducing-m1-pro-an...

[1]: https://browser.geekbench.com/v5/compute/compare/3551790?bas...


Apple never claimed the M1 Max was faster than the 3080; they claimed the M1 Max was almost as fast while using 100W less power and being dramatically faster on battery.


I wonder what they were comparing. The Geekbench result is for an OpenCL benchmark, if they were using Metal Compute on the M1 that may explain the different results


The 2060 and 3090 on these compares are the desktop chips, right?


The 3090 is a high end desktop machine. There is no mobile version of the 3090 (https://en.m.wikipedia.org/wiki/List_of_Nvidia_graphics_proc...).


It’s telling that we need comparisons with 300 watt desktop GPUs to make M1 look mediocre.


Thanks - interesting on OpenCL - presumably running on GPU? All that memory opens up some interesting possibilities.

Also I thought Apple was deprecating OpenCL in favour of Metal?


OpenCL and OpenGL have been deprecated in favor of Metal. Geekbench also has a Metal compute benchmark.


Is there a metal benchmark? This is the score I’ve been most interested in.


Nay, there isn't one for the new M1 Max yet, but FWIW the two are pretty comparable; OpenCL is a bit faster than Metal.

* MacBook Pro (16-inch Late 2019) - Metal https://browser.geekbench.com/v5/compute/3139776 [31,937 metal score]

* MacBook Pro (16-inch Late 2019) - OpenCL https://browser.geekbench.com/v5/compute/3139756 [33,280 OpenCL score]


Huh. I was expecting it to be over 80k. That's the Metal score of the Blackmagic eGPU Pro (Vega 56).


How do you compare Metal vs non-Mac hardware then?


Anyone know if that “Device Memory 42.7 GB” is fixed or can be increased?


The MacBook Pro M1 Max can be ordered with either 32 GB or 64 GB of unified memory. The geekbench report shows 64 GB of memory (maxed), not sure why only 42.7 GB is usable by OpenCL--so I guess we have to assume that's the max, unless there's some software fix to get it up to 64 GB.


> unless there's some software fix to get it up to 64 GB.

I very much doubt that this would even be possible. The OS and other on-chip devices require dedicated memory as well and need to reserve address space.

You simply cannot max out physical RAM with a single device in a UMA configuration.


It reports the currently free memory as device memory.


I'm not exactly proficient with GeekBenchery, but what I see here is that the M1 Max per core barely outperforms the M1?

https://browser.geekbench.com/v5/cpu/compare/10496766?baseli...


I think this kinda makes sense to me: the M1 Max has the same cores as the M1, just more of them, and more of the performant ones, if I understand it right. When only a single core is working, the fastest core is the same, so performance is probably very similar.


Maybe a little surprising - presumably the thermal limitations on a 16-inch laptop are less constraining than on a 13-inch one, so a single core could be pushed to a higher frequency?


M1 uses TSMC's high-density cell libraries rather than the high-performance ones. They get 40-60% better transistor density and less leakage (power consumption) at the expense of lower clockspeeds.

Also, a core is not necessarily just limited by power. There are often other considerations like pipeline length that affect final target clocks.

The fact is that at 3.2GHz, the M1 is very close to a 5800X in single-core performance. When that 5800X cranks up 8 cores, it dramatically slows down the clocks. Meanwhile the M1 should keep its max clockspeeds without any issue.

We know this because you can keep the 8 core M1 at max clocks for TEN MINUTES on passive cooling in the Macbook air (you can keep max clocks indefinitely if you apply a little thermal pad on the inside of the case).


> When that 5800X cranks up 8 cores, it dramatically slows down the clocks

Not that dramatic; it drops from ~4.8GHz to ~4.4GHz: https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-di...

The actual drop varies depending on actual power consumption & temperature, as Ryzen is more or less an entirely reactive system.


Thanks - some very good points. Presumably this opens the possibility of higher single core performance on a future desktop design unless limited by pipeline length etc?


> We know this because you can keep the 8 core M1 at max clocks for TEN MINUTES on passive cooling in the Macbook air (you can keep max clocks indefinitely if you apply a little thermal pad on the inside of the case).

Is there a guide on how to apply this thermal pad?

I would love for my Air to not downclock.

Also why on earth does this thermal pad not come factory installed?


That’s why the MacBook Pro M1 has fans. It’s designed for harder workloads where you’re maxing out the cores (multi-core compilation, video encoding, etc) for extended periods. This was well-documented and discussed previously. The tradeoff is increased heat, power consumption, and fan noise.

Realistically, those aren’t typical workloads for most people, especially for 10+ minutes (and especially on an ultralight and portable laptop). So I wouldn’t lose sleep over thermal pads.


Maybe this is a reflection of the 16-inch's higher thermal envelope?

https://www.macrumors.com/2021/10/21/new-macbook-pros-high-p...


> M1 uses TSMC high-density rather than high-performance.

Is that an actual product differentiation from TSMC? Or just observational + the fact that it's 5nm.

I'm actually curious from a chip manufacturing standpoint.


There are actually three major library versions on N3 (sorry, I don’t have time to look up the similar articles on N5/7, but I think WikiChip has a couple).

https://www.techdesignforums.com/blog/2021/06/03/three-libra...

I remember seeing a picture of a partner list for a TSMC node with a couple dozen libraries that not only changed based on density, but also on type of chip being built.


Does the core speed of an M1 core change at all? I thought they used the low/high power cores at fixed-limit clock speeds.

It sounds crazy to consider, but maybe they’d rather not try to speed up the individual cores in M1 Max, so that they can keep their overhead competitively low. That certainly would simplify manufacturing and QA; removing an entire vector (clock speed) from the binning process makes pipelines and platforms easier to maintain.


For how long? Longer than it takes to run the benchmark?


I thought I remembered that in the presentation they had souped up the individual cores too. I must be misremembering.


They didn't. However the cores enjoy more main memory bandwidth.


they're the same cores as the original M1 architecturally; you just get 8+2 on a Max instead of 4+4... and also an absolutely titanic iGPU. The total transistor count is about twice that of a 3090, and almost all of it lives in the iGPU.

the only architectural difference between M1 and M1 Max in the CPU cores, besides the different combination of big/little cores, is that the M1 Max goes from quad-channel to 16-channel DDR5 RAM (note a DDR5 channel is half the width of a DDR4 channel, but has longer bursts and higher MT/s).

Apple's iGPU approach is real simple: unlike a console, where you somewhat gimp the CPU by putting it on high-latency GDDR6 (to get enough bandwidth to feed the iGPU), they just put 16 channels of DDR5 on it; basically it's like an octochannel DDR4 server processor in terms of bandwidth. And the CPU also benefits from that, at least a little bit, but internally it's the same core design.
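
Back-of-the-envelope, the bandwidth claim checks out (a sketch assuming LPDDR5-6400 on 16 x 32-bit channels, which are my assumptions rather than confirmed specs):

  # rough peak-bandwidth arithmetic, assuming LPDDR5-6400 on a 512-bit aggregate bus
  channels, bits_per_channel = 16, 32
  mt_per_s = 6400e6                            # assumed transfer rate
  bytes_per_transfer = channels * bits_per_channel / 8
  print(bytes_per_transfer * mt_per_s / 1e9)   # ~409.6 GB/s, i.e. the quoted ~400GB/s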

It's an absolute meme of a design, Apple just does not give a single fuck about cost here, "server class memory configuration"? sure why not, and we'll stack it on the package so it can go in a laptop.


I believe it's LPDDR5, basically a memory chip for smartphones, not DDR5.


sure, but it's also running at JEDEC either way, it's actually irrelevant to performance, it's just a matter of packaging (obviously for laptops, stacking the packages is more convenient than DIMMs or soldered modules).


All I wanted was an M1 that could address/use more memory. The first one could only use 16gb RAM, which was a dealbreaker for me.

Apple delivered that and so much more. I'm happy. I think there are many people like me.


Multi-display is another highly sought feature this M1 Pro/Max delivers (achievable on the M1 with a DisplayLink hub, but stability leaves something to be desired)


Tick - New cores; Tock - Scaling up the number of those same cores

I think most of the work went into the uncore portions of the SOC this time.


FYI it's the reverse. "Tock" is a new microarchitecture, and "tick" is a process shrink.

https://en.wikipedia.org/wiki/Tick–tock_model


Also, scaling the number of cores up and down for different tasks happens in both. Apple's recent announcement doesn't fit the tick-tock model at all.


In this case, both are using the same CPU core designs and are on the same node.

However, they are following the notion of "don't change everything at once".


uncore?


Parts of the SoC that are not the main CPUs e.g. power management controllers, display controllers, etc.


The A15 chip has core improvements. I suspect this is what we'll see every year from now on: a 15-20% performance increase yearly for the next few years, assuming no issues with TSMC…


The top Intel mobile processor appears to be https://browser.geekbench.com/v5/cpu/10431820 -- the M1 Max gives a 9% boost for single-core and 34% for multi-core, with similar or larger (?) TDP -- Intel is 35-45 W, the M1 Max is 60W, but I assume some of it (a lot?) goes to the GPU. Impressive, but it probably wouldn't be called a revolution if it came from Intel.


M1 only uses around 15-18w with all 8 cores active according to Anandtech's review (with E-cores using 1/10 the power, that's equivalent to just a little less than 4.5 big cores). I'd guess 30w for all cores would be the upper-end power limit for the cores.

Intel's "TDP" is a suggestion rather than reality. Their chips often hit 60-80 watts of peak power before hitting thermals and being forced to dial back.


Anandtech measured the max TDP of the CPUs in the M1 as being around 28W, with the GPU adding ~17W to that figure.

https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...


That figure is for compute workloads.

Also, those measurements are “at the wall” and include RAM, SSD, and everything else.

At the very least, you must subtract most idle power, but you also need to subtract more when DRAM/SSD kicks up from low power state.


That figure is after subtracting idle power. It’s right there on the chart.

The workload is described as "Compute MT" (i.e. multithreaded), compared with "Memory MT" (i.e., a memory-intensive multithreaded workload), with a third bar on the chart being the average of the two.

It seems to me that the "Compute MT" is the closest to the measurement that Intel or AMD would describe as TDP. The M1 is a ~25+W chip, not a 10W chip, if we’re going to make a fair comparison with Intel or AMD equivalents. It’s still a great CPU of course, but it’s not magic pixie dust.


Compute MT is using both CPU and GPU together. To compete, any other CPU needs to use a dedicated GPU.

Also, at idle, the DRAM and SSD are powered off. Samsung 980 uses 4.5 watts of power when active. I'd guess a similar power consumption applies here.

LPDDR4x ranges from 0.6v to 1.8v. This should give 2-3w per 8GB chip at full power. Anandtech's article agrees, with ST Memory adding 4.2w of power over their other ST benchmarks.

Now, 26.8 - 4.5 - 4.2 = 18.1w at PEAK.

For just CPU loads, we get 22.3 - 4.5 - 4.2 = 13.6w.
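
Same arithmetic with the guessed components labeled (the SSD and DRAM figures are my estimates above, not measurements):

  # peak "at the wall" power minus guessed SSD and DRAM draw
  wall_peak_w, cpu_only_w = 26.8, 22.3   # Anandtech's wall measurements
  ssd_w, dram_w = 4.5, 4.2               # guessed active SSD / DRAM power
  print(wall_peak_w - ssd_w - dram_w)    # ~18.1 W at peak
  print(cpu_only_w - ssd_w - dram_w)     # ~13.6 W for CPU-only loads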


I don’t think your logic is based on sound assumptions:

You can’t completely turn off DRAM at idle if it’s to store data - it has to self-refresh continuously. Some power savings are available by running it in a power saving mode, but you can’t get around the leakage of the DRAM cells, so the extra power used to drive the DRAM hard is not the full DRAM power usage, but a fraction of it. (LPDDR5 is nifty in that you can tell it to power down parts of the chip that aren’t currently being used.)

The SSD isn’t being hit by either workload: why would a CPU benchmark hit disk? The SSD is going to remain idle & be part of the 4W that Anandtech assigns to the system idle power drain.

Where does it say that Compute is CPU + GPU? (I’m happy to believe it, but I don’t see it in the article & they already break out the GPU power usage separately using a different benchmark.)

(It would really help if Anandtech told us what the benchmarks were - some CPU intensive benchmarks don’t hit memory at all, they just hammer a few cache lines. It’s entirely possible for a pure computational benchmark to leave most of the LPDDR5 in deep sleep!)

If by MT Compute they mean specifically the Geekbench 5 compute benchmark then that is a Metal / GPU benchmark, so I will concede this point in that case :)

Even conceding that the Compute benchmark is really a GPU benchmark, it’s still the case that comparing a CPU benchmark load against the TDP of another processor isn’t entirely correct, as the TDP is meant to be a power ceiling (although Intel has rather breached this in recent years - AMD TDPs are more honest, I’m told). If we accept the 22.3W of the average workload benchmark, then the TDP for the chip is still going to be more than that, & that’s the appropriate point of comparison.


LPDDR4x has various power levels based on whether it's actually being used or just keeping data from corrupting. The latter is much less power-intensive and runs at much lower voltages. You may have a point about the SSD, depending on how much read/write is happening.

Check out this comparison of the passively-cooled M1 Macbook Air vs both Intel and AMD Surface 4 machines.

https://youtu.be/7yTWGjYFiC0?t=509

TL;DR -- AMD loses 25-50% of their performance as soon as the plug is removed. Their actual power usage spikes to 40w (for a 10-25w U-series CPU). No matter how you cut it, peak M1 power usage is about half for similar performance.

Insult to injury, the M1 not only has far better performance, but it's also cheaper.


Oh, the M1 is a great chip, no question!

But I think it’s maybe twice (three times tops?) as good as the competition on a power/perf basis, not the 10x some people keep claiming when they compare the 10W claimed by Apple against the 100+W TDP of similarly performing AMD/Intel CPUs.

Incidentally, assuming the benchmark they’re running is Geekbench (seems likely at this point), I’ve just run a pass of both CPU & Compute GeekBench5 benchmarks with iotop running in the background & neither did any disk IO of any significance during the run. I think the peak throughput I saw was 150kb/s and that might have been something else on the desktop (I didn’t bother killing anything). Most of the time there was no IO at all.

So, if it was GeekBench they were running (seems plausible) then I think we can assume that the SSD is idle & that the Compute benchmark is really a GPU benchmark.


At first glance yeah, but that's 45W at 2.5GHz, and TDP isn't the power rating of the CPU. That benchmark lists the Intel CPU as up to 4.9GHz; I would say its actual power draw was closer to the 100W mark for the CPU only.


4.9 GHz should be the single-core max frequency, not applicable in a multi-core benchmark. All-core turbo is lower, but also AFAIK there is a limit on the time it can run at this frequency (tau for PL2, I think around half a minute by default).


Is there any reason why the Intel CPU wouldn’t run at 4.9 GHz for the single core benchmark though - whilst the M1 would be limited to a much lower frequency which it can sustain for much longer?


I think, but I'm not 100% sure, that the limits are on overall TDP, i.e., a single-core workload can run on turbo indefinitely. Then also I'd assume benchmarks are much longer than any reasonable value of tau, which means it integrates out (i.e., Intel performance may be higher in the short term). Edit: that is probably one of the reasons the single-core gap is smaller.


Thanks - this all highlights some of the interesting trade offs in CPU design!


Just to throw AMD in the mix, Ryzen 9 5980HX (35-54W).

https://browser.geekbench.com/v5/cpu/10431820

M1 Max leading 17% single core, 50% multi-core.

https://browser.geekbench.com/v5/cpu/compare/10496766?baseli...


I wonder how much of a performance hit not having "hyperthreading" would be for the AMD CPU. Back when the OpenBSD guys disabled it I did the same, and I haven't really noticed much difference in the workloads I have.

Considering the M1 lacks it, it seems it is not necessary for good multithreading performance, and the implications of it have always been a can of worms.


If the TDP was really the same we’d be seeing Intel laptops with 20 hour battery lives.


Battery life is more about idle/low load TDP, not full load TDP. It's not like MBP has 60*20=1200 Wh battery (100Wh is FAA limit)


Yes, and the M1 is consuming less power at low loads, enabling longer battery lives. What other explanation do you think there could be for the long battery lives of the M1 laptops?


What does it have to do with Geekbench, which tests processors at max load, i.e., the highest possible TDP, which is what I was referring to?


Depends on how you look at it. What's impressive about the M1 is that it gets highly respectable Geekbench scores while also being able to deliver 20 hours of battery life in a small form factor laptop. If there is an Intel chip that can do that, where are the laptops with comparable battery life?

We can nerd out about the exact definition of TDP all day, but ultimately these benchmarks are only useful insofar as they tell us something about real world performance and power consumption.


Well, I hear a lot of claims that the M1 significantly outperforms Intel (either in raw computing power or per watt). All I'm saying is that both are at the level of maybe a single-generation improvement: nice but not revolutionary. Of course they also have those low-power cores that allow long battery life, which is very nice, but that's a separate story.


What do you mean "claims"? M1 has been out for a year. There are countless real-world demonstrations. Apple left Intel in the dust. Intel doesn't know how to make a cool mobile chip.


I mean that the top Apple chip outperforms the top Intel chip by 9% in single-core performance and 34% in multi-core at similar TDP. The M1 is very energy efficient, and the MBA is a very cool, cheap typewriter, but Apple hasn't outperformed Intel on high load.

So yeah, basically Apple has a chip that slightly outperforms Intel at high load and can also be very low power. Performance-wise (that's what the parent message was about), the latter doesn't really matter.


>So yeah, basically Apple has chip that slightly outperform Intel at high load and also can be very low power.

This is what people are impressed by.

Given your 'typewriter' comment, you're clearly not interested in a lightweight laptop with decent performance and an extremely long battery life. But a lot of people are very interested in exactly this combination of features. And to be serious for a moment, the M1 Macbook Air has more than enough performance for the majority of people.

This is really an example of how Apple 'gets it'. Even if you're right that Intel is only slightly behind technically, they're way behind in terms of the overall product experience. You just can't make an M1 Macbook Air clone with any current Intel chip.


My OP talked exactly about claims (and there are many at least around me) of outperforming in terms of raw power.

I never argued with the fact that ARM allows you to stick in a couple of very low-power cores, while x86 doesn't.

In terms of a "lightweight laptop with decent performance", the MBP (the discussion is about the new stuff, after all) is less interesting, because it's heavier, more expensive, and probably doesn't provide much added value. The M1 was a revolution (mostly because someone took ARM, put it in a popular laptop, and took on the burden of the software transition); these are much less so.

I can see myself getting an MBA if I need something small and cheap and there isn't decent competition (maybe Google's Chromebooks? Or something from Samsung?), but not an MBP (mostly because I dislike Apple's UX and I think the competition is decent). I'd consider it if it were, say, outperforming Intel by 50% in single-core tasks.


> typewriter

Now that you mention, I remember my grandfather used to sit us around the fire and regale us with stories of how he used to edit 60fps 8k video on his old typewriter. I pulled it out of the attic just now and ran geekbench, and you're right, my new MBP is basically the same.

Thanks bud for bringing so much insight to this conversation!


I assume your grandfather wasn't able to understand metaphors either?


Or sarcasm


Well, the Geekbench score didn't say anything about it being a revolution; that was your take. If Intel was capable of keeping their chips cool while maintaining high perf, they would have been called a revolution too.


If you look up other laptops with the same processor you get a very large variation. The Lenovo you point to has a 280 watt power brick; maybe the results aren’t as good when it’s not plugged in.


It might be called a revolution if Intel could get that performance boost while nearly doubling battery life and eliminating thermal throttling


I have seen far too many people making comments about a Mac Pro "Pro" chip.

A hypothetical 32-core CPU and 64-core GPU is about the max Apple could make, given the die-size reticle limit without going chiplet and the TDP limit without some exotic cooling solution. Which means you can't have some imaginary 64-core CPU and 128-core GPU.

We can now finally have a Mac Cube, where the vast majority of the Cube will simply be a heat sink and it will still be faster than current Intel Mac Pro. I think this chip makes most sense with the A16 design on 4nm next year [1].

An interesting question would be memory; the current Mac Pro supports 1.5TB. 16-channel DDR5 would be required to feed the GPU, but that also means the minimum memory on a Mac Pro would be 128GB at 8GB per DIMM. One could use HBM on package, but that doesn't provide ECC memory protection.

[1] TSMC 3nm has been postponed, in case anyone is not paying attention, so the whole roadmap has shifted by a year. If this still doesn't give pause to those repeating that Moore's law has not died yet, I don't know what will.


If someone had said in 2018, "Apple is going to release a MacBook Pro with an ARM chip that will be faster at running x86 applications than an x86 chip while having a 20 hour battery life", then a lot of people would have responded with a lot of very clever-sounding reasons why this couldn't possibly work, what the limitations would be on a hypothetical chip, and how a hypothetical chip that they can imagine would behave.

I think the evidence is that Apple's chip designers don't care about anyone's preconceived and very logical sounding ideas about what can and can't be done.

So what we know is that there is going to be a Mac Pro, and that whatever it is is going to absolutely destroy the equivalent (i.e. absolute top of the range) x86 PC.

If you want to be right, take that as a fact, and work backwards from there. Any argument that starts with "I don't think it can be done" is a losing argument.


I'm so old I remember the brief time when Alpha ran x86 code faster than x86 so history is repeating. I am eagerly awaiting M1 Plaid vs. Sapphire Rapids vs. Genoa next year.

"If an elderly but distinguished scientist says that something is possible, he is almost certainly right; but if he says that it is impossible, he is very probably wrong." — Arthur C. Clarke


> Any argument that starts with "I don't think it can be done" is a losing argument.

But plenty of interesting opinions start that way -- interesting because someone with expertise thinks so, and/or because the opinion is followed by well-reasoned arguments.

OTOH, "it can't be done" is often based on current assumptions about hard limits. Those assumptions have been broken routinely and consistently for hundreds of years now.


>If you want to be right, take that as a fact...

I guess you don't know enough about die space and the reticle limit on wafers at the foundry. You are basically arguing that Apple can bypass the speed of light.


It's not hard to imagine that "Jade 2C-Die" means two compute-die chiplets and "Jade 4C-Die" means four chiplets, which makes 40 CPU cores and 128 GPU cores about the same size as Sapphire Rapids. It could have 256 GB RAM on a 2048-bit bus with 1,600 GB/s of bandwidth.


Chiplets aren't a silver bullet, and they aren't without compromise. You will then need to figure out the memory configuration and NUMA access. Basically the Jade 2C-Die-as-chiplets analogy doesn't make sense unless you make some significant changes to Jade-C.

Jade 4C-Die on a single die only makes sense up to 40 cores (4x the current CPU cores, or 32 HP cores and 8 HE cores); unless there is some major cache rework, there isn't any space for 128 GPU cores.

But then we also know Apple has no problem with only 2 HE cores on a MacBook Pro laptop, so why would they want to put 8 HE cores on a desktop?


> Interesting question would be memory, current Mac Pro support 1.5TB

I wonder whether it might have on-chip RAM and slotted RAM?

Not sure what the implications to software and performance would be there, but the whole point of the new Mac Pro was an admission that the customers in that market need components they can chop and change as required. They obviously still care about that market, otherwise they would have just discontinued the Mac Pro and left it at that (instead they discontinued the iMac Pro—the intended successor to the Mac Pro—because a modular desktop is a better fit).

It would be super weird to do a total 180 on that now, especially considering they went to the effort of building things like the Afterburner cards. The move to ARM was surely in motion before work started on the latest Mac Pro design so I'm really interested to see where things go here. "Modular tower" and "tightly integrated SoC" seem completely at odds with each other.


> We can now finally have a Mac Cube

Fascinating idea. Perhaps Apple could finally match the ideal Steve Jobs dreamed of but couldn't realize in the infamous Power Mac G4 Cube.


Mac Pro is supposed to be expandable.

The most elegant solution would be to have a motherboard with cartridges. You're buying as many CPU/GPU/AI/RAM/SSD modules as you like and getting anything you want. Wanna crazy build farm? Put plenty of CPUs and some RAM. Wanna GPU farm? You've got it. Want terabytes of RAM? No problem.

Not sure if it's possible to implement. But I'd love to see it done this way.


> about the max Apple could make in terms of die size reticle limit without going chiplet

I probably have no idea what I'm talking about as I'm a software guy. But — I remember that company that made that crazy ML accelerator where an entire silicon wafer is one chip. How'd they do that? Why can't Apple and others do the same/similar?


As mentioned in your other reply: Cerebras.

The Wafer Scale Engine doesn't break the reticle limit; if you search for a picture of the Cerebras chip, you can still clearly see the square dies within the wafer. From a very high-level overview, they are basically doing cross-interconnect between each die on the wafer. The scribe lines used to be the physical separation between dies; now they build interconnect on top, so the whole wafer becomes a huge mesh of an AI chip.

It works for them because you can think of each die as just lots of neural engines and cache. You can easily work around individual process defects and solve the yield problem. And then you have to solve packaging, power and cooling.

>Why can't Apple and others do the same/similar?

When you have a complex SoC, not every defect can be worked around; that is why the larger the die, the lower the yield. That is also a reason why GPUs ship with certain GPU cores and memory controllers fused off. It isn't market segmentation, it is a necessity coming from manufacturing. To avoid this they have chiplets: making each part a small die and interconnecting them together. But as mentioned, chiplets aren't a silver bullet. Your design must have chiplets in mind for the interconnect, as with AMD Zen. You can't just dump two pieces of M1 Max together on the same interposer and expect it to work. They could, but it wouldn't be efficient, and that sort of defeats the purpose. An analogy would be taking the current AMD Ryzen APU, which is an SoC with a GPU and memory controller, sticking two of them together, and somehow expecting it to work 100% faster.


Cerebras

I'd bet it's the plumbing. Getting data in and out, supplying something around 10-100 kiloamps, attaching it to anything without it ripping itself off via thermal expansion, getting the heat out - all of that sounds miserably difficult.

Like sure you can do some outrageous and expensive things to make it all work in small quantities and for huge sums of money. But you can't build a mass-market laptop that way.

I'd guess that the upper limit with present-day and near-future packaging technology for commodity hardware is ~1000 mm^2.


Doesn't HBM2E do ECC? I know Micron definitely packages that, at least. I'm likely misunderstanding your ECC point though.


I don't follow HBM closely, so I am guessing it could be on-die ECC correction like DDR5's, instead of full ECC across both the DIMM and the memory controller. But I could be wrong.


Wow, it does much better than Geekbench's prior top processor (https://browser.geekbench.com/processor-benchmarks), the Intel Core i9-11900K (https://browser.geekbench.com/processors/intel-core-i9-11900...).

10,997 for Intel i9

vs

12,693 for M1 Max


You sorted by single core performance, then compared multi core performance. Sort by multi core performance, and you will see that the i9-11900K is nowhere near the top spot.

For example, the Ryzen 9 5950X has single/multi core scores of 1,688/16,645 - which is higher in multi core score than the M1 Max, but lower in the single core.


Interestingly, the iPhone's A15 SOC did get a newer version of Apple's big core this year.

>On an adjacent note, with a score of 7.28 in the integer suite, Apple’s A15 P-core is on equal footing with AMD’s Zen3-based Ryzen 5950X with a score of 7.29, and ahead of M1 with a score of 6.66.

https://www.anandtech.com/show/16983/the-apple-a15-soc-perfo...

On floating point, it's slightly ahead. 10.15 for the A15 vs. 9.79 for the 5950X.


Which is still not that much higher. Of the "consumer" CPUs, only the 5900X and 5950X score higher. And their stress power draw is about 2X the M1 Max's speculated draw.


That's maybe not a bad way to sort? Most of the time I'm interacting with a computer I'm waiting for some single thread to respond, so I want to maximize that, then look over a column to see if it will be adequate for bulk compute tasks as well.


Perhaps they were referencing the highest 8C chip. Certainly, a 5950X is faster, but it also has double the number of cores (counting only the performance cores on the M1; I don't know if the 2 efficiency cores do anything on the multi-core benchmark). Not to mention the power consumption differences - one is in a laptop and the other is a desktop CPU.

Looking at a 1783/12693 on an 8-core CPU shows about a 10% scaling penalty from 1 to 8 cores - suppose a 32-core M1 came out for the Mac Pro that could scale only at 50% per core, that would still score over 28000, compared to the real-world top scorer, the 64-core 3990X scoring 25271.
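
To spell out the scaling math (a sketch; the 50% per-core scaling for a hypothetical 32-core part is an assumption, not a leak):

  # scaling-efficiency estimate from the M1 Max's 1783 / 12693 Geekbench scores
  single, multi, perf_cores = 1783, 12693, 8
  efficiency = multi / (single * perf_cores)   # ~0.89, i.e. roughly a 10% penalty
  print(f"{efficiency:.0%} scaling across {perf_cores} cores")
  # hypothetical 32-core part at an assumed 50% per-core scaling
  print(1783 * 32 * 0.5)                       # ~28,500 -- above the 3990X's 25,271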


M1 Max has 10 cores.


But the two efficiency cores are less than half a main core though, right?


1/3 the performance, but 1/10 the power. Not adding more was a mistake IMO. Maybe next time...


Really? I mean if it gets me 10-14h coding on a single charge that’s awesome…


The A15 efficiency cores will be in the next model. They are A76-level performance (flagship-level for Android from 2019-2020), but use only a tiny bit more power than the current efficiency cores.

At that point, their E-cores will have something like 80% the performance of a Zen 1 core. Zen 1 might not be the new hotness, but lots of people are perfectly fine with their Threadripper 1950X which Apple could almost match with 16 E-cores and only around 8 watts of peak power.

I suspect we'll see Apple joining ARM in three-tiered CPUs shortly. Adding a couple in-order cores just for tiny system processes that wake periodically, but don't actually do much just makes a ton of sense.


Still 8 more than my desktop PC :p


The single-core score is second to Intel's best, but the multi-core is well down the list, comparable to the Intel Xeon W-2191B or Intel Core i9-10920X, which are 18 and 12 core beasts with TDPs of up to 165W.

Which means, at least for Geekbench, the Apple M1 Max has power comparable to a very powerful desktop workstation. But if you need the absolute best of the best on multi-core, you can get double the performance with an AMD Ryzen Threadripper 3990X at 280W TDP!

Can you imagine if Apple released some beast with a similar TDP? A 300W Apple M1 Unleashed, the trashcan design re-imagined, with 10X the power of the M1 Max if it can preserve similar performance per watt. That would be 5X over the best of the best.

If Apple made an iMac Pro with a similar TDP to the Intel one, and kept the performance per watt, that would mean a multi-core score of about 60K, which is twice that of the best processor there is in the x86 world.

I suspect these scores don't tell the full story, since the Apple SoC has specialised units for processing certain kinds of data, and they have direct access to the data in memory. As a result it could be unmatched by anything for some workloads, while at the same time being comically slow for some other types of processing where x86 shines.


John Siracusa had a diagram linked here that shows the die for M1 Max, and says the ultimate desktop version is basically 4 M1 Max packages. If true, that’s a 40 core CPU 128 core GPU beast, and then we can compare to the desktop 280W Ryzens.


Interestingly, the M1 Max is only a 10 core (of which only 8 are high performance). I wonder what it will look like when it’s a 20-core, or even a 64-core like the Threadripper. Imagine a 64-core M1 on an iMac or Mac Pro.

We’re in for some fun times.


John Siracusa - no the chart isn't real, but maybe qualify that with "yet"...

https://twitter.com/siracusa/status/1450202454067400711


Hm, related to that reply https://twitter.com/lukeburrage/status/1450216654202343425

Is this a yield trick, where one is the "chopped" part of the other? So they'll bin failed M1 Max chips as M1 Pro, if possible?


Bloomberg's Gurman certainly has shown that he has reliable sources inside Apple over the years.

>Codenamed Jade 2C-Die and Jade 4C-Die, a redesigned Mac Pro is planned to come in 20 or 40 computing core variations, made up of 16 high-performance or 32 high-performance cores and four or eight high-efficiency cores. The chips would also include either 64 core or 128 core options for graphics.

https://www.macrumors.com/2021/05/18/bloomberg-mac-pro-32-hi...

So right in line with the notion of the Mac Pro getting an SOC that has the resources of either 2 or 4 M1 Pros glued together.


I wish there were laptop-specific Geekbench rankings because right now it seems impossible to easily compare devices in the same class


The M1 Pro/Max are effectively H-class chips so you can search for 11800H, 11950H, 5800H, 5900HX, etc.


Your comment got me wondering if there was actually a method to Intel's naming madness, and it turns out there is!

https://www.intel.com/content/www/us/en/processors/processor...

11800H = Core i7-11800H -> family=i7 generation=11 sku=800 H=optimized for mobile

11950H = Core i9-11950H -> family=i9 generation=11 sku=950 H=optimized for mobile

I didn't look up the AMD names.

So, now that I know the names, why not use Core i9-11980HK?

family=i9 generation=11 sku=980 HK=high performance optimized for mobile

It seems like it exists https://www.techspot.com/review/2289-intel-core-i9-11980hk/
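
For what it's worth, the decoder ring is regular enough to script; here's a quick sketch that only handles the modern "Core iX-<gen><sku><suffix>" pattern used in the examples above:

  # decode modern Intel Core model names like the ones discussed here
  import re

  pattern = re.compile(r"i(?P<family>[3579])-(?P<gen>\d{1,2})(?P<sku>\d{3})(?P<suffix>[A-Z]{0,2})")
  for name in ["i7-11800H", "i9-11950H", "i9-11980HK"]:
      print(name, pattern.match(name).groupdict())
  # e.g. i9-11980HK -> {'family': '9', 'gen': '11', 'sku': '980', 'suffix': 'HK'}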

P.S. General rant: WTF Intel. I'm really glad there is a decoder ring but does it really have to be that hard? Is there really a need for 14 suffixes? For example, option T, power-optimized lifestyle. Is it really different from option U, mobile power efficient?


I really wonder how a single z/Architecture core would fare on this benchmark, though I imagine it's never been ported


Probably not as good as you might expect. Z machines are built for enterprise features like RAS, and performance on specific workloads.

The ultra-high-clocked IBM cpus are probably significantly faster at DB loads, and less than the best at more general benchmarks like Geekbench.


Per core performance is the most interesting metric.

Edit: for relative comparison between CPUs, per core metric is the most interesting unless you also account for heat, price and many other factors. Comparing a 56-core CPU with 10-core M1 is a meaningless comparison.


Not when building large software projects.


> Not when building large software projects.

Or run heavy renders of complex ray-traced scenes.

Or do heavy 3D reconstruction from 2D images.

Or run Monte-Carlo simulations to compute complex likelihoods on parametric trading models.

Or train ML models.

The list of things you can do with a computer with many, many cores is long, and some of these (or parts thereof) are sometimes rather annoying to map to a GPU.


It seems Apple thinks it _can_ map the essential ones to the GPU, though. If they didn’t, there would be more CPUs and less powerful other hardware.

‘Rather annoying’ certainly doesn’t have to be a problem. Apple can afford to pay engineers lots of money to write libraries that do that for you.

The only problem I see is that Apple might (and likely will) disagree with some of their potential customers about what functionality is essential.


Related content: the round Mac Pro


> Not when building large software projects.

While working in Rust I am most limited by single-core performance. Incremental builds at the moment are like 300ms compiling and 2 seconds linking. In release mode linking takes 10+ seconds with LTO turned on. The linker is entirely single-threaded.

Fast cold compiles are nice, but I do those far more rarely than incremental debug builds. And there are faster linkers (like mold[1] or lld), but lld doesn’t support macOS properly and mold doesn’t support macOS at all.

I’m pretty sure tsc and most javascript bundlers are also single threaded.

I wish software people cared anywhere near as much about performance as hardware engineers do. Until then, single core performance numbers will continue to matter for me.

[1] https://github.com/rui314/mold


My project [0] is about 600k lines of C++. It takes about 5m40s to build from scratch on a Ryzen Threadripper 2950X, using all 16 cores more or less maxed out. There's no option in C++ for meaningful incremental compiles. Typically working compiles (i.e. just what is needed given whatever I've just done) are on the order of 5-45 secs, but I've noticed that knowing I can do a full rebuild in a few minutes affects my development decisions in a very positive way. I do 99% of my development work on Linux, even though the program is cross-platform, and so I get to benefit from lld(1).

The same machine does nychthemeral builds that include macOS compiles on a QEMU VM, but given that I'm asleep when that happens, I only care that the night's work is done before I get up.

[0] https://ardour.org/


Like… everyone builds large projects all the time?


If you don’t then you don’t really need the top end do you?


Most people who buy fast cars don't need them and it's the same with computers.


By that logic you could build an array of mac mini if you don't care about price/heat.


What compiler could even make use of 10 cores? Most build processes I've run can't even fully utilize the 4 cores.


Compilers typically don't use multiple cores, but the build systems that invoke them do, by invoking them in parallel. Modern build systems will typically invoke commands for 1 target per core, which means that on my system, for example, building my software uses all 16 cores more or less until the final steps of the process.

The speed record for building my software is held by a system with over 1k cores (a couple of seconds, compared to multiple minutes on a mid-size Threadripper).


I can stress out a 32+32 core 2990WX with 'make -j' on some of my projects, htop essentially has every core pegged.


Just running the tests in our Rails project (11k of them) can stress out a ton; we're regularly running it on 80+ cores to keep our test completion time ~3 minutes. M1 Max should let me run all tests locally much faster than I can today.


Wow, what is the system doing to have 11000 tests?


  add(1, 1) = 2
  add(1, 2) = 3
  add(1, 3) = 4
  add(1, 4) = 5
  ...


Rust (Cargo) does, and always wants more.


Often a single compiler won't make use of more than a core, but it's generally easy to build independent modules in parallel.

For example, make -j 10, or mvn -T 10.


Xcode has no issues taking advantage of all cores.


C++ compilers probably will.


Just when you think things have hit the top, there's a new kid in town.



That’s a strangely shaped laptop, what is the battery like on it?


It's actually compatible with a tremendous range of third party external batteries like so: https://www.amazon.com/dp/B004918MO2

And forget about fast charging—you can charge this battery up from 0% to 100% in less than a minute just by pouring some gasoline in the thing!

It's the very pinnacle of portability!


This is the comparison to a mid-2015 15" MacBook Pro for anyone who is curious: https://browser.geekbench.com/v5/cpu/compare/10513492?baseli...

Summary: single core performance gain is 2x whereas multi-core performance gain is 4x.


Over 6 years, 2x the performance in single-core and 4x (for 2.5x the cores, depending how you count) seems surprisingly low compared to stuff I've read (not experienced yet) with the M1.


I think a lot of the perceivable speedup in M1 Macs is less to do with these benchmarks and more to do with optimizing the OS and Apple software for the M1 and the various accelerators on the SoC (video encoding and ML).


> perceivable speedup in M1 Macs

Perception is key here.

Of course M1's are perceived as being faster because:

- video camera encoding/decoding is offloaded to a dedicated processor;

- disk encryption is offloaded to a dedicated processor;

- rendering is offloaded to dedicated GPU's;

- matrix multiplication is offloaded to dedicated matrix multiplication processors (neural engines);

- no wasteful memory transfers (even though the memory bus is 2048 bit wide anyway) across dedicated processors.

Kind of like channel processors running channel programs in IBM mainframes, brought to a mainstream ultraportable laptop.

And, on top of that, the screen drops the refresh rate by a factor of 6 when there is no need to refresh the picture too often.

Ironically, the CPU is left with pretty much one job to do: to compute. It is possible to get fast compilation times whilst doing all of the above at once. Or have an app that does all of the above at once, too (no, it won't save Microsoft Teams). And be power efficient, too; hence a long battery life and an Apple marketing team brouhaha: «We can do this all day». They are right – they can actually do all of that, and they can brag about it now because they have a full list from the above fulfilled.

System performance is a holistic matter, not just single/multi-core comparisons oftentimes taken out of context.


We already had dedicated GPUs for all the graphics/video stuff and the T2 chip for the disk encryption in the Intel Macs. That leaves the unified memory architecture as the only innovation.


My M1 MBA is measurably faster than the i9 MBP it replaced in several purely software tasks. At the very least it performs on par with the i9.

There's plenty of ARM-specific optimization in macOS that gives boosts to M1s over Intel but the chips are just faster for many tasks than Intel.


Since Apple has been relying on Intel processors for many years now, I'd bet they've spent more time optimizing MacOS for Intel at this point. On the other hand, iOS has given them a quick way to transfer optimization knowledge.


> I'd bet they've spent more time optimizing MacOS for Intel at this point.

Keep in mind a lot of the core libraries on macOS and iOS have been shared since the inception of iOS. The kernel, CoreFoundation, and FoundationKit are on both platforms. There's also a bunch of other shared frameworks even before Catalyst. So much of macOS's foundational code has been running and optimized for ARM almost as long as Apple has been using Intel.


Also remember that a lot of Apple customers stick with absolutely ancient laptops and are then amazed when a modern one is much faster. E.g. I had to explain what an NVMe drive is to someone who is a great developer but just not a hardware guy.


Don’t really understand this argument - Apple has had 13 years to optimise Mac OS for Intel and only a couple of years for M1 / can’t see how a video encoding accelerator affects responsiveness.


Apple has been optimizing both sides (hard and soft)ware for iOS for a lot longer. They’ve learned.


That’s a fair point - I think that it works both ways too - they know a lot about optimising CPU design for iOS / MacOS especially in low power environments.


I think you are on the right track, but I think the performance gain really comes from the RAM being on the same package as the chip, which raw number crunching won't make use of (but actual usage will make great use of).


I've used both an M1 MacBook Air for work and a 2019 16" with 32GB of RAM, and the M1 MacBook Air feels as fast if not faster than the 16".


The vast majority of that 6 years is very gradual or minimal improvements. Then M1.


Well the 2019 16 inch i9 MacBook Pro scores are 1088 and 6821 so you can see the massive uplift over just two years.


Comparison with the 2012 macbook I'm typing this on: https://browser.geekbench.com/v5/cpu/compare/10496766?baseli...

I waited for these to come out for a long time, and now "should I upgrade" isn't even a question. It's a "shut up and take my money" situation at this point, but they aren't even taking preorders over here yet. I mean, it could probably run a browser, several electron apps, AND android studio at the same time, how cool is that?!


Ha, I couldn't stand the order that comparison was in, so here's the reverse:

https://browser.geekbench.com/v5/cpu/compare/10496766?baseli...

3.1% increase in single core, and 69.4% increase in multi-core.


Sorry, I think you meant to reply to this post with it: https://news.ycombinator.com/item?id=28935095


What am I missing here? The marketing presentations spoke about 200GB/sec and 400GB/sec parts. Existing CPU's generally have 10s of GB/sec. But I see these parts beating out existing parts by small margins - 100% at best.

Where is all that bandwidth going? Are none of these synthetic benchmarks stressing memory IO that much? Surely compression should benefit from 400GB/sec bandwidth?

This also raises the question of how those multiple memory channels are laid out. Sequentially? Striped? Bitwise, byte-wise or word-striped?


The SoC having 400GB/s of memory bandwidth doesn't mean that any individual CPU core can saturate 400GB/s of memory bandwidth (or even that all the CPU cores combined can achieve that). The SoC's memory bus is also feeding the GPU, and GPUs tend to be _very_ memory bandwidth hungry (see other discrete GPUs pushing over 1TB/s of memory bandwidth).

The CPU performance, even where memory IO limited, is more likely limited by how well it can prefetch memory and how many in-flight reads & writes it can do. A straight memcpy benchmark might be part of this suite, but that'd also be basically the only workload where a CPU core (or multiple) could come anywhere close to hitting 400GB/s of bandwidth. Otherwise memory _latency_ will be the bigger memory IO limitation for the CPUs, and that likely hasn't drastically changed with the new M1 Pros & Max's (there may be some cache layout changes which would shift some of the numbers, but DRAM latency is likely unchanged)
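
If you want to see how far one thread actually gets, a crude probe like this works (a sketch, not a rigorous benchmark; numpy's copy path, the allocator, and caching all add noise):

  # crude single-thread memory-bandwidth probe (counts bytes read + written)
  import numpy as np, time

  src = np.zeros(1 << 28, dtype=np.uint8)   # 256 MiB source buffer
  dst = np.empty_like(src)
  t0 = time.perf_counter()
  for _ in range(10):
      np.copyto(dst, src)                   # streams 256 MiB in, 256 MiB out
  elapsed = time.perf_counter() - t0
  print(f"~{10 * 2 * src.nbytes / elapsed / 1e9:.1f} GB/s touched")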

For an example of this in a different product: https://www.anandtech.com/show/15578/cloud-clash-amazon-grav...

A single graviton2 CPU can "only" achieve 18-36GB/s of memory bandwidth even though the package in total can hit 200GB/s


A single Firestorm (performance) M1 core in the M1 Mac mini can sustain ~60GB/s read from RAM. I have no doubt that 8 of them could come close to, if not entirely, saturating 400GB/s.

https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...


As people stated in the announcement post, a high memory bandwidth doesn't really benefit CPUs much. That's also why AMD and Intel don't have high memory bandwidth on their cores, because it doesn't really help performance.

Where it is beneficial is the GPU; for comparison AMD and Nvidia cards often exceed 400GB/s. A desktop RTX 3090 has 900GB/s.


Intel is definitely looking at HBM for Sapphire Rapids though.

https://www.anandtech.com/show/16795/intel-to-launch-next-ge...


I don’t think you could even top 150GB/s sustained if you ran memcpy on all ten threads at once. (Though that would be 300 total.)


This turned out to be correct, look at https://www.anandtech.com/show/17024/apple-m1-max-performanc... -- the "sustained" measurement is on the right hand side of the graph. It hits 243GB/s bandwidth with 8+2 threads, 224 with just the 8 performance cores.


They have 5.2 and 10.4 TFLOPS of GPU power to feed in addition to 10 very wide cores.


Pretty impressive, all things considered. Looks like it's roughly on par with an AMD 5800X, which is a Desktop CPU with 8C/16T and a 105W TDP.


The 5800X seems 20% faster for single-core: https://browser.geekbench.com/v5/cpu/5874365 so definitely not on par.


That's certainly overclocked or very high PBO offsets, at stock the 5800x gets around 1700-1800 ST.


But if you run TWO single-threaded workloads at once, the clocks dial WAY back. Meanwhile, the M1 can keep all cores at 3.2GHz pretty much indefinitely.



Yeah, that looks correct. With faster ram and PBO it should hit closer to 1800.

Here is my 5600x with PBO enabled and 3800MT/s CL16 DDR4 https://browser.geekbench.com/v5/cpu/10516280

5800x would have a max clock of 4950mhz with PBO


Impressive but unbelievably expensive considering what will actually be run on it.

I kind of want one, but the 5800x for example is about 10x less expensive than a specced out MacBook Pro


Are you comparing the price of a chip to a laptop??


Yes. If I bought one of these it would never actually move, so I don't mind the comparison all that much for my personal use case. Obviously it isn't apples to apples, but my point is that this performance is not free.


What is it with the M1 series of chips that causes people to bring completely disingenuous comparisons to the fray with a straight face.


Then perhaps you may want to wait for Apple to release a desktop-class processor to make the comparison, perhaps early next year?


It's not clear to me that Apple will make a desktop-class processor. The unit economics likely don't make sense for them.

All of Apple's innovation seems to be towards better and cheaper AR/VR hardware. Desktop-class processors would be a distraction for them.

And with all of the top cloud players building custom silicon these days, there is little room for Apple to sell CPUs to the server market even if they were inclined (which they are not).

The only strategic Apple vertical that might align with desktop-class CPUs is the Apple Car initiative, specifically self-driving. Dedicated audio/image/video processing and inference-focused hardware could better empower creatives for things like VFX, post-processing, or media in general. However, it's not clear to me that this is enough of a market for Apple's unit economics compared with more iDevice / MacBook sales.


IMO it's obvious that there will need to be a desktop version - and all the rumours are pointing towards a Mac Pro release with silly specs - i.e. an SOC with something like 40/64+ cores. Why would Apple want to give up their portion of the high-power desktop market to Windows?

What's the alternative? That they release another Mac Pro with intel, despite their stated intention to move everything away from x86, or that they release a Mac Pro with just a laptop chip inside?

Let's remember that Apple has an annual R&D budget of c$20 billion, so it won't be totally shocking if they diverted a small fraction of that to build out a desktop processor.


Well, those are rumors assuming a chiplet architecture, which Apple has never tried before and would require very significant modifications to the layout.

Simply quadrupling the die isn't really feasible, the price increases exponentially.


Well analysts have estimated the cost of the M1 at $40 to $50 per chip, so even if they double the size of the chip which quadruples the cost, that would still be completely feasible for the Mac Pro which retails at c$5,000+

Even if the SOC was $500 cost price they would still have plenty of room to pay for everything else and hit the $5k price point.

I mean, they could even use separate highly integrated chips to get that number of cores too, although I suspect they will want to do it on a single chip assuming cooling is OK.


This wouldn't be doubling the size of the M1; it would be quadrupling the size of the M1 Max, so more like $1,500 for the chip.

What you're talking about at the end is chiplets and it takes a lot of work to get it to operate, just ask AMD.


Quite right.


> Why would Apple want to give up their portion of the high-power desktop market to Windows?

I guess the question is, to what end?

- Apple hasn't had a good history with B2B sales. Evidenced by them getting out of the server business.

- Apple doesn't want to be a cloud infra company. Evidenced by them using other cloud providers to host iCloud.

- Maybe, Apple will one day sell servers again, or sell M1-like chips to cloud providers. However, we have no evidence of this. Apple prefers B2C products with 5-6 year refresh cycles. Their superpower is vertical integration rather than selling a single component.

- Cloudtops are becoming more and more popular, in-tandem with a powerful-enough laptop to appropriately hold local state such that the UX is responsive. Large companies that do software development, ML training, VFX, etc are already using cloudtops.

That said, small companies and individuals haven't yet started using cloudtops for intellectual property creation and a Mac Pro could still help their workflows.

With that in mind, where does Apple grow this high performance market? If there is no future for this market, why invest into any R&D? Let the Windows ecosystem continue to handle it for the few years it still exists.

Meanwhile, to help those small companies and individuals improve their current workflows, maybe Apple will release one or two more Mac Pros, but more likely than not all Mac Pro development will stop. I'm likely wrong about all of this, but I feel Apple will instead offer a "seamless compute cloud". Similar to their strategy with https://developer.apple.com/xcode-cloud/ they will likely release a general-purpose service for "cloud cores", i.e. the machine will locally have 6 high-performance cores, 2 high-efficiency cores, and ∞ remote-performance cores that are seamlessly integrated and run Apple's Universal Binary in "their cloud" (likely through AWS/Azure/GCP). Then no matter what the workflow is (e.g. photo/video/audio/game editing) you get a local preview (e.g. a low-res render locally), but in parallel the final product is built remotely and the data mirrored back to your local machine.

The only other reasons to keep building beefy machines like Mac Pro are for iCar Self Driving or to sell "cloudtops" (as I described in the paragraph earlier) but in a B2C context, i.e. the cloudtop runs in your local LAN/WAN or even as your home's Wifi router to reduce latency and accelerate all the Apple devices on the network. However, its not clear if they will invest in the software first or the hardware first for this distributed compute future.


Can you explain your unit economic analysis?

They have made a Mac Pro desktop for several decades. I am trying to follow your reasoning for Apple to sunset that category of workstation as a result of transitioning to Apple silicon, but it is not working out for me.

My logic leads to a cheaper-to-produce-than-Macbook workstation in an updated Mac Pro chassis with best-binned M1X Max parts in the Spring followed by its first chiplet-based Apple silicon workstation using the same chassis in the Fall, followed by an annual iteration on Apple silicon across its product line, on about the same cadence as A/A-x iOS devices.

Part of my reasoning is based on the assumption that Mac Pro sales generate Pro Display XDR sales at a higher rate than MacBook Pro sales do. I think the total profit baked into an Apple silicon Mac Pro + Pro XDR is big and fills a niche not filled by anything else in the market. Why leave it unfilled?


> in an updated Mac Pro chassis with best-binned M1X Max parts

Yeah this seems realistic to me. Specifically, I meant that there will be no D1 (desktop-class chip) to complement the A14/A15 (mobile-class chips) and M1/Pro/Max (laptop-class chips).

I agree that taking M1 Max, or future laptop-class chips (e.g. M2 Max) and adding better cooling and perhaps a chiplet would be most likely.

That said, Apple's R&D is always for some grand future product vertical, not to make some more money over the next few quarters. E.g. Neural Engine and Secure Enclave R&D leads to (1) on-device Siri + Face ID, supporting a marketing shift to focus on privacy, and (2) moves towards a password-less future.

Even R&D for chiplets may not make sense for them other than for Apple Cars. As for "Can you explain your unit economic analysis?": during a chip shortage, why sell 1 Mac Pro with a 10+ year "lifespan" when Apple could sell 5 iDevices with a 5+ year "lifespan" instead? E.g. had Apple known that they would need to reduce iPhone production targets because of chips, it's likely they would have delayed the M1 Pro/Max production/launch.


At worst, wait until they release these in the iMac Pro and maybe even the Mac Mini. Both easily have the headroom for these chips.


Perhaps. AMD should also release 5nm processors by then too.


A boxed CPU is an odd comparison without considering motherboard, RAM, GPU, cooler, SSD, and display in that calculation as well.


Not just an odd comparison. I don't think it's a valid comparison at all.


can't really get it any other way.

The issue to me is that because Apple is the only one who can put these in computers, this will have no real effect on PC component pricing. Apple makes them, Apple puts them in Apple computers, they can't run Windows; if you're in another ecosystem it's not even a real option.


Workstation laptops are expensive. These new Macs are priced quite competitively. E.g. the 14" with the full M1 Pro is $2.5k and is faster, more portable and has a much better display than a $2.7k Dell Precision 5560…


Very true, although I note that I wouldn't be able to actually work on one for the day job because of Windows.

Still might buy one, I want an ARM box to test my work on compilers on.


For me it’s going to be a massive improvement. Already the base M1 builds software faster than my Intel i9… and given that these new chips have 400GB/s bandwidth it will be a ridiculous improvement for my R data analysis code…


Get a new job! :)


Aren't Dells usually pricier than retail because they include some kind of service plan?


No, service plan is an extra $300+, as is applecare (and both are definitely worth it).


People are probably going to be reading your comparison as an objective comparison rather than an opportunistic one. For example, if you are choosing between upgrading AM4 processors on your desktop versus buying a new Macbook Pro, then of course it makes sense to compare the cost in those terms. However, the price of the M1 chip itself is probably not that bad. Since you can't meaningfully buy it on its own, I guess there is no fair comparison to make here anyway.


The M1 chip is actually probably extremely expensive. The top of the line one is literally 60 BILLION transistors!

My current machine has like 10 billion (GPU + CPU)


60 billion is obviously a metric ton, but 10 billion is not that ridiculous for an SoC to clear; there are snapdragons at higher transistor counts. The AMD 5950X clears 19 billion, and it is just a CPU with no GPU or integrated RAM. I’ve got to guess the M1 transistor count is inflated a fair bit by RAM.

I suppose it’s not likely we’ll know the actual price of the M1, but it would suffice to say it’s probably a fair bit less than the full laptop.


Not sure if these numbers can be believed, but apparently a 300mm wafer at the 5nm node costs about $17000 for TSMC to process

https://www.tomshardware.com/news/tsmcs-wafer-prices-reveale...

Since the M1 is 120 mm^2 and a 300mm wafer is about 70695 mm^2, you could theoretically fit 589 M1 chips on a wafer.

Subtracting the chips lost at the edge and to other flaws, you might be able to get 500 M1s off a wafer? (I know nothing about what would be a reasonable yield, but I'm pretty sure a chip won't work if part of it is missing)

Anyways, that would be $17000/500, or $34 per M1 chip - based just on area and ignoring other processing costs.
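For anyone who wants to poke at that back-of-the-envelope math, here it is as a tiny Python sketch. The 500 usable dies is just the guess from above, not a real yield figure, and this ignores packaging, testing, and the DRAM in the package:

    import math

    WAFER_COST = 17_000        # quoted TSMC 5nm wafer price, USD
    WAFER_DIAMETER = 300       # mm
    M1_DIE_AREA = 120          # mm^2

    wafer_area = math.pi * (WAFER_DIAMETER / 2) ** 2   # ~70,686 mm^2
    gross_dies = wafer_area // M1_DIE_AREA             # ~589, ignoring edge loss
    usable_dies = 500                                  # rough guess after edge loss/defects

    print(f"gross dies per wafer: {gross_dies:.0f}")
    print(f"cost per usable die:  ${WAFER_COST / usable_dies:.0f}")  # ~$34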


The M1 Max is slightly off-square and 432mm^2. 19.1 x 22.6 seems like a decent fit.

TSMC has stated that N7 defect density was 0.09 defects/cm^2 a year ago or so. They have also since stated that N5 defect density was lower than N7.

Let's plug that in here https://caly-technologies.com/die-yield-calculator/

If we go with a defect density of 0.07, that's 94 good dies and 32 harvest dies. At 0.08, it's 90 and 36 respectively.

If we put that at 120 dies per wafer and $17,000 per wafer, that's just $141 per chip. That's probably WAAAYY less than they are paying for the i9 chips in their 2019 Macbook Pros.

For comparison, AMD's 6900 GPU die is 519mm^2 and Nvidia's 3080 is 628mm^2. Both are massively more expensive to produce.
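If anyone wants to reproduce those numbers without the web calculator, here's a rough Python sketch assuming a simple Poisson yield model (the linked calculator may use a different model, e.g. Murphy's, so the counts won't match exactly), using the 432 mm^2 die size, 0.07 defects/cm^2 and $17,000/wafer figures from above:

    import math

    WAFER_COST = 17_000       # USD per 5nm wafer (quoted above)
    WAFER_DIAMETER = 300      # mm
    DIE_AREA_MM2 = 432        # M1 Max
    DEFECT_DENSITY = 0.07     # defects per cm^2

    # gross dies per wafer, with a standard correction term for edge loss
    whole_wafer = math.pi * (WAFER_DIAMETER / 2) ** 2 / DIE_AREA_MM2
    edge_loss = math.pi * WAFER_DIAMETER / math.sqrt(2 * DIE_AREA_MM2)
    gross = whole_wafer - edge_loss                      # ~131 dies

    # Poisson yield: fraction of dies that catch zero defects
    yield_frac = math.exp(-(DIE_AREA_MM2 / 100) * DEFECT_DENSITY)
    good = gross * yield_frac                            # ~97 fully good dies

    print(f"cost per fully good die: ${WAFER_COST / good:.0f}")              # ~$175
    print(f"cost if harvest dies are usable too: ${WAFER_COST / gross:.0f}") # ~$129

Either way it lands in the same ballpark as the ~$141 figure above.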


thanks for the link to the die yield calculator, very handy!

And I agree - Apple making their own chips is sure looking good for their bottom line vs buying from Intel


Exactly. For me, I can upgrade my existing desktop by spending $560 on a Ryzen 5900X. I have all the other parts so there's no additional cost.

I also don't need portability nor a fancy display because I use all external monitors and keyboard/mouse. Fan noise is irrelevant because I wear noise cancelling headphones.

For me, it's not worth spending the $2500 on a MacBook when I can get similar/better performance with a much cheaper upgrade.


The 10-core (8 performance) M1 Max starts around $2700 in the 14" form factor.

It's hard to compare to laptops, but since we started the desktop comparison, a Ryzen 7 5800X is $450 MSRP, or about $370 current street price. Motherboards can be found for $100, but you'll more likely spend $220-250 for a good match. 32GB RAM is $150, 1TB Samsung 980 Pro is a bit under $200. Let's assume desktop RTX 3060 for the graphics (which is probably slightly more powerful than the 32-core GPU M1 Max) for MSRP $330 but street price over $700.

So we're at about $1670 for the components, before adding in case ($100), power supply ($100) and other things a laptop includes (screen, keyboard...).
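Tallying those street prices as a quick sanity check (the numbers are just the ones quoted above, illustrative only):

    parts = {
        "Ryzen 7 5800X": 370,   # street price
        "motherboard":   250,
        "32GB RAM":      150,
        "1TB 980 Pro":   200,
        "RTX 3060":      700,   # street price, well above the $330 MSRP
    }
    components = sum(parts.values())            # 1670
    with_case_and_psu = components + 100 + 100
    print(components, with_case_and_psu)        # 1670 1870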


M1 Pro has the same CPU cores as the M1 Max, just half the GPU cores (and video coding cores). So you can get the same CPU performance in the 14" for as little as $2499.


Most if not all of those components are reusable, such as case, power supply, monitors, peripherals, RAM, etc.

In my case I'm reusing everything and just upgrading the CPU for $560. The end result will be similar or better performance than the MacBook.

If you don't own any desktop computer parts at all then, yes, one can easily spend $2500 building a machine with similar performance.


IMHO, ditching Intel is a smart move for Apple but a bad one for hackers. I'm sticking with older Intel MBPs until decent emulation scores are achieved with qemu, and probably further down the line. Hypervisors (say a VBox/Proxmox combo) are so useful when you have to experiment/test with other platforms. Happy to see HDMI and the escape key back though, and it looks like we'll even get a Target Display Mode equivalent! We'll be back to my 2015 setup. :-)

My ideal M1 MBP would be this new one with 64GB RAM, but with a second cheap mid-tier i5 with 16GB dedicated, and an RTX 2070 (c'mon, Nvidia has CUDA, we need that) with 8GB, connected via internal TB3. I don't care that it would be 700g heavier and a bit bigger, or have less battery life when used to the max (idled chips should not eat a lot). In terms of bare costs that's maybe $1000 more silicon, but it's a world more of possibilities. What's wrong with a dedicated GPU offering? Not for everyone, but hackers love it and it helps the aura.

And, please, bring back a modular slot (M.2?) for (additional?) storage! (Also good for hackers but not for Apple)


Target display mode on the laptop? Where'd you see that?


I read that Monterey will have an AirPlay-based feature to do something like TDM. www.macrumors.com/2021/06/09/airplay-mac-to-mac-external-display/amp/


One big thing to consider is that this is just the first M1 Max Geekbench score, compared against the world's greatest 11900Ks. Most 11900Ks are nowhere near the levels of the top-performing one.

Once Apple starts shipping M1 Max in volume, and as TSMC yields get better, you will see "golden chips" slowly start to score even higher than this one.


Golden chips?


In every manufacturing process there are always chips that perform better than others. They all reach the "bin" that they're designed for, but the actual design is for higher than that, so that even imperfect chips can perform at the required level.

The corollary of that is that there are some chips that perform better than the design parameters would have you expect; they're easier to overclock and get higher speeds from. These are the "golden" chips.

Having said that, it's not clear to me that the M1* will do that, I don't know if Apple self-tune the chips on boot to extract the best performance, or they just slap in a standard clock-rate and anything that can meet it, does. I'd expect the latter, tbh. It's a lot easier, and it means there's less variation between devices which has lots of knock-on benefits to QA and the company in general, even if it means users can't overclock.


There are slight variations in materials, processes, etc.

End result: some chips just end up being better than others.



I'm in the situation where I really want to pre-order a 14", but I have no idea if going with the base model would be a mistake.

Would upgrading to 32GB RAM make Xcode faster? Or would it be a waste of $400?


My opinion: the jump from 16GB to 32GB probably won't make a huge difference today, especially if your workload is already running alright in 16GB. I think it'll greatly extend the useful life of the laptop, though. For example, I wouldn't want to be using an 8GB laptop today.


This is exactly why I waited for the "M1X" over the M1 Mac. Over time, apps and websites tend to bloat up and consume more memory. Especially since you can't upgrade your Mac later, buying a higher amount of RAM upfront will no doubt extend the usable life of the machine.


When I start up Kubernetes, my usage for that alone goes up to 6+GB. Swapping to SSD is terrible for its lifecycle. 32GB should have you going for quite a while unless you need/want lots of vRAM, in which case I'd go with 64GB.


I have no idea why, but my 16GB M1 Mac is working perfectly for me while my 16GB Dell XPS with Linux was not cutting it. I have no idea why doing pretty much the same thing is using less memory, but other people have said this too.


The M1 SSD is significantly faster than previous Macs'. Could be that swapping is much less noticeable now. I also have a 16GB M1 Mini and haven't noticed an issue either.


As I understand it, OSX is more efficient with memory compression, another variable to keep in mind.

I often work with large sparse matrices (which can be compressed well) and I’ve loaded up a 500 GB matrix into 64 GB of memory (although just barely, memory pressure was red in activity monitor)


RAM is something where you either have enough or you don't. When you have enough, adding more doesn't really get you anything. When you don't have enough, performance tanks.

The integrated nature of the M1 Macs gives them a much better ability to predictively swap unused content to SSD and make the most of the available RAM when things get tight while multitasking, but if you have a single task that needs 17GB and you have 16GB it's going to suffer a lot compared to if you had 32GB.

I wish Apple (and everyone else) would give up on the ultra-thin crap and make a real Pro machine that returns to upgradability at the cost of a few extra millimeters, but for now since you're stuck with what you start with I'd recommend always going at least one level above what you think you might ever need during its expected useful life.


Hard to tell, given that M1s address and page memory completely differently from how x86 does it. My 8GB M1 MacBook Air performs extremely well even when memory pressure is high...and it never seems to hit swap space.

Anecdotal example: I could have several Firefox tabs with active workers in the background (WhatsApp, Slack, etc.), a Zoom meeting with video conferencing on, with audio and video being routed via OBS (native), a Screen Sharing session over SSH going, and a Kubernetes cluster in Docker running, and that won't even make the MacBook hot. Nothing slows down. I could get maybe five hours out of the battery this way. Usually six.

Doing that on a maxed out Intel MacBook Pro will make it sound like a jet engine and reduce my battery life to two or three hours. It will also slow to a crawl.

I'm guessing buying a machine with 32GB of RAM is an investment into the future where workloads on m1 machines are high enough to actually give the memory a run for its money.


16GB of RAM seems so low in 2021. OTOH, the SSDs are so fast on these things that maybe 16 is good enough with the SSD as overflow.

I ended up shelling out the extra $400 for 32GB, but didn't feel great about it!


I've heard from people with M1 laptops that they perform surprisingly better than Intel laptops with less RAM (on the Mac). I imagine the same will hold with the Pro and Max, although it will depend a lot on what type of work you're doing.


I am one of those people. I bought an 8gb M1 Air last month for a project (while I waited for the new models to be released). It baffles me how well this little computer performs with less RAM than I've had in a computer since 2009. I'd love an explanation of how that works. Maybe the SSD is so fast that swapping isn't a big deal?


I looked into this angle recently. The SSDs in the new MBPs are roughly 3x as fast as the one in my 2015 MBP (7.4GB/s vs 2GB/s). To contextualize, the new MBPs have roughly half as much SSD bandwidth as my MBP does memory bandwidth.

Which is to say the SSD is much closer to being as fast as RAM, which would explain why it subjectively can make better use of less memory.


It’s also possible that they’re using more and better tricks from iOS to handle memory - in which case the Intel Macs might show similar results if given similar speed disks


What kind of tricks?


Neat!


I have an M1 Air with 8GB and I see that I should have gone with at least 16GB. Not sure about 32GB, but 16 is the minimum. Profile: JS, TS, Rust, sometimes Java.


On the other hand I've been running 16GB of ram for a while and I can't conceive of a reason why I would need more. 32GB seems like overkill. What would you do with all of that ram? Open more tabs?


Looking at activity monitor, I'm currently using ~19GB, doing nothing special (IntelliJ, 15 chrome tabs, Spotify, Slack). Docker Desktop was using 8GB before I killed it. And this is on an Intel Mac so it doesn't include GPU memory usage I believe, which is shared in M1 Macs.

This likely isn't a perfect metric; if I were closer to the limit I think macOS would get more aggressive about compression and offloading stuff to disk, but still.


Browsers generally allocate RAM with abandon but any one process is not necessarily memory bound. It just means that they are using that as page cache.


Bah IntelliJ is a pig. It's worth it, but it's a pig.


Compilation, graphics work, heavy-weight IDEs

And, if you're spinning on 16GB of RAM on an M1, you might be eating through the SSD powering your swap space and not know it.


indeed. From my experience, I see that my MBA uses swap sometimes, but I can't notice it. Still, I want to avoid it.


RAM is RAM. Don't fall for marketing hype and think M1 means you need less RAM. If you think you need it, you still need it on M1… and I say that as someone who owns an M1 Mac.


Yes - I wish people would stop implying that the M1 magically doubles RAM and other such nonsense. I found the same. I have a game that requires at least 32GB (Cities: Skylines - mainly because of my self-inflicted Steam Workshop asset addiction) and ended up returning the Air to wait for the next round. Decided to go all out - have a 16" 64GB M1 Max with 2TB of storage on the way.


There is something to the hype. I have no idea why but I was not doing well at all with 16GB on my Dell XPS on Linux but I'm going perfectly fine with the M1 with a few GB left over.


Consider that you always can download more ram: https://downloadmoreram.com/

/s


Figure out your hourly wage, how many hours a day you use your laptop, and whether $400 amortized over the life of the device is worth the "risk".


Same situation as you, looking at 14". Need to see what the Xcode benchmarks are like for the variety of M1 Pro processors on offer, to see if any upgrades from the base model are worthwhile.

If I was sticking with 16GB RAM I think I'd get a M1 Air instead. The smaller screen is the downside, but 300g lighter and substantially cheaper are good points.


I’m upgrading from an M1 Air (16GB) to a 14” Pro base model just for the display. Extra M1 pro performance is bonus but the M1 has already been amazing for development work past year.


Is it the resolution you've found limiting on the M1 Air? My eyesight is slowly getting worse so I'm assuming I'll have to run both at 200%. Which makes the Air a 1280x800 screen - something I last had on my 2009 13" MBP!


I’m lucky enough to still have good eyesight so it’s not limiting for me personally. Most of the time when working I’ve got it hooked up to a big 4K display though. My expectation is I’ll appreciate the new XDR display for the times I’m not on an external monitor.


M1 only allows one external monitor (you can run 2 with some docks if you turn off the laptop screen). This isn't such a problem for me as I have an ultra-wide, but lots of people with dual/triple monitor setups haven't been super thrilled.


Depends upon your workflow, but I use Xcode and Android Studio and 16GB isn't enough if I run a simulator or emulator. Definitely get the 32GB imo.


Unlikely to make Xcode any faster. Just look at your RAM usage now and then forecast a bit to know.


I wouldn't go for the base model, since it has 6 performance cores, rather than 8.


Site is under heavy load:

1783 Single-Core Score

12693 Multi-Core Score


For comparison, M1 results:

1705 Single core, 7382 multi core


For comparison, Mac Pro (Late 2019) 16-core: 1104 Single-Core Score, 15078 Multi-Core Score


thank you


Anyone know how this compares to the M1 Pro? Curious if it's worth the additional $400. The search doesn't seem to allow exact matches.


Should be the same, as the difference between the Max and the Pro lies in the GPU core count.


No, the Max has twice the memory bandwidth as well - 400GB/s vs the Pro's 200GB/s - which will have a big impact on anything memory bound.


How much and on what is the question. I'm not in the market for a laptop, so I can wait and see.

That said, if Apple releases a MacMini M1 Max, I'll probably buy it.


Yeah, I would've already ordered a MaxMini if it existed. Maybe next year.


We have to see how much it matters in practice, since the M1 Max has roughly the same single threaded score as the M1.


Memory bandwidth as well.




Do we know if the M1 Max in the MacBook can sustain this level of performance without thermal throttling?

Mobile phones often have this issue where the benchmark looks great until you run it for 15 minutes and the thermal throttling kicks in and kills the performance.

Thermal throttling is something I'd like to understand when comparing these against a desktop CPU.

For example, I just bought a 5900X and the Geekbench scores are very similar. My assumption is that the 5900X will be able to sustain full boost speeds indefinitely because of the additional cooling. I don't know if the MacBook can do the same.

If it can sustain max performance without overheating inside the laptop form factor, that'll definitely be a big win for Apple.


They increased the thermal cooling area inside the laptop by something like 60% if I recall; it's a whole new design for temperature control. But they were quick to add that most of the time the fans will still not be needed, because the chip rarely gets warm enough to require that.


Where is the graphics test? The M1 Max versus an NVIDIA 3080 or similar?


I'm waiting to be corrected by someone who knows GPU architecture better than me but as far as I can tell the synthetic benchmarks can trade blows with a 3070 or 80 (mobile), but the actual gaming performance isn't going to be as rosy.

Also recall that very few games needing that performance actually work on MacOS


"Also recall that very few games needing that performance actually work on MacOS"

But many Windows games do run under Crossover (a commercial wrapper around WINE - well worth the measly licensing fee for seamless ease of use to me) or the Windows 10 ARM beta in Parallels. I got so many games to run on my M1 MacBook Air that I ended up returning it to wait for the next round that could take more RAM. I'm very, very happy I waited for these and I fully expect it will replace my Windows gaming machine too.


Well, the Apple G13 series are excellent rasterizers. I'd expect them to do very well in games, especially with that massive bandwidth and humongous caches. The problem is that not many games run on macOS. But if you are only interested in games with solid Mac support, they will perform very well (especially if it's a native client like Baldur's Gate 3).


"very well" 45 fps top at 1080p medium settings https://www.youtube.com/watch?v=hROxRQvO-gQ

Take a 3-year-old card and you get 2x more fps.


Which other passively cooled laptop can do it? And what 3-year-old card are you comparing it to? Hopefully something with 20W or lower power consumption.

45fps at medium Full HD is not far off a 1650 Max q


Apple compared themselves to a 3080m, and the perf from an M1 is not even close to a 3-year-old card. I don't care if it takes 10W if I can't even play at 60fps on "recent-ish" games.


You may have mistaken last year's M1 (the one in the video, available passively cooled in the MacBook Air) for the new M1 Pro and M1 Max (the ones being compared to the more powerful counterparts).


That's the M1. Yes, those numbers are superb for something that's competing with integrated graphics.

M1 Pro and M1 Max will be legit gaming GPUs.


What is the most intensive game you could actually run?

I was going to say Fortnite but I'm guessing that's not the case anymore


Baldur's Gate 3, Metro Last Light, Total War…


It's really hard to compare Apple and Nvidia, but a bit easier to compare Apple to AMD. My best guess is performance will be similar to a 6700xt. Of course, none of this really matters for gaming if studios don't support the Mac.


The mobile RTX 3080 limited to 105W is comparable to about an RX 6700M, which is well behind the desktop RX 6700XT.


The gaming performance will be CPU-bottlenecked. Without proper Wine/DXVK support, they have to settle for interpreted HLE or dynamic recompilation, neither of which are very feasible on modern CPUs, much less ARM chips.


From all reports, Rosetta 2 performs remarkably well. Apparently they added special hardware support for some x86 features to improve the performance of dynamically recompiled code.


Does someone know how much VRAM the M1X has? Because I bet it's far less than a 3070 or 3080.


There's no VRAM. It's unified/shared memory. There's no M1X; there's M1 Pro and M1 Max.


I think we can safely shorten them to M1P and M1X.


Can't allow that. Either M1P and M1M or M1O and M1X.


M1 Pro & Max (and plain M1 too for what it's worth) have unified memory across both CPU and GPU. So depending on the model it'd be up to 32GB or 64GB (not accounting for the amount being used by the CPU). Put differently - far more than a 3070 or 3080.


The memory is unified, and very high bandwidth. No idea what that means in practice, guess we'll find out.


It's very high bandwidth for a CPU, but not that great for a GPU (400GB/s vs 448GB/s in a 3070 and 936GB/s in a 3090).


It's not quite apples to apples. The 3070 only has 8GB of memory available, whereas the M1 Max has up to 64 GB available. It's also unified memory in the M1 and doesn't require a copy between CPU & GPU. Some stuff will be better for the M1 Max and some stuff will be worse.


>> ...not that great for a GPU...

* almost equals the highest laptop GPU available


But it's also a premium product, so it matching a 3070m isn't really above what you'd expect for the cost (but efficiency is another story)


On the other hand, it's zero-copy between CPU and GPU.


Of course, with a 3070 and up you have to play with headphones so you don't hear the fan noise.

This is the best feature of the new Apple CPUs if you ask me: silence.

Now to wait for a decent desktop...


The memory is unified, so whatever RAM is on there (16, 32, 64GB) can be allocated as VRAM.

That's why during the presentation they bragged about how certain demanding 3d scenes can now be rendered on a notebook.


I still didn't get their example about the 100GB spaceship model - max RAM supported is 64GB...


Up to 64GB…


It's probably bad; the M1 could not get 60fps in WoW, so... When I see an Apple comparison I take it with a grain of salt, because the M1 is not able to run any modern game at decent fps.


My M1 cannot show https://www.apple.com/macbook-pro-14-and-16/ without stuttering (esp. the second animation of opening and turning the laptop).

Both Safari and Chrome.


That's because it's actually a series of JPEGs rather than a video(!!) - the same happens on my Intel Mac


Are modern games built with Metal? Pretty sure Apple deprecated OpenGL support. Macs have never been gaming computers.

The GPUs in the M1 family of Macs are for “professional” users doing Video Editing, Content creation, 3D editing, Photo editing and audio processing.


MoltenVK is Vulkan's official translation layer to Metal, and doesn't have too much overhead. Combine it with DXVK or vkd3d to translate from DirectX; DirectX before 12 is generally faster with DXVK than with Windows' native support.


"the M1 is not able to run any modern game at decent fps."

Do you have first-hand experience with this? I do. We play WoW on a MacBook Air M1 and it runs fantastic. Better than my Intel MacBook Pro from 2019.


"Running fantastic" is what Apple would advertise, but what matters is fps, utilisation and thermals when benchmarking games


Define fantastic, because a 1080 Ti from 4 years ago runs faster than the M1. My 2070 could run WoW at 144fps, and it's a 2.5-year-old card.

Yet most people can't get 60fps: https://www.youtube.com/watch?v=nhcJzKCcpMQ

Edit: thanks for the dates update


Comparing the M1 to a 1080ti is ridiculous. The 1080ti draws 250+ watts. The M1 draws 10w in the MacBook Air.

In the current market you can buy a MacBook Air (an entire laptop computer) for less than buying just a midrange GPU.


Well, Apple compared themselves to a 3080m, which is faster than a 1080 Ti.


Apple compared the M1 Max to a 3080m. 4x the GPU cores and up to 8x the memory makes a difference, and it wouldn't be at all surprising to see that their numbers are accurate.


lol, got ‘em


No one has explained what you got wrong, so in case anyone reading this is still confused, Apple compared an M1 Max to a 3080m. An M1 Max's graphics card is ~4x as fast as an M1.


The 1080 Ti was made available in March 2017, so it's 4.5 years old. Not 6.


That’s not great especially because I believe WoW works natively for the M1 and uses the Metal API.

My follow up would be what settings were you playing at?


WoW is not a graphically demanding game, even on the highest settings.


In the keynote Apple said the M1 Max should be comparable to the performance of an RTX 3080 Laptop (the footnote on the graph specified the comparison was against an MSI GE76 Raider 11UH-053), which is still quite a bit below the desktop 3080.


No way it can get anywhere near 3080


Seems kind of unfair with the NVIDIA using up to 320W of power and having nearly twice the memory bandwidth. But if it runs even half as well as a 3080, that would represent amazing performance per Watt.


Apple invited the comparison during their keynote. ;)


I believe they compared it to a ~100W mobile RTX 3080, not a desktop one. And the mobile part can go up to ~160W on gaming laptops like Legion 7 that have better cooling than the MSI one they compared to.

They have a huge advantage in performance/watt but not in raw performance. And I wonder how much of that advantage is architecture vs. manufacturing process node.


I am very confused by these claims about the M1's GPU performance. I build a WebXR app at work that runs at 120hz on the Quest 2, 90hz on my Pixel 5, and 90hz on my Windows 10 desktop with an RTX 2080 with the Samsung Odyssey+ and a 4K display at the same time. And these are just the native refresh rates; you can't run any faster with the way VR rendering is done in the browser. But on my M1 Mac Mini, I get 20hz on a single 4K screen.

My app doesn't do a lot. It displays high resolution photospheres, performs some teleconferencing, and renders spatialized audio. And like I said, it screams on Snapdragon 865-class hardware.


What sort of WebXR app? Game or productivity app?


Productivity. It's a social VR experience for teaching foreign language. It's part of our existing class structure, so there isn't really much to do if you aren't scheduled to meet with a teacher.


The MSI laptop in question lets the GPU use up to 165W. See eg. AnandTech's review of that MSI laptop, which measured 290W at the wall while gaming: https://www.anandtech.com/show/16928/the-msi-ge76-raider-rev... (IIRC, it originally shipped with a 155W limit for the GPU, but that got bumped up by a firmware update.)


The performance right now is interesting, but the performance trajectory as they evolve their GPUs over the coming generations will be even more interesting to follow.

Who knows, maybe they'll evolve solutions that will challenge desktop GPUs, as they have done with the CPUs.


A "100W mobile RTX 3080" is basically not using the GPU at all. At that power draw, you can't do anything meaningful. So I guess the takeaway is "if you starve a dedicated GPU, then the M1 Max gets within 90%!"


But does it support running x86 Docker images using hardware emulation?

This is what's holding me off buying one since I don't know whether to get 16 GB RAM (if it doesn't work) or 32 GB RAM (if it does work).


There is no hardware in the M1s for x86 emulation. Rosetta 2 does on-the-fly (JIT) translation and caches translations for x86_64 binaries on first launch.

Docker for Mac runs a VM; inside that VM (which is Linux for ARM), if you run an x86_64 Docker image it will use QEMU to emulate x86_64 and run Linux Intel ELF binaries as if they were ARM.

That means that currently using Docker on macOS if there is a native ARM version available for the docker image, it will use that, but it can and will fall back to using x86_64 docker images.

That already works as-is. There is no hardware emulation though, it is all QEMU doing the work.


That said, x86_64 images that require QEMU are much buggier than pure arm64 images. Terraform's x86_64 image, for example, will run into random network hangups that the ARM image doesn't experience. It was bad enough for me to maintain my own set of arm64 Terraform images.


"There is no hardware in the M1's for x86 emulation. Rosetta 2 does on the fly translation for JIT and caches translation for x86_864 binaries on first launch."

This is not quite correct.

First, as I understand it anyways, Rosetta 2 mostly tries to run whole-binary translation on first launch, rather than starting up in JIT and caching results. It does have a JIT engine, but it's a fallback for cases which a static translation can't handle, such as self-modifying code (which is to say, any x86 JIT engine).

Second, there is some hardware! The CPU cores are always running Arm instructions, but Apple put in a couple nonstandard extensions to alter CPU behavior away from standard Arm in ways which make Rosetta 2 far more efficient.

The first is for loads and stores. x86 has a strong memory ordering model which permits very little program-visible write reordering. Arm has a weaker memory model which allows more write reordering. If you were writing something like Rosetta 2, and naively converted every memory access to a plain Arm load or store, the reorderings permitted by Arm rules would cause nasty race conditions and the like.

So, Apple put in a mode bit which causes a CPU core to obey x86 memory ordering rules. The macOS scheduler sets this bit when it's about to pass control to a Rosetta-translated thread, and clears it when it takes control back.

The second mode bit concerns FPU behavior. IEEE 754 provides plenty of wiggle room such that compliant implementations can be different enough that they produce different results from the same sequence of mathematical operations. You can probably see where this is going, right? Turns out that Arm and x86 don't always produce bit-exact results.

Since Apple wanted to guarantee very high levels of emulation fidelity, they provided another CPU mode bit which makes the FPU behave exactly like a x86 SSE FPU.


Note that Docker Desktop for Mac has always used a VM to run a Linux instance. Same with Docker Desktop on Windows (I don't know if this has changed with WSL). The main difference on M1 Macs is the qemu emulation when a Docker image is only available as x86_64. If the image is available in AArch64 it runs native on the M1.


WSL1 doesn't support cgroups and other pieces needed to run containers, but Docker Desktop can use WSL2, which uses a lightweight VM in the background, so you are correct.


Oh yeah, I thought the whole VM thing was implied with the fact that Docker is a Linux technology...


> Same with Docker Desktop on Windows (I don't know if this has changed with WSL).

Not really. On Windows you can choose if you want to run Linux binaries in a VM or native Windows containers.


Native Windows Containers run Native Windows Binaries. You can't just launch your Linux docker containers using Native Windows Containers.


I feel Windows Containers are a whole separate thing. I personally have never seen anyone use them, but then again I have never worked on a Windows Server stack.


Thank you for the thorough explanation.

Why can’t Rosetta 2 do the x86-to-ARM JIT translation inside the “Docker for Mac” VM instead of using the (presumably slower) QEMU emulation?

Or is QEMU smart enough to do the same translation that Rosetta 2 does which, if I recall correctly, only loses about 25% performance compared to native x86?


Rosetta 2 is Apple's product that is specifically geared towards translating Darwin binaries. Inside the VM is standard Linux. Linux can't run Rosetta 2 since that is a macOS tool.

QEMU translates code on the fly; it does not do ahead-of-time translation of the whole binary and cache the result the way Rosetta 2 does. QEMU also emulates all the specific quirks of the x86_64 platform, whereas Rosetta 2 does not.

It would be neat if Apple open sourced their Rosetta 2 technology and made it more widely available, but at this time there is no option for 3rd parties to use Rosetta 2, especially from within a Linux VM.


Single-core, the regular M1 MacBook is about the same. Multi-core, the Max is a lot higher.


Looks not the same to me: https://browser.geekbench.com/v5/cpu/compare/10508124?baseli...

Did I pick the wrong regular M1 Macbook?

Edit: Hmm. https://browser.geekbench.com/v5/cpu/compare/10508059?baseli...

I guess Geekbench is a little unpredictable?

And it looks like HN is pushing their capacity...


Geekbench has always been a little unpredictable. It was most famously skewed quite heavily in Apple's favor before Geekbench 4, and even the modern scores are pretty opaque. I'd wait to see real-world performance metrics out of it.


Hmm not sure what I was looking at before, that site is not the easiest to navigate.


My second link shows what you're talking about. I'm not sure which result to trust.


It should be about the same. It is the same micro-architecture which is why they are still called M1. There are just more of the same cores.


I’m not sure how Geekbench browser tests hold up to real world usage, but note that Apples ecosystem is WAY different compared to x86.

For starters, the chip isn't just general-purpose compute; it's more of an ASIC that is optimized to run the OS abstractions available through the official APIs. (Think on-screen animation, such as rendering an image or detecting things in it. On x86, you implement the code and, in some cases, proprietary libraries such as Intel's MKL make it seemingly faster to run. On Apple Silicon, for such common use cases there are DEDICATED chip areas. Thus, an ASIC.)


Is Geekbench seen as a valid score? It seems that the PC hardware sites and YouTube channels I frequent don't mention it. The only time I seem to see it is for Mac stuff.


It is one of the few cross-platform benchmarks that can be run on both PCs and Macs, as well as iOS, Android, and Linux.


Not really. It's nice to run the same benchmark on all platforms but most hardware sites run a game demo that's easily repeatable and take an fps reading or run Cinebench which renders a scene.


You can find more results at https://browser.geekbench.com/v5/cpu/search?q=Apple+M1+Max

(it does, broadly, appear to be pretty comparable to Intel i9-11900K https://browser.geekbench.com/v5/cpu/search?q=Intel+Core+i9-... )


i9-11900K is 125W TDP, and the M1 is probably nowhere near that (M1 is 10-15W TDP)


TDP is the max design power of the platform, not the average though.

The M1 maxes out at somewhere just under 30W - Anandtech measured it at 28W after removing the platform power use. The M1 Max will probably run somewhere from 40-50W under max CPU load, more if you load the GPU as well. The M1 is a great chip, but it's not magic - it appears to be approximately 1.5-2x as efficient on a per-core basis? Which is about what we'd expect from a process shrink + decent optimisation work by a very able team. What it absolutely isn't is 10x more efficient.

(Yes, TDP has been gamed somewhat by both Intel & AMD, and might not be the absolute max power over very short timescales, but it's still intended to be the measure by which you scale your cooling system for extended max CPU usage & on that level the M1 is not a 10W chip.)

For most users, this doesn't matter - Apple has specced the cooling system in their laptops to cope with typical usage so it can run the M1 at full rate for a few minutes, which is plenty to render a web page or do some other UI work. It's even enough to run most developers' compile jobs. But if we're comparing processors, then comparing the TDP of an Intel or AMD CPU with Apple's "typical" power usage is misleading, one might even say disingenuous.


Yeah, that's one of the most mind-blowing (and awesome) things about these comparisons! So cool :D


I ordered an M1 Max MacBook, really looking forward to it.

It looks like the battery life is fantastic, and it's quiet and cool. Given that, what are some reasons Apple might not have upped the wattage for an even faster chip? Is it just that they are so far ahead of the competition, and these are things that can't easily be matched (fanless operation, long battery life), or is there some physical reason they wouldn't want to run the chip hotter?


Anyone know the score for Intel 12th gen Alder Lake?


1834 ST / 17370 MT [1]

But that's also for something which has a TDP of 125W [2], unsure if that's the right number for a mobile chip? Also no clue what M1 Max's TDP is either.

[1] https://browser.geekbench.com/v5/cpu/9510991

[2] https://cpu-benchmark.org/cpu/intel-core-i9-12900k/


M1 Max TDP will be around 30-40 watt for the CPU cluster and 50-60 watt for the GPU cluster. Note that unlike x86 CPUs (which can draw far more than their TDP for brief periods of time), this is maximal power usage of the chip.

M1 needs about 5W to reach those single-core scores; Tiger Lake needs 20W.


The Intel CPU is pretty underwhelming. It has twice the number of high performance cores (16 vs 8) but is only 37% faster for multi-core tasks.


I guess that's what happens when Apple's process is a generation ahead of Intel's.


That Intel CPU (12900K) has 8 performance cores, i.e. as many as M1 Max.


Not as far as I can see.

> The Intel Core i9-12900K operates with 16 cores and 24 CPU threads. It run at 5.30 GHz base 5.00 GHz all cores while the TDP is set at 125 W.

https://cpu-comparison.com/intel-core-i9-12900k/


12900K has 8 high performance cores (with 2 hardware threads each, look up hyperthreading), and 8 efficiency cores for a total sum of 24 hardware threads.

Previous Intel generations have only had so called high performance cores (except, maybe, for the Atom line of CPUs), even though not all of them have had hyperthreading enabled.


I can't check for you due to hug but the 12900K was clocking in at about 1900 ST IIRC

Edit: Leaving this up so I can be corrected, don't think I have the right figure.


If true I’d be interested to see what that is in points per watt.


The points per watt is probably going to be crap but equally I don't care all that much.

One thing as well is that there are always headlines complaining about power usage, but the figures are nearly always from extreme stress tests which basically fully saturate the execution units.

Geekbench is slightly different to those stresses so not sure.


Sure but I’m only asking for Points per watt to use as a baseline for testing Apple’s claims.


Peak wattage is apparently 330W on a super heavy test, not really sure how to extrapolate that to geekbench.


Wow, there's a power virus for Alder Lake client that can make it draw 330W? Reference?


Bitcoin? ;)


Looks like Geekbench just got the HN bug of death.


I'm envious. I like the hardware a lot. But at the same time, I can't help but think "This has to be the most powerful computer ever built to run a browser and a terminal". I can't run PyTorch with acceleration, I can't use Numba, or CUDA. It seems it will soon be possible to use Blender, which for my artsy side is a plus, and I know that a few of the other design tools I use work on MacOS with varying degrees of acceleration, so there is that.


These are not insurmountable, probably only a matter of time. Tensorflow has already been ported and optimised. https://blog.tensorflow.org/2020/11/accelerating-tensorflow-...


It’s obviously not for you, yet, then? People who need PyTorch or numba or cuda are a vanishing minority even in the tech scene.


This is the first bench I believe might be real. I saw 11542 multi-core spammed across news sites a couple days back, but it didn't align with a 1.7x boost to performance (which this result actually does). The single core was 1749, which also didn't make sense, as I'd imagine there's maybe a tiny bit more TDP per core in the MBP 14 and 16 than the OG 13" M1 MBP. That, and it was on macOS v12.4… which is maybe nonexistent and thus fake? This score here is believable and incredible.


The single core performance is still the same as the M1, so frankly, for a laptop that's awesome, but I wish single core performance could hit the AMD Ryzen scores.


Depends on which benchmark you ask. According to Passmark, the M1 is the best single-core CPU, no question.


All this makes me want to see Geekbench ported to other binary architectures. I'd love to see how POWER9 and POWER10 compare to Xeons and EPYCs.


How will these chips do for training neural nets? The 64 GB RAM would be awesome for larger models, so I’m willing to sacrifice some training speed.


I know this doesn't answer your question, but can I just ask (out of curiosity) why you're training ML models on a laptop?


It’s all about the RAM. 64GB would allow input of larger image sizes and/or nets with more parameters. Right now, the consumer card with the most RAM is the rtx 3090 which is only 24GB, and in my opinion overpriced and inefficient in terms of wattage (~350W). Even the ~$6000 RTX A6000 cards are only 48GB.


I don't think replacing a workstation with a Macbook because of RAM makes too much sense: If running one minibatch of your model already takes up all the memory you have, where would the rest of your training data sit? In the M1, you don't have a separate main memory.

Also, software support for accelerated training on Apple hardware is extremely limited: Out of the main frameworks, only tensorflow seems to target it, and even there, the issues you'll face won't be high on the priority list.

I know that nvidia GPUs are very expensive, but if you're really serious about training a large model, the only alternative would be paying rent to Google.


Good points. I look forward to some benchmarks. Just hoping for an alternative to Nvidia sooner than later. Dreaming apple will solve it and offer a 1.5 TB mac pro.


Is this much different than asking why someone would do _any_ work on a laptop?



Why does Apple not open up their hardware to other operating systems like Linux. They will already get our money from the hardware purchases, what more do they want.

I know Asahi Linux exists but without Apple's (driver) support it will never reach macOS on M1 level performance.

(If someone disagrees, please compare Nouveau with proprietary Nvidia drivers.)


Nobody wants the Linux desktop experience associated with their brand.


^ I agree, and also the support burden of N distros, for N -> inf.


Steam hardware?


Operative word: "want".

There was no other viable option.


Valve didn't have to include a KDE desktop on the Steam Deck, but they still did, so it sure seems like they wanted it.


Dell does


They have some of your money, but they want more of it.

Mac owners are more likely to buy iPads and iPhones and become embedded in the Apple ecosystem. That draws people into the app stores, which Apple gets a cut of, and into Apple Music and Apple TV...

If they're very successful they get you to sign up for Apple Pay and the Apple Card and get a chunk of your spend for things entirely unrelated to Apple.

If they just sell you the hardware and you can easily run Linux on it, you might not run macOS and move further into their ecosystem.


Also - even if they only wanted your hardware money, if you run Linux on Apple hardware, there's less chance your next purchase will be Apple hardware. Linux is portable, macOS isn't (legally, easily -- Hackintoshes notwithstanding, but even those are going to go away when Apple ditches Intel hardware for good).


>Hackintoshes notwithstanding, but even those are going to go away when Apple ditches Intel hardware for good

Maybe people will find a way to build ARM hackintoshes. :)


As cool as that would be, why would they? I mean, what do they have to gain from it? Maybe a few more Linux users will buy MacBooks, but not enough to impact their bottom line. Plus those users wouldn't be part of the Apple ecosystem, and at least since the iPod, Apple's goal has been bringing you into the ecosystem.


More importantly, those users would complain and some would even ask for support.

As horrible as this is going to sound, I'm thinking both Apple and Microsoft file that under: "Some users are not worth the trouble."

Probably OK for server users though? But Apple doesn't really make servers.


Speaking of fancy new chips for servers, I'm not seeing anyone complain about Amazon not opening up their graviton processors... (can't even buy one for F's sake, have to lease them via AWS)


Apple could gain some trust and goodwill from the technical user base they have been alienating for 10 years


shrug doesn't look like they need it. As long as they keep churning out machines and devices that make people's wallets reflexively open, they don't have to pander to the tiny minorities...

Not a particularly nice situation to be in, if you're not in the Apple ecosystem, but there's the world we live in, and the world we wish it to be. Only sometimes do those worlds overlap.


As though programmers don't use Macs, at least in SV they're by far the device of choice.


I bet the number of people within the technical user base for whom the lack of linux support is a deal-breaker is significantly smaller than the number of people who don't care.

So while it's a nice boost in goodwill, it's probably small enough for Apple to safely ignore.


Nouveau is a bit of a special case because Nvidia actively blocks third party drivers from fully leveraging the capabilities of their cards. While Apple isn’t officially supporting any particular third party OS on M-series hardware, they’re also not obstructing the creation of high performance third party drivers for it.


Apple's core competency is selling hardware, with a tightly integrated ecosystem to optimize user experience with that hardware.

There is no incentive to facilitate introducing that hardware into an uncontrolled foreign ecosystem. Apple does not want users looking at their computer while muttering "this sucks" when "this" is an OS+UI famous for incomplete/buggy behavior.

(I've tried going all-in for Linux several times. The amount of "oh, you've got to tweak this..." makes it unusable.)


They did open up their hardware to other operating systems: they engineered a feature into their secure boot system which allows a Mac's owner to attest (with physical presence required) that they'd like to run a kernel not signed by Apple. That's the door Asahi Linux uses. It isn't there on iOS devices, even those also based on M1.

Everything else you're asking for would require them to spend significantly more money and/or engineering resources on supporting the port, and that just doesn't seem like a thing Apple's senior management would go for.


I’d like this but you make it sound like this would not require significant effort from them.

I suspect they would say if you want to run Linux you can do it in a VM and use the already debugged MacOS device drivers.


> Why does Apple not open up their hardware to other operating systems

It's extra effort they don't want to go to. They spent a lot of engineering and support time working with MS on boot camp, handling (what are to them) special cases and back compatibility. They really needed it at the time, but no longer need it so make no effort in making it happen. And Apple never did linux support, it's simply that linux runs on most hardware that runs windows.

Among other things, here's a major reason why it's hard: Apple supports their hardware and software for quite a long time, but is happy to break back compatibility along the way. MS considers back compatibility critical, despite the cost. I respect both positions. But getting Windows running on Intel Mac hardware wasn't automatic, and required developing support for bits of code designed (by MS) for all sorts of special cases. They simply needed it, so did the work, with the cooperation of MS.


Because Apple intends to sell UX, not just some hardware.


How does that compare to top of the line AMD Ryzen latest gen (site is dead for me right now)?


Single core is similar to the 5900x but multicore is more in line with 3900x.

Quite impressive for a laptop. I'm sure the power consumption will be much higher compared to the M1, but likely nowhere near desktop parts.



This confirms that single-core performance is essentially the same as the basic M1.


Would like to see the Metal benchmark too, but it doesn't appear yet on the Geekbench Metal page. That would be interesting, to see how Geekbench scores it against many discrete GPUs.


I'm curious about the difference between the 8 cores and 10 cores for the M1 Pro. I'm definitely not going to get the Max; I'm just not the user that needs the GPUs.


Someone needs to come up with realistic Mac benchmarks. Like 'seconds it takes for Finder beachball to disappear'. At this metric my M1 sees no improvement over my MB12.


The beachball disappears when Apple's developers learn to write asynchronous code. (You can't blame the hardware for programming decisions that are fundamentally slow or stupid.)


If I’m reading this right, it’s only a tiny bit faster than the existing M1 on single core. 1780 vs 1720 or so. For a lot of tasks, single core is what really matters.


Today's the day when people actually check Apple's claims with a graph that didn't originate inside Apple. Remember the reality distortion field guys?


A few days ago there were comments on HN that the Geekbench score on M1 Max was run on an operating system not optimized for M1. Has that changed with this one?


Hmm.

M1 Max = 1783 Single-Core Score 12693 Multi-Core Score

Intel Core i9-9900X 3500 MHz (10 cores) October 20th, 2021 = Single-Core Score 1078 Multi-Core Score 11305


For someone with more knowledge: are these scores dependent on the OS? Like, would it have different scores if it ran Linux?


On macOS, if you are running some x64 apps, they are JIT'ed and emulated. On Linux, all your repo and manually compiled apps will be compiled for the native CPU, so it would be fast.


Looks like it's somewhere between my AMD Ryzen 5800X desktop (paid about $800) and a 5900X.


Wondering whether it will be more profitable to work for AMD or Apple in the next few years?


This puts it at about 3/4 the performance of a 5950X. Not bad for mobile.


I was dead set on getting a new Mac but I think I will opt for Alder Lake. It appears Apple Silicon for desktop will be pretty much constrained to mobile designs and limitations. Perhaps I will revisit if they decide to release M1 Max inside a Mac Mini but I highly doubt that will happen.


Site not loading for me :(


So can someone build a mini cloud service with these machines?


How does this compare to a Mac Pro for video editing?


Should be better. They discussed how it was even better than the Mac Pro with Afterburner.


Wondering if some of the avid videogamers, in need for powerful GPUs, will consider Macbook Pros with M1 Pro or Max as a good option.

AFAIK, some of the most popular videogames run on Mac too.


Almost none of the most popular, actually. No Fortnite, no Warzone.


Some, but not nearly enough.


I wonder what Asahi Linux devs can show us.


Where are the Gflops/W?


You guys are getting into the nitty-gritty. It's all about efficiency.


I will get one if it can run Windows and nothing comparable released by then.


Pretty underwhelming from a pure performance standpoint after all the hype from the launch.

https://browser.geekbench.com/v5/cpu/singlecore


Is that supposed to be fast guys?

Isn't that still slower than some Ryzen 5 5600X? (My PC uses this, but below is not my benchmark.)

https://browser.geekbench.com/v5/cpu/8238216

I'm not sure how fast or good that number is... But I've heard good things about the M1 and am planning to probably upgrade.

edit: Not sure what's with the downvotes, I'm genuinely curious. I'm in the Apple ecosystem with many Apple products, plus a Windows PC; basically a tech enthusiast without any hate towards one brand. Geez.


It's very fast for a mobile form factor, and still very fast for a desktop processor.

The benchmark you linked is grossly overclocked, as are many others people have linked throughout this thread. Which is fundamentally the problem with the Geekbench browser and people searching around for something to prove a point, when the numbers on there are completely unverified (and sometimes wholly fictitious), unproven, and often under ridiculous scenarios.

Use actual reviews from credible reviewers and extract the GB results from that. e.g. https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-di...

And FWIW, I don't even put any credibility on these M1 results. I'll just wait until Anandtech or someone similar has a real review with competently created, reproducible benchmarks.


> Base Frequency 4.72 GHz

> Maximum Frequency 6.03 GHz

This was seriously overclocked; I wouldn't be surprised if with liquid nitrogen. So probably a poor comparison to a laptop CPU ;)


We're also seeing slower m1max benchmarks though

https://browser.geekbench.com/v5/cpu/10476727


Ah yes that's true, didn't realise it was overclocked.

Probably good to compare to those Ryzens in laptops.


What did I miss? A Dell XPS 15 with an Intel CPU has way higher scores… is Geekbench not capable of working correctly on Apple Silicon?

https://browser.geekbench.com/v4/cpu/16376678


That's a Geekbench v4 score, which isn't comparable to a Geekbench v5 score.

You can see the same laptop on Geekbench v5 here, where it scores much, much lower than the M1: https://browser.geekbench.com/v5/cpu/10506812


For those that can't load and are curious, the M1 Max scores about 2x the Intel Core i7-11800H in the Dell laptop on both single core and multi core scores.


I don't think that was a good sample, most of the XPS 15's I see have a much higher single thread score, almost double the one linked above:

https://browser.geekbench.com/v5/cpu/10491562


It looks like the original has been updated; the differences don't seem very drastic, especially when you consider what the SKUs with the M1 Max cost.

guess it's about a 1.5-2x difference in battery life though


It also costs twice as much, so I should hope I'm getting something for the money.


I think there is a big variance in XPS 17 scores. For reference, here is a top scoring XPS 17. In this case, scores are much closer.

https://browser.geekbench.com/v5/cpu/compare/10481116?baseli...


You’re comparing Geekbench 4 and 5 scores - you can’t do that. It’s a different scale.

Here’s an example of a Geekbench 5 score: https://browser.geekbench.com/v5/cpu/10501477


Here's the actual Dell XPS 17 vs MacBook Pro: https://browser.geekbench.com/v5/cpu/compare/10502109?baseli...


That might be a lemon XPS 17. With a higher scoring XPS 17, the benchmark is much closer:

https://browser.geekbench.com/v5/cpu/compare/10481116?baseli...


Is that a legitimate GB score for that machine? Searching for scores from that machine gives me: https://browser.geekbench.com/v5/cpu/search?q=XPS+15+9510

Which shows scores much lower than the linked test (maybe I'm missing something). Likewise, unless it's coming from someone like AnandTech, I'm skeptical of any benchmarks for the M1 Pro/Max until these machines are actually released next week.


I think you’re looking at a geekbench 4 score, vs v5 in the post. It seems that this machine performs significantly better than an XPS 15.




That's a v4 score, which can't be compared to a v5 score.


Geekbench 4 and Geekbench 5 are different scales.

Edit: about 5 people beat me.



