Apple’s M1 Positioning Mocks the Entire x86 Business Model (extremetech.com)
604 points by danaris on April 23, 2021 | 918 comments



One thing which I don't think has been commented on is the marketing effort behind the M1.

It's getting a lot of prominence and its use across lots of computers means that there can be a consistent message that "M1 is great - get a new Mac and get an M1".

It also provides an opportunity to distinguish between M1 and the next generation M2 (presumably).

I've always thought Intel's marketing was a bit confused - i7 stays the same over 10+ years with only the obscure (to the general public) suffix changing from generation to generation.


> I've always thought Intel's marketing was a bit confused - i7 stays the same over 10+ years

I think that is because the performance has been the same for the past 10+ years.


I am continually astounded that the quad core Sandy Bridge Core i7 I bought in January 2011 is still completely serviceable for pretty much all tasks outside of gaming. The bezel and overall form factor are clownishly large by today's standards, and the screen is starting to look a bit faded, but as a portable but mostly stationary PC it still works great alongside my main laptop I use for work.

I knew back in 2011 I was buying something that was going to be pretty future proof, but 10 years... I would have laughed at you.


I think that’s because 99% of regular computer tasks don’t need 24 cores operating at 5 billion cycles per second.

The differences between an i7-4xxx and i7-11xxx are more about hardware acceleration, eg HEVC and SIMD. That’s why they look so much better in benchmarks that are designed with that in mind.

My main desktop is an unRAID server that hosts multiple VMs. Only yesterday I discovered that I accidentally only gave Windows four (of my 16) cores. Literally 3/4 of the processor was unused for months, and I didn’t notice any slow performance the entire time.


Speak for yourself. Me personally? I would like one core per Chrome tab.


My new M1 constantly freezes, and I’m pretty sure it’s because of chrome. Or the secret SSD thrashing design flaw that Apple is hoping nobody talks about till they quietly fix it.

Was a bit disappointing. I was like, finally! A new thing! I’m freeeeeee..... wait no it still freezes, I miss sandy bridge, etc.


I use an M1 powered 16GB MacBook Air and I've never seen any freezing, even under heavy load. I don't use Google Chrome though.

Also, the system writes to the SSD at about the same rate as my Linux machines: 4TB/year.


Heavy load in my case is a few dozen terminal tabs, six of which are running TPU training jobs, 40 chrome tabs, a few of which are playing twitch streams and YouTube videos, a half dozen instances of pycharm, one webstorm, one clion, around 35 macvim windows, and a few dozen PDFs open in Preview. I’m not exactly representative, but I think the stddev of usage patterns is high across developers.

The unfortunate part is, when it freezes, it does the beachball for a full minute before I successfully execute a “switch to activity monitor / click topmost process / force quit / confirm” sequence. Each step takes around 45 seconds to complete. I assume Apple QA never tested the extreme edges of workloads.

One poor guy on Twitter is already up to 18% of his SSD lifetime, in the first three months of having an M1. We had a heart to heart about how silly it was that the drive will die within a year, that Apple better replace it under warranty, and so on. But it’s a coin toss whether he’ll just have to eat the cost and buy a new one: https://twitter.com/_wli/status/1364934834229977090?s=21


This is heavy load for any machine unless it's configured as a bona fide workstation. Any machine, any OS (with M1 MacBook Air's HW configuration) would trash its SSD while swapping in that case, IMHO.

> I’m not exactly representative, but I think the stddev of usage patterns is high across developers.

When I'm in development mode with all cylinders firing, I have a single Eclipse window in CDT perspective, a couple of terminal tabs, Zeal (or Dash) for documentation and a couple of firefox tabs for documentation not available in Zeal.

Music is either supplied by a single YouTube tab or Spotify, and that's it.

The most unfortunate component is the CPU in my case. I try to squeeze every single bit of performance from it while running the application. The memory controller comes second due to transfer operations, but I neither max out the RAM nor cause the system to do heavy swapping.

I'm always kind of conscious about the hardware resources of the system and minimize my resource usage habitually, without killing productivity.


This guy seems to have pinpointed the problem: https://www.youtube.com/watch?v=FyMCoQmsv-I

The problem is unoptimized apps running on Rosetta2.


Thank you for finding this!

Unfortunately, and somewhat sadly, and melancholically, I must report that I've been at OS version 11.2.3 (the latest) and run almost zero Rosetta apps: KeepassX, Beyond Compare, and Optimal Layout; of those, Optimal Layout has read 100GB, which is almost nothing.

In comparison, my kernel_task process is at 43TB written, and WindowServer is at 8TB read.

So, as with cancer, there are multiple contributing factors, and hopefully Apple will ship an update to fix it... someday.


Surprise. I bought the rose... theory. Sigh.


Well on Linux my 16GB thinkpad would thrash for more than 60 seconds unless I sysrq.


Thrash is an odd word for an SSD... I mean, there are no moving parts, so I can't imagine how it's thrashing about.


Swamping an SSD with write requests and forcing it to wear-level aggressively is trashing an SSD IMHO, since it both silently grinds to fulfill the requests and loses a lot of lifespan in the process.


Apple doesn't and shouldn't care about optimizing for your use-case. At some point, the user should be responsible for stopping processes when they're not using them anymore.


> 40 chrome tabs, a few of which are playing twitch streams and YouTube videos

> around 35 macvim windows

You do you, but I'd wager you could probably trim some of that low hanging fat


I was going to say, there's no humanly possible way you can watch and absorb the content from 5+ videos at the same time, right? Like what's even the point? (I presume tabs with videos on them that aren't playing still perform around the same as tabs without videos at all, since there's no need to re-paint frames.)


And here I am, thinking that heavy load is one node instance running webpack, a sql server, vscode, Figma and a few chrome tabs.


> a few of which are playing twitch streams and YouTube videos

How many eyes do you have?


I am genuinely curious how you keep track of all that. Presumably this is all spread out over maybe a dozen workspaces?


I’m reminded of the podcast ATP and the great episode ‘The Windows of Siracusa County’. John’s window management system is both baffling and amazing. It’s a good listen.

https://atp.fm/96


Possibly multiple desktops.


If I do the same on my Intel MacBook Pro, it not only screams and burns but also slows down. Honestly, Intel chips are overrated and overpriced.


The surprising part is that you have attention to spare for all those distractions.


I understand the idea of integrating everything to reduce the electronic footprint, but some stuff like RAM and storage should be serviceable outside of replacing the entire board.


Sounds like you should report it in Feedback Assistant since you have an unusual workload.


Same here, zero freezing and amazing performance. I use Safari though and have not even installed Chrome.


I consider Chrome harmful TBH. I use Safari and Firefox (and I sync the latter with my other, non-Apple computers).


Is this the 8GB model? I had the same issue with my workload and found that it was always under high memory pressure; upgrading to a 16GB model fixed it for me.


It’s the 16GB. The memory pressure is indeed the issue. I find it hilarious that Apple accidentally inverted their LRU into an MRU and constantly swaps for no reason, yet it’s still so fast that nobody even noticed during QA. Mine has already read/written 48TB in the first three weeks; at this rate the SSD will be dead in three years.

(If you haven’t looked into SSDGate yet, be sure to check your smartmon to see whether you’re affected.)
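If you want to check your own machine, here's a rough sketch of how to pull the numbers (Python wrapping smartmontools; /dev/disk0 is a guess for the internal drive, and it assumes the drive exposes the NVMe "Data Units Written" and "Percentage Used" attributes):

    # Rough sketch: read SSD wear stats via smartmontools ("brew install smartmontools").
    # "/dev/disk0" is a guess for the internal drive; adjust for your machine.
    import re
    import subprocess

    def ssd_wear(device="/dev/disk0"):
        out = subprocess.run(["smartctl", "-a", device],
                             capture_output=True, text=True).stdout
        stats = {}
        for line in out.splitlines():
            m = re.match(r"(Percentage Used|Data Units Written):\s+([\d,]+)", line)
            if m:
                stats[m.group(1)] = int(m.group(2).replace(",", ""))
        # Per the NVMe spec, one "data unit" is 1000 * 512 bytes = 512,000 bytes.
        written_tb = stats.get("Data Units Written", 0) * 512_000 / 1e12
        return written_tb, stats.get("Percentage Used", 0)

    if __name__ == "__main__":
        tb, used = ssd_wear()
        print(f"~{tb:.1f} TB written, {used}% of rated endurance used")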


This guy seems to have pinpointed the problem:

https://www.youtube.com/watch?v=FyMCoQmsv-I

The problem is unoptimized apps running on Rosetta2.


Emulated apps don’t use more memory. It’s more likely Activity Monitor calculates their usage wrong; it doesn’t show very accurate numbers in the first place.


Any sources for that? I did a quick web search but didn’t find any conclusions to the saga.


Chrome is by far the worst-performing browser on my (Intel-based) Mac with many tabs open. It doesn’t come anywhere close to keeping up with my browsing style, wasting tons of system resources, glitching, crashing, etc.... and then oh wow the battery drain.

Safari is best, but Firefox is also pretty decent nowadays.


I have the same issue with freezing regularly. I'm using an aarch64 Emacs build and clang built from LLVM sources (or the native macOS clang, either one), compiling in an Emacs window (M-x compile), and it will stop, hang for about 10 seconds, and then reboot. Rosetta isn’t in this chain. Haven’t looked into why yet, but if I task-switch to Apple News while I’m waiting, I can see the build in Emacs halt for a while. The task switch works after a while, but sometimes it doesn’t and a watchdog kicks in, it seems.


I actually have an orthogonal issue, but I wonder if it's related. When I have an external monitor connected via a Thunderbolt dock, the device takes forever to wake from hibernation (that is to say, when I've left it asleep for several hours): 10+ seconds, which is kind of ridiculous in 2021. This doesn't happen when the monitor is not connected. I also have a similar number of things open to what you described.


Not an M1, but unplugging it and plugging it back in helps a lot.


What about Safari?


That's a perfectly valid question. Lots of people use Chrome because they've used Chrome, and that's the whole reason. Some people need Chrome because it supports some site or tool that they require. I recommend that everyone else at least try Safari and see what they think of it.


Last time I tried it (couple of months ago) there were practically no extensions and Bitwarden didn't work in incognito. And Chrome's UI/UX is simply great. And dev tools as well.


That’s a personal preference thing. Personally I think the Chrome UI/UX is horrible and slow. But I’m a Safari user, so again: personal preference.


Sure, I've never used Safari enough myself to have an opinion (since it's never been an option for me due to various things), it might be equal to Chrome or even better (I like Safari's clean UI). I'm mostly comparing it to Firefox which is quite clunky in my opinion (I've given it many chances).

And I'm actually typing this on Safari now and it looks like the issue has been fixed - Bitwarden now works in private mode. Maybe it's time to give it yet another shot. :)


Whoa.

Maybe an hour into testing Safari and I look at Activity Monitor and see that https://calendar.google.com is consuming 1 GB of RAM and https://docs.google.com 865 MB. :o

Don't know if the numbers are comparable but in Chrome's Task Manager the numbers are 224 MB and 121 MB respectively.

Edit: This might be an old (unfixed) issue https://discussions.apple.com/thread/6640430 ... This sucks because I have a specific doc and the calendar that I always keep open in pinned tabs. There are some other sites with seemingly quite high RAM usage as well. Well at least nothing is lagging so I guess I'll just chug on and see what happens.

Edit2: This behavior is just insane. I closed those two tabs and suddenly Gmail shoots up from out of the blue to 1.22 GB?! Then I reopen the two previous tabs (doc/calendar) again and Gmail stays the same while Calendar goes on a diet and sits at 187 MB (doc at 983). This is super weird. I'll just keep them open and see how it behaves overall, might just be wonky numbers?


I recently got an M1 mac myself and tried Safari out again -- I seldom used it on my 2016 Pro (preferred Firefox) but it's supposed to have stellar battery life, so why not give it a go?

Nice that they've got built-in tracking protection now, I guess, but I left Twitter running in a tab overnight and when I woke up Safari was reporting over a GB of memory use for that tab alone. Something in how they cache or handle JS for some of these long-running services, or maybe something to do with service workers? Safari just seems to consume a lot of memory. It's not a huge issue I guess, with the way the M1 never seems to have any memory pressure issues, but it's part of why I remember switching off Safari in the first place.

If anyone knows why Safari seems to consume more memory with these long-running processes, I'd be real curious to know why...


Battery life has been the reason for me trying it in the past as well.

And yeah I mean even though Activity Monitor is reporting high RAM usage I don't seem to have any memory pressure issues so it might not be that big of a deal (MBP 2015 w/ 8 GB RAM).

I'm actually considering getting an M1 myself next week. Have been waiting for the initial kinks to get sorted out and most apps ported to it and I think it's time now. :) SSD wear seems to have been resolved with the latest Big Sur versions and fewer Rosetta apps and now even ST4 beta has an M1 build.

My justifications are that I'll get an excuse to upgrade to 16 GB RAM and I'll get touch ID so I can have a more secure password and not have to type it all the time. And with the stellar performance and battery life of the M1 I can just keep using Chrome (Brave). :P


It sounds like they are sharing memory and the shared memory is getting accounted to whatever tab was opened first until it closes.


Meh.

New tabs are opening in weird places, really faaaar to the right skipping between 6-7 (unrelated) tabs. Really cumbersome to navigate.

Horrible selection of extensions still. Really missing searching for any tab with Cmd+Shift+K (I often have ~100 tabs open across multiple windows) that I get with the Tab Switcher extension.

There's a Noscript equivalent extension but it costs $3...

And I can't initiate a search with "url bar -> you[tab]<searchterm>[enter]" (for a Youtube search).

Meh. Safari just doesn't work for me. Maybe if I sat down for a week or two and wrote a bunch of extensions myself but I don't see the value in that.

On the plus side Safari does seem snappy but then again I only have ~30 tabs open (across 6 windows) currently. So I might give it a win on speed but it definitely loses in overall usability.

Will try again in a year or so.

Edit: OH SHIT this is a definite dealbreaker - apparently I can't select multiple tabs by Shift-clicking to pull them out into a new window. That completely kills my workflow of separating topics into different windows, and/or closing tabs in bulk.

Edit2: Wtf, how didn't I notice this earlier - I cannot see the URL when hovering over a link. How does anyone consider this within the realm of good UX?


Command+/ will fire up the status bar for Safari, which shows you the URLs.


Ok that's a weird default...


For your last issue, you can see it when you display the status bar.


Ok that's a weird default...


When I first had my M1, I ironically had to use Canary for stability purposes


This does not happen to me on Firefox; I am running 2 windows with a dozen tabs each.


Yeah, 2500 cores per CPU sure would be nice.


Shouldn't that be one 8GB memory bank per tab?


I’m unsure if this is sarcasm.

But most tasks (& tabs) are idling your CPU for a majority of the time. There’s no value to this at all.


This is perhaps the real genius of the M1. It's a great chip and all. But when you make it the only choice, people are finding the only choice is more than sufficient. And now Apple only has to produce one piece of silicon for their iMacs, iPad Pro, and Laptops. What a boon for logistics.


The success of the M1 is a boon for Apple, but I'm not sure that only having to produce one CPU to be used across all devices is where the optimization lies, especially considering that Apple is still selling the iPhone SE with an older CPU rather than simplifying the lineup by dropping it.


Just because it's being sold today doesn't mean it's actively manufactured still. They could have stockpiled the older processors or have spare inventory to continue meeting demand


I have a 15" retina MacBook Pro from 2012, and another from 2018. The one from 2012 is fine for everyday use. But the difference in single core performance, and also IO, is substantial. That's really important when it comes to a lot of typical development work. For example Rails starts at least 2x faster on the newer laptop.


My gaming PC is almost 10 years old and has an i7-3770. The only upgrade was a GTX 980 five or six years ago.

That thing flies for anything and everything I put it through.

One day I'll retire it as a gaming machine and convert it to a Linux server or a casual workstation and I am sure it's going to be even faster than Win10.


My primary desktop is a 3770K running Linux. That CPU is an unexpected beast, just churning through whatever you throw at it with incredible speed, even today. I process RAW photos, encode some audio and develop high performance scientific software on it (testing it lowers my heating bills, somewhat).

I want to try one of the new Ryzen CPUs, but I have no reason to upgrade.


I’ve been doing big C++ builds (think gcc, LLVM-sized projects) on my Ryzen 9 5950X and I wouldn’t have it any other way. This thing saves me so much time every day. It’s so nice not to have to do remote builds to get a quick prototype.

The best part is that it was a drop in replacement for my 2700X. I already had a great Noctua air cooler so I was all set.


Ah, Ryzen 5000 and Threadripper 3000 series are absolute beasts, no doubt about it.

I want to work with Rust more, and when I get to that point I'll likely start gathering money for a TR Pro, but until then, with me mostly working with dynamic languages, even old-ish i7 CPUs are more than adequate.


Yeah, I'd like to upgrade my gaming system (4790K and a GTX 1080 Ti), but past wanting to play at 4K and experience ray tracing there is almost no reason to do so.

Now, if the 3080 was actually available at MSRP I'd probably have upgraded, but that's at least a year away from happening.


IMO go big or go home in this case. For a gaming machine I'd aim for some of the Ryzen 5000's (likely even the 5700 is overkill) and an RTX 3090 so I can play 3440x1440 @ 144FPS (or maybe even 4K @ 120FPS).

As it is, I almost have no reason to upgrade my gaming machine right now. Playing on a 21:9 wide display (2560x1080) and never go below 70-80 FPS in any game. Usually it's 100-120 FPS.


The problem with these cards is actually being able to get one. Even if you've got the $2k to spend for a 3090, good luck with finding stock.


True. Right now I could only buy an RTX 3090 for 3100 EUR (~3740 USD) and I am not willing to pay that much.

But I am happy to wait. Upgrading my gaming machine is a very low prio.


Yeah, I have two thinkpads -- one from 2010 and one from 2012. Both of them have enough grunt to get through most any task, the heaviest of which is compiling large codebases. One of them was my daily driver for eight years; I only upgraded to a Ryzen because I wanted MOAR COARS and more RAM.

It's funny how today's computers could theoretically handle a user's workload for decades yet Apple builds them to fall apart after four years. There was a Hacker News story recently about a guy faffing around with old 68k Macs from the late 80s; he said that those old Macs were rated by Apple to last for fifteen years. And yet they come from an era of much more rapid utility decay for computer hardware, when a computer would be utterly obsolete after just a few years.


I think you might be going to the other extreme just by a tiiiiiiiny bit. :D

It's true that a lot of people are just fine with a 10-year-old i5 or an i7. Hell, a lot of store owners and front-end offices grumble at an i3 CPU because it's still too powerful for the daily activities of the staff.

But when it comes to routinely compiling C / C++ / Rust then you need all the power and hardware innovation you can get.


This would be more convincing if I wasn't reading it on a 2013 Macbook. But yes, Big Sur is the last major update available and in a few years it's going to be obsolete.


TBH a 68040 mac would be useful in 1999.


I built my own gaming PC back in Nov 2010 with an i7-970 and 24GB of RAM. While I have upgraded the GPU twice and moved to larger SSDs as prices have come down, I am still using the same machine for gaming. Not that what I am doing is very intense or graphics-heavy. I can comfortably play all the Blizzard titles, modern FPS like whatever COD is newest, etc. Maybe not on max settings anymore.

Still, I am impressed by how much I have gotten out of my machine.


Yep, and if you took a computer from 2001 and ran it in 2011, you wouldn't have been nearly as happy with the performance. There have been incremental upgrades since, but mostly just core count, which is great for handling server stuff but not nearly as noticeable as clock speed increases.


The other main improvement in the past 10 years is the power draw. You can get more than 10 hours of battery life in a machine that weighs less than 3 lbs. That would have been unthinkable a decade ago.

CPU improvements are only part of the story in making that happen, but they are certainly far ahead of where things were in the not so distant past.


10 hour battery life was achievable 15 years ago on Pentium M provided you under-volted, under-clocked, and had a high capacity battery.


How far does this goalpost go?


The point is that the technology for long life laptops has been ready for a long time. It's just that Intel marketing calls all the shots and they only wanted to sell multicore space heaters with space heater memory to sweeten the deal. With this burden on the chipset side, it was impossible to compensate with extra amp hours and maintain portable weight.


> The point is that the technology for long life laptops has been ready for a long time.

You're desperately moving the goal post to ridiculous places. It's like claiming that walking on the moon is supposed to be considered normal just because the technology has been ready for a long time.

That claim is irrelevant, isn't it? I mean, what good does a "technically it's possible" claim do if a) no, it's clearly not possible outside exceptionally rare circumstances b) the average consumer device is way behind any of those outlandish claims.


> (...) provided you under-volted, under-clocked, and had a high capacity battery.

The fact alone that you're talking about using/adding a nonstandard and exceptional battery is more than enough to throw out that claim.


If you took the small 2010 macbook air and doubled the size of its battery you'd be extremely close to 10 hours and 3 pounds, so I wouldn't go with "unthinkable".


My desktop is 2010 vintage 6-core Phenom II 1100T with 16GB RAM, the motherboard is officially maxed out, but I've been told it can take 32GB just fine. It has been upgraded with an SSD and more recently a Radeon RX560, and it really does everything I need. Newer games do need lower settings, but it'll motor through most games at 720p. I added a PCIe USB-C interface card, so I'm not even stuck at USB 2.0 anymore.

I have considered replacing it with something smaller, now that all of my files are on a NAS and I don't need room for 5-6 drives in this machine. But I just don't feel like I would be getting enough of an improvement over what I already have, even with hefty new parts.

My laptop is a 2011 vintage X220i, and that is starting to feel a little bit behind, mostly thanks to the i3 inside. But it connects to 5GHz WiFi and is fast enough for browsing and video playback, so for now it keeps on truckin'.

Actually the best upgrade would be a decently powerful 12-14" laptop paired with an eGPU dock. That setup would handily replace both machines, but eGPUs just aren't a mature configuration yet, especially combined with Linux. Maybe in a few years.


Gaming at 720p in 2021? Oh come on, you really need to try some 1440p/144hz gaming ;)


I have to admit this is right. Even my old 2012 i7 Mac mini is still a perfectly serviceable machine to this day.


Late-2012 ivy bridge i5 Mac Mini here. For a dual core machine it's really not all that bad of a daily driver experience. Makes me sad that I have to spend a crazy amount of money to get 16GB of ram and 1TB of SSD in a M1 Mini to match what I currently use. (although I understand the 1TB of SSD in the M1 is vastly superior)


Just for the fun of it, I tried to play WoW classic on my late-2012 mac mini and it ran like a toaster oven, but it worked just fine. Pretty fun. After putting the SSD in it, of course.


I used to play WoW on a G4 PowerBook! To be honest, it was decent, but the resolution was not great. I remember vividly having to look at the ground and zoom in all the way just to be able to go from the bank to the AH at IF. Hopefully whatever Intel GPU you have in your Mini is better than the Radeon 9700 in my old laptop.


Yeah smcfancontrol is a must. I keep the fan around ~3000rpm during moderate use and bump it up to 5000rpm or max it out if I'm going to game or render something. I've been tempted to look into TB2 to TB3 adapters and eGPU's but its a janky solution to extending the life of an 8+ year old machine.


11-inch 2011 MacBook Air here. It's my main machine.

I'll turn it into a media server once Apple comes out with an Mx machine with a screen of at least 15 inches.


I wish they could just give me that machine with a modern screen and an M1 chip.


Those things are beasts and have an incredible ergonomic profile. I just bought one last year, with an i7 and 1TB SSD. Love it almost as much (for other reasons obviously) as my $23,000 Mac Pro music rig purchased about the same time.


I have a near death grip on my late 2012 Mac mini i7 (upgraded to 16GB and two 1TB SSDs). It has stood the test of time and to me is an engineering marvel.

Aside from gaming it can last another 10 years.

For longevity, I keep it on its side and replaced the bottom lid with an aluminum mesh lid. This has made it near silent for 80% of my tasks (programmer centric). I also regularly clean out the fan.


Same here - I was tempted to upgrade when the newer Intel Mac models came out, but the soldered RAM / SSD bullshit kept me off them. I've heard that the 2012 Mac mini can theoretically support 32 GB RAM (as other similar i7 processor devices can) - I think I'll try that 5 years down the line for the heck of it. For now, 16 GB RAM and an SSD make it a very capable machine that can (as you rightly pointed out) last for another decade.


My 2012 MacBook Pro (the very first retina model) is 100% usable as daily driver. Only problem is it needs a new battery.


I got a 2009 Mac Pro with dual sockets at a state auction and put in a couple of inexpensive hexcore Xeons. The sum total after 2 graphics cards and 48 GB of RAM was less than $750 and it runs everything pretty well.


If it weren’t for the fact that it’s not good at being a laptop (chunky, heavy, hot), I’d be perfectly happy using my old QX9300 (Core 2 Quad) workstation laptop for day to day. With an SSD and the Bluetooth+Wifi upgraded to Intel AX200 it’s only marginally slower than my newer x86 boxes at most things, and its 15.4” 1920x1200 display is still nicer than what ships on a lot of laptops today. Wouldn’t want to use it as a dev machine, but any less demanding usage would be fine.


Two years ago, when I got the Lenovo e590 (Intel Core i5-8265U), I benchmarked running the unit tests of my project on it (though I didn't really intend on using the e590 for development). I compared against my old desktop from 2010 (1st-gen Core i5-750).

Turns out that the old desktop won by a solid 10% margin! A whole decade of new Intel generations still can't make up for the disadvantage of using a mobile CPU.

P.S. The e590 barely survived 2 years and is dead now (broken USB-C charging port).


I'm also staring at a Mid-2011 Sandy Bridge quad-core iMac that's been on my desk since May 2011.

It'll be replaced come August-ish with a 32-core Zen 3 Threadripper (with 256GB of ECC). Though in a few years I'll likely put an M3 Mac Mini alongside it to figure out when I'll feel comfortable going to ARM full time, as all my customers (and their applications) are still native x86 only... for now.


There is only one other popular task that a decade-old CPU can’t handle acceptably well: running a web browser. Outside of that, computers have been good enough ever since HiDPI/Retina screens became standard.


I have a quad core i7 from 2013 and the laptop still runs just as fast. I had to replace the hard drive and the RAM is maxed at 16GB (not good for virtualization), but it runs like new except for some odd screen recording hiccups with Microsoft's Xbox thing, but really all of the performance problems seem to just be driver bugs in relation to Dell's way of doing things.


> I bought in January 2011 is still completely serviceable for pretty

Isn't this essentially what Apple's observation was (and hence their spec choices)?


Which computer are you referring to?


> I think that is because the performance has been the same for the past 10+ years.

Ah yes, that funny exaggeration. On a serious note, the performance difference from an i7 7th generation (7700K) to an i7 10th generation (10700K) felt quite impressive.


Not a surprise. Stronger cores and double the cores and threads. And that's not a theoretical difference, they had a performance uptick of up to 100% even in games [0].

[0]: https://www.pc-kombo.com/us/benchmark/games/cpu/compare?ids%...


The 7700K and 10700K use virtually the same Skylake architecture cores with very minor tweaks between them. The only CPU performance gains Intel made in the 2015-2020 period were clock speeds, core counts and support for higher memory speeds. The 7700K and 10700K even share the same integrated graphics architecture.


I think that's correct. But it is all true at the same time. The cores are stronger because of the higher clock they can achieve and how the turbo clock works, and the higher core count does make a big difference. Higher memory speed also helps (though the difference in practice on a Z board should be just a matter of what was typically on the market). Intel UHD 630 is even usually a tiny bit faster than the Intel HD 630, even if that's also only from the clock difference - only now with the UHD 730 and 750 did they improve things there.


Most people have no need to compare CPUs between mobile/non-mobile or across generations. They just need to know if the laptop they're looking at has the high end / middle end / low end CPU. You'd be comparing a new laptop with latest(-ish) CPUs.

The i3/i5/i7 makes it plain & simple to know that.

You don't need to figure out what the "middle end" CPU is called this year. It's the i5, problem solved.


Except you go to the store and there are laptops with 3 generations of CPUs in there, and the 10nm/14nm split for tenth gen, so it's entirely possible to run into a laptop with an i7 worse than the i5 in the laptop next to it on the shelf/store webpage.


Or in my case, my new i3 laptop runs circles around my 5-year-old i7 laptop.


Right, and this describes the marketing problem. If I already have an i5, why would I need another one unless it's broken?

If I have an M1 and they're selling M5's, I'm continually reminded that my rig is out of date.


This reminds me of when the iPhone first came out. Back then, most phones from other manufacturers had an assortment of random names and model numbers that carried no hint as to which one was better or newer. Meanwhile, iPhone N+1 was obviously better than iPhone N.

Samsung quickly learned to play the same trick with their yearly Galaxy S and Galaxy Note releases. That decision probably contributed to the market share they've been enjoying.


The confusing names can be good for big box stores because they prevent comparison shopping, if every store gets their own SKU then they can offer to beat competitors' prices without ever having to do it. Although for phones, they were mostly handed out by carriers at the time and people didn't shop on their own.


You can't forget that there are multiple variants of i3, i5 and i7 every year! And presumably now the i9 but I haven't been paying attention.

6400 6500 6600 6600k random assortment of numbers and letters!

It hardly works! The consumer still has to do just as much work!


It's even worse these days. There are letters in the middle of the model name, e.g. i7-1165G7, not just at the beginning and end! That thing doesn't even sort. You have no idea where to position it without seeing the spec sheet and Passmark scores.

We consumers have it easy, though. Go find some Xeon or EPYC model numbers and try to figure out which digit means what. They make absolutely no sense.


> You have no idea where to position it without seeing the spec sheet and Passmark scores.

You can probably get pretty far by adding the 3/5/7 to the generation number and sorting by the sum.
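Something like this, roughly (Python; the SKU strings are just examples, and it assumes the usual 3-digit position number, so the 1165G7-style parts would need extra handling):

    # Toy version of that heuristic: rank by (market bucket + generation).
    import re

    def rough_rank(model: str) -> int:
        bucket, digits = re.match(r"i(\d)-(\d{4,5})", model).groups()
        gen = int(digits[:2]) if len(digits) == 5 else int(digits[:1])
        return int(bucket) + gen

    skus = ["i7-7700K", "i5-10400", "i3-9100", "i7-11700K", "i9-9900K"]
    print(sorted(skus, key=rough_rank, reverse=True))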


The decision that people think about most is when to upgrade (by definition it stretches over a longer time period than the single point purchasing decision).

That's why we have iPhone 7/8/9...13 and similarly for Samsung etc.

If your i3/i5/i7 designation gives no clues then that's not helpful.


Yes, this point is so obvious, I feel like I've got to be missing something. I am a professional programmer. I am vaguely aware that Intel chips have X-Lake based codenames. I have no idea how to compare the chip in my current laptop to a new laptop based on Intel's marketing names. To convince myself to upgrade, I just go to Geekbench, because nothing else in the marketing tells me the new chip is better. Seems like a very fundamental marketing failure that could be resolved by exposing the X-Lake names to the public in some friendly form.


Intel CPU numbering:

iX-YYZZZ[Suffix]

X = 3/5/7/9 = market positioning bucket. Higher is better

Y = generation, currently at 11. This is basically a numeric form of the "* Lake" naming, except for 10th gen where Ice Lake (10nm, better power efficiency) and Comet Lake (14nm, higher max perf) co-existed in laptops

ZZZ = position within that generation, higher is better, more detailed than the i3/i5/i7/i9 bucketing.

Suffixes:

H = "High Performance" - laptop CPUs with higher perf and higher energy usage. The fastest laptop chips in a given gen previously got branded "HQ" (originally the Q meant quad core), but otherwise HQ = H.

U = "Ultra portable" - laptop CPUs with lower perf and energy usage. Usually can boost well for short tasks, so you might not notice if the most intensive thing you do is compiling code, but they fall flat for longer workloads such as gaming.

Y = Lowest power usage - These are all garbage, to be honest. You might have one in your windows tablet or netbook.

M = "Mobile" - dead these days, as the H chips replaced them, there were a couple of gens where H and M co-existed with H > M > U.

G_N_ - Integrated graphics rating. All G3 cpus in the same gen will have the same igpu. This only exists on tenth gen/eleventh gen 10nm chips. These chips are more efficient than H/U series chips at the same perf, but don't yet reach the performance peaks of the H series due to lower clock speeds.

K - Unlocked overclockable CPUs, mostly desktop chips, but HK chips for laptops exist too.

F - No integrated GPU. Mostly desktop only.

T - Power efficient desktop CPU (thanks mehlmao)

These suffixes can be combined, e.g. HK for overclockable laptop cpus or KF for desktop cpus with overclocking and no IGPU.

---

So the rule of thumb version for a laptop buyer is H = high power, U/Gx = better battery life, within that, pick highest perf ranking number (ZZZ) within your budget. Deduct 1-2 positions per generation out of date. If you need to compare cross-gen or vs AMD, you need to go look at reviews.

The G numbers in particular are pretty meaningless to consumers. Basically all Intel iGPUs fall into a bucket that is "Good enough for windows or esports titles, not good enough for new or recent AAA games".

Other features are more of interest to DIY builders, nobody is going to sell you a laptop with no igpu and no dgpu, and if you're not interested enough to read up on this, you certainly don't care about overclocking.
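For what it's worth, here's a rough sketch of the decoding above in Python. The field names and parsing rules are just my reading of the scheme, not anything official from Intel:

    import re

    SUFFIX_NOTES = {
        "H": "high-performance laptop", "HQ": "high-performance laptop (older, quad)",
        "HK": "high-performance laptop, overclockable",
        "U": "ultraportable laptop", "Y": "lowest power", "M": "older mobile",
        "K": "overclockable desktop", "KF": "overclockable desktop, no iGPU",
        "F": "desktop, no iGPU", "T": "power-efficient desktop",
        "": "standard desktop",
    }

    def decode(model: str) -> dict:
        m = re.match(r"i([3579])-(\d{4,5})([A-Z0-9]*)$", model)
        if not m:
            raise ValueError(f"doesn't look like a Core iX part: {model}")
        bucket, digits, suffix = m.groups()
        # G-series parts (1065G7/1165G7 style) and 5-digit parts use a 2-digit generation.
        gen_len = 2 if suffix.startswith("G") or len(digits) == 5 else 1
        if suffix.startswith("G"):
            segment = f"10nm laptop, iGPU tier {suffix}"
        else:
            segment = SUFFIX_NOTES.get(suffix, f"unknown suffix {suffix}")
        return {
            "bucket": f"i{bucket}",
            "generation": int(digits[:gen_len]),
            "position_in_gen": int(digits[gen_len:]),
            "segment": segment,
        }

    for name in ("i7-7700K", "i5-10210U", "i7-1165G7", "i9-9900KF"):
        print(name, decode(name))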


Oh wow, this is great Macha! You've provided a concise solution to a real puzzle whose importance fits precisely in the space between "important enough to know" and "not important enough to research it myself". Intel should put your explanation in a PDF and distribute it widely. (Or better, review their marketing approach.)


Well...this is incredibly useful information that is remarkably obscure.


They might not do it anymore but they also previously used P for GPU-less CPUs as well, i.e. 3350P.

https://ark.intel.com/content/www/us/en/ark/products/69114/i...


There's also the T suffix for desktop processors, indicating lower power usage. I put one of those in my NAS/Plex server.


Has anyone made a website that decodes the numbers like this?


I think Intel's marketing may have fallen down because:

- At one point everyone upgraded their PC every few years anyway.

- Just having Intel on the box was sufficiently impressive to make the sale.

Now they really need to push their latest CPUs as the latest and greatest.

I like the Lake names, but I've never understood them from a marketing perspective (what have chips got to do with lakes? Were they designed near these lakes?). They feel more like code names that accidentally got leaked.


Many years ago Intel got sued for using code names that were things like "Hendrix" so they started using code names that were geographic features that could not be "owned" by others...


They are not real lakes, as far as I know. They might have been once but Skylake and Ice Lake are just made up.



Thing is, unless one is doing AAA games, surviving Android, Rust or C++ builds, or trying to see how many Docker instances it is possible to stuff into the machine, any 10-year-old computer will do, let alone a randomly chosen one at the computer store down the street.

Even ARM, if it wants to displace Intel, will be selling cloud pizza boxes or phones to replace those that get stolen/broken.


This definitely feels like an engineer's approach to marketing.


what???

>The i3/i5/i7 makes it plain & simple to know that.

there are i7s that are terrible

there are i5s that are terrible

there are decent i3s

"i3" "i5" "i7" doesn't tell you anything itself about perf


Yeah, and then they get surprised when the mobile one sucks.

That was me trying to play games on laptops with 'i7' about a dozen years ago.


I am not arguing that the product naming is perfect, only that it is not as completely absurd as others claimed.

Comparing the performance of laptop and desktop is a niche case. Most people know whether they want a laptop or a desktop and then choose among those categories.


> Comparing the performance of laptop and desktop is a niche case. Most people know whether they want a laptop or a desktop and then choose among those categories.

I think you will find many people thinking that is a very useful thing to compare.


That's already too low-level for most people. They care about the overall device model, not the components inside. This is why Apple has been so successful, everything has a simple model and version number or year attached, with generally more performance as numbers (storage, ram) get bigger.


When you can walk into a store and be presented with laptops with "Core i5" that are 1 or more generations apart, and where a newer processor isn't always better than an older generation (different architecture, cores, caches, etc.), I think having more than "it's a new laptop and it has an i5" is quite helpful. I think with the Apple chips, there is always a progression so far. If they keep that up, the marketing is on point and you'd always know that what you are buying is "one better than last year". I appreciate we can't guess what they will actually do, and the reality might be as diluted as the ia32/x64 marketplace is now.


Maybe not.

But if you're thinking "Should I upgrade my 6 year old i7 system?" or "Will I notice the difference from spending an extra $200 on my CPU?" these days it isn't easy to come up with a sure answer.


That’s intentional. It’s so that the salesperson can use messaging to guide you to whatever machine they have the best margins on.


Spoiling your customers with choice actually often has the opposite effect: they become apathetic and do nothing.


For me, the buying strategy these days is to decide on the features that are most important (screen, keyboard, RAM, battery life, etc.), find a machine that satisfies those key elements, and then pick a price point and buy. Doing anything more leads to the paralysis you mention.


The performance difference between i7s of 10 years ago versus today is more than 2x per core, and the number of cores has also more than doubled in that time period.

That's over a 4x performance difference in 10 years; it's not quite the good old days of Moore's law, but it's still an exponential improvement.
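A quick back-of-the-envelope on that rate, treating 4x over 10 years as compound growth:

    # 4x in 10 years vs. the classic "2x every ~2 years" Moore's-law pace
    total_gain, years = 4.0, 10
    annual = total_gain ** (1 / years) - 1   # compound annual growth rate
    moores_law = 2 ** (1 / 2) - 1            # doubling every 2 years
    print(f"~{annual:.1%}/year vs ~{moores_law:.1%}/year")
    # roughly 15% per year vs roughly 41% per year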


That sounds like an S-curve, not an exponential improvement. You can make any monotonically increasing function look exponential if you only have two points (f(n+x)/f(n)) and you get to pick both n and x. "It doubled over the past fifty years!"

If looking at all the points shows the time it takes to double is way longer than it used to be, you're probably at the top of the S.

Which, ok, we know we're headed to an S-curve instead. There are limitations on what we can do on silicon, right?

It's just that the CISC/Intel S-curve is way lower than the ARM/M1 S-curve. Intel got used to leaving a margin of performance sitting on the table because it'll be faster next year. They got lazy. As chip gains have slowed those margins have started to add up.


This laptop from 2009, a Core Duo, has been maxed out to 8 GB and a 1 TB SSD, and runs the latest W10 Pro version.

I still do C++, .NET and Java programming on it just fine, use Office and Adobe software, and it is Docker-free.


Not really. I had this exact case where a modern i7 laptop kicked the butt of my 10-year-old desktop speed-wise with the same number of cores. Not sure how this trend would go if you started it now, though.


The overall performance of Intel's range of processors may not have changed much, but what they called an i3, i5 or i7 sure has lately. For example, the popular i7-7700k is most similar in terms of core count, clocks etc to the current i3 chips, and I don't think there was any real equivalent to the current i5 or i7 chips back then. If I remember rightly, the generations prior to that were still seeing actual performance improvements generation-on-generation too.


Yeah the marketing effort is immense, and it's really surprising to me how many eagerly lap up the angle Apple is pushing without much reflection, even on HN. Like the good old Apple hype days.

Now to be fair, the M1 is an impressive iteration on the A* chip line.

But the Ryzen 5000 mobile chips really don't look bad at all in comparison, and a 5nm version of those would level the field.


> But the Ryzen 5000 mobile chips really don't look bad at all in comparison, and a 5nm version of those would level the field.

Not look too bad best to say.

Even if we add a one node advantage, we still have a ~15 watt chip making 20W-30W chips sweat.

If you downclock Zen 3 to match a 15W TDP, you may well get the M1 beating it by a double-digit margin.

M1 is simply a way more efficient chip than any X86 chip can be because of 40 years of architectural advantage.

- latest x86 chips have patently gigantic decoders metastasising into the backend. They are very tightly coupled to pretty much every other piece of logic.

- x86 memory model, and blockage behaviour costing huge transistor count to work around

- better SMP efficiency because of laxer memory model, and SMP logic being deeper integrated into cores, rather than being an afterthought

- generally better register utilisation, at a lower transistor count, and the upstream software ecosystem historically having a better understanding of how to work with a large register count

- prefetching on x86 cores, both from main memory and between caches, is more expensive and less efficient because it has to rely on more complex logic and do more guesswork than on ARM.

The list can go on for a few more screens.

Remember: even if the M1 is a SoC, it still manages to beat Ryzen, which barely has much besides cores, a memory controller, PCIe, and USB. If you give a one-node handicap to Ryzen, and only compare cores, you will still get the M1 having almost a 3 to 4 times advantage in performance per square mm.

It's really sad to see X86 development digging itself into a ditch with "40 years binary compatibility at any cost."

With all due respect to Lisa Su, I believe they very much understand all that, but nevertheless still signed on to the idea that the x86 market will never go anywhere.

I believe it's trivial for both AMD and Intel to easily slash x86 transistor counts, while increasing performance by double digits, if they were willing to break with x86 ISA convention on just the few most egregious anachronisms.


> If you downclock Zen 3 to match a 15W TDP, you may well get the M1 beating it by a double-digit margin.

Nah, people have done that experiment and it is competitive


Trust me, it does not have 40 years of architectural advantage, just an advantage. It's not going to take Intel "40 years to catch up", give me a break. I have an M1 mini and it's a great little desktop machine for browsing and light development but it's not 40 years ahead of Intel.


This is not the point.

40 years of architectural advantage means x86 electing to forego the results of 40 years of advances and improvements that every other sane ISA had, and instead trying to add them via increasingly complex "workarounds."

Giant transistor counts go toward allowing x86 cores not to break ISA compatibility with a 40-year-old chip, while trying to make new ISA features live alongside it.


Nope. If that were actually true, Alpha, PA-RISC, UltraSPARC, or even Itanium would've killed x86 earlier.


All of the above were beating x86 on transistor count/performance for their time. Beating x86 on that was their very point.

Their commercial demise had nothing to do with their hardware.


> great little desktop machine for browsing and light development

Oh come on. You’re making it sound like a toy computer that’s nice for little Debbie who can now watch YouTube videos a little faster.

While my MacBook Air is unplugged, I can edit 4K video at full frame rate for hours and it doesn’t even get warm, let alone hot.


That's not how downclocking chips work. Reducing the wattage reduces performance by much less.

In multicore workloads Zen 3 slaps the M1 even per watt.


Nope. About Zen 3:

The 5800U (Zen 3 mobile) and the M1 are actually very close in multithreaded perf despite one being an 8-core part and the other a 4+4 part. And with more power use for Zen 3, too.

What hurts Zen 3 there is that while the M1 maintains its full clock with all cores busy, Zen 3 has to downclock away from its max turbo clocks.


Which benchmarks are we talking about?

The 5800u is a 15w part, you might be confusing the normal part with some overclocked implementation.

But the 5800u at 15w is still faster than the M1 in multicore.

Yes, Zen has to downclock, but it has more cores.


It's far from being as definitive as you say.

Trying to find one of the best 5800U scores: https://browser.geekbench.com/v5/cpu/compare/7449909?baselin...

It's for the 5800H, which is at 45W, where Ryzen can get a tiny edge.


Ok, but that's not what I'm talking about. I'm talking about the 5800U@15W.

On Cinebench, the 5800U gets much more than the M1 in multicore and slightly more in single core. It even edges out the M1 on Geekbench, though Gb is a poor benchmark.


Cinebench is a rendering load that isn’t that much optimised. (Doesn’t even use the newer AVX levels when available, and isn’t properly optimised for Arm either).

Cinema 4D, the program that Cinebench benchmarks, normally does the renders on NVIDIA GPUs, not CPUs. As such, it’s a very poor benchmark.

And those 5800U results are probably at much higher than 15W. (Because that’s the base config; OEMs are free to ship with higher TDPs.)


The examples I took were 15W. The M1 also runs at much more than 15W in some models.

Geekbench in general is very heavily biased toward ARM, as it does not run the same code on both. Cinebench doesn't have this issue. And while, yes, nowadays a lot of rendering is done on GPUs for C4D, the class of programs is path tracers, and they are often run on CPUs for many scenes.


Geekbench 5 runs the same tests on both platforms, and always did. What you describe as heavy bias toward Arm comes with no evidence.

> The M1 also runs at much more than 15W in some models.

Nope, it's the same M1 for both, same voltage/frequency curves and top clocks. It's the whole point of having a chip that's named the same.

No modern laptop CPU really has a single headline power use number. They all try to use the headroom that they've been given.

Cinebench is very far from being the be-all and end-all of benchmarking that you make it out to be here. And there are far more optimised renderers around if you want to benchmark that. (RTX makes the matter moot nowadays anyway.)

It does not use AVX-512, or older AVX levels much for that matter, for a workload that is SIMD-friendly. For Arm, they leave lots of perf on the table too. (Cinebench uses Intel's Embree renderer, with AVX-512 disabled)

Geekbench 5 is designed to be a composite index of multiple benchmarks, to be more realistic to some extent than using just one. You can also access the scores of the subtests.


CMOS power = leakage + activity * C * V^2 * f, and since voltage has to scale up with frequency, the dynamic part grows much faster than linearly with clock speed.
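A toy illustration of what that implies for the downclocking argument, under the simplifying assumption that voltage scales linearly with frequency (so dynamic power goes roughly as f^3):

    # Toy model: dynamic CMOS power ~ activity * C * V^2 * f, with V scaled
    # down linearly alongside f (a simplification of real DVFS curves).
    for f_rel in (1.0, 0.9, 0.8, 0.7):
        print(f"{f_rel:.0%} clock -> ~{f_rel ** 3:.0%} dynamic power")
    # 100% -> 100%, 90% -> ~73%, 80% -> ~51%, 70% -> ~34%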


> It's really sad to see X86 development digging itself into a ditch with "40 years binary compatibility at any cost."

Probably some of that is because they did try to do something different once - and Itanium was an unmitigated disaster.


> ...you will still get the M1 having almost a 3 to 4 times advantage in performance per square mm.

400%? Do you have any sources to back up this somewhat extraordinary claim?


I used die shots from https://www.tomshardware.com/news/apple-m1-vs-apple-m14-floo...

And 119 mm² total die area claim

For Zen 3 I used https://cdn.wccftech.com/wp-content/uploads/2020/11/AMD-Ryze...

0.5mm² per core on M1 vs 2.2mm² on Zen 3 sans caches

If you add caches, it gets much worse

17.5mm² vs 65mm²

And I may have inadvertently included non-core parts in the M1 calculation because there's no official die floor map.
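Spelling out the ratio arithmetic with those numbers (labels are mine, and it inherits all the caveats above about what got counted as "core"):

    # Ratios from the die-area figures quoted above.
    m1_core_mm2, zen3_core_mm2 = 0.5, 2.2        # per core, sans caches
    m1_with_cache, zen3_with_cache = 17.5, 65.0  # with caches included
    print(f"core only:   {zen3_core_mm2 / m1_core_mm2:.1f}x")      # ~4.4x
    print(f"with caches: {zen3_with_cache / m1_with_cache:.1f}x")  # ~3.7x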


> Not look too bad best to say.

What does this mean?


This sounds like a Chinese proverb; there's a whole tradition of pithy groupings of four words.

"Didn't look, too bad" might be a mode of "buyer beware" -- if you are not careful at the market, you might come home with trash.


> latest x86 chips have patently gigantic decoders metastasising into the backend.

It's still a tiny percentage of the area. The caches are the biggest.


Area isn't what affects power usage; switching activity is. [1] Besides, the cache size doesn't negatively influence or constrain the design of the core logic, but highly complex instruction decoding absolutely does.

[1] Well, mostly. There is such a thing as static power draw, but I suspect that the transistors in the L3 cache are optimized to have lower static leakage than the transistors in the core logic, which are optimized to be fast.


Leakage is actually a significant part of idle power consumption, especially at smaller process sizes, so much so that some CPUs have the ability to turn off parts of their caches when idle.

I repeat my stance that x86 instruction decoding is a tiny part of a processor, and things like vector units (which are also often powered down when idle) and reordering logic take far more power.

There's a paper about this that compares the efficiency of different ISAs, and basically concludes that ARM and x86 are no different in that respect. Only MIPS is an awful outlier.

https://www.extremetech.com/extreme/188396-the-final-isa-sho...


That paper doesn't show what you think it shows. There are too many variables between these CPU implementations to support the conclusion that the ISA doesn't matter for power consumption.


I don’t think “impressive iteration” begins to describe it. I am very rarely impressed by new tech these days, and the M1 is the first piece of new hardware I’ve seen in years that seems downright magical.

My previous laptop was a 2016 MBP with an i7, and the M1 destroys it. I can build large projects like LLVM or WebKit fast and it doesn’t even get hot.

I’ve also been running Linux VMs with the new Parallels and they feel native speed.

Rosetta 2 is incredible, Intel apps are very responsive on M1.

My work computer is a high end 16” 2019 MBP with 64 GB of RAM and honestly if it weren’t for needing the extra RAM or the occasional x86 VM I’d trade it in for an M1 in a heartbeat.

I’ll agree the Apple hype is unrealistic at times, but the M1 is one that absolutely deserves it.


It's not a helpful comparison if you compare a chip from 2016 with a chip from 2020.

You would need to benchmark it against a current chip from someone else.


You're certainly correct for, say, highly-controlled benchmarks, but don't discount subjective user experience. I've heard _far_ more M1 users talk about the subjective feel compared to even the previous generation MacBook hardware, whereas it's been a long time since I've heard that about an Intel to Intel upgrade (basically since the Core -> Core Duo period), other than from people with GPU-heavy needs, and that seems like an interesting data point to me.


I do wonder how much of that is just getting a nice new laptop? If the average user was handed the equivalent laptop with an Intel CPU but told it was an M1 would they notice? Would they also think it was nice and fast?


Here is why I think your hypothesis is not true: people were buying new Macs before the M1 too, but it did not generate the same reactions. So newness is not the cause of this, or at least it is not the only cause.


Good marketing definitely impacts user experience. How much it is impacting the perception of M1? I have no idea.


Did Apple not market new hardware for a decade prior to M1? I don’t think this is sufficient to explain the difference, especially given the supporting benchmarks.


Possibly, but it feels like that should have happened to anything like this degree when people were getting shiny new Intel CPUs after the early 2010s, and it hasn't.


Absolutely. The performance difference is very noticeable even between the current Intel MBP and the M1.


What other current chip provides this much performance while using so little power? Objective measurements also show the M1 far ahead.


yeah, especially given the leaps AMD has made.


Which Ryzen laptop currently competes with the latest macbooks in terms of form factor, performance, battery life and build quality? I'm honestly asking, I've been thinking about getting a Linux laptop in a while and I would be curious to see what the playing field is like


Have you used a recent MacBook? The MacBook Pro keyboard is unique in that it is the only time in my life I have ever experienced a keyboard breaking.

I think people are a little too nostalgic when talking about Apple build quality. There's a 2011 MBP here that still works well. But recent releases... no better than the competition really.


AMEN to that. I still use my fully repairable MBP 2010 for web surfing and light tasks (with 8GB and an SSD it's fine). On the keyboard note, I bought an expensive Eurocom (Clevo) Xeon laptop that had keyboard issues after a few years. It's not only Apple that did badly with bad keyboards on expensive products (fortunately I could repair it myself for $50). Today Apple products should ship with Rossmann videos and an SMD workstation if you plan to use them for more than 3 years ;)


Are you aware that the latest generation has a new keyboard? It's not the fiasco that the previous super-slim one was.


Once bitten, twice shy.

I considered that keyboard to be designed to fail. I'm not interested in gambling that another component won't fail the same way. It's easier to use products that have not demonstrated such designed failures.

The good news however is that due to this garbage tier engineering I've discovered that desktop Linux is more stable for me than post-Catalina OSX - I'd have probably never found this out otherwise.


Designed to fail is preposterous. Apple lost a ton of money due to that problem, both directly in paying for repairs, and indirectly in damage to their stock price.

It's pretty clear what happened, and it's unfortunate but banal: someone went too far with making the key mechanism as thin as possible, and all plastic to make it easier to manufacture. Ends up the plastic wasn't strong enough for the little pins in the mechanism.

That's it. Just an ordinary design mistake. Embarrassing but no conspiracy.

FWIW they've resolved it. The keyboard on this M1 MBA is perfectly fine.


> It's pretty clear what happened, and it's unfortunate but banal: someone went too far with making the key mechanism as thin as possible

I see it as a symptom of the Jony Ive ideology, no longer tethered to reality by Jobs, poisoning Apple in the early 2010s: the iOS 7 flat UI, over-thin devices, the keyboards.

Signs point to Apple recovering from this (the new Apple TV remote).


>Apple lost a ton of money due to that problem, both directly in paying for repairs, and indirectly in damage to their stock price.

https://finance.yahoo.com/quote/AAPL

I don't see any evidence that Apple's stock price was affected. Apple can ship any garbage (they shipped a keyboard they knew was defective for three years) and people will buy it because they're locked in to Apple's services, another reason they're pushing services so hard at the expense of the rest of the company.


The new keyboard is what they should have had the last few years. A refinement of the scissor switch keyboard from the glory days of the MacBook. It's great.


> Have you used a recent MacBook? The MacBook Pro keyboard is unique in that it is the only time in my life I have ever experienced a keyboard breaking.

Given that you know the keyboard has been fixed, is there a reason why you didn’t make it clear that you were talking about older machines and not current ones?

You make it sound like the current machines have the flawed keyboard, which is not true as far as I am aware.


I'm typing on a current-gen MacBook, and the keyboard is fine. Butterfly keyboards were flawed, but I think it's also a bit disingenuous of the parent comment to imply that this is a reflection on overall Mac build quality. Those generations of Macs were bad products for several reasons, but the general build quality of those machines, and of everything in Apple's line, is quite high compared to competitors.


The keyboard has been fixed since 2019


Which competitor has a touchpad that’s equal to or better than apple?


I prefer the touchpad on my Surface Book 2 to the one on my Macbook Pro (2019). The one on my Macbook is comically huge, and I also am not in love with the feel/click noise compared to the Surface Book. YMMV.


The bigger the better as far as I'm concerned; that's half the reason I went with a 15" model.


Ideally there'd be no feel/click noise on an Apple trackpad. "Tap to click" is a revelation.


You know when you say stuff like "revelation" it really doesn't help your argument that you're not just fanboying out for apple?


Oh no, I much prefer the tactile response and sound of a physical click. But again, that's maybe just me.


I have both an XPS 13 Developer Edition from 2018 and two MacBooks from 2019. The touchpads on both 2019s are not good. They both have difficulty with inertial scrolling and the touch rejection is far too sensitive.

Coming from lifelong Linux-land, it's actually comical reflecting on how poorly these things operate day to day. The most prominent issue for me is how poorly they work with the USB Type-C dock I use for multi-monitor.


The very latest Lenovo laptops have touchpads made by Sensel. That should roll all over Apple's stuff, hardware-wise.


It's not all about the hardware; how are the Linux drivers for that thing?


No idea, it's very new so the driver side is probably flaky. That being said, Lenovo is shipping it on ThinkPads and the like, so Linux support shouldn't be too far-fetched.


I have a Dell XPS and it's a dream. Probably the closest thing to a good trackpad. That being said, Apple's keyboards, even on the MBP, are below par. They don't feel like they're meant to be put through their paces. At work they make us use Macs and iMacs and, to be honest, I damn near threw the wireless keyboard in the trash. Apple will win the screen wars till Jesus comes back, though.


I'm probably not the person to ask, I only use the touchpad as a last resort.

(Honestly I never even noticed that the apple touchpad was better than any other touchpad, but I understand that touchpad users will notice things I didn't.)


That is certainly one area where I think Apple has no peer.


You have to test the new keyboard on the M1 MBA/MBP.


Test it for what exactly? All ultrabook keyboards are garbage, but they shouldn't outright fail. "Testing" one isn't going to tell me if it will fail; it's just going to feel as crappy as the other keyboards.

If a product fails within the first 10% of expected lifespan I'm no longer interested in that product. If it fails _AND_ the manufacturer refuses to take responsibility until months of bad press and class action lawsuits then I'd have to be insane to trust them with another device.


The point is that the keyboard on a 2018 MacBook, when Jony Ive was allowed to push his nonsense, is just not the same keyboard as on current MacBooks.

It's a different keyboard. Different mechanism, very different feel, larger key travel, much more comfortable.

I have a 2018 i7 MBP and a 2020 M1. The keyboards feel incredibly different. On top of that the 2020 M1 has a real Escape key which is a huge upgrade for anyone in unix land/developer land, etc..

The 2019+ keyboard has had no controversy or reputation for failing.

My 2018 one does feel like it's on its last legs FWIW. It's a work machine... just hoping to get a refresh before it dies.


Dave2D just reviewed the new surface laptop comparing it to the new MacBook Air: https://www.youtube.com/watch?v=WKN9nvXTGHE ...and he got pretty good battery life (11 hours on his own benchmark).

edit: form factor, performance, build quality.. is also up there, I just didn't mention it, because I think the battery life is the killer feature of the M1.


It is good battery life for a PC laptop, but it is still significantly behind the M1-based Macs, while also having significantly lower single-core benchmarks compared to the M1. And the performance difference is even bigger when both laptops are on battery power.


Microsoft is almost always a generation behind with the tech in Surface products. It's ridiculous, because they look so appealing otherwise. I just feel like I'm buying last year's model on launch.


That is really a problem only if you follow tech news. In reality people use computers for years, so having 4 or 5 year old tech doesn't really matter that much.


Developers should be upgrading on 2-3 year cycles max; the performance leap across two generations is noticeable, and unless you're doing some really trivial stuff, performance matters.

Check out Chrome build time benchmarks between the 3/4xxx and 5xxx series (or any dev tool related benchmarks). 10-20% off your iteration cycle is noticeable - and that's a single generation jump.

Increasing iteration speed is a big productivity boost. Seeing professionals with 5+ year old devices is just ridiculous to me; you spend 8+ hours a day on that device, so investing $2000 in an upgrade every two years is not that much to ask.


Developers may have a reason to upgrade (many don't, as ssh isn't going to get any faster with a fancy new CPU), but they constitute the part of the population that reads tech news. This is a minuscule part of the general population.

Compiling Chrome is a nice benchmark, but in reality dev cycles are quite a bit smaller (or you might want to optimize your toolchain).

Finally, it's quite common for developers to forget the kind of hardware their users are running on. Then they produce software that only runs well on the next gen hardware. If you develop on a beast, then at least test whatever you are producing on a 3-4 year old machine to see how it will work for the clients.


Personally I'd like if developers stayed a couple of generations behind on their computers, so the rest of us aren't forced to upgrade regularly to keep software running at a decent speed.


It means the usable life is shorter, so even if it’s fine for a while, you need to think of it as a much more expensive device per useful year.


Sadly, with the current trends I think the device will more likely be rendered useless by an irreparable hardware failure before it reaches end of life due to speed.


I mean that seems inevitable.

I have a 2014 retina iMac. At some point in the next few years, I’ll probably replace it with an M series iMac, and then put Linux on it. It’s hard to imagine that it will ever really reach end of life due to speed at that point.


I haven’t really felt this way with the Surface Book. The first gen was rough, but I’ve been using an SB3 for several months now and have nothing but praise for it.


Probably a ThinkPad. But it depends completely on the type of work you want to do. Whatever you end up buying, it's a long-term investment. So whichever gets your job done and has good build quality (ideally something you can upgrade and repair yourself easily) is the one you should buy.


Thinkpads feel well made but the screens are mostly terrible and the trackpad design is very dated.

I'm not sure why Lenovo doesn't use better IPS panels; they can't be saving that much margin from them.


Not a huge sample size, but the screen on my X1 Nano looks excellent. Competitive with my 2018 11” iPad Pro’s screen visually, if a bit lower PPI.

That’s one of the highest end ThinkPads though, so I guess that’s to be expected. Have heard that getting a good panel on more mainstream models like the T-series can be tough.


I love the dated trackpad design, especially when combined with the rubber mouse.


Form factor and build quality are not something AMD can control by making a better CPU architecture.


Not entirely, for sure, but form factor is limited by thermal requirements (well, mostly; Apple and other OEMs have gotten into trouble by ignoring this in the past, like with the first i9 Macs), and that is influenced by CPU architecture.


True, but if I am considering an alternative to an M1 mac, I have to look at more than processor specs to decide if it's worth it


I had an Ideapad 5 (4800u) for a few months, it was a great little Ryzen laptop. Battery life was 7-10 hours of "real" use, CPU performance is righteous, and build quality is pretty surprising for sub-$500. Great little Linux laptop!


I don't know why you're being downvoted. A 5nm version of the AMD stuff should be reasonably close to the M1.

The main bottleneck is the production capacity, we badly need more manufacturers of high end chipsets.


So a future hypothetical AMD processor could be as good as the M1, and that shows how M1 is just marketing hype?


I think the OP is saying that it could be a mistake to focus on the processor so much when it might turn out that worthy competition is just around the corner. Apple has never really focused on the processor like this before, instead opting to focus on the whole product, fit and finish, etc. Arguably, very few rival them on that even today.


Yes competitors may have worthy opponents to the M1 "just around the corner".

But do you think that Apple stopped innovating after releasing the M1 last year? Don't you think the M2 (or M1X whatever it's called) is well into design and prototyping at this point?


The broader point, I think, is that Apple has never really competed on specs before. Not meaningfully, anyway. They’ve competed on whole product. Competing on specs opens them up to new challenges.


This is like people obsessing over the amount of RAM in iPhones.

Just because it matters on Android, which is a COMPLETELY different system, doesn't mean it matters as much on iOS.

Yes, it matters a bit when switching between multiple apps, but app suspend is handled in a completely different way on iOS than on Android.

Apples to Oranges.


While the competition might have M1 competitors (perf/power) "just around the corner", Apple has the M1 out right now. You can go buy an M1 Mac today. If I need to wait until next quarter or the second half of the year to get an M1 competitor that doesn't help me much today.


> Apple has never really focused on the processor like this before

Eh, I'm not sure that's true. They did with the original PPC, and also the G3 and G5.


PowerPC was also IBM and Motorola.

It's difficult to compare Apple Silicon with PowerPC, because the new Apple hardware is far more ambitious: it's the whole platform. I would expect that the memory and SSD controllers are to some degree licensed from other industry players. The CPU architecture is something else again. It issues eight instructions per clock, per core. The M1 core design has more in common with IBM's POWER9 server chips than with a typical x86 core. It's remarkable to see this design in a low-end consumer device. A mobile chip.


But by the time "just around the corner" happens, we will be comparing it to the M2.


This has always been the line people use against Apple, at least since the days when it was no longer credible to use the "beleaguered" line on them.

There have been just so many genuine tech news articles of the form, "Future Apple Competitor Product Beats Current, Widely Successful Apple Product".


> There have been just so many genuine tech news articles of the form, "Future Apple Competitor Product Beats Current, Widely Successful Apple Product".

It's the twin brother of all the "Currently, successful Apple product feature technically existed on [not-Apple] Product years ago (even though basically nobody used it)."

People read those articles and conclude it's all marketing hype, but they never ask themselves why it happens again and again that simply checking the box for a specific feature never actually leads to broad adoption. And if it truly is ONLY marketing hype driving those ecosystem effects, then maybe you should start thinking of marketing hype as a feature?


It's only "future" because Apple strongarms the whole market with their market position, buying up the whole 5nm TSMC capacity.


By “strongarms” do you mean “places the highest bid”?


A fist full of money is a strong arm isn't it?


> strongarms the whole market

I see what you did there. Really casting shade on Intel!


> I don't know why you're being downvoted.

I can answer why _I_ downvoted the comment. It was this part:

> [...] it's really surprising to me how many eagerly lap up the angle Apple is pushing without much reflection, even on HN. Like the good old Apple hype days.

You can make a point without implying that anybody who doesn't agree with you is incompetent (the "Apple hype" meme is really stale).


But let's say that a 5nm Ryzen comes out ahead for mobile usage. (But not necessarily by very much.)

Chips like the M1 (or ideally, something similar made available to other device manufacturers) are still really interesting. The ARM architecture can scale down better than x86/x64 can, allowing a device maker to hit low-end performance targets at lower cost than x86 chips can manage. For the low end, it also tends to provide better power consumption for a fixed performance target than the x86 offerings.

This makes M1 type chips really interesting for any manufacturer who wants to make devices that span a huge range of performance requirements, from the slow kiosk range to the decent laptop range. All being the same ISA makes sharing firmware and applications across the lineup easier than having to switch to x86/x64 above some threshold.

It also does not hurt that the M1 means the Apple devices that are so popular for development are now running on ARM, which helps ensure that more software gets ported to (and/or tested on) ARM devices. ARM's weak memory model, for example, is something that can trip up a lot of software that tries to implement low-lock algorithms but whose designers were only familiar with the x86/x64 memory model.
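To make that last point concrete, here's a minimal sketch (using the swift-atomics package; the variable names are made up) of the classic flag/payload handshake that tends to work under x86's stronger (TSO) ordering but is not guaranteed to work on ARM unless you ask for release/acquire semantics:

    // Sketch only: relaxed atomics used where release/acquire is needed.
    import Atomics
    import Dispatch
    import Foundation

    let payload = ManagedAtomic<Int>(0)
    let ready = ManagedAtomic<Bool>(false)

    DispatchQueue.global().async {             // writer
        payload.store(42, ordering: .relaxed)
        ready.store(true, ordering: .relaxed)  // should be .releasing
    }

    DispatchQueue.global().async {             // reader
        if ready.load(ordering: .relaxed) {    // should be .acquiring
            // On x86 the hardware keeps these stores and loads in order,
            // so this tends to print 42 by accident; ARM is free to
            // reorder them, so it can legally print 0.
            print(payload.load(ordering: .relaxed))
        }
    }

    Thread.sleep(forTimeInterval: 1)  // crude wait so the script doesn't exit early

Code like this can run for years on x86 without anyone noticing the missing ordering, which is exactly the kind of latent bug that surfaces once the same software starts getting exercised on ARM machines.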


> This makes M1 type chips really interesting for any manufacturer who wants to make devices that span a huge range of performance requirements, from the slow kiosk range to the decent laptop range.

This is purely theoretical. Apple will <<never>> sell its chips to other companies.

So if you were expecting Apple Silicon on its own to start a revolution, that's never going to happen.


The poster stated "M1 type chips". That is not setting an expectation that Apple will sell their chips.

Microsoft already have a Surface product with an ARM chip, which didn't get updated in this round AFAIK. It already has solid ARM performance, but it's not as good as the M1. It was let down by how poorly x86 emulation worked, but there seem to have been improvements, including x64 support. If they focused on improving the chip design then they could see gains along the lines of Apple's. It's a question of whether the investment is made rather than whether it's possible.


While a lot of the M1 advantage over Intel and AMD is process node, the advantage over the rest of the ARM and RISCV ecosystem is substantially more than that.

The team that built the M1 is a collection of top tier ASIC designers that came from several acquisitions, from PA Semi forward. The Qualcomms and Caviums and so on just aren't in the same league. Lord spare me another tarball of garbage and random kernel patches called an "SDK." They don't really have the same bench and don't pay top dollar and it shows.

It is unlikely that there will be an M1 equivalent from any of those guys. Intel? AMD? Absolutely.


No other company has the volume to economically build on TSMC's leading node.


Honest question - why not?


Why would they? Apple likes vertical integration, it's not advantageous to them to sell off a differentiating part of their product, the hyped processor, so that others can use it, often competitors.

Back in the 90s, when Jobs was gone, Apple licensed its OS and almost went bankrupt as you could simply get a cheaper clone of Apple's computers with the OS on them. They won't repeat that mistake again.


Not really, in any way other than speculation. No one can use M1 tech unless they steal it. Sure, they can build what they think are similar designs, but they don't know if they're going down the right path. The M1 isn't just another ARM design; it's done solely by Apple.


I think the issue with the comment is that it comes off as biased or distracting (regardless of the intent). The conversation is about Apple Silicon vs Intel, and then it veers off topic with discussion of Ryzen.

Also, saying that people are "lapping up" the Apple marketing suggests that it's more sizzle than steak. But it's completely undeniable that the A chips, and by extension the M1, are beasts.


> A 5nm version of the AMD stuff should be reasonably close to the M1.

Reasonably close only counts in horseshoes and hand grenades. Until a 5nm version of the AMD stuff can run MacOS, there’s no point comparing the 2.

If the AMD stuff blows past the M1 (or the M-whatever), Apple’s marketing will turn on a dime.


Apple Silicon is NOT just an ARM CPU. It's also a deep integration with macOS, and a unified memory architecture (less unnecessary copying on the bus). For example on the M1, certain very common operations for macOS apps, like freeing Objective-C pointers, which happen millions of times per second, are optimized at the hardware level. The M1 is not just a CPU.

So no, the Ryzen 5000 with 5nm would NOT level the field.


But from what I understand, there's a much smaller gap between Ryzen and the M1 than between Intel and the M1.


> For example on the M1, certain very common operations for macOS apps, like freeing Objective-C pointers, which happen millions of times per second, are optimized at the hardware level

I mean, this is _kind_ of true, but it's not unique to Objective C, the M1, or even particularly intentional. The _ARM_ memory model is more conducive to reference counting in general than the x86 one. Nothing special about Apple's chip in that respect, tho.


> and a unified memory architecture (less unnecessary copying on the bus).

So does every other Intel & AMD laptop CPU from the last 5+ years. So does every Qualcomm SoC from the last 5+ years. This has been incredibly common for years and years now.

This does a lot less than you think it does, particularly since there's very few consumer workloads that switch between CPU & GPU.


How are objective c pointers handled?


From what I understood, ARC operations are cheaper on the M1 because of weaker memory ordering constraints. But I thought this was an ARM thing, not M1 specific; I could be wrong.


That's right. M1 doesn't have specific Objective-C optimizations.


An interesting realization that it could, though.

The overwhelming majority of macOS/iOS software is written in Obj-C and Swift, which have the same underlying semantics.

That said, ARC doesn't have as great an impact on performance as some people (esp. fans of Java's GC etc.) would have you believe. Most ++/-- operations on the refcount are eliminated by the compiler.


Some fans do measure, and know the difference between marketing and reality when it comes to tracing GC versus reference counting.

https://github.com/ixy-languages/ixy-languages


Wallclock time isn't the only performance metric to care about. ObjC had a GC in the past and lost it in favor of ARC for a reason; it's a poor fit for iPhone-sized devices.


That was the marketing reason; the actual reason was that fitting a GC into a language with the C memory model was never going to work, and there were plenty of crashes and memory corruptions coming out of that.

So they made the only sensible decision, and just like Microsoft did with COM, they added support to the Objective-C compiler to automate retain/release messages in the OS X frameworks.

Naturally, being Apple, that pivot had to be sold as RC being better, and not as the technical failure of fitting a tracing GC into Objective-C while remaining compatible with the C memory model.

Apparently, moving the GC documentation, with all its caveats about possible programming issues, out of the search index also helps keep the story intact.

https://developer.apple.com/library/archive/documentation/Co...


No, the performance issues are real. GC programs have more page demand and higher peak memory use, which is specifically what the iOS memory model (jetsam and memory compression) can't deal with well. They can have some wins due to compacting.

There is of course a GC language on iOS (JavaScript) but expensive webpages get killed quickly.


And yet Microsoft was able to make very good phones with GC languages and very good performance for 200 euros; maybe Apple should talk to their runtime engineers.


Wasn't most of the system stack in C++, since it was shared with either Windows or Nokia? Either way, they went out of business, so we can't tell whether they could have kept supporting a five-year-old phone with new OS features.


The worst case performance is very bad though. I have done some playing around with pushing Swift in terms of performance, and it's very hard because of ARC.


Code in the fastpath must not generate substantial garbage whether using ARC or a gc. That's just a coding error.


It's not about garbage creation, it's about reference cost. With ARC, you pay a penalty every time you pass a reference type around, not only for memory churn. You can avoid this using value types, but this often results in a lot of unnecessary copying.

It's not unavoidable, but Swift doesn't give you good tools for avoiding ARC-based performance cliffs, so it takes a lot of profiling and effort compared to other languages.
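For what it's worth, a tiny sketch (the type and function names are made up) of where that refcount traffic shows up with a class but not with a struct:

    // Reference type: copying the reference into a collection (or a closure,
    // or another strong property) emits retain/release refcount updates
    // unless the optimizer can prove they're redundant.
    final class NodeRef { var value: Int = 0 }

    // Value type: copies are plain memory copies, no refcounting, though
    // large structs trade that for extra copying instead.
    struct NodeVal { var value: Int = 0 }

    func refHeavy() -> Int {
        let node = NodeRef()
        var refs: [NodeRef] = []
        for _ in 0..<1_000 {
            refs.append(node)      // each append retains `node`
        }
        return refs.count
    }

    func valueHeavy() -> Int {
        let node = NodeVal()
        var vals: [NodeVal] = []
        for _ in 0..<1_000 {
            vals.append(node)      // plain copy, no refcount traffic
        }
        return vals.count
    }

Profiling something like this is where the retain/release cost (and, on the other side, the copying cost of large value types) becomes visible, which matches the profiling-heavy workflow you're describing.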


It's ok to generate garbage if it goes away immediately; that's essentially a stack allocation. Performance problems come up when you have a mix of lifetimes. (that slows GCs down and causes heap fragmentation outside copying GCs)


I can't find the interview now, but I distinctly remember someone at Apple saying that Cocoa apps benefit from hardware-level optimizations of common Objective C operations in the M1.


I didn’t buy the hype until I tested an actual M1. It exceeded the hype. It feels like when I was a kid and went from a 386SX to a Pentium at 4X the speed, except the M1 was also a quarter the power consumption.

It's the first time in over a decade I've felt a revolutionary step forward in a chip, and the only time it has ever come with less power.

Process node is a factor but I find it hard to believe it’s the only factor. I know enough about the X86 legacy tax to know that is not the case.

Apple didn’t do black magic. They just took a cleaner superior CPU architecture and made a desktop class muscular chip with it. Other ARM manufacturers could equal or exceed the M1 if they wanted. X86 has been delivered a death sentence.

IMHO the greatest threat to AMD is ARM not Intel, and vice versa.


> "Apple didn’t do black magic. [...] Other ARM manufacturers could equal or exceed the M1 if they wanted."

I see lots of people saying "If only $company had an ARM device", and I don't understand it. My experience of ARM devices is slow, laggy, janky products: competing smartphones, tablets, router and network device management interfaces; year after year, the pairing is always (arm + fucking slow).

It's only Apple who have managed to get "desktop class" performance, and at this point I'm more willing to attribute that to Apple doing black magic than to ARM being inherently superior.

Microsoft have piles of money, thousands of developers, they make hardware, they've been working with ARM since the likes of the Compaq iPAQ PDA and Windows CE 21 years ago, and the Surface RT in 2012, and their ARM devices are nothing to speak of. Google, the industry gorilla of tech with the finances to match - see the famous Gruber/DaringFireball pieces about iPhones being far ahead in single-core JavaScript performance year after year after year. Tech giants like Samsung, Qualcomm and Sony have been designing and building chips for decades, with their own drivers, firmware, and customized Android builds, building their own flagship products.

Apple whomps in from "nowhere" with the fastest ARM device anyone has ever seen, and the critics response is "anyone could do that if they wanted to".

Then why don't they want to?

[Edit: Consider also that Apple spent $278M on acquiring PA Semi chip design company in 2008, and $600M on Dialog Semiconductor in 2018, and Microsoft spent $2.5BN on Minecraft. I say these companies "are tech giants with finances to match", but it's not like Apple had to lean hard into their hundreds of billions cash pile to do what they've done. Instead they had to do something that appears to be "black magic" (i.e. desire + leadership + execution + long term planning + ???)].


Microsoft has to more or less take what Qualcomm will give them. Qualcomm's virtual monopoly on high end non-Apple phones has made it complacent.

> Apple whomps in from "nowhere" with the fastest ARM device anyone has ever seen

Not really from nowhere; they've already had the fastest (non-datacenter) ARM chips anyone has ever seen, fairly consistently, in their phones, for years.


Apple has had a vision for this for at least 12-13 years. Most likely, ever since (before) the transition to Intel.

The first A-series chip they officially released and named was the A4 in March 2010. That is 11 years ago. That is the beginning of M1. And the reason is originally in this amazing Ars Technica article [1]. Apple has always wanted to fully control the entire stack.

Any competitors in the same space lack either the vision, or the products, or the money to do the same thing. In the case of Microsoft and Google, their internal politics don't allow them to do the same thing (Google couldn't care less about hardware because the Web is where Google's money is; MS cares about hardware but has traditionally relied on throwing money at partners to make them come up with solutions).

[1] Owning the stack: The legal war to control the smartphone platform, https://arstechnica.com/tech-policy/2011/09/owning-the-stack...


Indeed, which is two things against "anyone could do it": it took Apple 13+ years to achieve ("using ARM" isn't enough), and competitors haven't been taken by surprise; they've had 10+ years to mount a response and haven't.


Apple is pretty uniquely positioned to make the architecture change as smooth as it has been. If you look at competing smartphone SOCs it's not hard to believe that a technical M1 competitor could be put together, but getting it adopted would be much, much more difficult.


It seems so obvious to me that Apple's chief strategic strength here is that they can essentially force adoption that it's very strange to me that your comment is the only one I've seen on this whole post mentioning it.

One of the ways we already know that beefy ARM processors can work well (and with better power efficiency than Intel CPUs) for heavier workloads and at higher scale than mobile or embedded devices is that it's already being done in the server space. Is scaling ARM processors up for the desktop radically different? (If so, I'd love to learn a bit about how.)

The problem that remains for anyone who wants to sell PCs with something other than x86 is that desktop users want their apps, and for Windows users those apps are distributed as x86 binaries, and users aren't in control of that (especially for unmaintained or legacy software). Software publishers won't build or optimize for a new architecture if they suspect it's just going to be a flash in the pan, and this creates a chicken-and-egg problem.

But Apple can just declare that their new architecture is the only option for you if you're buying a new Mac, and a ton of their users will come with them no matter what. App publishers will just deal with it, knowing that they have to play along if they want their application to have a first-class experience on Mac. This is a huge deal, and not a position any particular PC manufacturer could hope to be in!

I'm sure Apple has done lots of technically impressive stuff that I'm not even competent to really appreciate in order to make this architecture change happen. But other vendors are more capable of technically impressive stuff than they are of forcing adoption for enough users to reach critical mass to get people who sell/distribute proprietary software to come along with them.


Who would buy it?

Apple is in a position where it can move swiftly and decisively. They have an army of software engineers who can port Apple's OS and associated Apps to a new architecture. They can force their entire ecosystem to move to a new architecture, and are emboldened by the fact that they've done it twice in the past (the first time they were more hesitant).

If Samsung wanted to pull down the latest Qualcomm ARM chip, slap a Macbook-sized heatsink on it, and go head-to-head with Wintel and Apple, they would fail. What are you gonna run? Linux? Some abortive version of Windows on ARM that doesn't support the vast number of x86 apps?

Even ChromeOS is missing barebones necessities like Photoshop, so professionals won't touch it. So the ChromeOS market is doomed (at present) to have "thin & light" and "long battery life" as its only selling points.

I think we'll see M1-style ARM devices come out of the woodwork in 2021-2022, running ARM Windows and Linux and ChromeOS, simply to attempt to copy Apple. But I don't think they'll be as successful, simply because the apps aren't there, and no one can force developers to port to the new architecture as Apple can.


Microsoft built Windows kernel to have different backends (Itanium, ARM) and different front ends (Win32, Windows Subsystem for Linux), they make Hyper-V and did a "run your old apps in a Win 7 VM" for a while, they've pushed .Net since the early 2000s where the intermediate code isn't so tied to hardware. They could plausibly have done something for compatibility like Apple does with Rosetta.

The thing about your comment is that "Who would buy it?" assumes it's basically the same and people are indifferent. My griping assumes ARM is generally worse. But if we take this alternate world seriously then the reason for ARM is superior performance, so the answer to "who would buy it" and "why would developers switch" is driven by that - like when Apple was falling behind with PowerPC - it wouldn't be Microsoft pushing it on an unwilling market, it would be a demanding market pulling it with demand.

If Samsung could push a better-than-Intel chip, would Microsoft not want a piece of it?

If Samsung Chromebooks were suddenly more powerful than an Intel i7, would developers not sit up and take notice?

My contention is they can't, because ARM isn't that different, and it's Apple's black magic that is the real difference.


> It feels like when I was a kid and went from a 386SX to a Pentium at 4X the speed

I remember the jump from the 486 chips to the Pentium being that dramatic. Heck, 486DX2-66 to P100 brought Linux kernel compile times from 45 minutes down to like 5 minutes.

Back then, I think a big part of these jumps wasn't just CPU design. I think a lot of it was that those CPU design changes were often accompanied by massive improvements to overall system architecture. So the I/O buses, memory, peripherals, all of it got a lot faster as part of the same upgrade.


Excellent analogy, I'm old enough to remember those old generational jumps and the M1 feels like that to me.

Last time it felt like there was a jump this big was around 2005-2006 when the Core 2 Duo started becoming common.


What about it is so amazing? Did you test the 8gb or the 16gb version?

Is it really a gamechanger for surfing, coding, videos, etc?

Curious because I might grab the base mac mini 8gb for music production, surfing, Java coding etc.


It's a game changer if you can wholly devote yourself to the ecosystem. I can tell you that even third party applications with M1 support (ie. Google Chrome) can be a mixed bag and still run kinda clunky.

Also, you said music production, so verify beforehand that your DAW etc. of choice will work fine. I use Reason and the visuals in the DAW itself are still laggy, but it isn't M1 native yet.

Other things like the fact that World of Warcraft runs at medium settings on a 1440P monitor feel kinda magical given I can barely feel any heat leaving the M1 Mini.

In the end I returned my M1 Mini (8gb/256gb) because my personal experience still felt half baked at this time. Once we hit the next generation a lot more software will have caught up.


I'd get 16gb. The memory options are disappointing. I am waiting for 32gb to switch my daily driver, and also giving the ecosystem a bit of time to catch up. I have an M1 Mini but that was for us to test our software on and make sure our Mac ARM builds were correct.

Apple's memory and storage options tend to always be on the disappointing side.


From what did you upgrade? Apple had no competitive product with modern processors. Where is the Zen 3 MacBook you compared it with? Did the laptop you compared with have an equivalently fast SSD?


Top-end Ryzen chips may be as fast or faster. Are they also as low power?

The impressive thing about the M1 isn't just raw performance but performance / watt. There are obviously faster high-end many-core x86 chips, but the power use difference is wider than the speed difference. The M1 destroys mid-range Intel and AMD offerings at a fraction of the power consumption.

Check out this madness:

https://browser.geekbench.com/mac-benchmarks

The MacBook Air trounces the top-end Mac Pro on single core performance. No this is not the top-end Intel you can get, nor is it as fast as AMD, but it's a MacBook Air and consumes a small fraction of the power.

Again... the power efficiency is the thing that really blows me away. 5nm accounts for only some of that.

On multi-core the same MacBook Air is just below recent Mac Pro models on total computational throughput. To beat the MacBook Air you have to go up to newer-generation Xeon chips with 8 full-speed cores. The Xeon is branded as a server chip and uses as much power as probably eight to ten M1s.

It's just insane. If Apple puts, say, 16 full-speed cores in one of these chips it's absolutely over for all other vendors. Speculation is that the 16" Pro will end up getting 8 performance cores and 4 low-power cores. I wouldn't be surprised if it ends up beating the top-end Intel Mac Pro.


Just keep in mind that scaling things up will change power efficiency. The Vega GPU architecture showed this clearly: it could be quite power efficient, but the Vega graphics cards that were released were ridiculously power hungry, seemingly because they missed their performance target and thus got clocked out of their power efficiency range.

I'm not sure that they can easily add more cores to the M1. If they can, that would also raise the TDP. And adding more cores does not increase performance linearly, even if the architecture is made for it.

Also keep in mind that comparing it to 14nm Intel processors with an architecture from a decade ago is a fair comparison when comparing M1 vs prior Apple products, but is not a fair comparison when comparing it against what x86 can do in general. You would need a modern x86 processor targeting the same watt usage to have a completely valid comparison. As discussed in the thread above.

So yeah, it's a great processor for its target. And it likely will also be a very strong processor when scaled up - it bodes well for the future of ARM. But don't extrapolate the performance too linearly, that is likely to be misleading.


Sigh. I'm typing this on an Apple "prior" product which has a two year old microarchitecture and is built on Intel 10nm (roughly the same as TSMC 7nm) - it's absolutely a modern x86 processor and a valid comparison - and it's left in the dust by the M1.


And you swapped the SSD and the swapping algorithm/memory usage model so that the IO performance is equal?


Feeling a bit sad now as I had to buy a 2020 MacBook Pro to get AVX512!


I currently use a MacBook Air with the same 10th generation Intel chip in it and it's not bad. The only difference between the Air and the Pro with this chip is that the Pro can sustain high clocks longer and has a Touch Bar.

I expected the M1 to be like a much lower power and maybe a bit faster version of this chip, and was really blown away by it being far beyond that.


I'll probably be in the queue for the M2!


Given Apple had/has competitive products with 2020/21 Intel CPUs this is patently not true.


Hm, I'm surprised by that statement. The Ryzen 5000 mobile CPUs were a lot faster than the Intel competition and I am pretty sure Apple does not have them in any laptop. And the very fast SSD is a new addition, isn't it? Did I miss something?


You implied that they only saw a difference because Apple wasn't using "modern processors". You can't exclude everything but Zen 3 from any sensible definition of "modern processor".


The parent comment talked about the implications he thought this had for the architecture. Like half of the thread, he seemingly ignores that a) there are x86 processors that are more than competitive and b) the big leap he described more likely comes from the faster storage and from comparing it with the much older and weaker processors most people buying an upgrade now had in their old laptops.

Heck, he might have used an old dual core with an HDD from all we know. Of course the new M1 models would make a big difference then. So would all other modern laptops.

But in that context of x86 and ARM Zen3 is absolutely the only sound comparison.


I know what you're trying to say is that you have to do a comparison of CPUs built on similar process nodes - and that maybe the parent has overstated the x86 vs Arm comparison.

I would have some sympathy with that - but that's not the same as discounting the parent's perspective because Apple somehow weren't using "modern processors" previously - because that simply isn't true.


Similar process nodes, or at least, when making general statements about the architecture, comparing the best modern candidates. Otherwise you just can't make statements about the architecture based on that. And Apple just does not have Zen 3 laptops (unsurprisingly), so there is that.

And I still think it's very valid to not forget how old hardware used for subjective comparisons will be. E.g. the old Macbook air sold very well all that time.


Are there any x86 processors that are competitive in performance / watt?


The processors that come closest are the Zen 3 processors. They beat the M1 in total performance, see https://www.pcworld.com/article/3604597/apple-m1-vs-ryzen-50..., but are made for a higher watt usage. There is no direct performance/watt comparison I am aware of.

It's likely the M1 is better in that category, but it does not look like a huge difference if you factor in the higher performance you get from the Ryzen 9 5980HS.


Single core Ryzen 7 5800U Geekbench scores are c20% lower than M1.

Multi-core is better but still lower, which is not too surprising as it's 8 large cores vs. a 4/4 big.LITTLE setup on the M1 - it's not clear when those cores start throttling down.

I suspect single-core performance is what drives apparent responsiveness, so I'd say it's not too surprising that the M1 gets rave reviews from users.


I've been trying to get to the bottom of this here: https://github.com/solvespace/solvespace/issues/972

With my 2400G the benchmark takes 5.5 seconds. With someones M1 it takes 1.3 seconds.

We still need to verify that was run at the same settings, but it looks like the M1 is going to be faster than the 5000 series APUs regardless, possibly by a large margin and I'm assuming the 5000s are between 1.4x and 2.0x the speed of the older 2400G.

It will still be interesting to see AMD on a 5nm process, and Zen 4, and with 8 core, but by then Apple will probably have an M2 on the 3nm process. You gotta compare what can be obtained today.


The 2400G is a relatively low end part that is four generations old. In the past AMD did not release high end desktop APUs, preferring customers to buy CPUs + GPUs. This is already announced to change with a 5700G being produced. https://www.amd.com/en/products/apu/amd-ryzen-7-5700g - On paper this is a 5800H in a desktop form factor with a higher max TDP and higher clock speeds.

The 5800H gets 2.5x on cpu benchmark's rating vs the 2400G. This is composed of a 50% single core and 300% multi-core improvement.

2400G: https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+5+2400G&i...

5800H: https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+5+2400G&i...


>> The 5800H gets 2.5x on cpu benchmark's rating vs the 2400G.

That's in large part because it's 8 cores instead of 4. The test I ran makes good use of multicore, but going from 4 to 8 won't cut the time completely in half. I suspect the M1 will still beat it by a bit.

I take back my comment on availability. We ideally need to compare things on the same process node to see which design team has done a better job ;-)


The M1 is also 8 cores and benefiting from that.


By the time there’s a 5nm version of Ryzen, there’ll be a 4nm M2.

AMD’s 5nm shift is scheduled for 2022

iPhone 13 (2021) rumors indicate a 4nm SoC


Node sizes are pretty meaningless at this point.


If it is the same foundry (and it is in this case) you can compare them, and you can assume that 3nm is better than 4nm, which is better than 5nm. Across foundries, not so much.


True, but that's what OP claims.


4 is not less than 5?


With the M1 in the latest iPad Pros, I'm not even sure if Apple will keep the Ax line, or instead upgrade their whole low-to-high-end range with just Mx series processors.


Isn’t the M1 a bit big to put in a phone? Presumably, both Mx and Ax SoCs are already very similar so the distinction could be mostly marketing. But they’ll probably want to have different names for the lower-power, fewer-cores phone SoC and the things they put in their Macs. If only to avoid the perception that Macs run on “low-performance” phone parts.


What if Apple didn't care about the size and just put in an M SoC? iPhones have been getting thicker, and it seems like their design language no longer insists on keeping devices under a certain thickness.

Option two would be to make the A15 just a die shrink and, surprise! It fits.


No reflection? The iPad Pro running the Axxx SoC was already insanely powerful. x86 is a monster with a huge legacy. Sounds to me more like the usual anti-Apple speak.

The whole article is about not having to choose between performance and power usage. With "5000", which 5000 do you mean? A 100W CPU that costs about as much as a whole Mac mini?

This is the whole point of the article. Apple made a monster move. Next up: Apple Cloud, M1-based cloud computing. That would save them a couple billion per year on their AWS contract, which expires in '22 or '23 IIRC.


I had wondered what their first Apple Silicon powered data center would be.

I thought the company would wait and deploy a later-gen chip, though the standardization of the M1 across such a wide array of products makes me think it is going to be the M1.


I was bit surprised to see M1 in the iMacs. I thought the M1 was going to be a very capable proof of concept. But now we've got it in iMacs, iPads, and Macbooks. So I wouldn't be shocked if they spread it far and wide.


Ya, part of the reason I presumed there would be at least one rev was because there have been reports of system hangs and restarts on some M1 machines.

So I had thought there may be something learned in the mass deployment that would trigger even minor design changes.

Perhaps those are software issues. Or maybe the M1 as a product name should not be taken literally to describe the SoC in the current lineup.

We know the Secure Enclave component appears to have been updated mid-production this past fall for a host of A-series chips.

Perhaps, if only light changes were needed, Apple would not see them as sufficient to warrant a new moniker.

Or perhaps they are, but the iMac and (theoretical) Apple Silicon based data centers are intended to build consumer confidence in this bold foray.

https://9to5mac.com/2021/04/12/apple-security-a12-and-s5-pro...


Same. With only 16GB of RAM, many of the tasks that upper-end iMac users perform are rather hindered.


Given the competition for fab capacity at TSMC, I'm not so sure Apple would want to use that capacity to supply servers. If things were less tight in that department I'd agree the time is right.


I can see this playing into the constraints; to that end, perhaps there is efficiency in making the most out of a single system type and assembly?

Also do you know how many machines it would take to outfit a reasonable demonstration scale facility?

Is that number so large that it constrains the ability to just go that much further?


I'm not really sure about the answers to your questions, but it vaguely seems like the type of thing they'd want to go big on or not at all, for economies of scale. I also suspect they can get a better margin on each M1-based device selling them in consumer products.


Sure, if we don't care about performance per watt.

(Hint: we do)


But there isn't a 5nm version of those... And laptop battery life is vastly different between these two.


The difference isn't that big: https://www.youtube.com/watch?v=WKN9nvXTGHE

And for 5nm, I'm quite sure AMD already has contracts for that with TSMC. So the gap should be even smaller.


His “as tested” was 11 hours for AMD and 13 for M1. His “max time use” for AMD was 14.5 and 20 for M1. That’s a 38% greater time for the M1 in a best possible outcome and 18% on “average”. Substantial differences that users would perceive noticeably.


I’m not so sure. I think it reaches a meaningful maximum. Can I do an entire days work on the laptop? Yes to both. Can it last an entire flight, even a long one? Yes to both.

It’s not that it doesn’t matter at all, it’s just that it matters a lot less once you’ve checked boxes like that. The effective experience of both would be “you only need to charge it overnight”


For me it's always been "how desperately do I need to find an AC outlet" with my laptops. I've been pretty happy with my M1 MBA in that regard because the answer has been "not desperate at all". I've done a couple weekend trips since getting it and I never even thought of getting the power adapter out of my bag the entire trip. This was despite doing a bit of work.

The same can't be said for the MBPs I've owned over the past decade. They get OK battery life but I always needed to know where the nearest outlet was. While an 11 hour Ryzen might be close to a 13 hour MBA, those extra two hours are the difference between a full weekend's worth of work and just a full day.

For anyone wanting long battery life in a laptop those extra couple hours are important.


It matters to me a lot. I was constantly out of charge with my previous laptop, it's exactly these extra hours that fixed it.


What the sibling says, but you've also got to take battery degradation into account. Even at 80% of its original battery life, an M1 Macbook Pro/Air will still give you a full day's usage.


Just to admit something odd I do in public. I've started using my lithium powered devices between the 40%-80% charge range. i.e. I plug it in when it's at or around 40% left charged and unplug it when it's 80% charged or around there. I'd read that's what Tesla drivers recommend for maximizing the life of their batteries. Don't know how much it translates to phones/computers. 40% of 20 hours is 8 hours...


In the video, he mentions the 'max time use' figure is with the machine mostly on standby. So the second value is more representative (11 vs 13).

On the other hand, I think if you can go a full day on a battery, that should be enough. You won't be using your laptop for 20 straight hours.


It has less battery life and less performance (judging by Cinebench, at least), so it is a fairly substantial difference. And it's quite possible that Apple will remain one TSMC process ahead of AMD for the next few years, even if AMD do put out 5nm chips in the near future.


The surface laptop has a Ryzen 4000 CPU, not Ryzen 5000.


I assume the person I was responding to was talking about the laptop in the video that they linked to.


Can you link a good test result that proves this?

The 7nm Ryzen 5000 mobile chips are not very power hungry either.



This doesn't contain any comparison graphs relevant to my question, and couldn't, because it's from November. The Ryzen 5000 mobile chips came out this year.


Here’s the AMD benchmark and it’s using up to 3x the power of the M1.

https://www.anandtech.com/show/16446/amd-ryzen-9-5980hs-ceza...


Isn't it 35W for the 5980hs vs. 26.8W for the M1, so up to 1.3x? https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...


That’s not the right part, the 5800U is the low power Ryzen competitor.


I don't see any reviews for laptops with that CPU. It's hard to compare against something no one can test.


Then why did you compare them?



Let's look beyond the hype:

- With M1, Apple has finally closed the loop and created a fully closed system that it has control over vs an x86-64 platform that is more open and gives us users more choice and control.

- All new devices with the M1 SoC score very poorly on repairability and upgradability vs the x86-64 platform, which still allows you to repair or upgrade RAM, storage, CPUs and other parts in an affordable manner.

- And because of the above two, all M1 devices with low RAM and storage have 3-5 years of planned obsolescence built in, along with a kill switch to totally disable the device.

Apple has found a great fit for their business model with the ARM SoCs. Even when (not if, when) AMD and Intel bring out better processors, Apple will stick with their ARM SoCs, because today's computing power has far outpaced the demands of the common software most people use. And Apple can cater to the common denominator with their own OS + CPU + 8GB + 256GB NVMe quite well for maybe another decade.

I am happy there is some good tech development and competition in computer systems. But my Intel Mac Mini will be the last mac I will buy from Apple. macOS Mojave still runs fine on it. Otherwise there's always Windows, Linux and FreeBSD. That's the real advantage of the x86-64 platform that people like me will never sacrifice for closed systems like the M1 devices.


> the x86-64 platform that still allows you to repair / upgrade RAM, memory, CPU's

Have you opened many x86 devices of a similar form factor to the current M1 devices? I haven’t seen a socketed CPU in a laptop in over a decade, and the vast majority of ultrabooks and thin-and-lights have soldered RAM now too. The latest trend has been to start soldering SSDs as well.


True, other manufacturers have been trying to ape Apple and solder RAM and other parts on the x86-64 platform too. Thankfully it hasn't yet fully spread to desktop platforms - I can still build my own PC. I pin my hopes on government regulation and right to repair to stem this - the EU has already emphasised that they are serious about their "right to repair" legislation. I also like the attempts being made by indie engineers to create more repairable phones and laptops.


The right to repair has nothing to do with socketed components. You already have the right to own soldering equipment.


Technically, yes, the "right to repair" isn't specific to socketed components. But socketed components do make it easy to repair a device and are the obvious way forward for making devices easier to repair and reducing waste (in fact the EU actually funded a project to create a mobile phone with more reusable components - http://www.puzzlephone.com/ - and some startups are also trying to do the same with laptops - https://frame.work/blog/introducing-the-framework-laptop ). As for soldering things on your own, today's modern electronic manufacturing techniques make it a very difficult task.


> kill switch

can you elaborate?


In all its iDevices, and on Macs with the T2 security chip (and now all the M1 devices), Apple offers an anti-theft feature called "Activation Lock" - https://support.apple.com/en-us/HT201365 - once activated, you can use it to erase all data on your iDevice remotely and ensure that nobody can use it without knowing your Apple ID and password. It's a useful feature.

But if a government (or Apple) wants, this can be abused - for example, if your country goes to war with the US tomorrow, the US government could ask Apple to disable all Apple devices in that country, and Apple could do it, whether you like it or not.

(The conspiracy theory part that others are hinting at is that many believe that this can be easily extended to totally cripple the device and make it unusable.)


I mean, technically there's the T chip, which could switch off everything. The parent poster seems to be posting conspiracy theories that Apple will maliciously kill machines in 3-5 years at their whim. I would guess that would not go over well with governments or customers and would be the end of Apple PC-type products, so it's not going to happen.


> Parent poster seems to be posting conspiracy theories that apple will maliciously kill machines in 3-5 years at their whim.

I didn't insinuate anything like that. I was just pointing out that it exists.


Do you want to buy my 2020 Intel MacBook Pro?


It could be clearer, but I suspect they do it deliberately, for reasons I've never understood.

I've always thought a system like this would make more sense:

    i7-4-8-10-M (4 cores, 8 threads, 10th gen - Mobile)
    i5-8-8-11-H (8 cores, 8 threads, 11th gen - High perf/Desktop)
There has to be a reason they haven't just been explicit with it.

When I shop for a CPU I care about a handful of things: manufacturer, cores, threads, generation, model.


I like to read the news and keep up to date on silicon development, but when it comes down to buying a laptop with an Intel CPU I really can't tell how good it is from the name.

I usually end up putting it into cpubenchmark to get any sort of comparable numbers.

Nowadays a chip from 2014 and a chip from 2019 can both be named i5, and the 2014 one can have a higher clock speed while being half as fast... You just can't tell, and maybe that's the goal.


Just be aware that those kinds of sites aren't always impartial, particularly in how they weight the actual metrics to arrive at a single composite score - e.g. UserBenchmark: https://ownsnap.com/userbenchmark-is-not-trusted-by-tech-ent...


Keep in mind this is all fallout from the '90s and especially the '00s, when "clock speed was king" - a notion AMD worked hard to disabuse the market of.

We've never really recovered from that.


Can you elaborate? I do remember clockspeed being the main thing you looked at and then being surprised that changed, but didn't really have any insight.


Parent is probably referring to ~2000 to ~2010.

Generally speaking, AMD had some more advanced microarchitecture features than Intel, leading to better per-clock performance.

Intel marketing struck back by emphasizing GHz, GHz, GHz. At least until Pentium 4 scaling hit a brick wall.

Back then, a chip was sold as a "Pentium 4" "2.8 GHz" (or a "Pentium II" "300 MHz"). No other codes.

By the end of that range, the multicore era had started, and both Intel and AMD had moved to new systems of processor labelling.


Pipelining was really kicking off in a big way, and suddenly IPC mattered as much as clock speed - at a time when AMD led in that regard - so AMD started numbering their CPU models based on the clock speed they figured an Intel CPU would need to reach to match them.

Of course, Intel also kept releasing new CPUs, and AMD didn't want their newer CPUs to have lower numbers than their old ones, so those numbers eventually got inflated.


As others mentioned, pipelining resulted in scenarios where other manufacturers' CPUs had a lower clock rate but 'comparable' performance.

There are two main eras of this:

During the P5/P6 days, AMD, Cyrix, and NexGen sold CPUs with 'PR' ratings, based on which Intel parts they felt their CPUs compared to.

Ironically, this first era is probably why folks got so soured on 'PR ratings'. As far as the AMD K5 and Cyrix 6x86 went, these numbers were based more on integer performance; additionally, Intel's P5 had a very novel (at the time) pipeline that some game developers were optimizing for (Quake comes to mind here). For NexGen the situation was even worse, in that some of their models completely lacked an FPU.

All those factors together made consumers a bit more wary of PR ratings for quite a long time.

Thankfully, AMD bought out NexGen, took their architecture and made it into the K6, which was very competitive with the P5 clock for clock, and PR ratings went away.

They came back in the days of the Palomino K7s, but what a lot of people might not remember is that the Athlon XP's PR ratings were technically supposed to relate to a Thunderbird core. IOW, an Athlon XP 1800 was supposed to be 'equivalent' to a Thunderbird Athlon running at 1800 MHz.

But, of course, PR ratings drifted again, as they tended to... and now we are in model number hell.


Just remembered a bit of history: the first (4 bit) Intel microprocessor would have been called the 1202 according to Intel's chip naming convention at that time until Federico Faggin pushed to get it changed to the 4004.

The 4004 was a hit and the rest is history!


If it's complicated for most customers to choose a processor/PC, then sales people often help them choose. That's good for those sales people and good for Intel.

And it's not like Intel is really competing with anybody.


> I've always thought Intel's marketing was a bit confused - i7 stays the same over 10+ years with only the obscure (to the general public) suffix changing from generation to generation.

It's been a mess for a long time. An i7 may be less performant than an i5, which may be less performant than an i3...all depending on which specific processor is chosen.


I ran a gadget shop for years. The number of times people would say 'What I have must be better because it has an i7 and that is an i5' was staggering. Then I'd have to explain that their i7 is ten years old and that the two aren't even in the same realm. Confusing marketing indeed.


The gadget shops around here turn this to their advantage, trying to sell a five-year-old i7 at premium prices and usually hiding the generation in very small print.

It makes it next to impossible to advise family members on what to look for when buying a new laptop. A MacBook with an M1 simply becomes much easier to recommend. As a bonus, you don't have to worry about an underpowered SSD, noisy fans, a weak backlight, or whatever else the OEM might have skimped on.


The i7 name was ruined by dual-core mobile parts.

At one time it meant "quad core".


I mean... the very first generation of i7s already had a dual-core mobile part; the i7-620M was a dual-core CPU.

So I'd argue you can't really "ruin" something you've just introduced. I do agree that for a while it was standard that a desktop i7 was a quad core and a mobile one was dual core.


At one point I think hyperthreading was i7 only too?


Mobile i5s always had hyperthreading from the beginning.

Edit: nope, actually, I'm wrong - Core i5-430M was a straight dual core, no hyperthreading part.


Originally, my memory says "i7 meant 4+ core & hyper threading", "i5 meant 4 cores w/o HT", "i3 meant 2 cores (with or without HT)".

But I'm sure even at the beginning there were SKUs that broke the coding due to customer demand.


I think the lesson here is that Intel's marketing for these chips has been a mess from day one.

Some of it is inevitable - they have too many aspects to fit into a reasonable product name - but letting the names become divorced from any sort of reality helps nobody.


I'm writing this on a dual core i3 with hyper threading.


Yes, this was so annoying... We optimized software for multi-core, but the laptop builders label their products as "10th gen i7", which says little about their performance. GPU labels are easier to decode by comparison.


There's a value in that too - people know immediately that i7 is powerful, and everyone [that cares] already knows that processors are renewed frequently. Though yeah, perhaps something like 7i1, 7i2 would've been better. Apple solved this nicely (given the naming is confirmed) with M1X.


That's fine if you're buying new. Is there an i5 that beats an older i7 in performance? When buying a used laptop or desktop, how much extra research do I need to do to figure out which processor I'm actually getting? There's been 13 years of i7's.


>>Is there an i5 that beats an older i7 in performance?

Of course. I got a new laptop last year with a Core i5-10300H, and it easily beats my desktop i7-4790K, despite the same number of cores, in every workload.

UserBenchmark is your friend for this kind of thing really:

https://cpu.userbenchmark.com/

I do have to say that nowadays buying an Intel CPU is a really bad idea though.


I can't ever imagine why manufacturers of <<everything>> don't use dates.

Product lines 1 - 10. Bigger is better.

A minimal number of qualifiers next to the numbers, ideally just 1 of them next to a product line number (Pro/Home/Hobby/Enterprise/whatever).

Date, month and year format if you really want flexibility.

OblioCorp Oblio 1 Home 2020.06. Oblio 10 Enterprise 2021.04.

How hard is that? Not that hard, from where I'm standing.


It makes sense for software release versions, but I think this lacks marketing prowess.

Consider; “hey man check out my new M1 machine!” Versus: “hey man check out my M1 2021.04”

Even with the Core series it’s short and sweet.

“10th gen. Core i7” still sounds futuristic and always will.

Ryzen 7 also.

Honestly I think folks here are overthinking this stuff. It’s easy enough to compare the specs on a 10th gen cpu vs 8th gen. I think that’s easier than comparing different product names.

For instance, just by the names, Pentium 5 versus Core i5 isn't quite clear unless folks are old enough to remember Pentiums. Whereas everyone knows the current generation has better performance than the previous generation.


> Versus: “hey man check out my M1 2021.04”

No one would ever say it like that. They would simply say: “hey man check out my new M1 machine!” And if it happened to come at a generation shift, the context would make it obvious whether it was the old generation at a bargain or the brand new generation (or that the person saying it couldn't care less about specs and generations).

Now if you were to buy a used computer, or wanted to compare your current i7 to the latest i7 - then you would take notice. How much difference is there between an i5 2021.04 and an i7 2016.08?

That is dead simple to google and reason about.

Compare that to: So what computer do you have? Oh, it is an i7 with 16 GB of RAM. I have no idea what decade that machine is from. And no one remembers the specific version, and if they do, I have no idea how to parse it anyway.


Speaking of Ryzen, I see everyone attacking Intel for their naming, especially now that the generation is two digits, but AMD's situation is a mess too. The Ryzen 7 5800X is built on the Zen 3 architecture. I know the numbering scheme and still sometimes mix up the generation of the processor with the generation of the architecture.

Also, given that AMD copied Intel's 3/5/7/9 numbering, I'll be interested to see if in a few years they're selling a 10800x.


Because approximately no one in the whole supply chain wants to be stuck with a part dated last year.

And product lines are specced 12 months in advance.

So you need precise demand prediction all the way along your supply chain for this to work.


Who's ever going to use the full date in the marketing material?

Apple never says "iPad 2019" on their website unless you start digging really hard.


Exactly. This proves the point I'm making. No one - even Apple with the best supply chain management in the industry - wants the version scheme you suggest.

Vendors want version numbers to push upgrades to consumers. They don't want to deal with demand forecasting for something with a limited shelf life, though.


Ummm... nice turning of my point on its head.

Nobody said that they have to make that versioning public and shout it from the rooftops.

Users do use the informal versioning that I mentioned, because otherwise they wouldn't be able to differentiate the products. I've heard "MacBook Pro 2015" a million times.

And a date <<is>> a version number. It also nicely auto-increments to prove to your customers that the latest thing is better than the old one.

Doodad Pro, 2021 edition. It's not that hard, as I was saying.

The alternative is the garbage that everyone is doing. WH-1000XM3, really?!?


Cars do this. The BMW 3 Series and the VW Golf have existed under those names since forever, despite getting smaller or generational upgrades every year. Enthusiasts refer to them by model year. Of course you also have the engine size as the final variable.


Nah. BMW used to have sane numbering, as did Mercedes. Nowadays it's all made-up bullshit. 28i? You might think straight-six 2.8 liter - well, it's a 2.0 turbo. M550d? 3 liter. Merc E63? 6.2 liter. Don't even get me started on Porsche's non-turbo/electric Turbos, or Mercedes 4-door "coupes".


This was a situation I ran into a lot when buying simple Intel-based NUCs for family. It got more complex when the difference between an i5 and an i7 got really close due to thermal constraints.


I don't think this is a solvable problem. With a wide range of SKUs, inevitably there will be an nth-gen part that outperforms an (n+1)th-gen part.


But then you have different i7 variants where a single different letter in the name makes the chip half as powerful, because it's an extra energy-conserving mobile CPU.

I'm not into hardware, but I have a rough overview. Yet whenever I'm supposed to help someone choose a laptop, I only see a whole page with dozens of products where the only differences are a few digits in the CPU model number and some other weird product names, and I simply have no idea what to say. I fully understand people who just go "I'll take this one because it has a nice color" - a layperson simply has no chance of understanding the difference, and good luck finding a store where the employees know more.

I've never owned an Apple computer and I can still tell you on the spot which category of MacBook is better and more expensive, and why. That might have to do with the devices having normal names instead of alphanumeric strings that probably came from a password generator.


It is very confusing because typically the previous generation's i7 performs the same as the current gen's i5 within the same TDP tier, and i5s from a higher TDP class can vastly outperform i7s from a lower TDP class within the same generation. Typically at any one time there are two or three processor generations in the laptops actually being sold, as well as three or more TDP classes, so there is basically no good way of knowing how different laptops in the store compare performance-wise except by looking up benchmarks.


But selling previous models isn't in Intel's interest.


> i7 stays the same over 10+ years with only the obscure (to the general public) suffix changing from generation to generation.

My wife was confused about that earlier this week. She was like: 'you mean i7 is not newer than i5? And if i7 is better than i5, why can an i5 be better than an i7? (when it's an old i7)'

I get they have the generation. But:

1 - It's not easy to compare different models of different generations. How can a non-technical person compare a 7th-gen i5 with a 5th-gen i7?

2 - Many places that sell computers don't put the generation in the 'headline'. You have to dig into the specs to discover it, if you can find it at all.


Word of mouth is not just marketing. Whilst I still wait for the 32GB version :-) I may still jump in, after having held those hot MacBooks, the MacBook Air, and not to mention the i9 MacBook Pro with turbo off. Even the Mac mini runs hot. Only the decade-old Mac Pro is hotter (but other than the noise, it's not on my hands or my lap, etc., so that's fine for a decade-old Mac).

It is not just "advertisement". Btw, I still get confused about whether marketing means segmentation, targeted selling, ... maybe those apply, so far, only at the low end. As a YouTuber said, it would be so much better if they did colors on the MacBook Air or a 12" MacBook.


Yeah, it's arguably quite weird that Intel don't market this way anymore (they used to, particularly with the Pentium brand).

EDIT: Though, thinking about it, I can see why they might have been reluctant. Of the big releases over the last 15 years or so, Core 2 was, while a massive improvement, also essentially an admission that P4, and their whole strategy around it, had failed, and Skylake had a lot of teething troubles. The only big microarch shift that was an unqualified plus was Haswell, and I don't understand why they weren't louder about that.


Interesting. Marketing about "computer guts" has never been an Apple thing, so it's worth considering the motivation here. It could be that the M1 is that revolutionary, but Moore's Law et al. make me think this is not about new chip technology. More likely it's about competing against Intel or whoever.


Remember when Intel was 8086, 80286, 80386, 80486, Pentium, Pentium Pro, Pentium II, Pentium III, then ... insanity.


The insanity started with the Pentium 4, specifically the HT versions. HT made the CPU slower; it was quite the anti-feature.


I keep seeing an Intel ad on Twitter that is a picture of a MacBook Pro and says "Can this computer turn into a tablet? Hint: no"

It honestly just feels desperate to me.


That's a good point. I think this will finally be the thing that makes Intel move away from that marketing approach too. It's outlived its usefulness.


You get an M1 with the new iPad Pro as well! I hadn't thought of the situation the way the article presents it. When shown in that light, it made me pause to reflect. The M1 doesn't make sense when it's in every darn product. The only differentiator is screen size, RAM, and OS?

I will admit that I switched to a ThinkPad and Win10 about two years ago when I had to return my butterfly-keyboard MacBook for the 5th time. I am not looking back either. If anything, I am more focused on AMD Ryzen and Nvidia 30-series chips in MSI, Lenovo Legion, and Asus offerings. There is nothing I can't do with one of those machines. Going Apple is a backward move for me as I like to program, design in CAD, play Steam VR, and run Blender sims. I can't do any of those well with Apple hardware.


Apple differentiates the majority of their products by generation rather than binning. If you buy a low-end iPad, you get an A12; the iPad Air steps you up to an A14; and the iPad Pro gets you an M1.

This is less evident on the Macintosh side of the business right now because they're just trying to get M1 silicon into as many product lines as their fab capacity will allow. They don't actually have an M2 (or even an M1X) to sell high-end products with yet, which is why they're starting with the low-end products first. When they release upgraded chips, those will almost certainly be used to transition the high-end models first, with lower-end products getting them later.


> Apple differentiates the majority of their products by generation rather than binning.

This simplifies things for consumers, but how do you make chips without binning? Are they all just that reliable? Do they all have extra cores? Maybe Intel bins more because they can squeeze out a significantly better price for more cores and cache, but Apple's margins are already so high they don't care?


So, there are a few instances where Apple does bin their products:

1. "7-core GPU" M1 Macs, which have one of the eight GPU cores disabled for yield

2. The A12X, which also had one GPU core disabled (which was later shipped in an 8-core GPU configuration for the A12Z)

3. iPod Touch, which uses lower-clocked A10 chips

It's not like Apple is massively overbuilding their chips or has a zero defect rate. It's more that Intel is massively overbinning their chips for product segmentation purposes. Defect rates rarely, if ever, fit a nice product demand curve. You'll wind up producing too many good chips and not enough bad ones, and this will get worse as your yields improve. Meanwhile, the actual demand curve means that you'll sell far more cheap CPUs than expensive ones. So in order to meet demand you have to start turning off or limiting perfectly working hardware in order to make a worse product.

Apple doesn't have to do this because they're vertically integrated. The chip design part of the business doesn't have to worry about maximizing profit on individual designs - they just have to make the best chip they can within the cost budget of the business units they serve. So the company as a whole can afford to differentiate products by what CPU design you're getting, rather than what hardware has been turned off. Again, they aren't generating that many defects to begin with, and old chips are going to have better yields and cost less to make anyway. It makes more sense for Apple to leave a bit of money on the table at the middle of the product stack and charge more for other, more obviously understandable upgrades (e.g. more RAM or storage) at the high end instead.
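
To put rough numbers on the supply/demand mismatch described above, here's a toy calculation. All fractions are made up for illustration; neither Apple nor TSMC publishes these figures.

    # Toy numbers, purely illustrative: why the natural defect mix
    # rarely matches the demand mix.
    dies_per_wafer = 500
    fully_working_frac = 0.85      # assumed defect-free yield
    one_gpu_core_bad_frac = 0.15   # assumed dies salvageable as the cut-down bin

    supply_premium = dies_per_wafer * fully_working_frac     # 425
    supply_cheap = dies_per_wafer * one_gpu_core_bad_frac    # 75

    # Hypothetical demand: most buyers want the cheaper configuration.
    demand_cheap = dies_per_wafer * 0.7                      # 350
    demand_premium = dies_per_wafer * 0.3                    # 150

    shortfall = demand_cheap - supply_cheap
    print(f"cheap-bin shortfall: {shortfall:.0f} dies/wafer -> "
          f"that many fully working dies get a core fused off")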


I believe that Apple is effectively doing some binning with the M1: the entry-level MacBook Air has one fewer GPU core than the higher-config version or the MacBook Pro.

It would make sense if those were identically-made M1s where one GPU core didn't test well and thus had its fuses blown. Between the CPU and GPU, the GPU cores are almost certainly larger anyway; the GPU cores would therefore have higher probability of defects.


It should be possible to identify this from infrared images of the cores in operation.

If you find different sets of cores in use, you can be pretty sure it was binned...


Binning requires more design work for the chip. I would guess the M1 was designed rapidly, and they probably decided that hundreds of different bins for different types of defects weren't worth the complexity if it meant delaying tape-out by a few weeks. It also leads to extra product complexity (customers would be upset if some MacBooks had hardware AES and others didn't, leading to some software being unusably slow seemingly at random).


> I would guess the M1 was designed rapidly

How rapidly? And so how come it has such spectacular performance? Or the shortcomings of the x86 arch were so, so soooo obvious, but nobody had the resources to reaaallly give a go to a modern arch?

Or maybe, simply the requirements were a lot more exact/concrete and clear? (But the M1 performs well in general, no?)


Apple has always binned; they just don’t publicly announce it all the time. For example, iPod Touch has been historically underclocked compared to the equivalent chip in iPhone or iPad.


It seems like they do binning. For example, the M1 with 7 gpu cores vs 8 gpu cores.


That struck me as an extremely odd metric to differentiate products by, given its low relevance to non-technical users who don't know what a GPU core even is (except gamers, but they are not buying iMacs). Additionally, most people are going to think adding one more core is hardly worth the price upgrade, and it's quite strange to see an odd number of cores.


Apple doesn't bin them for price/product differentiation as Intel and AMD do.

Apple bins them only because of yield.

I also think Apple doesn't want to use the M1 itself as a differentiator. They use RAM and storage instead, which are easier concepts for consumers to grasp.


Depends - remember that 5nm processors overall are fairly new, so we have no idea of the yields. But if 1/5th of the processors are flawless, 1/5th have one faulty GPU core, and the rest have more defects (in either the CPU or two or more GPU cores), then having twice the number of chips available to sell (at a slightly lower price point) might make perfect sense.


TSMC claimed last year that 5nm yields are better than 7nm.

https://www.anandtech.com/show/16028/better-yield-on-5nm-tha...


It will have a far greater than 1/8th performance impact.

When data structures are power-of-two sized, having 7 cores instead of 8 could halve performance: if the work gets split into 4 equal pieces, 3 cores sit idle.
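
A toy static-scheduling calculation shows how much that penalty depends on chunk granularity (illustrative only; real GPU schedulers split work far more finely than a coarse static partition):

    # Toy makespan model: 7 cores vs 8 for a power-of-two workload,
    # depending on how finely the work is chunked. Illustrative only.
    import math

    def makespan(total_items, chunks, cores):
        """Time units when `chunks` equal pieces are scheduled round-robin onto `cores`."""
        per_chunk = total_items / chunks
        rounds = math.ceil(chunks / cores)
        return rounds * per_chunk

    work = 1024
    for chunks in (8, 64, 1024):
        ratio = makespan(work, chunks, 7) / makespan(work, chunks, 8)
        print(f"{chunks:5d} chunks: 7 cores are {ratio:.2f}x slower than 8")
    #    8 chunks: 2.00x (a coarse static split leaves one core doing double duty)
    #   64 chunks: 1.25x
    # 1024 chunks: 1.15x (fine-grained scheduling approaches the ideal 8/7, about 1.14x)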


GPU schedulers are much, much more advanced than you give credit.


Well GPU data structures aren’t always a power of two, right? There’s more than textures. For a fact, I know vertex count (vertex shaders?) and screen sizes (fragment shaders?) will rarely be exactly a power of two.


Nope, AFAIK the actual performance impact is < 10 % for most workloads.


Isn't false sharing (as in all your addresses hashing onto the same cache lines) still an issue for power-of-two sizes? You'd have to mess with padding to figure out what's fastest for each chip regardless of core count.


> The only differentiator is screen size, RAM, and OS?

Apple is iPhone-izing (for lack of a better word) the rest of their product lines. If, for the last ten years, the market hasn't really cared about the speed of the phone's processor within the same generation, but rather about physical differentiators (e.g. screen size, number of camera lenses, adding facial recognition), and the non-professional market is overwhelmingly characterized by light-usage applications, then why, pray tell, should laptops and desktops be so different?


> the non-professional market is overwhelmingly characterized by light-usage applications

I'm not sure social networks and most modern sites or apps qualify as light usage nowadays. Browsing Reddit or Facebook brings my Pro to its knees. The difference in CPU speed between phones is clearly visible to consumers (but maybe you don't notice it if you only use iPhones, because an SE from 2015 is still fast - try an Android phone from that period).


That’s because new Reddit is laden with ads and JS and crap. Try https://old.reddit.com


Or one of any third-party native iOS apps (Apollo[0] is my preference) that are both faster and feel better than the official app and mobile site.

[0] https://apolloapp.io


I love Apollo but I am very annoyed that they don't allow side scrolling between posts. I just find that much easier to do and it somehow bothers me when I don't have it.


I think this is what BMW does with cars to a large extent (maybe all car manufacturers do?) - an example: https://en.wikipedia.org/wiki/BMW_B58

They combine standard parts into a number of configurations that aren't as different from each other as a consumer might imagine, then slot those engines into a whole host of different car models.

From a production stand-point, this makes sense to me. I imagine that it makes high quality at a lower price much easier to accomplish for Apple.


> The M1 doesn't make sense when it's in every darn product. The only differentiator is screen size, RAM, and OS?

Why not? It's basically back to where we were in 1980 when "everything" had a Z80 or 6502 (or both!) in it, and the major differences were in what else was in the system.


6809.


> The only differentiator is screen size, RAM, and OS?

I think that's the point. Until now, buying a computer has always been focused on the CPU and RAM stats. If you wanted faster/bigger you had to spend more. With Apple's new strategy you almost don't even care about CPU/RAM stats. They are focusing on providing value in other ways: larger screen, lighter weight, different colors, more ports, etc. I think this is the biggest shift in computers in quite a while and makes it much more akin to purchasing a phone or tablet than speccing out a computer.


In high school, I worked a retail job for an office supply store, and one of the PC brands we would sell was Packard Bell.

We sold many different models with similar specs, and the main thing many of the potential buyers were concerned with was:

Will it fit with my current office furniture?

We sold both desktops and tower configurations, and the type of desk the computer would be used with was the determining factor in which model to buy.


And essentially, why not? If I want to work on a machine, I want it to be fast. Not 1.33x of the baseline benchmark CPU when equipped with X GB RAM. Just fast enough it doesn't annoy me.

And with today's PCIe-based SSDs, waiting for data to be written to "disk" is a non-issue, so the system feels much faster.


Apple’s goal is to eliminate technical specifications from marketing: tech specs are an excuse - make it good enough that few care what those specs are.


I can't think of a technological product where that mentality applies. Can you?

In fact, it seems that the opposite is true; as a product gets better, people care more about the specs. Whether you're buying a Wusthof knife or a luxury car, you want to know what makes the product good enough to justify its price and position in the market.


Vanishingly few people buy phones based on screen resolution, RAM, bandwidth, etc. Ditto computers used mostly for mundane email, web browsing, games (not hardcore), and such. Few buy cars based on horsepower, range, etc.

Insofar as people do consider specs, it's usually because the specs are injected into the conversation - customers being taught to care by salesmen trying to baffle them into choosing their product.

Most customers want it to just work. Apple is pursuing that.


It's like how no one cares about which chip is in their smart TV, regardless of price range. They just want to watch Netflix without UI bugs.


TVs are sold on refresh rate, color accuracy, contrast, backlight localization, and panel resolution. This stuff is written on the box and promoted in marketing material. They are unavoidably differentiated on technical specs, even in the eyes of a layperson. Some of those specs depend on the panel (itself a semiconductor product) and others depend on processors/ICs.

I'm not suggesting that no parts can ever be a commodity (like capacitors in a laptop), but as I spend more, I increasingly tend to look for a technical advantage that justifies the marginal price.

I'll give you an example. If you want to buy a docking station for an Apple TB3 enabled MacBook, you have a couple of controller options at the higher end: Alpine Ridge and Titan Ridge. Better chips exist but they haven't found their way into truly well engineered consumer docks. I have a multi monitor setup with one superultrawide screen that can do 120Hz, so I opted for the Titan Ridge dock. It was buggy, so I ended up returning it and buying an Alpine Ridge dock that lacks the ability to push my big monitor at its best-looking resolution. And that's for a sub-$300 peripheral.

Like most of us here on HN, I'm one of the "few" that GP mentioned. But these are consumer products and they are mass marketed based on their technical specs.

Apple itself markets technical innovations in its 6k IPS monitors. The back is designed to dissipate heat, the monitor works with TB3 (ie, an Intel chip)...they even market the glass treatment in a deeply specific way.


Most TV customers can’t articulate the difference. Many may state a preference but only because such numbers are prevalent and associated. Were the numbers not advertised, most wouldn’t ask.


That was the game changer with the original iPad. When you strip all the other specs away and put an original iPad next to a 2010 laptop, you realize just how awful the mainstream LCD panels were at the time. It's an exciting change I think will benefit consumers. Less being upsold on questionable i7s and more nice displays, please.


Why doesn't the M1 make sense in a variety of products? I don't follow the logic. It is a processor that can scale to meet the demands of mobile computers including laptops, tablets, and designer desktops (iMac). In each use case it fulfills its computational role regardless of I/O or even operating system.

Based on your own description, you are self-selecting as an enthusiast that prefers gaming-like PCs. Isn't that exactly the sweet spot that Apple doesn't support?


> The M1 doesn't make sense when it's in every darn product. The only differentiator is screen size, RAM, and OS?

They've only replaced the lower tier of Macbooks and iMacs with the current M1 board, which suggests to me that they're working on a variant with more CPU and GPU cores that will go into the higher tiers of those machines.


I don't think they have the fab capacity to be able to do that.

Since M1 products are selling very well, they likely have reallocated fab capacity they had planned for the successor to the M1 back to making the M1.

A higher-performance M1 would have a bigger die. That means lower yield and fewer dies per wafer, so the same capacity would produce fewer sellable chips.

Apple probably earns more profits by simply selling more M1s, unless they can reserve substantially more fab capacity.
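
For a back-of-the-envelope sense of why a bigger die hurts twice (fewer candidate dies per wafer and a lower fraction of them defect-free), here's a sketch with assumed numbers - the defect density and die areas are illustrative, not Apple or TSMC figures:

    # Assumed numbers only: bigger die -> fewer candidate dies per wafer
    # AND a smaller fraction of them defect-free.
    import math

    WAFER_AREA_MM2 = 70000      # roughly a 300 mm wafer, ignoring edge losses
    DEFECTS_PER_MM2 = 0.001     # assumed defect density

    def good_dies(die_area_mm2):
        candidates = WAFER_AREA_MM2 / die_area_mm2
        yield_frac = math.exp(-DEFECTS_PER_MM2 * die_area_mm2)  # simple Poisson yield model
        return candidates * yield_frac

    for area in (120, 240):     # e.g. an M1-sized die vs a hypothetical 2x die
        print(f"{area} mm2 die: ~{good_dies(area):.0f} good dies per wafer")
    # Doubling the die area more than halves the good dies per wafer.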


> they likely have reallocated fab capacity they had planned for the successor to the M1 back to making the M1

Got any proof or something? This is a bold claim to make.


Some benchmark sites claim they saw performance reports. Whether they make those up to see more traffic, I cannot tell. If I worked at Apple, I would not tell :)

For example https://www.cpu-monkey.com/en/cpu-apple_m1x-1898 talks about a 12 Core CPU but with only 16GB RAM.


> I don't think they have the fab capacity to be able to do that

Do you have a source for there being any fab capacity shortage for Apple (or anyone, for that matter) at 5nm? Older nodes are a different story because that's still where most of TSMC's customers are doing their high-volume production.


Plenty of people would like to die-shrink their 7nm designs down to 5nm if only capacity existed.


I think plenty of people will skip it. Even ignoring the high cost, not everything scales as well with every new node. There are plenty of customers who will have no interest in this node right now, and some who will have no interest in it ever. I've not seen anything that suggests capacity shortages at 5nm yet, especially for Apple, who've already booked all of 3nm for next-gen.


SRAM scaling from 7nm to 5nm is pitiful. While logic transistor density improves 50-70%, SRAM only shrinks 20-30%.

For chips with massive caches, that isn't super cost-effective (I suspect that as a cost-saving measure we'll see L2/L3 cache moving to a separate chip on a larger process while the rest shrinks down).
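
To see why that matters, here's a small arithmetic sketch with assumed scaling factors (the 1.6x logic density gain and 25% SRAM shrink are illustrative, roughly in line with the figures above):

    # Assumed scaling factors: how little a cache-heavy die shrinks
    # when logic scales well but SRAM barely does.
    logic_scale = 1 / 1.6      # ~60% more logic density 7nm -> 5nm (assumed)
    sram_scale = 1 - 0.25      # SRAM area shrinks only ~25% (assumed)

    for sram_share in (0.2, 0.5):   # fraction of the old die that is SRAM
        new_area = (1 - sram_share) * logic_scale + sram_share * sram_scale
        print(f"{sram_share:.0%} SRAM: new die is {new_area:.1%} of the old area")
    # 20% SRAM -> 65.0% of the old area; 50% SRAM -> 68.8%.
    # The more cache, the less a full-node shrink actually buys you.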


There’s already signs of multiple “next generation” chips in the works.


>The M1 doesn't make sense when it's in every darn product.

Would you feel better if they called it A14X?

It is basically the same thing as what Intel is doing: same die, different binning, different naming. Same with core count on AMD: same die, different binning on cores and clock speed.

Apple doesn't bother doing any of that because, well, it is complicated for consumers. I call it TDP computing: you are limited by the TDP design of the product, not the chip.

I am waiting to see Apple absolutely max out their SoC approach for the Mac Pro.


Apple is doing binning, at least to some extent - e.g. some Macs have only 7 GPU cores rather than 8. Presumably that's done to a greater extent for the iPad Pro.


>I am waiting to see Apple absolutely max out their SoC approach for Mac Pro.

But they already maxed out their SoCs in the benchmarks because Geekbench doesn't care about TDP. The TDP makes a difference in real world performance but not in the benchmarks.


They are only maxed out in single-thread performance, within the per-core power constraints of the current design, on the current low-power node.


I agree that the M1 doesn't look good for high performance computing right now or anything with poor ARM support. But strategically, I think Apple is in a good spot. For heavy computation these days, I always remote into another machine. With increased bandwidth and more efficient remote desktop protocols, I even do all my graphics-intensive 3d work remotely now. By focusing on low-power processors, Apple is making the laptop/tablet/phone experience better, and I could see them handling the performance issues via remote compute. It could be a very effective strategy (if it is their strategy).


I think what Apple may be trying to do is reduce macOS sales and go full throttle on iOS sales. It no longer makes sense to buy a macOS device if it is going to have the same chip as the iPad Pro.


How's that any different from having a single Intel generation scale all the way from low-powered laptop SoCs to 12-core i9s, though?

After all, the M1s across devices aren't the same either - iMacs have a different configuration from iPads, and those have different numbers of cores and clocks from the Airs and MBPs as well.

It seems like the difference is only in Apple vs. Intel marketing blurb.


Power consumption, number of cores, clock speed, etc.


I'm not sure what you're trying to say - M1s differ in clock speed, number of cores, and number of GPU cores across Apple products as well.

"M1" is basically equivalent to the "Core i" designation from Intel - except that Apple just doesn't tell you about the other part.


No? That's the entire point of the article. The only difference is whether you get a 7-core or 8-core GPU.

> If you want to buy a MacBook Air or MacBook Pro, Apple will sell you an M1. Want a Mac Mini? You get an M1. Interested in the iMac or the new iPad Pro? You get an M1. It’s possible that the M1 CPUs inside the iMac will have different thermal or clock behavior than those inside the systems Apple has already launched, but the company’s decision to eschew clock speed disclosures suggests that these CPUs differ only modestly.

> with the M1, is that its custom CPU performance is now so high, at such low power consumption, that the choice of chip inside the system has become irrelevant within a given product generation


Exactly, so what is the point here? How exactly is that different from Intel generations?

Why does Apple put different numbers of cores and different clocks into different products if the choice doesn't matter? It seems like there are performance differences if they choose to install differently configured SoCs in different devices and even split them by price on the iMac.

So, please, explain where this big difference is. (With more than a single sentence of your own words, if possible.)


I've seen nothing to indicate different products have different clock speeds?

So far we're seeing two differentiators. One is the use of the 7-GPU bin to hit the bottom-of-the-range price point for each product family. The other is simply the different thermal characteristics of different products - the passively cooled Air & iPad Pro will thermally throttle earlier than the actively cooled MBP, iMac & mini.

The products aren't being differentiated in silicon, they're being differentiated in feature & format. My mother can tell me the difference between an iPad and a macbook without describing anything that she can't see with her own eyes.


The M1 in the MacBook Air throttles when it gets too hot due to the thermal characteristics of the case (no active cooling).

The M1 in the MacBook Pro is actively cooled, and does not throttle.

When additional cooling is applied after-market to the MacBook Air, it has the same performance characteristics as the MacBook Pro.

The M1 is the same in both products. It throttles when hotter. It's more likely to throttle if not actively cooled and under a heavy workload. It's not clocked differently. There are no artificial constraints.


There are only two different M1 versions: the 7-GPU-core version and the 8-GPU-core version. All of the CPU cores are the same. All power envelopes are the same. That's much different than the vastly different form factors and power envelopes Intel has.


We have yet to see any new iPad Pro with the M1. I wouldn't be surprised if there was a difference in clock speed with those.


I don't expect so. The Air has no active cooling and can run the M1. It should be doable for the iPad as well.


I have an M1 MacBook Air and this thing gets a bit hot for an iPad. I don't think the iPad Pro will be able to hold the same clock speed as long as the other machines over the long haul.


Make it as fabulous as you want - $1700 for an 8-core machine with 8 GB of RAM is just one plain fabulous joke. This at a time when 16 GB of RAM is the baseline if you plan to do anything more than Facebook and Instagram.


So my sister does professional photography, and went from a 16GB MacBook Pro (2016) to an 8GB Air, and reports no decrease in productivity - quite the opposite: in her experience all Adobe programs run much faster, and the machine is quieter and lighter to boot. So yeah, I'm not sure - maybe the amount of RAM isn't as big a deal as people make it out to be. On the other hand, I'm a C++ programmer and my workstation has 128GB of RAM and I wouldn't accept any less... so obviously it varies.


> I'm a C++ programer and my workstation has 128GB of ram and I wouldn't accept any less

What on earth are you programming?


He won't know until all the templates are instantiated...


That's a fantastic joke, thank you :D


Absolute madlad


Lol monster


AAA video games do that for you :P Well, it isn't the programming part that uses the RAM (although yes, building our codebase takes about 40 minutes and uses gigabytes of RAM without a distributed build), but just starting up the local server + client + editor easily uses 80-100GB of RAM since ALL of the assets are loaded in.


Did you have the chance to try your setup on an M1? If it worked for your sister, although you seem to have way higher requirements, is there anything to say it wouldn't work for you?

I'm asking because I read a lot of comments when it was released that it just doesn't need as much RAM because $REASONS. I wouldn't put my money on this, but I'm curious if this assumption holds water now that people have had time to try it out.

Edit: there are such comments further down the thread where it seems to still be a mystery: https://news.ycombinator.com/item?id=26913643

So I'd really like to know where this magic breaks down: if you're used to 128GB of RAM, will the M1 feel sluggish?


I doubt it's possible to try a AAA dev setup on OSX at all. And, for whatever it's worth, 64GB workstations were "hand-me-downs" at my previous gig (AAA gamedev); I doubt there's much magic that can turn "64GB is not nearly enough" into "16GB is fine".


I also have doubts but that's what the marketing hype has been claiming for some time now, so I'm really curious about real-world experiences and where the hype breaks down. The debate is often "I need way more RAM!" vs "But this is a new paradigm and old constraints don't apply!".

AAA gamedev might be the wrong demographic though, since it's mostly done on Windows (I think?).


Well, 100% of my development tools are Windows only so I can't really give it a try, sorry :-)


I'd suggest people just get the larger RAM option unless they're tight on budget. I know Apple's trying to argue otherwise and people will agree with them, but I can't hear it as anything other than thinking molded to fit a prior decision. For what it's worth - and this isn't scientific - reported "percent used" statistics seem to grow more slowly for the 16GB models than the 8GB models (from the SMART utils).


When VS decides to disk-bomb you, or uses literally 120GB of RAM because its auto-scaling trips up.


I'm equal parts happy and terrified that MS announced an x64 version of VS recently, because I know it will just mean VS can now scale infinitely. At least right now the core process has to stay within the 4GB limit :P


So instead of attacking Microsoft over VS, we attack Apple for not having enough RAM lol


>What on earth are you programming?

Earth. He is programming the earth.


One example is linking QtWebEngine (basically Chromium) which can sometimes take upwards of 64 GB of RAM.


How did chrome eat safari’s launch when they were both based on WebKit. amazing !


Large code bases in an IDE, program dumps, large applications (the software I write will gladly use 10-20GB in some use cases), VMs, large ML training sets, &c.

128GB is likely overkill, but I can see a use case depending on what you're doing.


I think everyone should get more ram than they think they'll ever need. 32 gigs is that number for me, but if I thought I'd get even close to using 64 gigs, might as well go for 128.


Same setup as him; I'm working on LLVM. It's very nice to be able to test the compiler by running and simulating on a Threadripper CX3990 - it means I don't have to run everything past the build server.


Web developer here. I had an 8GB M1 MacBook Air, and if I ran VS Code with my 50k-ish-line TypeScript codebase, dev servers for the API and frontend, plus native Postgres and Redis at the same time, I'd be right at the limit of my machine slowing to unusable levels. I switched it for a 16GB one and I've never had any noticeable performance issues since.


I think the efficiency of the new M1 is what blows everyone away.

When the M1 first came out I was super, super skeptical; it seemed like an underpowered chip compared to what's in the x86 world.

Now I'm convinced that it's got some magic inside it. Everyone I've talked to says it chews through whatever they throw at it.

I'm consistently floored by Apple's ability to innovate.


It’s funny you say “innovate.” They certainly innovated with the M1, but with one key difference. They did so in public, step-by-step over the course of a decade.

Perhaps more than any other Apple innovation, we have the greatest visibility into the process with the M1.


I consider the ability to perform open development of future products a key differentiator for Apple.

Like Jobs said in his Stanford speech, “You can't connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future.”

It seems like Apple uniquely combines open development of technology in known products with secrecy around new product development.

This is what allows people to be so surprised by the M1, when the late AX processors were obviously pointing toward massive capability.

I believe there are other examples of this happening--specifically with the Apple Watch.

On that product, the size limitations combined with increasing expectations of performance and functionality have allowed Apple to learn and improve production capability in many areas that will be in any forthcoming AR/VR products.


The sweet spot is 8GB for most people. I’d guess 80% 8GB/15% 16GB and 5% more. Even many types of developers are fine with 8.

Apple collects a lot of metrics and I’m sure they know this well.


The M1 also means that Apple regains full stack control of its desktops and laptops. Their phones and tablets prove that when they develop both the main chips and operating systems themselves, they are able to eke out greater performance from lesser specs.

Flagship iPhones always have less RAM than flagship Androids, but match or exceed their performance.


That may help a bit, but unless they adopt the draconian policies that govern the iOS runtime, that impact is limited. Safari is an example of improved battery life when Apple owns the stack. It's a great feature, but not a game changer.

Most business users were fine with 4GB 5-6 years ago. Electron apps like Teams and Slack pushed it up to 8. The next tier are folks with IDEs, Docker, etc. that usually need 16-32GB.


I agree. Honestly, Electron is the main thing pushing up RAM usage these days for most people.


It's the Gigabyte Myth! Less is more! Up is down! In is out!


Running a few Google Docs tabs, I can easily bring my 16GB Linux laptop down...


Web Developer here. Wouldn't accept anything less than 32GB for a work machine.


Now I know why all the websites I use are so slow :D


You joke, but I think this is actually true. If companies gave the developers of their client-facing software slower computers, the resulting software would end up being faster.


Yep because they would not get anything done ;)

In my experience as a web dev, all the performance goes out the window as soon as the tracking scripts are added.


I had nearly a perfect Lighthouse score until the marketing team added GTM


I don't think it's the developers who are the problem. In my experience, they're usually the ones advocating to spend time improving performance.

PMs and executives are the ones you should give slow machines to if you want more focus on performance.


Agreed, IME it's nearly always the devs arguing to improve performance.


But it would take them an order of magnitude longer to ship it. `npm install` even on a fast machine is a slog.


> `npm install` even on a fast machine is a slog

It may not be with fewer and less heavy dependencies!


It's not the end product itself that eats huge amounts of RAM or CPU. It's the dev tooling.


We must be using very different websites.


Obviously a lot of the modern web is crazy and many sites are pretty heavy on resource usage.

It's still not comparable with your average developer tooling in terms of footprint, was my point.


And why some of my past upgrades were driven by the web browser over-taxing the machine on certain sites, while nearly everything else was perfectly performant with absolutely no complaints.


haha so true


Yep, if you're running big IDEs (e.g. Rider/IntelliJ, Visual Studio), containers and VMs, 32GB is really a must. There always seem to be people in these threads claiming that 16GB or even 8GB is enough - I just don't understand how that could possibly be for most of the HN demographic.


Do you think most of the HN demographic is actually running big IDEs, containers and VMs at the same time? I'm personally a CS student and never had to run more than a few lightweight docker containers + maybe one running some kind of database + VS Code and that has been working fine on a laptop with 8GB and pop_os. Could imagine that a lot of other people on HN are also just generally interested in tech but not necessarily running things that require a lot of memory locally.


CS PhD student here, running a laptop with 16GB of RAM. I don't train ML models on my machine, but whenever I have to debug stuff locally, I realize precisely how starved for RAM my computer is. I start by closing down FF - just plain killing the browser. RAM goes down from 12GB to 7. Then I close the other IDE (I'm usually working on two parallel repos). 7GB to 5. I squeeze out the last few megabytes by killing Spotify, Signal, and other forgotten Terminal windows. Then I start to load my model into memory. 5 times out of 7, it's over 12-13 GB, at which point my OS stops responding and I have to force-reboot my system, cursing and arms flailing.


> lightweight docker containers

If you're on macOS, there's no such thing as a “lightweight Docker container”. The container itself might be small, but Docker itself is running a full Linux virtual machine. In no world is that “lightweight”.


I was going to say, I'm on a 16GB 2015 MacBook Pro (not sure what to upgrade to) and Docker for Mac is _brutal_ on my machine; I can't even run it and work on other things at the same time without frustration.


I run like 50 Chrome tabs, a half dozen IntelliJ panes, YouTube, Slack, and a bunch of heavy containers at the same time, and that's just my dev machine.

My desktop is an ML container train/test machine. I also SSH into two similar machines, and a 5-machine, 20-GPU k8s cluster. I pretty much continuously have dozens of things building/running at once.

Maybe I'm an outlier though?


Yeah. I suspect most people here are software engineers (or related) and IDEs, Docker, and VMs are all standard tools in the SE toolbox. If they aren't using Docker or VMs, then they are probably doing application development, which is also pretty damn RAM hungry.

I do most of my development in Chrome, bash, and sublime text and I'm still using 28GB of RAM.


Depending on the requirements of your job - just a single VS instance with Chrome, Postman, and Slack open takes around 5GB. Teams adds another GB or so. The rest probably another 2GB (SSMS and the like).

On my particular team we also run a dockerfile that includes Elasticsearch, SQL Server, RabbitMQ, and Consul - I had to upgrade my work laptop from 16GB to 24GB to make it livable.


Wouldn't you just have all the heavy stuff on a server? I don't understand the goal of running something like SQL Server and other server-type apps on a desktop/laptop.


Having a local environment speeds up development significantly.


When I’m developing I don’t want any network dependency. I love coding with no WiFi on planes, or on a mountain, etc.


If you are on Linux, try out zram.


I could believe that for students because students are usually working on projects that are tiny by industry standards.

10 klocs is huge for a student project but tiny for a real-world project.


I do use IntelliJ and similarly hungry IDEs all the time, along with many other resource-hungry processes, without trouble on 8 GB of RAM.

Though the truth is that I use zram, which everyone who isn't fortunate enough to have plenty of RAM, but does have a decent CPU, should do.


I don’t understand how a demographic as technically intelligent as HN could make the flawed assumption that GBs of RAM in isolation of the entire system is all that matters. Consider the fact that iOS devices ship with half the RAM of Android devices and feel as responsive, have better battery life, and have better performance.

The Apple stack is better optimized to take advantage of the hardware they have. Indeed, one of the reasons is that they have so few SKUs to worry about, which focuses the engineering team (for example, in the past, engineers would complain internally about architectural design missteps that couldn't be fixed because 32-bit support hadn't been dropped yet and was pushed out yet another year). Now, obviously in a laptop use case this is trickier since the source code is the same as the x86 version. It's possible that the ARM code generation is much more space-efficient (using -Oz where previously -O3 was likely used). It's also possible that they have migrated over to iOS frameworks to an even greater extent than they were able to in the past, leveraging RAM optimizations that hadn't been ported to macOS. There could also be RAM usage optimizations baked around knowing you will always have a blazing fast NVMe drive: now you may not even need to keep data cached around and can just load straight from disk. Sure, not all workloads might fit (and if running x86 emulation the RAM hit might be worse). For a lot of use cases, though, even many dev ones, it's clearly enough. I wouldn't be surprised if Apple used telemetry to make an intelligent bet about the amount of RAM they'd need.


> I don’t understand how a demographic as technically intelligent as HN could make the flawed assumption that GBs of RAM in isolation of the entire system is all that matters

I didn't claim it was all that matters, and I haven't seen anyone else do that either.

I do take the point of the rest of your comment though, and it may well be the case that Apple does some clever stuff. But realistically there is only so far that optimisations can take it - DDR4 is DDR4, and it's the workload that makes the most difference.

> I wouldn’t be surprised if Apple used telemetry to make an intelligent bet around the amount of RAM they’d need.

Your average Apple user is likely not a developer though (as others are very often pointing out on HN, whenever they make non-dev-friendly hardware choices). Furthermore, I would think such telemetry would be a self-fulfilling prophecy; if you have a pitiful 8GB of RAM, you're not going to punish yourself by trying to run workloads you know it wouldn't support.


> But realistically there is only so far that optimizations can take it - DDR4 is DDR4, and it's the workload that makes the most difference.

Except the M1 is a novel UMA architecture where the GPU & CPU share RAM. There are all sorts of architectural improvements you get out of that, where you can avoid memory transfers wholesale. There's no "texture upload" phase & reading data back from the GPU is just as fast as sending data to the GPU. It wouldn't surprise me if they leveraged that heavily to get improvements across the SW stack. The CPU cache architecture also plays a big role in the actual performance of your RAM. Although admittedly maybe the M1 doesn't have any special sauce here that I've seen; I'm just responding to your claim that "DDR4 is DDR4" (relatedly, DDR4 comes in different speed SKUs).

> Your average Apple user is likely not a developer though (as others are very often pointing out on HN, whenever they make non-dev-friendly hardware choices). Furthermore, I would think such telemetry would be a self-fulfilling prophecy; if you have a pitiful 8GB of RAM, you're not going to punish yourself by trying to run workloads you know it wouldn't support.

No one is going to model things as "well, users aren't using that much yet". You're going to look at RAM usage growth over the past 12 years & blend that with known industry movements to get a prediction of where you'll need to target. It's also important to remember that RAM isn't free (and not just in terms of the $). I don't know if it matters as much for laptop use cases, but for mobile phones you 100% care about having as little RAM as you can get away with, since it dominates your idle power. For laptop/iMac use cases I would imagine they're more concerned with heat dissipation, since this RAM is part of the CPU package. RAM size does matter for the iPad's battery life & I bet the limited number of configs has to do with making sure they only have to build a limited set of M1 SKUs that they can shove into almost all devices, to really crank down the per-unit costs of these "accessory" product lines (accessory in the sense that their volumes are a fraction of what even AirPods ships).


Anecdotal: I write client software for bioinformatics workflows, usually web apps or CLIs. Right now, with my mock DB, Emacs, browser, and tooling, I'm using ~5GB of RAM. At most I'll use ~8GB by the end of the day.

I also shut down at the end of the day and make judicious use of browser history and bookmarks. If I were compiling binaries regularly I guess I could see the use in having more RAM, but as far as I'm concerned 8 is enough, and so far people are happy with how what I put out performs.


Yeah 32gb is my baseline now. I could probably work on a 16gb machine now but last time I was using an 8gb machine the memory was nearly constantly maxed out.


(Curious) why? VScode, Chrome, and a terminal running a local server usually will do fine with 16gb or less. Are you testing with or querying massive data sets locally or something?


Chrome can fill all your ram, just add more tabs (I'm usually at 100-200+ unfortunately).

With Firefox and auto tab suspend (addon), it's manageable.


Another webdev here. My second dev machine has 8 GB RAM and works fine for my purposes. JetBrains IDE, MariaDB, Puma, browser.


Throwing in some more anecdata:

- 16GB MBP (Intel, not M1) - Running MacVim w/ coc.vim & coc-tsserver (so it's running partial TS compiles on save, much like VSCode)

- One image running in Docker

- Slack, Zoom (at idle), Safari (with a handful of tabs), and Mail.app running as well

Per Activity Monitor, 8.97GB of 16.00GB is used with 4.97GB marked as "Cached Files" and another 2.04GB of swap used.


Anyone remember the times when "web development" was something you could do on pretty much any computer available?


But have you benchmarked your workflow on x86+32GB vs M1+8GB?


I'm typing this response on an 8GB M1. It's great, but it's no magic. Its limitations do start to show in memory-intensive and heavily multi-threaded workloads.

Getting some down votes, which I attribute to reasonable skepticism, so hopefully this will allay your concerns.

https://ibb.co/VM4Z1DY


> memory intensive and heavily multi-threaded workloads

Such as?


One example: I was trying to download all the dependencies for the Kafka project via Gradle with IntelliJ while watching a video on YouTube and working on another project in Visual Studio Code. The video started to stutter then stopped, and Visual Studio Code became unresponsive. I basically had to shut a bunch of stuff down and go to lunch.

I haven't seen a modern computer struggle with that kind of workload before.


At the end of the day, the Intel MacBooks of the last few years have had terrible, low-performance processors that get thermally constrained, plus abysmal, inconsistent battery life. So if all you use is Macs, the M1 is going to feel amazing.


Working on a very enterprisey web app (100+ services, 30+ containers), my 16GB MBP is handling it fine.


That machine is using the swap file a lot; in a couple of years the SSD will die.


[flagged]


Funny. Except there are fully supported MacBooks from 2013 running the latest version of macOS, still on their first battery, still bringing people joy and productivity.


Or in my case, a 2012 MacBook Air. I hope they'll support Catalina for another three or four years, at which point it'll be more than a decade old and can finally mature into a laptop's final stage of life as a Linux machine.


They tend to support OS releases for three years. Catalina EOL is likely to be sometime in 2022.


I stand corrected. I'd attribute my false confidence to the fact that even the previous version (macOS 10.14 Mojave) is still supported. But now that I think about it, that'll likely change at Apple's WWDC in June.

Still, decently satisfied with ~10 years of software support for a laptop.


Mojave support ends this year, right on schedule after 3 years. I finally left Mojave behind myself, as I really wasn't using the older apps Catalina dropped support for anyway.


Late-2013 MBPs still came with replaceable storage unlike the M1 Macs, no?


Yup. My personal machine is a base Late 2013 15" MacBook Pro (8GB RAM). Original battery, original storage; it saw very heavy usage up until a couple of years ago. Still fast, decent battery.


In ~2018 I briefly used an old 4GB MacBook Pro for work. It was only untenable longer-term because I needed to run two electron apps or many-hundreds-of-MB-memory tabs at a time, sometimes.


But why not 16GB? It's a small price increase compared to the base model, and it would make the machine much more usable over any extended amount of time, especially given the SSD write 'bug' that was exposed.


I just went from a 32GB RAM Macbook Pro to an 8GB RAM M1 Macbook Air...the difference is insane, I don't know what the hell the MB Pro is doing with its RAM, but it just felt like RAM was never enough on Intel Macbooks. Here on the M1, I don't feel my system crawling to a halt like I did on the MB Pro, and I'm doing the same workloads.


Exactly. It's not an apples-to-apples comparison. 8GB of M1 RAM is very different than 32GB of standard DIMM-like RAM.

Sort of like when people compare nm of processes on the chips. You can't just say "oh, their number is smaller, it must be better."

To be sure a device with 16GB of M1 RAM should be more competent than 8GB of M1 RAM, but now that's apples to apples.


> 8GB of M1 RAM is very different than 32GB of standard DIMM-like RAM.

Go on... What’s the nature of this great difference?


At this point, I don't think that's entirely clear. Something to do with the way Apple has tightly integrated the SoC parts and the OS's memory management lets them get massively more performance out of much lower-specced machines, and at least from what I've seen, no one has yet managed to truly unravel all that makes this possible.


I'm interested in this because I have a maxed 2018 intel mac mini and loaded it to 64gb of RAM. I want to ditch my eGPU and go to Apple Silicon on the next release.

I wonder if the memory performance should be so surprising. Because haven't people been bemoaning the "low" amount of ram in iPhone / iPad?

iPhone had 4GB on XS, and 11. And only went to 6GB on the Pro models. Yet the performance and benchmarking on these devices has seemed to garner praise with each successive generation.

https://www.macrumors.com/2020/10/14/how-much-ram-iphone-12-...


I've just made a similar switch: I went from a Pro (2018 i9 with 32GiB of RAM) to an M1 Air with 16GiB of RAM, and it's ridiculous.

I've tried to look up what this difference might be, but all I've found is a hand-wavey answer about it being an SOC. If someone can ELI5 then I'd be super appreciative.


My feeling is that as everything is on-chip, things are physically closer - fewer lanes on the motherboard - so latency is reduced. I would not have guessed it would be such a difference though.


I found this discussion and article talking more about it, if you're interested: https://news.ycombinator.com/item?id=25659615


Really? I still use a seven-year-old MacBook Pro with 8GB of RAM. It's perfectly fine for developing in at least Python, Go or Nim, with a Docker container or two and MariaDB in the background.

Sure, when/if I upgrade I'd go for 16GB of memory, but one should be careful about projecting one's own needs onto others.


I have a 2015 MBP as well... it's HORRIBLE when I hook it up to a 4k monitor. Lag, freezing, etc. I can still get stuff done, but the experience is pretty bad.

Perfectly fine on its own screen though, the dual-core in mine just isn't up to the task of a 4k external monitor.


Your web browser probably doesn't have many tabs open, and those that are open, are probably not web apps (think Slack, Facebook, WhatsApp, various internal corporate apps, etc.).

I can't even remember a time when just my web browser used up less than 4-5 GB of RAM, on its own.

Add at least 500MB - 1 GB for the OS itself, and we're talking about 2 - 3.5GB for apps. I'd immediately swap with just 8GB of RAM with an IDE and a DB running.


I have four browser windows filled with a crazy number of tabs on my M1. I'm yet to experience any slowdowns.


You're right, I can't mentally deal with more than a few open tabs, and I use none of those web apps at home. I might be an outlier in the other direction, but there's a lot of room between using maybe 4GB of RAM in total and using that for a browser alone.


Auto-tab-discard on Firefox is your friend. Makes thousands of tabs possible with almost no memory use and seamless functionality.


I do visit 20-40 of those tabs regularly. They're SPAs so they use up a lot of memory.


How often? I set mine to discard after 60 minutes, except for Slack, WhatsApp and a couple of others; the cost is a reload when I do venture into discarded tabs - e.g. I scan HN and Reddit headlines in the morning and open each interesting comments page in a tab; much later, when I have downtime, I visit them - but I used to reload to get the new comments anyway. Now with ATD, they get reloaded automatically, so for that use it is even better than the real thing.


What IDE are you using?


None, I use Vim or TextMate. But you have a point: Xcode isn't fast on my laptop. I can use it, but if I were to use Xcode professionally I'd get a newer machine with more memory.

I have been working on a TVOS app, and it works fine, but there is some waiting involved.


X cores + X gigs of RAM does not mean better performance for higher values of X. That is the fundamental innovation of Apple Silicon, which people still struggle to grasp. The M1 upended how we think about CPU performance. It's not even a CPU, it's a unified memory architecture with hardware-level optimizations for macOS. You can't even cleanly compare the performance benefits of having memory and CPU on the same chip, because the time spent on copying operations is far less.

The plain fabulous joke is that we've spent 30 years thinking that increasing cores and increasing RAM is the only way to increase performance while the objective M1 benchmarks blow everything out of the water. The proof is in the pudding.


But we still live in a physical universe where there are voluminous things some of us need to put into ram. Things like big projects, application servers, database servers, IDEs, and for some of those - multiple instances of them. On top of that, browsers with tabs open, productivity tools. Can't benchmark out of that.


Apple have done things to mitigate a lack of RAM (consistently using the fastest SSDs they can with a controller integrated into their custom chips and now into the M1 presumably, memory compression, using very fast RAM in the new M1, etc.) but yeah at some point you just need more room. 16GB has been enough for me for a while thankfully.


> Can't benchmark out of that.

You still have to try it out and see. I'd welcome an article detailing a dev setup where the M1 isn't suitable because of performance reasons. So far we've seen mostly praise.


Right. People are comparing Apples to Oranges.

The salient question is: how long does each machine take to execute the processes I use on a daily basis?


On the other hand “16 gigs ram is the baseline if you plan to do anything more than facebook and instagram” is something to cry about.

Programmers working on products with a billion users should realize that every kilobyte they save decreases world-wide memory usage by a terabyte.


It's also not true. I don't know if I'm the only person not doing fluid simulations all day on my laptop, but I don't understand where this idea comes from


> decreases world-wide memory usage by a terabyte

While true, that's not going to motivate anybody. Memory is not a very scarce resource, we're making more all the time, we can continue to make more, and it's reusable.

Moreover, memory that isn't being used is almost completely useless - the only thing it does is act as disk cache.

Better to use memory than CPU, as the latter actually consumes power (leading to climate change if not powered using renewables etc.) - although even better to use less of both.

A more effective argument would be to look at the number of users on low-end devices, and point out that, the more memory you use, the less these users are able to run your applications.

Not that I'm expecting companies like Slack to care - they design for the high-end user, and if your company forces Slack on you and you have an older device, there's nothing you can do, and they exploit that.


DRAM refresh takes power. I don’t have the numbers, but I’m not sure that, in the average smartphone or tablet of today that energy usage is negligible compared to that of the CPU.

Reason for that is that DRAM must be refreshed 24 hours a day, while the CPU is sleeping a lot of the time, even if the device is in use.


For typical dimms it's about 2-3 watts per stick. Doesn't sound like much until you max out a SuperMicro (or other server grade) motherboard with 24 dimm sockets. :)
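Back-of-envelope, taking ~2.5 W per DIMM as the midpoint of that range:

    24 DIMMs x 2.5 W = 60 W held continuously
    60 W x 8,760 h/year ≈ 526 kWh/year

just to keep the RAM refreshed, before the CPUs do any work at all.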


Right! And imagine the number of ELECTRONS they would save from moving! That's also a big number.


I have an M1 from work (which I now mainly use for private stuff...) and it does have 16gb memory. It did cost a bit less than $1700 (after taxes) I believe, although I am not sure. I know it has some upgraded gpu compared to the standard model in some form, but didn't know about the memory.

As for the device: It is neat, but not revolutionary to any degree. I do paint and model and can run blender/krita just fine, it is even quite performant. This is through emulation, I don't have native builds for arm. Maybe those have become available in the meantime, but you don't notice the emulation at all.

But it won't be the end of x86 in my opinion. Why would it?


The M1 isn't what kills x86 (if it ever does completely). ARM kills x86.

Microsoft is working on the ARM transition. ARM has good control of mobile hardware. And Apple will be only selling ARM hardware (in the form of Apple Silicon) in another 12-18 months.

M1 and Apple Silicon are just part of the trend.


The main feature for me isn't the ARM, it is that the device is passively cooled and still very performant. If that isn't possible with x86, ARM might have a chance perhaps. But 99% of my time is still spent on that platform.

I just wish I could install Linux.... If MS and Apple just provide their locked down environments, it will never be more to me than a neat device and I would still crosscompile instead of binding myself to a manufacturer.


I ran Linux for years on laptops/desktops/servers, doing OLTP software that does millions txns/day. I adopted Linux back when Solaris was the way to go for backends.

But moved to Mac (and OSX) about 8 years ago.

I don't get the "locked down" thing. On my current macs (a 2020 iMac and a 2015 MBP) I run Macports that lets me install pretty much every bit of userland software that I want. I also get the advantages of the MacOS gui environment and the availability of most "user" software.

Yes SIP and the new sandboxes lock down the MacOS part of the system, and things like VPNs (eg Wireguard) need to get a dev cert and distribution from Apple.

But the "lockdown" is very lightweight. There's nothing I can't do on this devices that I used to do on my Linux environments.

If I truly need a "native" Linux, then there are a number of VM and container environments also available.


But there is no technical reason for it being locked down. I don't want to subject myself to more of this, which would be the result if I get dependent on it. Why would I? There are only disadvantages if I don't want to sell software for the ecosystem.

I do some low level system developing and I doubt I would ever switch to MacOS for this. Higher level software? Maybe, but as I said, why give Apple any handle here and these sandboxes don't provide security for me. I will also not get a dev cert from anyone, that is just something that will never happen.


Then just turn SIP off via csrutil disable.

You can then debug anything, load unsigned kernel modules...



> Microsoft is working on the ARM transition.

And has been for years. Unlike Apple they don’t have the ability (or the courage) to tell their partners to get on the train or get out of their way.

It’s not obvious they’ll ever be able to leave x86 behind.


That ARM is only now on the cusp of finally dethroning x86 is, for me, really amazing - Intel has kept it dominant for so long (3+ decades).

Partially it's amazing they remained on top, and also partially that they never managed to cannibalize their own success (like Apple and later Microsoft have done).


Don't forget the entry-level 256GB SSD. With 1TB drives around 90€ at retail.


Apple always nickel-and-dimes storage. Overpriced storage is how they make probably 5-10% of their hardware profits.

iPhones still have 64GB as their base storage in 2021...


The default is 128GB..


Only on the pro's. The base iPhones still start at 64GB. But that's a heck of an improvement from a couple years back when the entry was 16 GB!



I mean, that price tag is being swung in a kinda silly way. The Mini with 16GB of RAM is like $899. At that price the CPU has 8 cores but 50% greater single-core performance than basically anything in the price range. In lightly threaded or thread-racey tasks the M1 will outperform any chip on the market for most people working on their machine. In my case, we build, run, and run tests on a very large TypeScript app in half the time of any single Intel chip we have ever tested, including a desktop i9. I'm not defending the pricing of their RAM or storage upgrades, those are nuts. But the pricing you are comparing with here is for machines that include a LOT of gravy in the build sheet - such as a 4.5K display or TBs of PCIe SSD - picked to make your point.

Did I mention that the case I was describing was the very base MacBook Air with 8GB of RAM, cross-compared with a 64GB i9 machine?

You are kinda making a blanket statement that is a little unfaithful to the intent of any of these machines. None of these machines are intended as 'pro' machines, even the 'Macbook Pro' is just the low end model and it is three times as powerful as the outgoing model. Sure you can spec one to the moon to make a price point, but that's the story of anything.


I wonder if Apple's strategy is to push macOS developers to optimize for certain SOC cadences, rather than having to traditionally target every system configuration possible. Therefore they opted to only have limited SKUs of the M1: only differentiated by RAM and GPU cores.

Analogy is gaming consoles - the hardware is fixed for X years so game developers know exactly what to target and make better looking & performing games over the cycle of that console. Compared to, say, Windows 10 that has to run on an almost infinite number of hardware configurations.

This is actually similar to their approach to iOS and iPhones - iOS versions span multiple iPhone cycles, but there are limits. For example, iOS 13 was supported on the iPhone 6S through the iPhone 11, or the A9 through the A13 SOCs. There's probably some correlation between good performance and the tight SOC-to-OS coupling.

We'll likely see similar, limited configurations for future Apple M* SOCs.


> This in the time when 16 gigs ram is the baseline if you plan to do anything more than facebook and instagram.

Facebook and Instagram is probably the upper bound on memory-hungry apps. (The average webpage consumes more memory nowadays than gcc -O3.)


The 8 gigabyte machine is fine for 90% of people. I've recommended just getting that platform to a number of people, and none of them have had issues.

I've got an 8 Gigabyte MBAir, and it never stutters. Meanwhile, on my 16 GByte Dell XPS (Ubuntu 20.04) - I routinely live in fear of exceeding my chrome tab quota because I know it will bring the system to a crashing halt. Somewhere around 45 is the point it all comes down.

Meanwhile, I don't even think about how many safari tabs I have open (hundred+) - and 8-10 applications open at the same time.

Different operating system has different models of swapping and degrading performance. Apple has nailed it.


Safari vs. Chrome (or Firefox) is one part of why I can get so much more mileage out of a Mac's memory than I can under Linux or Windows. It's wildly more respectful of system resources, across the board—memory, processor cycles, and battery.

The rest of their software's mostly like that, too, with the possible exception of Xcode. I often forget Preview with a half-dozen PDFs is open, and Pages with a document, and Numbers. I wouldn't forget a single tab of Google Docs under Chrome, left open for weeks, because it would make itself known in system responsiveness and battery use. Ditto MS Office. Mac Terminal's got notably lower input latency than most other terminal emulators, especially featureful ones.

Apple, seemingly almost uniquely among major software vendors, gives a shit about performance, and it shows. I really, really wish they had competitors, but in so many ways they're the only ones doing what they do, to the point that they can make periodic serious blunders and I'm left going, "yeah, but what else am I gonna buy, that won't have a 'normal' that's overall-worse than Apple's 'broken'?"


Went from 64GB to 8GB. Didn't notice a difference - in fact it is better. I kind of get your mindset, but they have done some fuckery somewhere to get it to work so well.


My M1 Air with 16GB feels way more responsive and behaves better under load than my 64GB hex-core beast of a desktop I built late last year.

It's like the old days when I could put BeOS on a Pentium with two-digit MB of memory and it'd feel as good and responsive in actual use as a newer desktop clocked 6-8x higher and with 8x the memory, running Windows or Linux.


Looking at core count and ram size isn't how to measure performance. You test and measure. But, the cores are just better on the m1 as has been shown through tons of benchmarking. Also, if the interconnects and busses are smarter (better cache coherency impl) then you'll be better in multithreading cases. Also because it's an SoC it has a way better implementation for the ddr interface (that's what they claim at least). I wouldn't be surprised if the latency hit for accessing RAM was way better than on x86 where you have to cross a long bus. For the workloads that people run on iMacs, light photoshop, some video editing, etc. We already know the new macbooks are amazing at it and this machine will have the same hardware, but with a dope screen (and cool colors. I want yellow >.>).


That's the point: price/value does not fit for the majority of Apple devices... But the M1 is amazing, I have to acknowledge that.

I'll stay with AMD and wait for 5nm...


> 16 gigs ram is the baseline if you plan to do anything more than facebook and instagram

Just stop.


I bought a new machine last year that came with 8GB and ordered a 16GB upgrade for it which didn't arrive for a few weeks. I literally just now remembered I've never got round to installing it.


Space and specs are becoming more and more meaningless to me, as all programming work is offloaded to the cloud anyway.

Yet I have a gigantic laptop that does nothing but produce heat from opening python scripts.


Yup, I have 16GB of RAM on a 2011 MacBook.


This laptop does Visual C++ and C# development in Web, UWP and Win32 just fine with 8 GB.

Ah, and I also do stuff in Java/Eclipse/NetBeans, D and Rust equally fine.


I don't disagree that 8GiB for a $1700 machine is absurd; however, I feel compelled to point out that macOS is significantly more aggressive with memory compression than other operating systems, and these machines come with blazingly fast SSDs, allowing devices to get away with less available memory.


A blazingly fast SSD is excruciatingly slow RAM.


In-memory compression is the most meaningful part of what the parent comment said. And it helps tremendously on other systems as well; for example, zram is quite underhyped on Linux.


zram is a game changer. I believed I needed to upgrade from 16GB to 32GB; with zram and a decent CPU, no I don't. 97% full memory + 20GB of zram swap and no noticeable problems.


The M1 SoC interconnects and its SSD are what make swapping to disk not even feel slow.


I am trying to figure out where I said anything on the contrary.


You mentioned "they come with blazingly fast SSDs" as part of what is "allowing devices to get away with less available memory"


I did indeed. I never said that SSDs are a replacement for RAM; I just said that having access to 2.5GiB/s read/write allows you to offload many things onto the SSD. Yes, thrashing is an issue, but it is much less significant at those speeds than with spinning disks.

I am not defending apple in any way whatsoever, I even mentioned in the first sentence of the comment. I stated that they are able to do that because they compensate with compression and SSDs, applications in the background can be thrown in the swap and the user will rarely notice.
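For a sense of scale (ballpark figures from memory, not Apple-specific measurements):

    bandwidth: ~2.5 GB/s for the SSD vs. roughly 50 GB/s theoretical for dual-channel DDR4-3200 (~20x)
    latency: ~100 µs for a NAND read vs. ~100 ns for a DRAM access (~1000x)

So fast NVMe swap hides light overcommit pretty well, but the moment you're actually faulting pages in a hot loop you're still orders of magnitude away from RAM.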


2.5GB/s isn't really all that extraordinary speed anymore.


The part you're skipping over is the memory compression, which is much more important than you think it is.
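To illustrate the idea only - this is Apple's userspace Compression framework, not the kernel's actual compressor, and the all-zero "pages" are made up - the point is that an idle app's memory is often highly redundant and shrinks a lot before it ever has to touch the SSD:

    import Compression

    // Fake "pages": repetitive, like much of an idle app's heap.
    let pages = [UInt8](repeating: 0, count: 16 * 4096)
    var dst = [UInt8](repeating: 0, count: pages.count)

    // LZ4 is illustrative; the exact algorithm isn't the point, the ratio is.
    let compressedSize = compression_encode_buffer(&dst, dst.count,
                                                   pages, pages.count,
                                                   nil, COMPRESSION_LZ4)
    print("64 KiB of idle pages -> \(compressedSize) bytes")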


I think they are saying that you will start swapping to disk with that little ram.


I don't even bother reasoning with apple fans anymore. It's a losing game.


It's not even an argument, their comment is a fact that does not change nor contribute anything.


> and they come with blazingly fast SSDs...

Actually, they're somewhat slow compared to the competition.

A 500GB M1 SSD gives you about 3.1GB/s of write and 2.75GB/s of read performance [1]. In comparison - a $219 Samsung 980 Pro benchmarks at 4.2GB/s of write and 5.2GB/s of read performance [2]. Both on Disk Test from Black Magic.

[1]: https://www.reddit.com/r/mac/comments/k2lhi8/ssd_speedtest_o...

[2]: https://www.servethehome.com/samsung-980-pro-500gb-pcie-gen4....


Oh wow, Gen4 NVMes are really fast. Thanks for the link!


> blazingly fast SSDs

So like, 2 times as fast as any other SSD? No? The comment reads like marketing spiel.



What makes you think it's random RW?

Fastest SSDs barely reach 200MiB/s on 4k block random writes (reads are usually slower, likely because they can't be cached if they are truly random):

- 124MB/s for Samsung 980 Pro (https://www.servethehome.com/samsung-980-pro-500gb-pcie-gen4...)

- 211MB/s for Sabrent Rocket 4 Plus (https://www.servethehome.com/sabrent-rocket-4-plus-2tb-revie...)

- 171MB/s for Intel 670p (https://www.servethehome.com/intel-670p-2tb-m-2-nvme-ssd-rev...)

32/64 queue depths generally do not apply to desktop computing since you are unlikely to read that many file streams simultaneously (or rather, your one program that you care about right now might be loading only a few files at once, which is what you'd perceive as the SSD speed: the fact that OS services might be accessing other stuff in the background does not help much there).


BlackMagic tests video workloads using sequential reads/writes. 2.2/2.7 GiB/s is pretty average for a modern NVMe SSD. Even consumer-level Samsung NVMe SSDs are faster.


> This in the time when 16 gigs ram is the baseline if you plan to do anything more than facebook and instagram.

You'd be shocked looking at any telemetry data, which says that the average amount of RAM is 8GB and average RAM use is ~50% or so.


I'd believe that - because the average user likely doesn't do anything more than Facebook, Instagram, general browsing, word processing etc.

I'd also believe it, since the number of laptops available with only 8GB of RAM is ridiculous - I swear it's become more prevalent in the past few years, presumably because of RAM prices.


RAM use should always be 100% for reasonable definitions of "use". There's always more disk pages to be cached unless the machine has just booted. If they're not all being used, it means something temporarily flushed all the pages which is actually a bad thing.


I think it’s reasonable to assume that OP was talking about user space applications, not kernel disk cache.


Adding up the memory use of apps isn’t actually a good way to calculate RAM needs though. It undercounts (because file caching matters too) and overcounts (because many things are fine swapped out).


I am not sure what the end-game for releasing those under-powered machines is, tbh. I am a bit more extreme in that I am not buying any machine below 32GB given RAM is no longer upgradable, but even if my mom came and asked me about those iMacs, I'd tell her to not buy.

It's a shame that Apple is so great at greenwashing and at the same time spits on every way they could help create truly sustainable products by just dropping a few percent of their bottom line.


I think that's a bit dramatic. I'm using a 16GB M1 Macbook Pro as my daily driver doing standard, boring professional work (lots of email, tabs open, PDF manipulation, Word, Excel, etc). It performs as well if not better than the 2018 Macbook Pro it replaced with 32 GB of RAM and an i7. And it cost less than that one.

The iMac will perform comparably (probably a bit better due to better thermals). My point is that these are not bad machines and I don't see why you'd steer someone away from them. However, the price is still quite high compared to other manufacturers and options. But that's always been the case.


> I am not sure what the end-game for releasing those under-powered machines is, tbh

The lowest and highest tiers exist only to shift consumers towards the middle tier, where Apple profits the most and where there is less value for the consumer. It is a marketing strategy invented by Apple IIRC (it's been a long time since my marketing exam).


Of course it wasn’t invented by Apple, that would be ridiculous.

It was actually invented by Goldilocks.


The strategy is called ‘anchoring’

I had no idea it was invented by Apple. That feels a little too recent...


I know, but AFAIK those new iMacs ALL feature just 8GB of RAM? There isn't really a middle tier; that's what makes it so weird. You'd expect one machine with a top SSD and 16GB (and space grey, lol).

Probably going to be the next iteration of the 27" iMac, spanning from 8GB/256GB (LOL) to 16 GB/1TB or something.


You can upgrade them to 16GB. Having been developing heavy applications on an M1 Mac mini since they were released, they're definitely adequate for most users. Obviously some people need ~100 cores and ~1TB of RAM (myself included for other parts of my job), but that segment is always going to be better served by companies other than Apple.


> AFAIK those new iMacs ALL feature just 8GB of RAM?

No. But people are confused because the RAM is (annoyingly though not entirely nonsensically) tied to the SSD.


I don't think it is. They usually offer a couple of configurations at different prices, but you can get a BTO machine with just the upgrades you want. The SSD+RAM are tied together in the new M1 iPad.


Ah, you're right, I'm the one who confused the two and was remembering the weird setup of the iPad. The iMac's Tech Specs page does just state "8GB (configurable to 16GB)" for the entire range (whereas it's quite clear that the "low end" iMac with only 7 GPU cores can only be BTO'd to a 1TB SSD, versus 2TB for the 8-core models).


Apple has WAY more statistics about what kind of computers people buy and use.

I'm pretty sure they're not nickel-and-diming people with 8GB RAM M1 machines; someone somewhere has calculated that as being enough for most users.

HN hardly is the place to compare what is "min spec" :)

(As I'm writing this, my i7 MBP sounds like a jet taking off, CPU capped near 100%. Normal day at the office.)


Reading this discussion thread makes one psychological quirk clear. People, even well-informed professionals, are rarely comparing Apple to the best available alternative on the market (arguably Ryzen U-line processors). They're comparing 2021 Apple products to their last experience with an alternative, which for many people stuck in Apple's ecosystem could very well be from 2010.

So what's Apple's end-game? Milking the (predominantly US) professional class in perpetuity. It's not like they're going to switch, especially if they perceive the alternative to be sluggish, heavy bricks whose batteries barely last 2 hours. For sales, perception is more important than reality.


> Reading this discussion thread makes one psychological quirk clear.

Is that so clear? It sounds more like a huge assumption to me. It’s not like having an Apple laptop makes it impossible to see how the competition is doing. Doubly so for people with friends in tech.


The M1 processor is a direct result of the death of Moore's law. It's an amazing processor, but a sad sign of things to come.

The performance gains from Moore's law have typically come from shrinking die size. That has ended, you can't juice more performance from general purpose CPUs. If general purpose processors no longer advance quickly enough, the only way to get performance gains is to build custom chips for common specific tasks. That's what we're seeing now with the M1. The M1 buys us a few more years of exponential-appearing performance gains, but it's a one-trick pony. You can turn code into an ASIC once, but after that, your performance is at the mercy of the foundry and physics.

The death of Moore's law has many consequences, the rise of ASICs and custom co-processor chips is just one of them.


> The M1 processor is a direct result of the death of Moore's law.

I know most people misunderstand Moore's law, but this is HN, so I expect better:

https://en.wikipedia.org/wiki/Moore%27s_law#/media/File:Moor...

Moore's law is quite alive and showing no signs of problems.

> The performance gains from Moore's law have typically come from shrinking die size.

Moore's Law is about number of transistors. Not about their size, and not about performance.

And it's ESPECIALLY not about linear core performance.

> That has ended, you can't juice more performance from general purpose CPUs.

You don't need to, they're fast enough. Performance is expanding in other areas like GPU and ML.

> The death of Moore's law has many consequences, the rise of ASICs and custom co-processor chips is just one of them.

No, Moore's law is the very thing supporting them... You need extra transistors for those co-processors.


> You don't need to, they're fast enough.

You had me with everything except this. Any time someone claims the current state of computing hardware is "enough" I'm reminded of that fake 640K quote. There is no indication we are running out of applications for more compute power.


I'm not saying we're running out of applications for more compute power.

We're specifically running out of reasons to want faster linear (per core) general-purpose performance (in fact I'd say this happened some time ago). Everything else we get from here on in terms of smaller process etc. is just a bonus. We don't fundamentally need it to keep evolving our hardware for our ever-growing computation needs.

And that's because as our problems multiply and grow, parallel execution and heterogenous cores tend to solve our problems much more efficiently on the watt, than asking for "more of the same, but faster".

There's this Ford quote "if I had asked what people want, they'd have said faster horses". Fake or not, it reflects our tendency to stare at the wrong variables and miss the forest for the trees. The industry is full of languages utilizing parallel/heterogenous execution and you don't need a PhD to use one anymore.

CPUs are effectively turning into "controllers" that prepare command queues for various other types of processing units. As we keep evolving our GPU/ML/etc. processing units, CPUs will have less to do, not more. In fact, I expect CPUs will get simpler and slower as our bottlenecks move to the specialized vector units.


Production quality multiplatform software is much much harder and less fun to make for GPUs due to inferior DX, rampant driver stack bugs unique for each (gpu vendor, os) combination, sorry state of GPU programming languages, poor os integration, endemic security problems (eg memory safety not even recognised as a problem yet in gpu languages), fragmentation of proprietary sw stacks and APIs, etc. Creation of performance oriented software is often bottlenecked by sw engineering complexity, effort and cost, and targeting GPUs multiplies these problems.

tldr; we are not running out of reasons for wanting faster CPUs. GPUs are a crap, faustian bargain substitute for them.


https://scicomp.stackexchange.com/a/1395

Explains it better than I could in a HN reply.

Assuming our desired work to be done continues to grow, faster core speed* will eventually matter.

* technically, net instructions per second after prediction/compute ahead etc


Honestly that's a bit too abstract to make sense of.

As a programmer to another, I'd rather ask... what's one example of a problem we have today that needs faster linear performance than our best chips (not in a nice-to-have way but in a must-have way).

I'd rule out all casual computing like home PCs, smartphones, and so on, because honestly we've been there for years already.

Also due to decades of bias we have serialized code in our programs that doesn't have to be serial, just because that's "normal" and because it's deemed easier. Also we have a huge untapped potential of better performance by being more data-oriented. None of this requires faster hardware. It doesn't even require new hardware.

But anyway, I'm open to examples.


> I'd rule out all casual computing like home PCs, smartphones, and so on, because honestly we've been there for years already.

Casual computing can definitely be a lot better than where we are today[0][1].

The software business has moved to a place where it’s not really practical to program bare metal silicon in assembly to get screaming fast performance. We write software on several layers of abstraction, each of which consumes a ton of compute capacity.

We have resigned to live with 100ms latencies in our daily computing. This is us giving up on the idea of fast computers. It should not be confused with actually having a computer where all interactions are sub 10ms (less than 1 frame refresh period at 90fps).

[0]. https://danluu.com/input-lag/

[1]. https://www.extremetech.com/computing/261148-modern-computer...


Linear compute is the ideal solution. Parallelization is a useful tool when we run up against the limitations of linear compute, but it is not ideal. Parallelization is simply not an option for some tasks. Nine mothers cannot make a baby in a month. It also adds overhead, regardless of the context.

Take businesses for example. Businesses don't want to hire employees to get the job done. They want as few workers as possible because each one comes with overhead. There's a good reason why startups can accomplish many tasks at a fraction of what it would cost a megacorp. Hiring, management, training, HR, etc... they are all costs a business has to swallow in order to hire more employees (ie parallelize).

This is not to say parallelization is bad. Given our current technological limitations, adding more cores and embracing parallelization where possible is the most economical solution. That doesn't mean faster linear compute is merely a "nice to have".
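The textbook way to state that limit is Amdahl's law (standard formula, not something from the parent): if a fraction p of the work can be parallelized across n cores, the best-case speedup is

    S(n) = \frac{1}{(1 - p) + \frac{p}{n}}

so even with infinite cores a 90%-parallel task tops out at 10x, and that serial remainder is exactly where faster linear compute keeps mattering.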


Virtual reality, being so latency-sensitive, is always going to be hungry for faster raw serial execution. It seems like something that ought to parallelize alright (one GPU per eye!) but my understanding is that there are many non-linearities in lighting, physics, rendering passes, and so on that create bottlenecks.



On his recent Lex Fridman podcast appearance, Jim Keller speaks to exactly this mindset. He says that they've been heralding the death of Moore's law since he started and that the "one trick ponies" just keep coming. He says he doesn't doubt that they will continue.


> they've been heralding the death of Moore's law since he started and that the "one trick ponies" just keep coming. He says he doesn't doubt that they will continue.

The situation is clearly far worse than what you suggest. Back in the 1990s and early 2000s, apparent computer performance was doubling roughly every two years. Your shiny new desktop was obsolete in 24 months.

Today, we're lucky to get a 15% gain in two years. The "one-trick ponies" help narrow the "apparent performance" gap, but by definition, are implemented out of desperation. They aren't enough to keep Moore's law alive (it's already dead), and their very existence is evidence of the death of Moore's law.


Moore's law is only about the number of transistors per chip doubling every 24 months, not about the performance. Seeing that the trend is still happening, Moore's law is not dead, as so many have claimed.


But what is it good for, if it does not improve performance? For example, an increasingly large share of the transistors on a chip is unused at any given time, due to cooling issues.


It makes other workloads more economical.

And as long as there's something to gain from going smaller/denser/bigger, and as long as the cost-benefit is good, we'll have bigger chips with smaller and denser features.

Sure, cooling is a problem, but it's not like we're even seriously trying. It's still just air cooled. Maybe we'll integrate microfluidic heat-pump cooling into chips too.

And it seems there's a clear need for more and more computing. The "cloud" is growing at an enormous rate. Eventually it might make sense to make a datacenter oriented well integrated system.


It obviously does improve performance, otherwise why would people be buying newer chips? :) It doesn't mean we'll see exponential performance increases though. In specialized scenarios, like video encoding and machine learning, we do see large jumps in performance.


I thought it was about number of transistors per chip at optimal cost level per transistor.


On the contrary, I think this is fantastic news:

- It means consumers won't have to keep buying new electronic crap every couple years. Maybe we can finally get hardware that's built to be modular and maintainable.

- It means performance gains will have to come from writing better software. Devs (and more importantly, the companies that pay devs) will be forced to care about efficiency again. Maybe we can kill Electron and the monstrosity of multi-MBs of garbage JS on every site.

The sooner we bury Moore's Law and the myth of "just throw more hardware at it" the better.


If you don't just look at the failing Intel, AMD has been doing 15-20% improvements year on year.


>Today, we're lucky to get a 15% gain in two years.

The 2012 MacBook Pro 15-inch I'm typing this on is about 700 on Geekbench single-core, while the 2019 16-inch is about 950. 35% "improvement" in seven years!

M1 13-inch is 1700 on single-core, which is why I hope to upgrade once the 16-inch Apple Silicon version comes out.

>The "one-trick ponies" help narrow the "apparent performance" gap, but by definition, are implemented out of desperation.

I don't think that's right. x86 hit an apparent performance barrier in the early 2000s, with the best available CPUs being Intel Pentium 4 and AMD Thunderbird, both horribly inefficient for the performance gains they eked out; those were very much one-trick ponies created from desperation. It took a skunkworks project by Intel Israel, which miraculously turned Pentium III into Core microarchitecture, to get out of the morass. Another meaningful leap occurred when going from Core Duo to Core i, but the PC industry has been stuck with Core i for almost a decade.

We've finally smashed past this with Apple Silicon, but it is certainly not a one-trick pony; Apple could sell it to the world tomorrow and have a line of customers going out the door, just like it could have sold the A-series mobile processors to rivals. AMD Ryzen isn't quite the breakthrough Apple Silicon is, but it is good enough for those who need x86.


Apple's M1 is a good processor, but the only reason it "smashed past" previous macbook single core results is apple was using older Intel lower powered processors.

It is not twice as fast as even mobile x86 stuff, as much as people seem to want to think otherwise.


Anecdata of one, but compiling our product at work on my three machines (a 2019 intel macbook pro, a 2020 10 core intel imac and an m1 mac mini), the macbook pro is the slowest, but the imac isn’t that much faster than the mini. it’s something like:

- macbook pro: 9 minutes

- mini: 5 minutes

- imac: 4 minutes


Where the M1 really blows any other CPU away is single-threaded performance; multi-threaded performance is just normal. So it's not surprising that it's not faster than your 10-core iMac when compiling (which I assume is using 100% of every core).

In fact, given that the M1 is an 8-core CPU and your iMac has a 10-core CPU, the fact that they take 5 and 4 minutes respectively to compile seems to indicate that they're fairly similar in multi-threaded performance (and the iMac wins only because it has more cores).
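A quick sanity check on that, treating the compile as perfectly parallel (a simplification): per-core throughput is work / (cores × time), so the M1 delivers 1/(8 × 5 min) = 1/40 of the job per core-minute and the iMac 1/(10 × 4 min) = 1/40 - essentially identical per core, with the iMac winning purely on core count.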



Is this a bad thing? This seems like a great outcome for consumers, and will reduce e-waste. I look forward to a future where people see less need to upgrade year after year.


Which is why Apple is moving revenue streams also to services.


This is false, computer performance has been doubling nearly every year. See for example https://www.top500.org/statistics/perfdevel/


How is this calculated? It isn't very clear. Is this representative of individual devices or is it caused by more of the same devices?


Even then Jim Keller is using a looser definition of Moore's law - i.e. he's saying there's a lot of scaling left rather than that the scaling will continue as it did in the past.


"Moore's law" was strictly about the average cost of a transistor, not performance in general.


I get your point but...

> The M1 processor is a direct result of the death of Moore's law.

It is a bit ironic since the M1 is a 5 nm processor, currently the finest process, and I think it plays no small part in its success. A very Moore's law-esque solution.


Death of moore’s law? Hmm. Meanwhile I just got a R9 5950X and it is drastically faster than my 5 year old i7.

There must be some doubling of transistors in there, right?

Also maybe buying a 5950X at the birth of a new generation of ARM CPUs wasn’t the wisest choice.

Or maybe it is, idk.


Moore's Law as originally stated said transistor density doubled every 18-24 months. Using larger CPUs, for example, lets you have more transistors, but that has nothing to do with Moore's Law.

Clearly density has kept increasing, but the law refers to a rate of increase that we haven't been able to meet. The original 386, released in 1985, had 275,000 transistors; using the slowest interpretation we would need to be at 275,000 × 2^18 ≈ 72+ billion transistors today, or × 2^17 ≈ 36+ billion in 2019, which is close - but the chip would also need to be the size of a 386, which they aren't.

AMD Epyc Rome is 1008 mm^2 vs a 386 at 104 mm^2. The M1 is 119 mm^2, but it's only 16 billion transistors. As such, it's safe to say Moore's Law is dead.
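Running the density version of that comparison with the same figures:

    386: 275,000 / 104 mm^2 ≈ 2,600 transistors/mm^2
    M1:  16 billion / 119 mm^2 ≈ 134 million transistors/mm^2
    ratio ≈ 51,000 -> log2 ≈ 15.6 doublings over ~35 years ≈ one doubling every ~2.2 years

So density has kept improving, just a bit behind even the slow 24-month interpretation.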


Did Moore's Law take into account 3D density or was it just single-layer compactness?


It’s per wafer area. Which effectively compresses the full 3D nature of modern chips into a 2D structure.


Back in the old days you'd get that sort of improvement in 2-3 years, not 5. I used to expect at least a 4x improvement on my last machine every time I upgraded.


Yeah, I bought myself a new PC two years ago or so to replace a 5+ year old one, and the difference was... okay? If it was twice as fast (mostly for gaming) I'd already be impressed.

Whereas back when (thinking of early 90's) you'd upgrade every three years and be taking massive leaps forward. 10x increase in disk space (40 MB to 500 MB), or going from diskettes (~1.5 MB? I don't even remember) to CD's (650MB). We went from Wolfenstein to Half-Life in just six years (it felt longer).


Maybe buying a 5950X at the birth of DDR5, PCIeGen5 and TSMC's 5nm wasn't the wisest choice. Ehhhh seems like all that new stuff would still take lots of time to actually get ready, and the 5950X is the best CPU now.


Ah well at least I can now run my test suite 3 times faster compared to my 16” i9 macbook pro so I’m happy.

From 60s to 20s every run is huge for me.


I expect the new 16" / 14" to have dual M1 cpus. It would solve the number of external display issue. Also, it would bump the RAM to 32GB.

Then the next step, a new Mac Pro, would have up to 4 M1 CPU's. Sounds very sweet to me.


No way the M1 supports "dual socket" configurations. Absolutely no way a configuration like that would "combine" the GPUs and display outputs. I'd bet money on Apple releasing a larger monolithic "M1X" or whatever for the large MBPs.


Is that stock, PBO or manual OC? It's quite wattage limited at stock, you might go significantly under 20s with PBO :D


The 5950X is the last and greatest chip on the AM4 platform. I think there would be enough demand for it in the future for the price to stay high.


Moore's law is safe for at least two generations of chips, for which there are processes developed.

As we speak people are putting together 3nm(TSMC) designs, which will ship once the infrastructure is there.


Scaling will continue for at least another 10 years.


The death of Moore's law made us wonder: there is so much effort put into optimising hardware, but less emphasis on making software more efficient. Our view is that there is a lot to be done on software efficiency to mitigate the limitations in hardware progress... See the company we founded in my profile; this was one of the drivers to build it.


Software optimization depends a lot on economical reasons. Which is why it is so hard to prioritize over new features IMO.


Let’s not debate whether we really are at the end of Moore’s law (not a foregone conclusion, given that the M1 is the first CPU at 5nm)

Why do you find it sad that we now have a holistically designed system, rather than the glueing together of ever more powerful parts that desktop PCs have gotten away with for a few decades?


Can't wait for an iPhone Pro with an M1 processor that I can plug into a thunderbolt/usbc dock, run monitors, a keyboard, ethernet, and have it running MacOS when in desktop mode.

EDIT: A little context; when I am in the office, I use vscode over ssh to connect to my desktop PC at home. My desktop takes care of my language server, syntax highlighting, compilation and vscode forwards my ports and spins up terminals. All I will ever need is a low powered computer that can run my browser and tooling fast enough.


They could've done this with the iPad already, but have not shown any indication that they will do so.

I suspect Apple don't want you to use your phone as a laptop replacement, they want you to buy one of their laptops.


> have not shown any indication that they will do so.

I find it very suspicious that the new iPad has a 16GB RAM option. There's ~no use for that in iOS. Wouldn't be totally amazed if some sort of dual boot solution shows up at WWDC.


I'd love for that to be the case, although I think it's more likely they'll announce a reverse Catalyst - running macOS apps on iPadOS.


Also: how would you cool it? The passively cooled M1 MacBook Air throttles after 15 minutes of high CPU usage. So, perhaps a phone would throttle after a couple of minutes?


Even when the Air throttles, the performance hit is barely noticeable for interactive tasks. You really only see it on batch tasks where you can measure a discrete start and end times. Not saying that there isn’t throttling but it’s not that impactful in practice.


This is the utopian future I dream of, but unfortunately there is 0 economic incentive for a company like apple to create a product like this. With how powerful our phones are, they could easily be the main workstations for millions, but they're forced to remain as toys. I'm somewhat surprised google don't try to push this more.


Basically, many modern Android phones today work perfectly well paired wirelessly with a modern TV screen. Enable desktop mode, pair with a mouse and keyboard, and you have a kind of working environment: video chat, programming, text editing, office apps - it all exists.

That's actually how to watch youtube without ads, because non-Android TV has no freetube/newPipe apps.


Sadly I can't use a terminal or VSCode on Android so I can't use it as a development environment.


Termux, Andronix and (Wireless) Dex (Samsung phones only) work pretty well to bring a workable Linux desktop onto an external screen, with the phone as a separate screen for android.

You can also connect a usb-c hub if you want hdmi in and out, network etc. It's even possible to drive 3D printers directly over USB.

As an additional bonus, for Galaxy Notes and some newer S devices you can use a stylus for art, signing PDFs etc. They also support split-screen multitasking and virtual desktops, which is quite practical if you work from the terminal (emacs).

Certain phones can be very good replacements for a computer if you require the portability.


Have you tried installing[0] code-server[1] in Termux[2]? code-server is basically the VSCode app split into a backend which runs on NodeJS and a front end Web app which runs in an Android browser app and communicates with the backend running in Termux. It works pretty similar to the Electron app, but extensions don't come from Microsofts Marketplace because of licensing and usage restrictions. Most popular extensions are available though.

[0] https://dev.to/codeledger/how-to-get-visual-studio-code-to-r... [1] https://github.com/cdr/code-server [2] https://termux.com/


You can run vscode with termux and even a full X server, connecting over VNC.


I hope that browser you want to run on a “low power” computer is not Chrome


Sure, can be whatever browser. Seems like Chromium, Safari, Firefox all run perfectly fine on a MacBook Air so I have high hopes that an M1 powered "iPhone Pro" wouldn't have too much trouble running MacOS with a browser and vscode.

It's more that Apple wouldn't create a device that would remove the incentive for customer to buy other products.


And yet, for work I'd still like to see something 10x as powerful as a phone. The difference in performance between the two is quickly shrinking towards nonexistent, though.


thermals would be quite bad for performance here, but the vision is quite appealing.


Isn't basically the same vision Canonical had for Ubuntu Touch/Phone?

You carry your (main) computer with you in your pocket. If you're on the go, you use its screen. If at home, you plug it into better screens and input peripherals.


I wish they had had the firepower to develop this. I'm not exactly unhappy with Android, but I'd have bought that in an instant.


Many have had this concept, no one has actually executed well on it.


My use case is pretty fine, I think. I use vscode's ssh development feature to remote into my desktop at home. My laptop just needs to run a browser and editor window which I think an M1 powered iphone would be fine at.


The difference is the advantage Apple has with their HW+SW vertical integration. It's as simple as that.

Intel sells CPUs, so it creates the ranges of CPUs to make money. They advertise clock speed and put the higher ones on a pedestal, that is how they can charge more money. The OEMs just used that playbook and developed their own marketing stories on top of Intel's marketing. Either no one tried to differentiate or they just didn't have the power to fight it.

Apple has a lot going for it in that scenario. They never had proliferation of models and always kept the number of options to a minimum. They also don't deal with volume, so they didn't have to do 20 variations of mac mini or the iMac. They kind of did their own thing even with the intel macs. Now with their own processor they were in a position to double down and make the whole product line even more efficient.

Like the article said, they couldn't have done it if the M1 were not clearly better than the competition.


> Intel sells CPUs, so it creates the ranges of CPUs to make money. They advertise clock speed and put the higher ones on a pedestal, that is how they can charge more money.

How is any of this specific to Intel?

Apple uses M1 at different clockrates in different devices. And that's not only due to battery and heat concerns, but also because those M1 are rated to run without errors at different clockrates.

Similarly the new iMacs come with two types of GPU: 7 core (cheap), and 8 core (more expensive). The 7-core ones had one core disabled because it was defective.

What Intel does reflects the simple reality of producing microchips. Some units turn out better than others, so you sell them at different prices. It's the same for AMD. It's the same for Apple.

> Apple has a lot going for it in that scenario. They never had proliferation of models and always kept the number of options to a minimum.

Yet.


> How is any of this specific to Intel?

As of right now, there are 6 different Intel processor brands and close to 100 different SKUs.

Source: https://www.intel.com/content/www/us/en/products/details/pro...

Currently, there is one M1.


I just explained above there's no single M1. They vary by core count, and by clockrate. And probably more.

The only thing that's "one" about M1 is the brand. It's one brand. It's easy to have a clean brand, when you don't sell naked CPUs by themselves. "Oh everything we have has M1". It doesn't even matter if this is accurate or not, we realize that, right?

Are we going to fault Intel for having hundreds of SKUs because they sell chips alone, while Apple sells them in computers? I hope not, that'd be silly.

Are we also going to fault Intel for being in the PC CPU business for decades, while Apple has been at it for only several months? I hope not, that'd also be silly.

Finally, are we going to fault Intel for targeting cheap office machines, and high-end data centers, and hardcore gamers, and scientific uses, and many other customers that Apple isn't even trying to have? I hope not, same reason. :-)


> It's easy to have a clean brand

I think vast evidence over the past few decades has proven this is actually not the case, especially in tech.

> , when you don't sell naked CPUs by themselves.

This is Intel's choice of a hill to die on. They drove everyone else out of the market, then failed to realize that the market itself is collapsing with the rise of cloud computing and ISA agnosticism, and haven't evolved with the times.


I thought you were making a claim that Apple would eventually trend like Intel with hundreds of SKUs. I don't see Apple complicating its CPU line like that because it has no need to: the M1 works in a variety of scenarios, which is what the OP was pointing out.


Apple has no need to complicate its CPU line like that because it doesn't sell CPUs. It sells the whole banana.


x86 is dead - first in consumer, then in cloud.

It is hard for me to see how this ends any other way. The creative class (us) will quickly have largely all ARM computers within 4 years.

It's not hard to see from there how software will become even more optimized for ARM variants than x86, and how the scale of both mobile and consumer computing will slowly push x86 out of the datacenter as old software that relies on x86 is retired over the next decade.

People won't want to develop on x86 and deploy to ARM. ARM is more power efficient which is important in the data center too. We already scale by the core in the cloud, so why not just heap a few more cheap cores on if we need more to match x86 (which right now looks like we might not).

Tell me how I am wrong.


Are there examples of Arm designs OTHER than the M1 which are suitable for consumers? Yes, the M1 is a remarkable product and it will certainly make inroads against x86 on the desktop, but it is from a single company. Will Apple M1 (2, 3 etc) replace all x86 devices in a decade? That's hard to swallow. Now, if we see another Arm product released that ALSO kicks x86 butt, from another player, then maybe I'd believe a change is happening.


Qualcomm is also entering the laptop segment with its 8cx platform [1]. While not a serious contender to M1 yet, it is comparable to an Intel i5 [2].

Samsung is also planning to release an Exynos laptop [3].

[1] https://www.qualcomm.com/products/snapdragon-8cx-gen-2-5g-co...

[2] https://www.digitaltrends.com/computing/snapdragon-8cx-vs-co...

[3] https://www.engadget.com/samsung-exynos-amd-2200-windows-10-...


Qualcomm are terrible, though; rather than doing an M1 and competing on performance, they rely on cornering the mobile market with a patent portfolio.

They also have (fortunately) zero control over the downstream software, so they can't do Apple-style vertically integrated improvements.


So what? The Microsoft SQ1 has been there for a while [1]. Without software support they are just fancy stones that can't run desktop software.

1: https://en.wikipedia.org/wiki/Surface_Pro_X


This is a very late reply, but Windows 10 has its own x64-on-ARM emulator [1]. It's not as production-ready as Rosetta, but we'll see how it goes.

https://blogs.windows.com/windows-insider/2020/12/10/introdu...


Qualcomm has tried so many times that it is actually a strong argument against ARM dominance: if Apple is the only one who can make ARM work, while there are two companies that can make x86 work, then the answer to which architecture is better is pretty obvious.


If you are not using an Apple phone, your smartphone likely uses an ARM-licensed chip, and Chromebooks/Windows RT devices ran ARM cores.

The only thing holding back wider adoption is legacy x86 software.


The smartphone and Chromebook ARM processors are nowhere near the M1 or x86 CPUs in terms of performance. You need M1-level performance in these devices outside of Apple.


Gaming is still x86 territory, and that includes 2 of the 3 main competing gaming consoles, isn't it? Hard to think of a PS5 as legacy stuff.


Is not the majority of gaming occurring on mobile ARM devices, not dedicated consoles or rigs?

“Mobile gaming has fast become the largest gaming market in the world with industry revenue expected to hit $76.7 Billion by the end of 2020 and 2.2 million mobile gamers worldwide. It’s become so popular that 72.3% of mobile users in the US are also mobile phone gamers. To put this into perspective when compared with the wider video game market, by 2022 the global game market is set to reach $196 Billion, and the mobile gaming market will account for $95.4 Billion of that alone.”

https://www.wepc.com/statistics/mobile-gaming/


>Is not the majority of gaming occurring on mobile ARM devices, not dedicated consoles or rigs?

Yes, the "I'm taking a 5 minute toilet break" gaming or the "I'm sitting in a bus and I have nothing else to do" gaming.

It's making money off of whales who like that there is f2p plankton to gobble up.

I played a bunch of mobile games in 2014 and they were all trash or cash grabs. Often with grinds worse than MMORPGs.


That's only because no one other than Apple has put up a serious competition in ARM CPU space.

Earlier consoles used a boatload of different CPU and GPU architectures (PowerPC, SuperH, IBM Cell, other proprietary stuff, x86), nowadays consoles have converged to essentially a locked down, glorified x86 PC on one side (Xbox One, Xbox S/X, PS4, PS5) and ruggedized ARM-based tablets (Nintendo Switch) on the other side.

On the mobile gaming space, ARM already has achieved utter dominance with Switch + iPhone + Android... all it needs for ARM to conquer the console market is for someone to tape out an extremely powerful chip and actually sell it - Apple won't.


Is the dominance of x86 due to any inherent properties of x86, or historical legacy?

I think the xbox-s/x and ps5's choice of CPU vendor was driven more by business factors and the integrated graphics, than the virtues of x86 per se. The Switch does pretty well despite the less powerful CPU in the tegra.


Also Raspberry Pi’s are ARM.

ARM Macs run x86 software great (my experience) and ARM Windows machines run x86 software adequately (I have read). Don’t know how well QEmu works on Linux - haven’t tried it but know it’s available.


Apple's Rosetta 2 that allows x86 code to run on M1 is a different beast than QEmu though. It translates x86 machine code to ARM code. That's why it is much faster than VM solutions.


QEmu also translates x86 machine code to ARM code, or vice versa, or many other combinations. It's optimised more for portability and ease of adding translation support for new architectures than for raw performance, but it's a really nifty bit of engineering.
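For intuition only, here is a toy sketch in C of why translate-once-and-cache tends to beat re-decoding on every execution. The invented "guest" opcodes and host helper functions below are purely illustrative assumptions; neither Rosetta 2 nor QEmu works at this level of simplicity (both emit real native code rather than function-pointer lists), but the decode-once-then-reuse idea is the same.

    /* Toy sketch: interpret vs. translate-and-cache. The "guest ISA" here is
     * invented for illustration and has nothing to do with real x86 or ARM. */
    #include <stdio.h>
    #include <stdlib.h>

    enum { OP_ADD, OP_PRINT, OP_HALT };            /* made-up guest opcodes    */
    typedef struct { int op, arg; } GuestInsn;     /* fixed-width "guest" code */

    /* Interpretation: decode every instruction on every execution. */
    static void interpret(const GuestInsn *code, long *acc) {
        for (;;) {
            switch (code->op) {
            case OP_ADD:   *acc += code->arg; code++; break;
            case OP_PRINT: printf("%ld\n", *acc); code++; break;
            default:       return;                 /* OP_HALT */
            }
        }
    }

    /* Translation: decode once, cache a list of host operations, reuse it. */
    typedef void (*HostOp)(long *acc, int arg);
    static void host_add(long *acc, int arg)   { *acc += arg; }
    static void host_print(long *acc, int arg) { (void)arg; printf("%ld\n", *acc); }

    typedef struct { HostOp fn; int arg; } Translated;

    static Translated *translate(const GuestInsn *code, size_t *n_out) {
        size_t n = 0;
        while (code[n].op != OP_HALT) n++;
        Translated *t = malloc(n * sizeof *t);
        for (size_t i = 0; i < n; i++) {
            t[i].arg = code[i].arg;
            t[i].fn  = (code[i].op == OP_ADD) ? host_add : host_print;
        }
        *n_out = n;
        return t;                                  /* cache this and reuse it */
    }

    int main(void) {
        const GuestInsn prog[] = { {OP_ADD, 2}, {OP_ADD, 3}, {OP_PRINT, 0}, {OP_HALT, 0} };
        long acc = 0;
        interpret(prog, &acc);                     /* prints 5, decoding as it goes */

        size_t n;
        Translated *t = translate(prog, &n);       /* decode cost paid once */
        acc = 0;
        for (size_t i = 0; i < n; i++)
            t[i].fn(&acc, t[i].arg);               /* prints 5, no re-decoding */
        free(t);
        return 0;
    }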


Lots of software has been ported to the ARM architecture in the last half year, so the clock has started ticking for great ARM-based Windows laptops. But it's clear that ARM's memory ordering has won.


Creative class is a lot more than programmers. Look at boutique system integrators like Puget Systems... do you see Apple/ARM Hardware? Heck no you don't! These guys sell hardware to a LOT of the companies that comprise the credits in movie production, game production, laboratories, university science departments, ML researchers, big energy, and soooooo much more.

These guys are not rewriting their software stack for thin-chassis, thermally limited, and non-repairable hardware. Raytheon doesn't buy Apple/ARM hardware for their simulations, design and development. Does Boeing, Airbus, Ford or Caterpillar run on Apple/ARM hardware? Are these companies just chomping at the bit to ditch their legacy stacks? I don't think so.


> Creative class is a lot more than programmers.

I'm not sure anyone thinks programmers when they think creative class. That's like saying the professional athlete class is a lot more than just Pokemon Go players...


Isn't creative class just people who make things without consuming raw materials? I think it includes consultants, "knowledge workers", etc. in its original formulation, not just artists.


Yes, that's very much the more artistic side, aka graphic designers; it smacks of doing an easy "-ology" degree or even a "PPE".

The sort who work in traditional professional jobs and who shudder at the thought of the "boffins" and greasy "engineers".

Recall yesterday's conversation about European pay rates for developers.


The parent comment didn’t say “we’ll all be using Apple”

He said ARM.


I edited my comment as I think in the context of the larger discussion, ARM and Apple are basically interchangeable.


I wholeheartedly disagree. Apple is probably the "first, best" case of ARM, but there's absolutely nothing stopping anyone from making a similar investment in ARM hardware. Indeed, Amazon is already doing so with Graviton, and we're seeing similar improvements in both raw speed and in perf/watt like what we're seeing with M1 chips vs. Intel[0]. And that's one of the best parts of ARM -- the design itself is available for licensing, unlike x86, so anyone can pony up the cash and build up their own customized chips that maintain compatibility with base ARM code (maybe needing a recompile to take advantage of new hardware features).

[0] https://dev.to/dnx/arm-vs-intel-a-real-world-comparison-usin...


ARM, not Apple. They are not the same thing.


Right now they pretty much are.


Raytheon etc don't buy them yet.

But the M1 is already a huge hit in the music and design community, which is pro-performance oriented. If subsequent Mnx chips provide enough of a speed bump they're going to destroy the high end Xeon market.


You've named two industries where Apple has historically been really popular, mostly because of its software rather than for any particular performance reason. They'd all buy the latest Apple product whatever was inside, because that's the only one Apple is going to support going forward. The idea of real high-performance computing moving to ARM is entirely dependent on there being suitably good software support, which currently does not exist (at least in my industry there is nothing yet and the vendors move very slowly). This "destruction" might eventually come but it won't be any time soon, and Intel will of course react in the meantime.


I agree with you in general, but disagree on the time scale. Particularly:

>The creative class (us) will quickly have largely all ARM computers within 4 years.

I think good options might exist in 4 years, but they will still come with trade-offs, and legacy x86-64 platforms will still have their place. This could be something as niche as the maturity of IOMMU implementations in x86 workstations, but stack up a bunch of niches and I find it hard to see "largely all" in four years. Fifty-fifty in four years, "largely all" in ten for "the creative class". Then there is a long tail of industries that may end up legacy-bound to x86 for decades to come.

Nothing is guaranteed. If starting in 2022 we were to see another 3-year performance plateau, a la 2016-2019, then anyone who bought into a mid- or high-end x86 platform today may not feel compelled to upgrade in just four years. Even if the ARM options exist, the gap might not be compelling enough in four years' time. I'm not saying that a plateau is likely (the competitive landscape suggests the opposite), but I don't have a crystal ball, so I won't discount it either.

Again I want to emphasize that I mostly agree, but I wouldn't bet any money on sweeping changes on such an aggressive timeline.


You overestimate the size and relevance of your peer group and you underestimate the inertia that will impede this migration.


"creative class" :-)

The M1 is cool, but for heavy-duty work it still has a number of issues: memory, multiple displays.

Remember, there's no such thing as too much memory, IPC, or graphics performance - there is sadly often a lack of budget.


> People won't want to develop on x86 and deploy to ARM.

It's still too early to say if people want to develop on ARM to deploy on x86.

The 90's with all their "better than x86" chips couldn't beat the fact that people want to develop on the same architecture to which they deploy. Back then that meant displacing all these other chips from the server side because there was no option but x86 for us mortals to have as PCs.

This dearth of alternatives on the server side took out their respective offerings on the high-end/niche workstation market as well. The PowerPC was one of these casualties, let's not forget about that.

Things are even worse today. The x86 is dominant in both ends and has no real alternatives: ARM server chips are weak and the inertia of building on x86 and deploying on x86 is too strong, and ARM desktop chips are also weak, except for a single luxury brand (Apple, which incidentally cares very little for any developers besides the ones that develop for their own ecosystem).


> ARM server chips are weak

For AWS cloud customers, I don't think this is true anymore. Graviton2 is quite capable with better performance in some instances for less money. I've already started moving some Java services to Graviton2 for the cost savings.

AWS is positioning the Graviton2 in such a way that everyone using AWS will end up on them if they can.


Only if I can buy a desktop/laptop with a powerful ARM processor, plug in a bootable USB drive, and install an operating system (ARM version) of my choice with all drivers etc. working fine.


I expect this will be the case with Linux on the M1 in a year or two, given the pace of marcan's work. Windows depends on Microsoft's willingness, of course.


That's the thing: reverse-engineered drivers are not the solution, and I don't see Apple providing drivers or any documentation of the M1 SoC for other operating systems. We need an M1 equivalent from vendors that are NOT into "vertical integration" like Apple.


That's an argument against the M1 and ARM dominance.


Ok, it's time for the emulation anxiety meme (based on EV range anxiety): will this system emulate this extremely performance-intensive x86 game at ultra settings with 4K graphics at 144 Hz? If not, then I will not buy it.


>Tell me how I am wrong.

It's already difficult enough to get Windows games to run on Linux; getting them to run on ARM with good performance isn't going to happen, especially when every ARM vendor insists on using an integrated GPU.


If your CI system builds for x86 and ARM from the first build, it will be simple to ship to both. For old games it'd be harder, but one could maybe translate from one architecture's assembly to another.


Intel and AMD aren't standing still in this.


Apple also isn’t just standing around, enjoying the M1


> as old software that relies on x86 is retired over the next decade.

seems very, very, very unlikely to me


A lot of people still rely on Windows. Maybe if Microsoft gets a better emulator going, but I imagine with the amount of legacy they support that's a lot more challenging than for Apple, who are much more trigger-happy when it comes to imposing changes on devs.

I think it'll definitely happen, especially with the web browser increasingly taking on the role of the operating system, but 4 years seems a little optimistic, even with your qualification of that statement.


waiting for the gaming console announcement


Uh, Nintendo Switch?


Something a bit more modern and powerful, perhaps?


If you mean specifically an Apple Silicon game console, that's the Apple TV.


The article makes a good point on positioning, but I'm not sure if it's due to lack of data points.

Sure, Apple seems to be using the M1 across every price segment for their products, but the M1 is also literally the first iteration of their shift to running macOS on ARM rather than the x86 architecture. This mass push mainly serves to speed up the transition.

No doubt there'll be a higher-performing SoC for Apple's Pro lineup such as the Mac Pro and MacBook Pro. History has confirmed this, since Apple developed the A*X chips specifically for the Pro lineup of iPads. The main question is, how many concurrent SoCs will Apple maintain? Just 2, as they've done for the iPhone & iPad Pro divide, or potentially more?


I believe you are 100% correct that when the M1X chips are released they will clearly differentiate the pro market. At the same time it would be impressive of Apple if the X chips capture MacBook Pro, iMac Pro and Mac Pro markets all with a single chip as well. So (as you point out) the article is only half the picture as it’s missing the pro lineup. And yet the article’s main point is true that Apple is satisfying an absurd number of products with (likely) only two processors.


I remain interested in seeing how much of Apple's lead is the process size, and how much is engineering prowess.

That is, would a more generic new ARM Neoverse on 5nm perform at roughly the same clip? I suppose AWS's Graviton 3 would be the first place to see that, or something close to it.


Apple can afford to exclude some hardware that other manufacturers need to support legacy OSs and apps, leaving more space on their SoC for more CPUs and specialized hardware.

Want to ditch OpenGL? No problem for Apple.


I'm not sure what OpenGL has to do with this?

Generally no hardware today is designed for OpenGL. They all translate it into their own instruction set. The M1 is no different as it supports OpenGL as well.


I'm not sure why you're comparing a server core to a mobile/desktop one. There are two X1-based SoCs on "5 nm" (Samsung not TSMC) and they're not very close to A14, let alone M1. https://www.anandtech.com/show/16463/snapdragon-888-vs-exyno...


The CPU microarchitecture is truly a quantum leap ahead of x86 processors. The fixed-width nature of ARM instructions means there's way more front-end bandwidth for decode, which can then feed a much larger out-of-order engine. Having memory so close is also a huge win. TBH, I don't know what wizardry they have used to get power consumption so low, but wow. 15W TDP and trouncing desktop processors pulling 12x that power!


That is the talk, yes, but we don't know how much of the actual performance improvement is simply due to the smaller process node. And the process node is independent of whether it is ARM or x86; it instead depends on manufacturers getting production slots in the chip foundries. Currently Apple is hogging all the production capacity of the smallest process that TSMC has.


TSMC themselves say shrinking from 7nm to 5nm offers either 15% better performance OR 30% lower power.

Neither of those would get x86 anywhere close to the M1 in either performance or power consumption.


Microarchitecturally, is the M1 really a leap?

In terms of decoding bandwidth I'm not sure how many instructions it can actually sustain, but it's not like it's 10x - the M1 is basically a very wide version of a tried-and-tested formula rather than a wholly new thing.


x86 decoders are massive. They are about the same size as the integer ALUs in current designs. I think it was an Anandtech interview a couple years ago where someone from AMD said that wider decoders were a no go because of the excessive power consumption relative to performance increase. I’m sure they’ve looked into this exact idea many times from many different angles.

ARM's all-32-bit instructions make the decoder trivial in comparison. Parallelizing 10 KB worth of instructions across 8 decoders means reading 32 bytes into the decoders, jumping 32 more bytes, and doing it again (yes, it's slightly more complex than that, but not by much).

x86 instructions are 1-15 bytes. How do you split the stream up to ensure minimal overlap and that one decoder isn't bottlenecking the processor? How do you speed up parsing one byte at a time? A uop cache and some very interesting parsing strategies help (there are a couple of public papers on those topics from x86 designers). They can't eliminate the waste or latency issues, though.
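To make that contrast concrete, here is an illustrative C sketch (my own simplification, not a model of any real front end): with fixed 4-byte instructions every decoder's starting offset is known up front, while with 1-15 byte instructions each starting offset depends on having decoded the previous instruction.

    /* Illustration only: the byte counts, stub functions, and "decoders" here
     * are simplified assumptions, not a description of any shipping CPU. */
    #include <stdint.h>
    #include <string.h>

    #define NUM_DECODERS 8

    static void decode_one(const uint8_t *insn) { (void)insn; }               /* decoder-slot stand-in */
    static size_t length_of(const uint8_t *insn) { return insn[0] % 15 + 1; } /* pretend 1..15 bytes   */

    /* Fixed width: the 8 start offsets (0, 4, 8, ...) require no decoding to
     * compute, so all 8 decoders can work on the same cycle, in parallel. */
    static void decode_fixed(const uint8_t *window) {
        for (int i = 0; i < NUM_DECODERS; i++)      /* conceptually concurrent */
            decode_one(window + 4 * i);
    }

    /* Variable width: instruction i+1's offset is only known after instruction
     * i has been at least partially decoded - a serial dependency chain that
     * real x86 parts attack with speculative length-finding and uop caches,
     * at a power cost. */
    static void decode_variable(const uint8_t *window) {
        size_t off = 0;
        for (int i = 0; i < NUM_DECODERS; i++) {
            decode_one(window + off);
            off += length_of(window + off);         /* next start depends on this result */
        }
    }

    int main(void) {
        uint8_t window[NUM_DECODERS * 15];
        memset(window, 3, sizeof window);           /* dummy bytes standing in for code */
        decode_fixed(window);
        decode_variable(window);
        return 0;
    }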

What is amazing to me is their efficiency despite the limitations. When you look at their massive 64-core chips and account for all the extra cache, IO, and interconnect necessary, it seems like scaling up the M1 to those levels would result in a less power-efficient chip (by 20% or so).


Wait till you see AMD put 8 cores in 15W.


That’s the thing, while we’re waiting on AMD, Apple is already half way through an M2/3


The 4800U came out in March 2020 with 8 cores, 16 threads.


The EPYC 7401P still represents the pinnacle-for-its-time value offering to me. 24 cores, single socket, on a 14nm process, launched July 2017 for $1075. Just an amazing breakthrough processor. At the time there were supposed to be X300 and A300 chipsets coming, basically just a boot BIOS, to make ultra-low-cost motherboards possible. There have been improvements since then to architecture & IPC, but overall it feels like we've been headed in reverse since then in terms of chips that get put into medium/large-ish sized chassis.

It has been remarkable what a mockery Mac has made of mobile chips, and now of desktop chips. At a way more reasonable price point.

> but the company’s decision to eschew clock speed disclosures suggests that these CPUs differ only modestly.

I forget exactly when, but the first Google I/O where Google started offering simply "Intel Core i5" or i7, without saying the model number (2017?) and without revealing speeds, was a huge jumping-the-shark moment for me. A post-competitive market, where speeds were good enough, where reputation & market presence dominated over metrics & comparable factors. I don't think chasing specific GHz & cache-size numbers &c is super rewarding or important, but it felt like the first time we were being sold an unspecified system, where obvious inquiry into what we were buying was blocked.

This is somewhat the opposite of this article: that Apple has found a good-enough CPU to sell everywhere. But I still think the real truth is that the providers, those building systems, have begun to refuse to compete. They refuse to detail what they are offering. AMD has been without competition and the new 24c chip is considerably more expensive, albeit with, yes, more IPC, but it still hurts me a little. Looking at Google no longer allowing us any idea what kind of Core i5 or whatever chip we're getting, though, they beat this article to the punch almost half a decade ago. Consumers haven't been given respect, haven't been allowed to know what wattage, what caches, what Hz their chips use for a long time now. Google started that, Google pushed post-knowable computing upon us. Apple is merely following up on this, merely delivering what Google started, at a far better price point, with far better underlying technology.


> but the company’s decision to eschew clock speed disclosures

There's one other major factor here: clock speed isn't all that useful for this new architecture full of custom-purpose cores, asymmetric cores, a new arrangement of connections between cores, etc. Actual benchmarks may be useful, but not so much a clock speed measurement (and which of those various asymmetric cores are you measuring against?).


I agree! There's a lot more to it than MHz! At the time, we also didn't have any other figures, like cache size, any power consumption/TDP figures, memory speed/bandwidth, any base or turbo clocks.

I felt like it was probably one of the first possible custom cores I'd seen, something specifically built for Chromebooks or Pixel or whatever the product was, and that this part was probably not listed on Intel ARK. But it was super distressing nonetheless: I had been denied any understanding whatsoever of what kind of core was going to be within. There's more than MHz, yes, but also not knowing process (nm), not knowing wattage, not knowing caches... Google was asking me for something unprecedented: to spend ~$1000 on a system for which I had no understanding at all of expected performance.

It felt like a dark dark dark dark day. After decades of in-depth analysis & review of every cache change, every TLB tweak, after endless in-depth review of cores, we'd entered a bold new era. Where none of what you really were buying was regarded as consumer-pertinent.

There's some "bright" spots but they are somewhat obfuscated. Lenovo's M75q gen2 with AMD 4750GE was an amazing package, regularly on sale for a very reasonable price with amazing performance[1]. But alas, it's rare that the genuine performance is known, is what is for sale. We have become a post-consumer market. The invisible hand operates at a post-consumer level, selling us on other, more illusory factors than capability. Truth vanishes. And yes, as this article says, many people just don't need to play the game in the first place, but still, the becoming invisible, the vanishing of actual competition is most dismaying.

[1] https://www.servethehome.com/lenovo-thinkcentre-m75q-gen2-ti...


What I find lacking in the article is an apt comparison with AMD's Ryzen chips.

Those are all the same chiplets, just binned differently. High performance ones go into the 5800x and 5950x. Lower performance ones into the 5600x and 5900x.

Which seems to be the same thing Apple does, with a slight naming difference: calling everything M1, instead of naming the CPUs.


At least on x86, you can install Windows / Linux / Hackintosh. With the M1 you can install HackinWindows / HackinLinux / Mac, if you get the joke.

The good part of the M1 is that it forces AMD and Intel to make better CPUs. Competition is always good. The not-so-good part is that Apple might start a trend of higher prices for CPUs.


Economies of scale. My bet is Apple will develop a cluster of M1s and call them M-power-2 to address the 1.5TB RAM workstation market you mentioned. It will be practically an array of M1s (or next gen) together. The way the M1 is used from the iPad to iMacs is genius in terms of cost reduction at scale, and for an end consumer, who doesn't care who else uses their chip, I get a good $/CPU-power bargain. Tim Apple being the supply-chain guy he is, I see him doubling down and scrambling engineers to use more M1s in an array to build a stronger core. Maybe an M1-based server rack for AWS?


ARM is not more efficient / faster because it’s “newer”.

People imply ARM is relatively new and thus could correct for the wrong turns x86 has taken.

1978 is when x86 was introduced.

1985 is when ARM was introduced.

ARM isn’t that much newer than x86.

https://en.m.wikipedia.org/wiki/X86

https://en.m.wikipedia.org/wiki/ARM_architecture


People tend to lump them all together and just call them "ARM" but the 64-bit instruction set ("AArch64") first came out in 2011 and is hugely different than classical 32-bit ARM. And the chips Apple makes these days don't even implement the 32-bit instruction set anymore.


Honest question: could it be a matter of backwards compatibility?

Intel has been piling stuff on top of old architectures in order to stay backwards compatible at each step, while Apple had the opportunity to develop their architecture from scratch? I don’t know the answer, I’m just curious.


ARM 64-bit is a cleaned-up version of the 32-bit ISA and showed up in 2011.


ARMv8 is new though. That's where the path to M1 started.


I was always taught that ARM (and the M1), being a RISC architecture, isn't as "capable" as x86 in some way, whatever that means.

I am no longer sure if that's still the case, since they seem to work just as well, if not better (energy efficiency). Of course, it's not exactly an apples-to-apples comparison since Apple upgraded so many other things, but I just didn't see any mention of the limitations of being RISC in these articles.

Could someone enlighten an average Joe who knows nothing about hardware in this respect?


You are right that CISC processors (like x86) have more capabilities, i.e. more instructions. You as the programmer get to take advantage of the "extra" very specific instructions, so overall you write fewer instructions.

Fewer instructions sounds great, but with CISC you do not know how long those instructions will take to execute. RISC has only a handful of instructions that all take 1 clock cycle (with pipelining). This makes the hardware simple and easy to optimize. The instructions you lose can just be implemented in software. That space on the chip can be used for more registers and cache, for a huge speedup. Plus nowadays, shared libraries and compilers do a lot of work for us behind the scenes as well. Having tons of instructions on the chip is only a benefit for a narrow group of users today.

We've found that for hardware it's better to reduce clock cycles per instruction rather than reduce the total number of instructions.
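A concrete, simplified illustration of the CISC/RISC split for a single read-modify-write of memory. The assembly in the comments is approximate, illustrative output, not what any particular compiler is guaranteed to emit:

    /* One line of C, two quite different instruction sequences (approximate):
     *
     *   x86-64 (CISC - memory operands allowed):
     *       addl  $1, (%rdi,%rsi,4)          ; one read-modify-write instruction
     *
     *   AArch64 (RISC - load/store architecture):
     *       ldr   w2, [x0, x1, lsl #2]       ; load
     *       add   w2, w2, #1                 ; modify
     *       str   w2, [x0, x1, lsl #2]       ; store
     *
     * Fewer instructions is not automatically faster: as a later reply in this
     * thread points out, both CPUs crack the work into similar micro-ops anyway. */
    #include <stdio.h>

    static void bump(int *counts, long i) {
        counts[i] += 1;                      /* the read-modify-write in question */
    }

    int main(void) {
        int counts[4] = {0};
        bump(counts, 2);
        printf("%d\n", counts[2]);           /* prints 1 */
        return 0;
    }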


> RISC has only a handful of instructions, that all take 1 clock cycle (with pipelining).

This isn't really the case with modern ARM. Fundamentally modern ARM and x86 CPUs are designed very similarly once you get past the instruction decoder. Both 'compile' instructions down to micro-ops that are then executed rather than executing the instruction set directly so the distinctions between the instruction sets themselves don't matter all that much past that point.

The main advantages for ARM come from the decode stage and from larger architectural differences such as relaxed memory ordering requirements.

https://retrocomputing.stackexchange.com/questions/13480/did...


So ARM nowadays is just straight-up better than x86?


For the most part I think so. The main advantages x86 has are based on code size. Many common instructions are 1 or 2 bytes so the executable size on x86 can often be smaller (and more instructions can fit in the instruction cache). I'm sure there are tons of other small differences that weigh in but I'm not well versed enough to know of them.


A paper I read a few months ago compared instruction density between a few different ISAs. Thumb was 15% denser and AArch64 was around 15% less dense compared to x86. Unfortunately, mode switching in Thumb impacts performance, which is why they dropped it.

RISC-V compressed instructions are interesting in that they offer the same compact code as Thumb, but without the switching penalty (internally, they are expanded in place to normal 32-bit instructions before execution).

If they added some dedicated instructions in place of some fused ones, that density could probably increase even more (I say probably because two 16-bit instructions can equal one 32-bit dedicated instruction in a lot of those cases).

It’ll be interesting to see what happens when they start designing high performance chips in the near future.


You have been listening to propaganda. ARM has always been better. Consider the fact that it still exists, despite Intel's predatory nature. It exists because Intel could not make a processor for the low power market that was performant. They tried. But anything they built either took too much power, or used the right amount of power but was too slow. So ARM survived in this niche. But it couldn't grow out of this niche because Intel dominates ruthlessly.

But the low-power market shows that, for a given power consumption, ARM is faster. Does that not apply everywhere? Yes, it does. And so Apple, which controls its own destiny, developed an ARM chip for laptops and desktops. It's faster and cooler and cheaper than Intel chips, because ARM has always been faster and cooler and cheaper than Intel chips.

AWS, which also controls its own destiny, has launched Graviton2. These are servers which are faster and cooler and cheaper than Intel servers, and the savings and performance are passed on to customers.

As long as Intel ruled by network effects - buy an Intel because everyone has one - build an Intel because it has the most software - their lack of value didn't matter.

There are now significant players who can ignore the network effects. The results are so stunning that many people simply refuse to believe the evidence.


Certainly seems that way. It looks like there's going to be an M1 chip for 99% of folks, which works fine for all non-CPU-pegging work (Air, 13" MBP, 24" iMac, Mini), an M1.Large for stuff that pegs the CPU (27" iMac, 16" MBP), and an M1.XL for the 0.01% of Mac Pro folks who drop 5 figures USD on computers. But I'd expect the numbers to decrease logarithmically and the prices to be multiples. M1 machines from $700 to $1700, M1.L from $2000 to $4000, and M1.XL from $6000 onwards.


> Second, Intel and AMD both benefit from a decades-old narrative that places the CPU at the center of the consumer’s device experience and enjoyment and have designed and priced their products accordingly, even if that argument is somewhat less true today than it was in earlier eras.

I would have argued that memory was far more important than CPU in quickly judging the performance of a machine, once the 7th generation Intels made dual-core obsolete. But the M1 seems to buck this trend a bit as well, given that it only has an 8GB and 16GB variant and its new unified memory model makes traditional estimates of how much memory is needed less important. Some workloads such as an in-memory database won't change, but the memory usage for GUI rendering, graphics, etc. can take advantage of much faster accesses. And, with SSDs which are now considered a must-have for any serious machine, paging to disk is far less expensive than before in any case.

On another note, the M1 iPad Pro is the first time Apple has ever officially confirmed or marketed, let alone offered a choice in, the RAM for an iOS/iPadOS device.


M1 is the fastest Apple CPU YET. I suspect in the fall they will release the M2 for MacBook Pro 15 inches.

They have also delayed releasing the 15-inch MacBook Pro with Intel on purpose, in my opinion. When they release the 15-inch MacBook Pro with the M2, they will compare it with an Intel version running a 3-year-old processor.

I don't trust the Apple benchmarks much. They are choosing what to compare and what metrics. Let's wait 3 years when the dust has settled and we'll be able to compare apples with apples.

Let's also see if Apple will be able to keep up the pace of improvement of its in-house CPU+GPU against ALL COMPETING MANUFACTURERS OF CPUs & GPUs. What if Nvidia or AMD or Intel comes out with a huge leap? Apple then won't be able to take advantage of that. In my opinion the M1 is the new PowerPC. 10-20 years from now Apple will have slow in-house hardware and we'll be going back to off-the-shelf hardware, like when Steve Jobs moved from PowerPC to Intel.


> M2 for MacBook Pro 15 inches

16" will be the size, like in the current gen. External form factor about the same size as the old 15", but smaller bezels.

If they come up with a 16" M2 with 16GB+ of RAM they'll be out of stock for MONTHS; everyone I know will be upgrading from their pre-Touch Bar MacBooks.


Yes, I found myself in exactly that situation. I got a mid-2015 15'' MacBook Pro which was falling apart. I did not trust Apple with the whole M2 transition and the abandoning of support for Windows Boot Camp.

I made myself a Hackintosh. I paid pretty much the same as for an iMac, but with everything maxed out. Still missing the GPU because of the crypto shortage, but that's a whole other story.


What’s also lacking from a marketing perspective is the “Intel Inside” campaign - which was incredibly successful for the Wintel monopoly in the 1990s and early 2000s.

Seeing the sticker or hearing this slogan used to imply a premium or cachet to the product/hardware to the average Joe or Joanne.

No longer, Intel’s brand recognition has really taken a hit in the past decade.


People underestimate AMD and lump it in with the current Intel chips when talking about the M1. Ryzen is not far behind the M1 in single-thread performance and beats it in multi-core. If Intel had not made all kinds of exclusivity deals with laptop makers, the narrative would be totally different in my opinion.


AMD have a license from ARM. Supposedly, Zen is pretty flexible. I’d love to see where they’d get just by swapping out the ISA.


Assuming the M1 performance extrapolates to desktop power/cooling, it's going to be a monster chip. If that is the case, I think Apple will not stop at having the fastest watch/phone/tablet/laptop/PC. Why not go after the datacenter rather than leave money (a ton of money) on the table?


How do you intend to extrapolate that? A seemingly much larger thermal/cooling headroom usually does not gain you that much in absolute performance. Consider the mobile/desktop Zen3 Ryzen Geekbench (S)ingle/(M)ulti scores:

- 5800U, mobile (10-25W), 1400(S), 7000(M)

- 5800, desktop (65W), 1600(S), 9000(M)

- 5800X, desktop (105W), 1700(S), 11000(M)

and

- M1, mobile (10W), 1700(S), 7000(M)

Everyone's going gaga over the fact that the passively cooled M1 can trade blows with big desktop towers. But mobile Zen3 is not that far behind those towers either, so I think much more thermal headroom buys you less than people assume.


One of the big reasons the M1 is so fast is its RAM is on-die. Latency and throughput to memory are dramatically better and you don't have to drive those long traces, saving power as well. Servers need to scale to memory sizes in the terabyte range. Apple will need a new design that uses conventional memory before they can go after the big workstation market, let alone server.


It's not on-die, arghhhh, why does this misinformation keep happening? It's on-package. It's two separate LPDDR4X chips. Latency is not dramatically better. Other current LPDDR4X laptops run the same clock speeds off-package. Maybe Apple has more aggressive timings than them, but the difference would be tiny.


Reality distortion field.


> Why not go after the datacenter

For certain applications, the migration will be challenging. It's also a slowly dying market. Amazon already has an ARM instance. I'm sure they're already tearing the M1 apart to see what they can use. It's also not Apple's core competency; enterprise sales is a different beast.

Edit: I wasn't quite clear: it's dying because businesses are migrating to the cloud, and cloud providers are already working on/supporting ARM CPUs. Google might be willing to buy M1 chips. Not servers or boards, because everything is probably custom by now, just the chips.


In what way is the datacenter market dying? Are the Googles and Facebooks and Cloudflares and Microsofts and Amazons suddenly decommissioning huge racks of computers while their cloud businesses take off? What is that cloud made of?


Maybe because every tier 1 cloud provider is starting to design its own CPU. If this is the trend there will be little left to sell them.


Can you name a cloud provider that has recently launched their own CPU?


Graviton's been out for a while.

https://www.zdnet.com/article/microsoft-is-designing-its-own...

I haven't seen anything on this from GCP, but Google builds so much in-house, I'm sure they've at least considered it.


AWS is on its second in-house ARM chip. You can launch an M6g/R6g/C6g right now. If you have workloads on the cloud, you should be benchmarking already, as they will likely be cheaper for any given workload.


Those all own their datacenters (and some can afford to design their own CPUs for them). Renting racks isn't dying yet, but it is on a downward trajectory.


Interesting marketing approach for sure. What I want to know are the economies of scale for using a single component, both at the chip fab level, the main board level, and overall product level.

Still seems like Apple's typical under-powered per price point offerings, but does this close the price/performance gap a bit, or drive higher profits at the same price points, or do they really see it as just a good marketing play?

While it sounds nice to have one chip in different machines, what is the benefit to the user who buys one machine, typically for efficiency or power? I suppose in some edge cases it might be nice to have a laptop with CAD-station power, but also able to stream movies or edit documents for a full intercontinental flight on a single battery charge?


Disrupt the hosting market, Apple. We're ready for your EC2.

Achieving a quantum power/performance leap without swinging for server market share is a massive wasted opportunity.

But Apple is Apple. I'll keep dreaming. Oh look, an AWS invoice.


Apple's focus tends to be on consumer products; I don't see them swinging for the fences and presenting an AWS-like cloud offering, at least under their own name.

Personally I'd like them to offer alternative versions of their CPU to cloud providers, I imagine that the current core configurations are not ideal for the sorts of workloads the big cloud providers need from a single processor.

But this will require serious heavy lifting and would likely require official Linux support for Apple's hardware (undoubtedly they have internal systems running bare-metal Linux on Apple Silicon); however, I don't see Apple wanting to do this in the open and in public.


What’s the advantage? I don’t think it’s likely to cost less as a service, especially directly from Apple. If you get the same performance per dollar you might as well keep using what you have.

Heat and energy savings only apply to the direct user (in this case Apple Hosting) and are unlikely to be passed on to you (since it's Apple).

We’ve already seen this with M1 computers which are simpler and less expensive but did not have a lower retail price.


They would use their scale and engineering muscle to differentiate. Free outgoing bandwidth to iCloud users. The most eco-friendly cloud. "Lambda at edge" which can compute on nearby charging iOS devices, with those users earning a little Apple Cash. "Launch in iCloud" thin-client support for resource-intensive applications, using a proprietary low-latency VNC alternative (like Parsec). I want to see them get creative.


Completely agree. I can't see them reviving the Xserve brand with their own silicon, but I could see them introducing cloud hosting services. Ideally with differentiation in something like environmental credentials (reduced, renewable power, etc.) to try and gain market share whilst maintaining margin.


They already have, though the marketing is subtle. [1][2]

[1] https://www.apple.com/icloud/

[2] https://developer.apple.com/icloud/cloudkit/


We are talking AWS competitors, not Google Drive alternative.


Besides a deliberate paucity of knobs to tweak, what exactly is the difference with AWS S3 as far as the target market (app developers) is concerned? This is a cloud hosting service in all but name.

https://developer.apple.com/icloud/icloud-drive/

https://developer.apple.com/library/archive/documentation/Da...


AWS is bringing ARM to the cloud in the form of their new Graviton instances, so in a way we already have this.

https://aws.amazon.com/ec2/graviton/


"has already brought"


It's so simple - Apple doesn't need to make money off of the M1 - they use the M1 to make money. For companies that sell CPUs instead of systems, it's the other way around.


>Apple’s gamble, with the M1, is that its custom CPU performance is now so high, at such low power consumption, that the choice of chip inside the system has become irrelevant within a given product generation.

This is clearly wrong as they are still selling Intel versions of the MacBook and MacMini. Apple makes a lot of money by offering a range of processor options. At the moment we are early in the transition but I have no doubt there will be M2, M3, etc options for most of the range.


But the point is that people do not care about the CPU in their Mac. They care about screen size, price and other stuff.

Apple is still selling Intel versions either because they are cheaper (older generations) or because the newest version does not have the M1 yet (like the 16" MacBook Pro).


That's not the only reason - the M1 is limited to 16GB which is fine for many workloads, but not for all. Plus not everything works on the M1 yet e.g. running Linux in a VM.


> This is clearly wrong as they are still selling Intel versions of the MacBook and MacMini

I can think of other reasons some customers may prefer or need to buy the Intel version. AFAIK, you cannot run virtualized x86 code (Docker, Parallels, VMware) on the M1 Macs (yet?), or maybe you need a driver for hardware that is not yet available for M1/ARM. You can't conclude that they sell Intel versions for performance reasons.


Docker for Apple Silicon was just released. It uses emulation for x86-based images and works mostly seamlessly.


> This is clearly wrong as they are still selling Intel versions of the MacBook and MacMini

They're only doing that until they can replace the entire line with "Apple Silicon", which they said will take less than 2 years from when they announced the first M1 Macs. It's simply a stop-gap measure.


Did they also offer a range of processor options back when they were using PowerPC? I get the impression that they did not.


I seem to remember the Mac Pro had an option for more than one processor.


Perhaps one reason for using a single model of chip for everything is that the design costs for a 5nm SoC are estimated to be ~$500 million.

For comparison, revenue for Apple Mac computers in 2020 was ~$30 Billion. https://www.statista.com/statistics/263428/apples-revenue-fr...


It's not linear. Doing an M1 variant with, say, 6+6 cores wouldn't cost another $500 million.


Sure. But does Apple's CPU engineering team have the capacity to work on variants?

Perhaps they instead prioritise working on a faster M2, and M1 becomes the low end CPU, similar to what they have done with the A series processors?


Perhaps Apple has more processors in the pipeline (this is almost certain, as we have seen with its mobile processors).

It may be very difficult to sell products with the same processor but different hardware forms, because the market is tuned to comparing processors.

I am not saying whether it's right or wrong, just that there is a whole mindshare of the market (marketing, advertising, news articles, blogs and videos) that starts comparisons of two different products with their core processors.


Would M1 scale up with 64GB or even 1.5TB of RAM? It's not possible to have that much memory integrated; what would be the performance difference in that case?


Samsung started shipping 16GB LPDDR5 modules last year. They are supposedly 1.5x the speed and 20% lower power.

Doubling the stacks from 2 to 4 and moving from 8GB to 16GB per stack gives 64GB. The two extra channels and the extra performance give something like 3x the bandwidth at only a bit more power. That seems perfect for feeding more cores and a larger GPU.
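Rough back-of-the-envelope arithmetic behind that "3x", assuming the commonly reported 128-bit LPDDR4X-4266 configuration for the current M1 and a hypothetical 256-bit LPDDR5-6400 setup (the LPDDR5 configuration is an assumption, not an announced product):

    /* Peak bandwidth (GB/s) = transfers per second * bus width in bytes.
     * The LPDDR5 configuration is hypothetical, used only to sanity-check
     * the "roughly 3x" claim above. */
    #include <stdio.h>

    static double peak_gbps(double mega_transfers, int bus_bits) {
        return mega_transfers * 1e6 * (bus_bits / 8.0) / 1e9;
    }

    int main(void) {
        double m1_like = peak_gbps(4266, 128);  /* reported M1: LPDDR4X-4266, 128-bit -> ~68 GB/s   */
        double lpddr5  = peak_gbps(6400, 256);  /* hypothetical: LPDDR5-6400, 256-bit -> ~205 GB/s  */
        printf("M1-like:      %.0f GB/s\n", m1_like);
        printf("Hypothetical: %.0f GB/s (%.1fx)\n", lpddr5, lpddr5 / m1_like);
        return 0;
    }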

I’d love to see a move toward 512-bit HBM3. It seems like the perfect compromise. It doesn’t need expensive silicon interposers and offers decent latency while still having a lot of bandwidth. HBM2e uses about half the power as GDDR6 for the same bandwidth (I don’t know about HBM3 power numbers)


Can't they just stick a few dozen M1s in a box and call it a Mac Pro to scale up that way?


No, the M1 lacks any (usable) way to connect to other M1s.


For now?


Yes and they need to go away and design a very different chip. It's not trivial. x86 vendors already have chips like this.


Did you just “imagine a beowulf cluster of those”?


Thanks, that takes me back ^_^


The return of NUMA!


The year of NUMA on the desktop.


> "Part of the reason Apple can get away with doing this is that — and let’s be honest — it’s been selling badly underpowered systems at certain price points."

Well, they've been doing this for a long time, and were able to pull it off only because we got hooked on their ecosystem and preferred it over the alternatives. What worries me, though, is the trend of making these machines totally unupgradeable: not only is this bad for the environment, but it will also decrease the long-term value of these expensive objects.


I'm very curious to see what happens to Intel. When the TSMC Arizona plant comes online in 2024 or sooner, there doesn't seem to be much left for Intel to do that AMD and others aren't already doing more effectively. Perhaps that's too much of an oversimplification; however, it seems to be the sum of the various parts, given that, as others have said here and elsewhere (and was my experience too), the performance of Intel hasn't changed much in about 10 years.


This completely disregards scale; the iPhone gives Apple the scale to manufacture these chips at an affordable price. Lenovo, HP and Dell do not have that kind of scale. Apple sold almost 3 times the number of iPhones compared to Dell's total device line.

It also disregards B2B sales; Apple has pretty much disappeared from schools, and most office-issued machines in big companies seem to be Lenovos, because of repairability.


> The most likely outcome here is a future M-class CPU with eight to 32 “big” cores and a conventional DRAM interface, based on either DDR4 or DDR5.

How much faster would M1 still be?


> Apple’s gamble, with the M1, is that its custom CPU performance is now so high, at such low power consumption, that the choice of chip inside the system has become irrelevant within a given product generation. It challenges OEMs to consider how they might spec higher-end systems if some of the higher price tag didn’t have to pay for a faster CPU.

I guess that’s the key paragraph.


I really do wonder what Apple would create in order to be able to offer ARM workstations.

Maybe a chiplet/modular design like AMD's? Only this time one of the chiplets would be a full M1/M2/M3 ARM CPU + some RAM and everything else usually included, and the other chiplets would be extra RAM or dedicated GPUs?

I am looking forward to their first ARM workstation offering.


Why does Apple bother with the 13-inch MacBook Pro if it has the exact same processing power as the MacBook Air? The Touch Bar and 2 extra hours of battery life aren't enough to differentiate the two. My guess is that Apple does intend to differentiate the M line but hasn't been able to do it yet, for what reason I don't know.


The screen on the pro is slightly brighter as well.


The really smart move is launching the low-power consumer version first. This positions the ecosystem for native code where Apple wants it - in apps that run on the mass-market devices. The higher-power apps can come later and exploit 8 Firestorm cores and up.


Personally I think Apple has completed its shift from a computer manufacturer to a processor manufacturer that serves two internal clients: the software division and the consumer electronics division.


I was thinking that there’ll be “M2” by now but a suspicion is growing inside me that they might use a NUMA cluster configuration with PCIe interconnect for upper tier models. Doesn’t it make sense?


I think it's very likely they will announce the "faster M1" at WWDC this year. It will go into the 16 inch MacBook Pro and the iMac Pro.

Hard to say if whatever they announce will go into the Mac Pro - or if maybe they'll have yet another "faster" chip for that. Time will tell.


That would be a very AMD thing to do. Apple is more likely to pump endless amounts of cash into TSMC to make huge monolithic chips, so that there are no cross-core latency issues and they can still say they have a single "chip" in marketing materials.


While the M1 is great now, I'm very hopeful about what's to come. If Apple can maintain even a bit of their current improvements year over year, in 2-3 years it just won't make sense to use anything else; see this graph to know what I mean:

https://miro.medium.com/max/1356/1*_aVjpbyBkMq-KIY8kbbP_w.pn...

Even if Intel can catch up somehow, it'll take years to catch up to this, and by then it'd be too late. The only reason why Apple will not win is that they bundle software and hardware too strongly. This is sometimes, and for some markets, a big strength, but other times it's a weakness IMHO.


> The only reason why Apple will not win is that they bundle software and hardware too strongly.

That is the major reason the M1 is having such momentum. I was pretty skeptical that niche x86 FORTRAN R extensions would work on the M1. But here we are: Rosetta made it the smoothest ride possible. Now look at Microsoft and their Surface Pro X.


Yes, it's great to grab a big chunk of the market, but it'll limit them from grabbing the whole market IMHO.


I for one welcome our new ARM overlords; may x86 rest in peace along with other relics of the past. I'm saying this as a gamer and Intel/PC enthusiast, but also an iPhone owner.


Learning a bit about this in my MBA classes. Cost + Margin = Selling Price. Apple has the capacity to be more of a cost leader now, since they are not subject to the same bargaining power of CPU suppliers. As a result, they have the potential to push their costs down (if not now, then in the future once production scales) and increase their margins. A good time to invest, perhaps?


> "Cost + Margin = Selling Price."

That's cost-plus pricing, and it should be viewed as a relatively naive model that is no longer typically used for pricing. Sophisticated pricing decisions are based on price elasticities to find the profit-maximizing price. Apple is a notable example of a company that exhibits this type of pricing decision-making.
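A toy numeric contrast between the two approaches, under an invented linear demand curve (all numbers below are made up for illustration, not Apple's actual costs or demand):

    /* Cost-plus vs. profit-maximizing pricing under invented demand Q = a - b*P. */
    #include <stdio.h>

    int main(void) {
        double unit_cost = 400.0;                  /* assumed cost to build one unit */
        double a = 2000.0, b = 1.0;                /* invented demand: Q = 2000 - P  */

        /* Cost-plus: pick a markup, ignore how demand responds to price. */
        double cp_price  = unit_cost * 1.30;       /* 30% markup */
        double cp_profit = (cp_price - unit_cost) * (a - b * cp_price);

        /* Elasticity-aware: maximize (P - c)(a - bP)  =>  P* = (a/b + c) / 2 */
        double opt_price  = (a / b + unit_cost) / 2.0;
        double opt_profit = (opt_price - unit_cost) * (a - b * opt_price);

        printf("cost-plus: price %.0f, profit %.0f\n", cp_price, cp_profit);
        printf("optimal:   price %.0f, profit %.0f\n", opt_price, opt_profit);
        return 0;
    }

Under this made-up demand curve the naive markup leaves most of the available profit on the table, which is the point about why cost-plus has fallen out of favour.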


Cheers for the clarification :)


Is it possible to write C++/Rust on an iPad? If so I'll consider buying one, as my iPad 4 is struggling both with charging and with 'modern' websites and apps. Otherwise Apple's hardware is just too expensive for me for what it is.


And here I am with an AMD Turion2 and 2GB of RAM.


Meh. The M1 got laughed right out of our office. It couldn't do anything without us changing all of our code and tooling for it.

We'll see how stable the landscape looks for it in five years.


"Part of the reason Apple can get away with doing this is that — and let’s be honest — it’s been selling badly underpowered systems at certain price points."

There is also market share. Being more affordable to more people means more sales. True, Apple has always positioned itself as premium. That will remain. Now it will be a higher value as well.


While everything wants to be faster, the Apple direction is everything I don't need.

I have a $1000 desktop: 64 GB DDR3, a 512 GB SSD + 1 TB HDD, an AMD 2600 (6-core) and an AMD 380 8GB, and I could have gone overboard with the latest CPU and GPU. But I didn't think I needed it (and after 7 months of usage I still think it was the right decision)...

Either way, I don't think I ever was Apple's target audience. And something like an iPad is for consumption, not for producing anything.

PS: Yes, I use it for some gaming and for VR too, but mostly dev.


Here we go again. Comparing a 10w mobile RISC CPU to a 100w x86 CPU and somehow deducing that having a mobile CPU run desktop applications is a good idea.

I wish all the Apple fanboys would just hurry up and buy one so they would realize they are comparing apples to oranges and sit back down.

Nobody is buying an M1 to replace their gaming computer. And to insinuate that such a thing is possible or practical is disingenuous to the way PC hardware works.

Additionally I think the article is of amateur quality. It takes on an Apple perspective and assumes that there will be no response from x86 vendors basically because "how could you possibly respond to something as astounding as the M1?"

Don't worry. Apple hasn't taken over the desktop market yet, and they don't have the raw horsepower to do it anyway. And they don't want to. We are comparing embedded CPU's with proprietary north bridge architecture to industry standardized and socketed CPU's with an entire industry supporting it. Nobody in the market for a desktop PC is going to get a mobile Apple device just because the M1 is more efficient. Nobody goes out to buy a pickup truck and accidentally gets talked into a Prius. Just because you conflate efficiency with performance doesn't mean the M1 is capable of replacing a Ryzen 9.


I don't think this comment is going to age well.


I'm sure there are comments from the 64 bit PowerPC days that aged just fine.


A better analogy for the M1 is EVs. The M1 has near best-in-class single-thread performance, similar to how Teslas have crazy acceleration. Sure, we've yet to see how efficiently the M1 will scale to multithreaded or GPU workloads, but all signs point to a very scalable architecture given the jump from A12 to M1. Simple extrapolation indicates that an M2 or M1X will match or beat most desktops while still consuming very little power.


> We are comparing embedded CPU's with proprietary north bridge architecture to industry standardized and socketed CPU's with an entire industry supporting it

Eh? x86es have had an integrated northbridge for about a decade.


You are right that as of right now the M1 can't really replace a gaming PC, but I don't see Apple currently targeting that. However, the real question is whether that is an issue with the M1 chip or just an issue of the Mac not having much gaming market share.

That being said, personally I believe gaming PCs are largely a niche within the overall PC market.

At its core, I don't see any reason the M1 chip can't play games (in fact we are seeing some good numbers from games that have been properly ported); it is purely a software issue.

It may not be able to compete with top-of-the-line custom-built desktops, but the reality is that other gaming laptops that are far more expensive can't either.


Apple does not care about non-mobile games.

They stopped supporting OpenGL [0] and Vulkan support is a community effort [1].

[0] https://www.anandtech.com/show/12894/apple-deprecates-opengl...

[1] https://github.com/KhronosGroup/MoltenVK

edit: If you have something to add to the conversation, feel free to enlighten us. It does get tiring discussing anything Apple around here because of the sheer bias.


I think Apple will keep selling x86 Macs forever. For heavy-duty workstation workloads, M1 just doesn't cut it.


They already said they would be 100% off x86 in two years: https://www.apple.com/newsroom/2020/06/apple-announces-mac-t...

We are still only 6 months into that transition.


The page you're citing contradicts that statement -- they will not be 100% off x86, but instead will offer 100% of their products with the option of buying an Apple ARM chip:

> This transition will also establish a common architecture across all Apple products, making it far easier for developers to write and optimize their apps for the entire ecosystem.

> Apple plans to ship the first Mac with Apple silicon by the end of the year and complete the transition in about two years. Apple will continue to support and release new versions of macOS for Intel-based Macs for years to come, and has exciting new Intel-based Macs in development.

I would expect Intel-based Mac Pros to continue for as long as there's a market for them. Probably at least 2-4 years after the "transition" finishes. (The last spec bump in 1-2 years, and sales end 3-4 years from now...)

It's also worth pointing out that Apple supports its products roughly 7 years from when they're sold, which means Apple will likely continue supporting Intel-based macOS for at least another decade, possibly as much as 13-15 years. That said, I wouldn't buy an Intel Mac after 2024 and expect the same experience even if Apple supports it until 2031 or later: In just a few years, I'd expect Apple would have created even more macOS features uniquely suited to the ARM chips they're now making.


I'm sure they will support x86 for years, but I have a feeling they will stop selling new products with x86 rather quickly.

If you look at the PPC to x86 transition [0] it happened faster than predicted, and OS support only lasted 3 years after it was done.

IMO Microsoft is the company that heavily prioritizes backwards compatibility, while Apple is happy to drop support where necessary to move forward.

[0] https://en.wikipedia.org/wiki/Mac_transition_to_Intel_proces...


You're using the transition that everyone wanted to paint a rosy picture. Everyone wanted Intel chips and Wintel compatibility. PPC was too slow and too hot.

Fact is, until they can match Nvidia GTX-class compatibility or quality, there will be a significant 3D and gaming market that does not want to switch.

And I say this as someone who very much likes my new M1 Macbook and wants to convince the rest of the family and friends to never again buy an x86-based laptop computer.

If Apple can add 3D and gaming chips equivalent to the RTX 3000 series with software support, all within 2 years? Amazing, and that speeds up their switchover -- but Adobe, for one, still has not updated After Effects and likely won't for at least a year, even though they announced they would. I'll remind folks that Adobe said they would consider adding Metal support to After Effects almost five years ago and still hasn't. Other apps are in a similar state.

I don't doubt that Apple would like the transition to be quick as well as seamless, and I think most portable products will transition very quickly. But unless Apple wants to leave the Pro market they just entered by making things too unstable for Pros, they will tread carefully on eliminating sales and support for Intel. Especially because even now they are selling 16" MacBook Pros that run Intel, and Intel Mac Pros, and 27" iMacs (formerly Pros) and all three of these platforms should ideally be supported for five years, minimum, ideally six given we could count the last year as the "seventh" year of support.


Yes, but in the PPC era Apple shipped amazing-quality products, several times superior to Win98SE, Win2000 and even XP.

Now, until the M1, Intel Macs were pretty much mediocre, even against a (fully configured, of course) KDE 3-5 BSD/Linux setup, even with their iThingie ecosystem integration.


Hmm. Sure, the M1 isn't suited for heavy workstation loads. They haven't targeted that sector yet. What's your point?

This is not the last processor they will do. Have you been watching the massive year over year performance gains Apple has been making for several years? There will be an M2 and an M3, and ....

Given the performance AND energy beating they are putting on the market sector that they targeted with M1, why is there any reason to believe they won't kill the high-end workstation sector when they target that?


I'm expecting Apple to go the tiles/chiplet route for their workstation processors. TSMC has been working on some advanced packaging methods that are ideal for this use case.

Apple could make one 16-core die and offer workstations with anywhere between one and four tiles, for 16 to 64 cores.

As for the high-end laptops and iMacs, I'm expecting Apple to launch a monolithic 8+4 chip.

That would be just three unique dies to cover Apple's entire Mac lineup.


I will grant that the Mac Pro will be one of the last to transition, but I don't think it will stay on x86 forever. It may receive a spec bump at most.


> I think Apple will keep selling x86 Macs forever.

Apple immediately discontinued all the Mac minis, almost all the MacBook Air and Pro lineups, and the iMacs running Intel. The last form factor to transition is the Mac Pro.

> For heavy-duty workstation workloads, M1 just doesn't cut it.

Perhaps you are right on this one for now. The M1 doesn't cut it by 'Apple's standards' for now, but when the time comes for the Mac Pro to transition, it won't use an M1 chip, but probably an M2 / M3 chip.

Apple Silicon is the start of it all; the M1 is only the beginning and certainly not the last. So it's worth waiting to see how the M1X, M2 or M3 Macs improve over the M1.


I heard similar arguments regarding initial performance with the transition from 68K to PPC, and then again with PPC to Intel. The transition will probably be far quicker than you anticipate.


Why should Apple care if a small portion of its product lineup still relies on Intel/AMD? Why shouldn't they put pressure on these chipmakers to defend their position? They have both the vertical integration and the scale to push entire industries to innovate. If this serves to "wake up" Intel we all benefit (including Intel). Look into XScale and its relationship to the original iPhone circa 2006-2007; that didn't wake up Intel, and arguably they lost out on billions and weakened their future outlook.


Because of the cost of continuing to maintain software for that small fraction of the user base: not only the core OS but the toolchain, and all their first-party apps. And the difficulty of incentivizing and enabling third-party developers to continue to support x86.


The M1 is built specifically to sip power and run cool. You don't think Apple could come up with an ARM processor that can keep up with or exceed desktop x86 when those constraints are removed?


I'm not an expert, but isn't power efficiency the selling point of ARM? As in it runs efficiently, but there are diminishing returns to running it faster?


The power efficiency is a consequence of simpler circuitry. Modern x86 chips devote a fairly large amount of logic to translating the exposed CISC interface into internal micro-ops that more resemble a RISC machine anyway. A RISC chip doesn't need that.
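
As a rough illustration (not a cycle-accurate model of either front end, and the exact registers vary by compiler), here's a tiny C function whose typical -O2 output shows the difference: x86-64 encodes it as one read-modify-write instruction that the core then cracks into load/add/store micro-ops, while AArch64 simply emits those three operations as separate instructions:

    /* Rough illustration only; compiled output shown as comments. */
    void add_in_place(int *p, int x) {
        *p += x;
        /* x86-64 (typical -O2):  add DWORD PTR [rdi], esi
           ...which the decoder splits into load + add + store micro-ops.

           AArch64 (typical -O2): ldr w2, [x0]
                                  add w2, w2, w1
                                  str w2, [x0]
           ...i.e. the RISC encoding already matches what actually executes. */
    }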


...And you think there'll never be an M2? Or M1X, or whatever branding Apple decides to put on their 32/64/128/???-core version that supports terabytes of RAM?


I think the open question about the Mac Pro is expandability. The trash-can Mac Pro didn't do well, and they went back to something with expansion slots. Is it worth it to Apple to build out and support everything necessary on the M1 platform for one low-margin line of computers? We'll find out in the next year.


Even the 27" iMac has a 128 GB RAM SKU, so they're going to need to build a chip supporting external RAM regardless. External PCIe support already exists thanks to thunderbolt. So the architecture will be there, it'll just be a question of how expensive it is to scale up.


Gigacore


16GB should be enough for everyone


Until they publicly show they can make such a thing, there will be some scepticism


You realize Apple is designing against more advanced nodes than Intel, and from where I stand Intel has nothing competitive in small form factors.


Why? Apple just demonstrated that they could do something that had previously been considered unrealistic. They've also been demonstrating massive annual performance gains for many years. For people who have been watching closely, they should have some credibility by now.


I believe Apple will release something groundbreaking again when they scale up the number of cores in that M1.

8 high-performance and 4 low-performance cores? Yes please. Or maybe we'll get 16+16.


It seems odd to think that their first attempt at this, which largely consists of small consumer-level devices with little to no active cooling, is the best they're going to do.

Not to mention that they've already been dominating mobile SoC performance for half a decade as well.

There's no indication that they couldn't take on the pro market. They've clearly got the talent in place. It's likely that the market size is a bigger factor than capability.



