M1 Macs Review (sixcolors.com)
212 points by alwillis on Nov 19, 2020 | 384 comments



After 6 years away from being "all in" on Apple, yesterday my base spec MacBook Air M1 arrived.

Battery life is the key takeaway for me. Performance is fine but I'm used to a powerful desktop so it's not blowing me away but the battery life definitely is blowing me away.

To run things under Rosetta there's an arch command, e.g. arch -x86_64 brew install xxx - the arch preference is inherited by subprocesses. I'm kind of amazed how well Intel binaries run here. I can't tell the difference in usage and it just works; even games like Cuphead from Steam seem fine.
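A rough sketch of how I've been using it (the zsh bit and the formula name are just examples, not anything specific):

    # open a whole shell under Rosetta 2; everything spawned from it inherits x86_64
    arch -x86_64 zsh
    # or prefix a single command
    arch -x86_64 brew install some-formula
    # with no arguments, arch / uname report what the current process is running as
    arch
    uname -m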


Yeah so far the performance is very close to my 10 core iMac Pro I use for day to day development. This is on a MacBook Air with no fan, and so far I haven't felt the machine even get warm even when compiling _and_ also running tests or importing data in other processes.

This is when running a decently sized Rails monolith w/ PostgreSQL & Redis, so a fairly typical modern Rails setup.

It's seriously impressive.


Does that mean you can no longer easily tell when a process has gone crazy and is eating CPU? Without a fan you can't hear it, and apparently you can't even feel the slowdown.


Just wait, we’ll soon have an app for that — the equivalent of fake engine sound for EVs.


I like the idea. An app that roars louder the higher the cpu load.


Nullsoft (remember them?) had 'beep' which was always a joy to run.

https://www.1014.org/code/nullsoft/nbeep/


Of Winamp fame? Of course!


Haha!!!

Sorry, but isn't it a bit crazy to need the fan to sense how hard the machine is working?


That is true in many situations, like a remote server. That's why we have top on the command line, or iStat Menus. But I'm not sure if iStat Menus actually works on the new machines.

https://bjango.com/mac/istatmenus/
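For the terminal route, something like this does the job on macOS (flags from memory, check man top / man ps):

    # live view, top 5 processes sorted by CPU
    top -o cpu -n 5
    # one-shot snapshot, sorted by current CPU usage
    ps -Ao pid,pcpu,comm -r | head -n 10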


Those are passive solutions. You'd be looking at them after you realize something doesn't look right.

iStat or similar can do a notification if a process is taking high CPU usage for a while.

Servers have their own solutions, like monitoring daemons that send out an alert if CPU usage has been too high for a while.


Totally broke my workflow: https://xkcd.com/1172/


I was wondering something similar. VSCode tends to fly off the handle every once in a while and the only way I notice currently is a subtle lagging and the fan spinning up.

Though recently I switched back to VIM so perhaps this won't be an issue anymore.


Once I get mine, I plan on building AOSP and seeing what limits I can push the hardware to.


Would you mind sharing the RAM size on iMac Pro? I'm seriously considering buying Mac mini however its max 16GB of RAM is putting me off; fwiw I'm currently on MacBook Pro 2018 (2.6 GHz 6-Core Intel Core i7, 32 GB 2400 MHz DDR4). So any feedback around memory usage/size is very welcome! Thanks!


Nice! Are you using Xcode as well?

I’m in a similar boat as you - running bunch of node services, pg, etc. but I’m also developing for iOS. Xcode kills my current MacBook because I’m constantly rebuilding.


If a laptop's performance is on a par with a desktop's, and the fans aren't going crazy, I'd call that a big win! My 2018 MB Pro has the fans running most of the time, and even then it sometimes throttles back the CPU.


The MacBook Air that GP mentioned doesn't even have a fan! That's certainly a nice benefit.


I'm pretty convinced my i9 mac does not have a fan either. A noise maker, yes... but the 2018 model just seems to have a loud audible indicator that docker started. :)


In the same way my car beeps to remind me I’ve left the lights on, my MacBook buzzes to remind me I’ve still got Microsoft Teams open.


There is something wrong with Teams. As well as its resource hogging, it tries to connect to all my SMB shares (even if they are disconnected) when I open the app. Why is it doing this?


Microsoft programs get home sick and want to see some familiar file share protocol?


That's Electron for ya. We decided that JavaScript was the way to write cross-OS UIs. God help us.


_Do you think God stays in heaven because He, too, lives in fear of what He’s created?_


> There is something wrong with Teams.

There’s a lot of things wrong with Teams.


I really want the air for when I work outside in the summer. I worked from an iPad quite often with the magic keyboard, and it started throttling and sometimes overheated to the point of failure while outside in the sun.

It's too cold in Chicago to test, but I would love to see if the MacBook Air can work while outside in direct hot sun - it's the only reason I'm considering the MBP.


The Daring Fireball review of the MBP had this to say about the fan ...

"The two machines are equivalently fast in a sprint, but the MacBook Pro will win a distance race. But Apple doesn’t call it a “fan”. They call it an “active cooling system”. That sounds like a marketing euphemism, but it’s not fair to call this a “fan”. It is something else altogether, and nothing at all like the cooling systems in any previous Mac laptop.

I’ve never once heard it in an entire week.

Never. Not once. Not a whisper.

I presume it has engaged at times when I’ve taxed the system. Again, the Cinebench multi-core CPU benchmark taxes every available CPU core for 10 minutes. But if the active cooling system has kicked in, I never heard it. I’ve never felt any air moving out of the vents, either. This is nothing at all like the fans on Intel-based MacBooks, which, if you’re not familiar, you can definitely hear, to say the least."

https://daringfireball.net/2020/11/the_m1_macs


It's too cold in Chicago to test

I used to have one of those massive 17" MacBooks (Best keyboard ever), and it used to heat up enough to turn my thighs red. Back then, Apple avoided calling them "laptops," perhaps for this reason.

Then I moved to Chicago, and found it was a wonderful machine for using outside on my balcony in cold weather because it would keep at least half of me toasty. If I started to get chilly, I'd fire up Virtual PC and snuggle in.


Have to stop reading this post. Too many real-life cases so close to heart, but also funny.

Virtual PC. Man, how can this not be a real story!


My 2006 Mac Pro was very nice to use in winter for this exact reason


One of the challenges of using a large-screen device in the sun is that most screen technologies (including transmissive LCD and OLED, but excluding eInk and transflective LCDs) are inherently "black" -- they absorb light energy that hits them. Even when they're displaying a white screen, they're absorbing nearly all of the light hitting them, and then emitting their own energy.

One of my mild annoyances with the iPhone 12 is that there's no white option. I have a white iPhone 10, and it's essential equipment for me as a convertible driver -- on a sufficiently sunny day, the phone will overheat on the center console screen-up, even when it's powered off! But with the white side up, it's fine even on sunny days breaking 110 degF.

The only way around this I could see for laptops and tablets in the sun would be a return to transflective displays or other non-black options... which would also be a huge boost to battery life in well-lit conditions. But none of these techs seem to have comparable image quality, and there are some inherent challenges (potentially addressable, but challenges) with color balance etc.


> One of my mild annoyances with the iPhone 12 is that there's no white option.

There is for the iPhone 12 non pro and mini [0].

The pro comes in what looks like a fairly light silver.

Disclaimer: I haven't seen those in real life, I'm going by the apple store website.

[0] https://store.storeimages.cdn-apple.com/4982/as-images.apple...


I have a white 12; it’s got a gold tint to the white that gives it a warm feel, but I’ve seen mixed reactions to it.


> One of my mild annoyances with the iPhone 12 is that there's no white option.

But there is a white iPhone 12? I have one myself. Or is not white enough/too matte or something?


Whoops! My mistake, thanks for that! I had been looking only at the Pro out of habit, but this might push me towards the non-Pro this generation.


If it helps, I'm perfectly content with the non-pro iPhone 12.

The aluminum sides + white back look really great, too!


Would the "Silver" Pro (white glass with a silver band) be that different?


Or buy a white case?


> I would love to see if the MacBook Air can work while outside in direct hot sun

I'm not optimistic. An unused iPhone 7 will go into an overheating error state if left in direct sunlight on a British summer's day. I don't know how newer models cope.


Wouldn’t the air have the same problem? With no fan it’ll still heat up and have to throttle down.


If you're using the computer outdoors, it's best to get the MBP for the brighter screen.


+1 - I love the MB12, but then a summer arrived and I found out that it stays under 1 GHz basically all the time when I'm outside.


Order a silver one instead of a dark grey one, just to be safe? :)


Why would you want to work in direct sunlight? You can barely see the screen in that condition.


I sit at my patio table and move twice during the day if I am out long enough. I face the sun, to tan, and my screen is away from it (behind it). When the sun is directly above I move under the shade.

It helps increase my mood, and productivity.


That's the entire reason I bought mine. Replaced the early 2020 model. I had complete peace and quiet during all of my video calls yesterday


When the news of Apple Silicon broke, Apple's shitty cooling of their recent laptops made sense. The CPU of my wife's MacBook Air throttles several times a day without much heavy work.

I sincerely believe they botched it on purpose to be able to present incredible performance improvements compared to previous generations.

Apple silicon is amazing, of course. But the older machines were downright bad at cooling.


Original comment mentions MB Air, Air has no fans (now that sounds funny :)


Of all the things unveiled during the launch event, the no-fan thing completely blew me away. That, I was not prepared for!

I now wonder, does that mean that MB Air has no moving parts inside it at all?


There’s the haptic feedback for the trackpad.


The electrons are moving millions of miles.


>I now wonder, does that mean that MB Air has no moving parts inside it at all?

In addition, does that mean that the Air has no air vents? Is it a completely sealed unit, like an iPhone?


Don’t dunk it in water like an iPhone. Unlike Lightning, USB-C is likely energized and not sealed.


It has a keyboard, not a big glass touchscreen. It is not sealed.


It's not a sealed unit.


The keyboard is still a big mess of moving parts, but there is indeed one less moving part to break.


Yes, like an iPhone or iPad.


And like the 2015-2019 MacBook.


I hope they do this form factor again with the new chips


I am thinking maybe next year or the year after. The MacBook Pro will get a new design, at 16" and 14". (I hope instead of going thinner they stay within the current cooling TDP.)

Then the MacBook Air will have a 14" as well, before a 12" MacBook Air is introduced at a lower price range, possibly at $799.


> a 12" MacBook Air is introduced at a lower price range, possibly at $799.

That would be great. But it could make it more difficult to sell an iPad Pro (starting at $799) with a Magic Keyboard (starting at $300) as an alternative to a laptop.


If your MBP fans are running all the time, you may want to open it up and see if it's filled with dust and lint.


Oh I have, thanks. It's just that running my dev. environment plus Teams and all of the other "necessary" accoutrements ends up using a fair chunk of CPU.


The process table as well.


How can you tell your battery life if you’ve only had your Mac one day?


Some of the reviews have mentioned what running a particular heavy workload reduced their charge level by.

>After a single build of WebKit, the M1 MacBook Pro had a massive 91% of its battery left. I tried multiple tests here and I could have easily run a full build of WebKit 8-9 times on one charge of the M1 MacBook’s battery.

In comparison, I could have gotten through about 3 on the [Intel] 16” and the 13” [Intel] 2020 model only had one go in it.

https://techcrunch.com/2020/11/17/yeah-apples-m1-macbook-pro...


Do you mind sharing what battery life you are seeing for your real life workloads? And also an idea of what those work loads are?


It arrived with 69% charge; by this morning at 11am I had reached 21%. I left the screen brightness on the default setting, and usage was a mix of installing software (mostly I/O I suppose), a lot of Safari, around 4 hours of Citrix remote desktop, around 90 minutes of Zoom, and some light gaming last night to see if it works.

If you do anything that's heavy on I/O (packaging jar files in my case), this laptop is faster than my big desktop, the I/O is crazy fast actually.

Memory pressure has been low so far - it was my biggest fear opting for the base model, but it looks like I'll be fine - I spun up a bunch of dev processes and it was fine.


I wonder whether we need to reset expectations on memory amounts. From seeing the reviews where they launch every application installed on the 8 GB Air and it remains smooth, I suspect that the unified memory and optimized storage interface mean that for M1 Macs 8 GB behaves like 16 GB on Intel CPUs.


Just remember that if your machine needs to swap because of low RAM you are now writing to your SSD.

The permanently soldered in SSD that has a finite amount of writes.

Saving $200 on RAM up front could get wiped out if you kill your SSD prematurely with constant, heavy swapping.

Here I see the fast SSD speed as a negative - making it painless to do something inherently dangerous towards the lifespan of your SSD.
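If you want to keep an eye on it, macOS will at least tell you how much it's swapping - a quick sketch (the smartctl part needs a third-party install, and I'm not sure how much the M1's NVMe controller actually exposes):

    # current swap usage
    sysctl vm.swapusage
    # system-wide memory pressure (free percentage, warn/critical state)
    memory_pressure
    # SSD wear indicators, if you've installed smartmontools (brew install smartmontools)
    smartctl -a /dev/disk0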


Have you ever actually worn out an SSD using personal workloads (ie, not on a server)? I haven't yet and I own machines/SSDs from 12 years ago - benchmarks are still roughly equivalent to what I had to start.


Yes, I have. My old ThinkPad R61 I added an OCZ SSD to. Not sure how much RAM it came with, but it was swapping a lot, I guess. Back then, SSDs had less of a lifespan than nowadays (even consumer-grade ones).

I had the same on Windows + an SSD. Docker with some SQLite applications apparently did crazy writes. The SSD went from 100% to 80% in like half a year or so, by the time I noticed it. That SSD is still alive, but I had to adapt.


Those early SSDs if I remember right had very low cycle limits, the limits now are over 10x better than they were then.


Depends which age/tech.

Every additional 'Level' in the flash decreases the overall life cycle. Mind you, oftentimes drive controllers are fancier, often doing things like treating one of the sections as a less dense flash type to improve write cycles, and keep/move longer held data into the regions treated as QLC.

But that means that some heavy write patterns will indeed wind up worse on a modern drive than on the better MLC/TLC models.


It's the other way around. Early SSDs had huge P/E numbers in the tens of thousands (SLC). Now we are down in the low hundreds.


To re-iterate how much more reliable SSDs are now that they used to be, until about a month ago I worked for a vendor of hardware/software that is designed to read/write to disks hundreds of thousands of times per second (log collection and searching). When I first started working there ~5 years ago, they were testing out SSDs in the appliances and the team ultimately decided against it because they estimated the SSDs would burn out in under a year.

By the time I left they were shipping appliances with SSDs that were predicted to have lifespans longer than the longest hardware support you could buy from the vendor (longest hardware support contract - seven years). Hundreds of thousands of writes per second, every second, for 7+ years, uninterrupted.


They are most likely shipping bigger SSDs :-) The bigger the disk, the bigger the TBW, despite P/E going to shits with every node shrink/bits-per-cell jump.


Would these first gen ARM chips not be obsolete by a few years before wearing out the SSD?

The SSDs from the previous gen are rated for something like 40 GB of writes per day for 40 years, I think.

I’d guess the battery will fail first, then the keyboard or screen hinge, then the logic board. But that's a pure guess.

I reckon the biggest risk is that all-in-one logic board: anything fails and you lose your storage with it.


> Would these first gen ARM chips not be obsolete by a few years before wearing out the SSD?

Obsolete, who defines that?

I'm typing this on a MBP 2015. Still works perfectly fine (I did have the screen replaced under warranty). The battery is replaceable on this machine; it's sitting at ~85% capacity with 500 cycles. Apple hardware tends to have good resale value, too.

Performance-wise, hardware tends to last longer and longer. Repairability-wise, yeah, that's a can of worms of a discussion.


I'd say a 2005 PowerPC iBook is obsolete. I'd also say a 2009 Intel MacBook is obsolete, but the truth is, add an SSD to that and it's probably still fine under Linux, so maybe 2009 isn't obsolete for some users just yet (although I think Apple no longer supports that hardware). 2015 is modern; Intel performance just hasn't changed that much since 2015.


>the I/O is crazy fast

I seem to remember watching a real time video transcoding demo, or something like that, on an iPad Pro and the dev demoing it saying it was the I/O that made it possible. Desktop machines with much faster processors would choke due to bandwidth limitations, but Apple can custom tune every aspect of the iPad architecture and aren't limited to standard, generic interfaces between components.

The original 5k iMacs were a similar case in some ways. It was only possible in an all-in-one design, because there wasn't a standard video connection at the time that was capable of the throughput.


> If you do anything that's heavy on I/O (packaging jar files in my case), this laptop is faster than my big desktop, the I/O is crazy fast actually.

It's got a rather fast NVMe SSD, 2.7GB/s reads and 2.2GB/s writes (the Intel version was 1.3 and 1). Don't know if the IOPS & latency improved, but it wouldn't surprise me much.


Do they still ship these Airs with 720p webcams from the 2000s? I have a 2020 MacBook Air that has a terrible webcam.


I recently discovered this app that makes your iPhone (or iPad) a webcam for your Mac—the difference between the shitty webcam Apple ships and this is eye-opening: https://reincubate.com/camo/


Yeah and the low light performance isn’t great either


I have a feeling an improved webcam (and FaceID) will be one of the features Apple will use to distinguish their forthcoming higher end MBP's.

Surely, they won't keep using the same aging technology. At least I hope not.


Unpardonable in the age of covid. And I'm tired of trying out no name webcams from Amazon and hooking them with dongles and they barely stay on top of the monitor.


8GB integrated memory spec?


Yeah. I don't hold out hope of it surviving any VM workloads (Vagrant, Docker Desktop, etc.), but I'm OK with just installing the CLI tools and connecting to remote clusters.


Great to hear that! I got myself the last Intel-based MBP 16", fully jacked with the Radeon Pro 5600M (HBM2 memory), as this is the last one that will run Windows, which I need often. I also need a larger screen, so I'm waiting for the M2 or M3 generation, which will inevitably come in a 16" form factor - looking forward to that amazing machine in a few years!


“New Intel chips were often delayed and offered only small improvements over previous generations.”

As noted on HN and elsewhere, Intel’s struggle has been in manufacturing. They minimized risk (“got lazy”) when Moore’s Law reigned and their fabs were 1-2 generations ahead of everyone else, but have since squeezed every last architectural drop out of 14nm. Put Tiger Lake on TSMC 5nm and a lot of M1’s lead goes away.

The M1 is a damned fine chip and I’d like to own one. But the recent hero worship is a bit awkward to read as Apple is generally following an accepted playbook in the End Times of Moore’s law. 1. Save power by creating accelerated logic blocks without locking yourself out of algorithmic improvements. 2. Execute flawlessly. A chip respin costs 3-4 months and who knows how much money, so you need to go to production with the A0 silicon you powered up. 3. Know your software-only workloads. At best, all mathy code can be shoved down SIMD pipelines for high IPC. At worst, you’re dealing with a bunch of branchy integer code. The M1 has solved for both with vector extensions and a massive reorder buffer and register file.

It’s a great chip and Apple got there first. They’ll be rewarded with sales and increased market share, and developers will have one architecture to wrench on for all platforms.

You would be right to say that Intel couldn’t make the M1. It’s not because Intel’s fabs are struggling or Apple has found tricks no one else in the world knows about. It’s because Apple controls the entire software stack and therefore knows what they need on the chip. Intel caters to the entire market (one the M1 just noticeably shrunk) with myriad integrators, workloads, and users, and each has different concerns and priorities making accelerated algorithm blocks useless to some and not-fast-enough for others.


“ You would be right to say that Intel couldn’t make the M1. It’s not because Intel’s fabs are struggling or Apple has found tricks no one else in the world knows about. It’s because Apple controls the entire software stack and therefore knows what they need on the chip. Intel caters to the entire market (one the M1 just noticeably shrunk) with myriad integrators, workloads, and users, and each has different concerns and priorities making accelerated algorithm blocks useless to some and not-fast-enough for others.”

This is an important point. It applies up and down the stack for everything Apple does.

The people who suggest breaking up Apple are correct that it would destroy them, by forcing them to use inefficient generic components at every level.

If we want competition against Apple, the way to get there is not by hobbling them with the inefficiency of the old paradigm.

It’s by the rest of the industry figuring out how to collaborate on delivering similar gains without vertical integration.


The places where people want Apple broken up are generally places where we all know it will make the result "worse", but will enable interoperability and competition, break down lock-in, empower users, and lead to more repairable and less wasteful devices (and, and, and). The idea is that what is best for an entire market isn't merely "the fastest, thinnest devices with the longest battery life and the most integrated stacks" if that implies buying into one of a handful of locked-down systems; instead, we should actively enforce an optimization that is harmful for all of those companies and potentially their products, but which produces an ecosystem of interoperable components where users have commensurate power to make demands back on platforms. It is the very true fact that economies of scale and efficiencies of vertical integration exist that makes regulation and antitrust law so critically important: without them, the world would be made up of an oligopoly of maybe three giant companies that each make one of everything you might ever own, all beautifully integrated with their own products while being almost entirely incompatible with the products made by the other two, with cross-promotions from tying and forced purchases from bundling making it difficult-to-impossible for a new company to independently introduce anything new without replicating the entire vertical stack of an incumbent.


With all due respect, I don’t see any logic explaining how an ecosystem of interoperable products, where users have commensurate power to make demands back on platforms, would be a consequence of breaking up Apple.

I do think it’s a positive situation to wish, but the idea that it would arise simply as a result of breaking up Apple needs some serious explanation.

Repairability leading to less waste also seems like an unlikely consequence of making devices less integrated. Apple’s device lifetimes are much greater than any of their competitors.

I personally strongly value software freedom, and do not think we can afford to let the current situation persist for all time, but it pains me to see things like right to repair be tied in with the problems of the software ecology. One seems like Luddism, and the other seems like a bedrock of a sane digital future, two things that don’t mix well.

One step which would seem to serve both of these goals, would be to require all devices that fit some general category of ‘general purpose computer’ to permit alternative operating systems to be installed.

But that has absolutely zero to do with breaking up Apple. If we need rights, let’s actually say what they are.

Breaking companies up doesn’t create rights.

The null hypothesis for what would happen if you break up Apple is that Android would simply dominate, with no improvements over its current locked down state, which is admittedly slightly more free than Apple, but in no way resembles the utopia you describe.

We’d probably get Facebook and Amazon forks of Android gaining more traction this time around, but offering none of what you and I want in terms of freedom.


It’s by the rest of the industry figuring out how to collaborate on delivering similar gains without vertical integration.

This is one of the smartest things I've seen on HN in a long time. This is exactly what has to happen if the other companies don't want Apple to eat their lunch.


The thing is... doesn't Samsung already do this with their Exynos chips? And even with Qualcomm, they must already build them with Android (even a particular phone) in mind, right? Yet somehow the A14 is faster than a Snapdragon 865. I think strategically if Exynos put pressure on Qualcomm in a nontrivial way (e.g. Samsung switched entirely, and started selling Exynos chips to their competitors) that would change the game. Maybe the fab needs to be spun out of Samsung, just as Intel's fabs might be spun out as well.


Samsung and Qualcomm have different priorities.

Vendors are already complaining to Qualcomm about expensive packages due to 5G. They want it cheaper; at the same time, people are suggesting Qualcomm's CPUs go faster. You can only have one of them.


I don’t see Samsung doing this. It’s one thing knowing that you are going to run Android.

It’s entirely another to decide to be able to invest years in optimizing reference counting in collaboration with the compiler team committing to ARC.


>The people who suggest breaking up Apple are correct that it would destroy them

I haven't actually heard too many people calling for Apple to be broken up (most of what I hear is around increased regulation) but when I do hear breakups being talked about, it's more along the line of removing their value-adds like TV+ and Music+ and App Store. Those are the things where you could make an argument that they're hurting competition, not the coupling of their OS and CPU.


A lot of people, including a recent US presidential candidate, are suggesting a breakup - see Saurik’s comment.


> But the recent hero worship is a bit awkward to read as Apple is generally following an accepted playbook in the End Times of Moore’s law.

Is anyone else doing this though? It might be the theoretical playbook, but if others aren't doing this in practice then Apple are worthy of credit for pulling it off.

1 - accelerators for specific things aren't new, and lots of processors have an H264 accelerator, but Apple seems to be pushing this a bit further than others. Even low power vs high power cores doesn't seem to be something others are doing much of. 2 - I don't know enough about. 3 - Apple has put a ton of FP units on this seemingly for JS performance, anyone else could be doing that, it's not a big spoiler to know that people run a lot of JS now, and yet the M1 has more FP units than any other consumer CPU. This knowledge about the software workloads seems to be beyond what others are working on.

It seems to be more than just the 5nm process, not least because the A14 generation is the first 5nm chip but Apple's chips have been well respected back to 7 and 10nm for similar reasons.

Caveat, very much a layman on this, but it seems like Apple are executing that playbook significantly better than others.


>Is anyone else doing this though?

Yes, but from a different perspective. For example, the recent M&A between lots of fabless companies is precisely because of that. Sharing cost and allowing different IPs to work together.

What is missing is someone doing the software. If it was Microsoft, they would have at least done something. It wouldn't be good, but they would at least be "trying" to compete. Google just doesn't care.


It's not just the re-order buffer, the M1 also has 8-way decode (vs 4-way for Intel & AMD microarchitectures), and a 128 KiB L1D cache (vs 48 recently from Intel, and 32 from AMD), and that cache still has a 3-cycle latency. These all let an M1 chip keep up in single-threaded workloads @ 3.2 GHz with what the latest Zen 3 5950X can do @ 5.0 GHz.


Interestingly, Intel could also control the whole stack.

They make NUC hardware[1] and Intel Clear Linux[2].

They could tune that stack for a new purpose designed chip... if they wanted to, but it seems unlikely.

1. https://www.intel.com/content/www/us/en/products/boards-kits...

2. https://clearlinux.org/


They could. Still wouldn't run 90% of the software I use daily.

You need ALL the pieces of a puzzle to compete.


Horses for courses I guess.

Even when I'm on a Mac almost everything I do is either in browser or in a terminal.


> 2. Execute flawlessly.

You say this like it's easy or routine. At the bleeding edge, execution is 99% the hard part.


"Intel caters to the entire market (one the M1 just noticeably shrunk) with myriad integrators, workloads, and users, and each has different concerns and priorities making accelerated algorithm blocks useless to some and not-fast-enough for others." Which is why even if Intel does manage to solve their process problems, they are not out of the woods yet.

As you point out, Apple's per-core performance is impressive, but their ability to customize their hardware to optimize very specific compute tasks - like the NSObject release example or floating point for JavaScript - will be a lot harder to match for off-the-shelf/least-common-denominator parts, as x86 processors have traditionally been.

I also think we are at the beginning of seeing the influence of "accelerators" such as the ML and neural engine cores. Accelerators beyond the CPU are nothing new - that's what a graphics card is, after all. What's interesting is these are on the same SOC with the CPU cores.

What if Apple decides, even for the higher-end Macs, to integrate RAM into or closely couple it with the SoC? What if they decide on their own memory controller that lets you address all the memory chips at once? What potential could that have for performance? Maybe not enough to offset the cost/complexity. But if it did turn out to be on the right side of the cost/benefit equation, now instead of having to convince multiple parties it's a good idea, at worst Apple just has to convince a few internal divisions to work together.

That is the long-term power I see with Apple Silicon. Not raw specs or Moore's Law - but unfettered access to the full Art of the Possible.


As you point out, Apple's per-core performance is impressive, but their ability to customize their hardware to optimize very specific compute tasks - like the NSObject release example or floating point for JavaScript - will be a lot harder to match for off-the-shelf/least-common-denominator parts, as x86 processors have traditionally been.

The improvement in retaining and releasing NSObjects is mostly due to Arm's weaker memory model. And JavaScript-specific instructions like FJCVTZS are available to anyone designing Arm cores. Also, IIRC, FJCVTZS was added to more closely match the way x86 handles floating point numbers, because that is what JavaScript was designed to target.

But if it did turn out to be on the right side of the cost/benefit equation now instead of having to convince multiple parties it's a good idea, at worst Apple just has to convince a few internal divisions to work together.

The problem is that Apple does what they want at their discretion. You need them, but they don't need you. Whereas hardware manufacturers are usually quite sensitive to customer needs. We've already seen how this plays out with Apple's professional software products like Shake or Final Cut.

Not to downplay Apple's accomplishment, after all nobody expected them to be this competitive. But people are getting caught up in the hype.


I would say Intel didn't just fail on process nodes. It also failed to see the rise of the dedicated fab (TSMC) and the fabless business model as a more efficient model. Intel also failed at innovating on lower-power designs and the GPU. Even at 14nm, there is so much that could be done to improve iGPU performance, from color compression to more efficient shader arrays. The Nintendo Switch runs Nvidia's Tegra X1. It's a 20nm chip, and look at the performance of the GPU. Intel could also have put their market-share weight behind the alternative to CUDA, be it OpenCL or even their own API, to really gain traction. They could have had their own dedicated GPU. Yet they just ignored the whole massive SIMD compute market. I think for Intel, losing mobile to ARM and seeing the improvements in mobile year over year should have been a wake-up call. Yet they didn't do anything to increase their moat for the past 10 years. Or maybe they did try, but their strategies were just flawed. The strategists at Intel just don't have enough foresight and vision.


Feels like execution rather than strategy: they've tried at mobile, gpus, foundry etc - the right calls - but just haven't delivered. Apple / AMD / TSMC have executed much better.


I feel it's both. Also, a good strategist would have made sure whatever plan and vision you have is also going to be executed well. A strategy that didn't end up being executed well is a failed strategy.


Agree 100% but some of the hero worship is a reaction I think to being told for a long time that 'Only Intel x86 chips are powerful enough' - which in part was put about by Intel marketing (Intel inside etc and some of the comments at the time the iPhone launched that consumers need an x86 to do real work) - and that ARM wouldn't scale up for desktops etc.

The other key point is that Apple can pick and mix from the technologies they use - they've unbundled Intel - choosing the best that they can find for their needs whether in-house or externally.


What workloads are you picturing that Apple isn't serving with Mac chips that Intel has to cater to? This has to be a much more general purpose device than an iPhone.


> Put Tiger Lake on TSMC 5nm and a lot of M1’s lead goes away.

> It’s not because [...] Apple has found tricks no one else in the world knows about.

I don't think anyone would disagree with either statement. The point is Intel can't get TL on a small node, and not for lack of trying. They eventually will, of course, but there is an organizational sickness at Intel that has crippled them.


Then how is M1 beating _desktop_ top end AMD chips in single core performance? Ignoring Intel here.


It isn't. In Cinebench it's up there, behind a middle-of-the-road Ryzen 5600X in single core, and ~two times slower in multi.


Right but that chip is a 65 watt part. The m1 is what, 10-15? I think that’s pretty amazing.


It is pretty amazing for what it is, but it definitely is not "beating _desktop_ top end AMD chips in single core performance"


Copy-pasted the comment I left at another forum:

The M1 is Zen3 5600X performance in an ultra-portable form factor. Whether that's revolutionary or not particularly noteworthy depends on the individual use case. If you don't care that much about power/heat/fan noise, it's neat but not (yet) world shattering. But if you live your life on a small laptop, it may well be.

According to rumors, we'll see Apple's 8 Firestorm + 4 Icestorm CPU within a quarter or two in the 16-inch MacBook (current M1 is 4 Firestorm + 4 Icestorm, where Firestorm is the high-performance core). That will perform somewhere between a 5800X and 5900X (or some recent 16-core Xeon that nobody remembers the name/number of). Again, revolutionary or meh depends on use case.

Linux native is not in the cards but it won't matter, it will work fine virtualized with hardly any performance penalty (for ARM64 linux). If for some reason you insist on native linux, it's time to recognize that Apple hardware is not for you.

Personally I run my server workload on servers far away and let other people deal with infra (particularly, power, networking and physical security). But it's probably great for heavy media creators on the go.


Native linux could actually be a thing… https://www.reddit.com/r/jailbreak/comments/jtrt2l/upcoming_...


Does that mean you have to jailbreak your own machine to install linux or is it just a coincidence and it's just the same team?


No. https://keith.github.io/xcode-man-pages/bputil.1.html

This changes the security mode to Permissive Security. Permissive Security uses the same "global" digital signature for macOS as the above Reduced Security option, in order to allow running software which is not the latest version. Thus anything other than the latest software therefore may have security vulnerabilities. At a high level, Permissive Security allows configuration options to be set to not require all software to be digitally signed. This can allow users who are not part of the Apple Developer program to still be able to introduce their own software into their system. Additionally, especially dangerous security downgrades may be restricted to Permissive Security, and only available via CLI tools for power users rather than GUIs. Passing this option will explicitly recreate the LocalPolicy from scratch, (i.e. it does not preserve any existing security policy options) and only the options specified via this tool will exist in the output local policy.

It sounds like Apple left the boot loader unlocked on these systems, just not by default.


Amazing, thanks for taking the time to thoroughly answer my question!


"Linux native is not in the cards"

Sorry I'm not in the loop.. but why is that? Does the hardware somehow preclude it?


The locked bootloader will only run Apple signed operating systems (a la Chromebook, without the Chromebook Developer mode escape hatch).

https://news.ycombinator.com/item?id=23640746


You can still disable secure boot just fine, just run “csrutil disable” from recovery and the option magically appears.

https://support.apple.com/guide/mac-help/macos-recovery-a-ma...


Is it durable or can it be forgotten as a setting? I would be looking at plain Linux boot and erasing Mac os, so any dependency on Mac tools would be nonideal.


At this point, it's very much preferable to keep macOS. The bringup for the drivers for that SoC will take a while. It won't all be available instantly.

Note on those Macs that the security settings are boot volume specific, so that you can have a volume with all security off for fun, and still have a completely secure macOS install selectable via dual-boot.


Actually that's not the case. This post is misinformed (and 4 months old).

There was a breakdown from an official Apple page how to boot alternative systems on an M1.


https://news.ycombinator.com/item?id=25145119

Choose one of the following security options:

Full Security: Ensures that only your current OS, or signed operating system software currently trusted by Apple, can run. This mode requires a network connection at software installation time.

Reduced Security: Allows any version of signed operating system software ever trusted by Apple to run.

Permissive Security: Does not enforce any requirements on the bootable operating system.

Note: The Permissive Security option appears only when System Integrity Protection (SIP) is disabled. To disable SIP, start up your Mac in macOS Recovery, open Terminal, then run the command csrutil disable.



> There was a breakdown from an official Apple page how to boot alternative systems on an M1.

Link please.


That's the kmutil manpage section about it: https://twitter.com/comex/status/1328800304318603265?s=20

That work doesn't seem to be complete yet however in the latest beta.


I don't get this (sorry ignorant!) - isn't the Zen3 an AMD thing? What has this got to do with ARM?


Yes, Zen3 is AMD's just-released latest architecture, and currently the highest-performing x86 implementation. The 5600X is the mainstream 65W 6-core desktop Zen3 CPU.

It has nothing to do with ARM or Apple. I left the comparison as a reference point for people who follow desktop PC performance and may not be as well aware of Apple's.


It’s also a 7nm design that has to deal with things like PCI buses, external memory on a narrower bus, etc., so actually it does very well compared to the M1.

I’m interested to see how Apple can make this scale. Can you get an Mx laptop with 64 GB? Or will that have to be SoC-external memory for the process to scale?


Apple chips have PCIe since forever too, in fact Wi-Fi + BT is plugged over PCIe on them. Thunderbolt with two 40Gb/sec links isn't tiny either.

64GB will come for future laptops. And Apple can scale with a single package beyond 64GB too, very high performance solutions included. See the A100 80GB from NVIDIA with 2TB/sec of memory bandwidth, with RAM as part of the package, just not in the same die.


> Thunderbolt with two 40Gb/sec links isn't tiny either

It comparatively is. Thunderbolt is basically a PCI-E Gen 3 x4 slot but over a (very) short cable. Commonly more like an x2 link since getting the x4 speeds is difficult. Commonly 2 thunderbolt ports also share a single controller, so it's a 40Gb/sec combined maximum not per-port (either port can saturate, but you can't hit 80Gb/s total). It's unclear what the M1's controller actually is here, if it's dedicated lanes per port or just bifurcation.

But lets say the M1 is full on 2 TB3 controllers, for a combined 80Gb/s over those 2 ports. That's still not much. By comparison the 5600x has 24 PCIe Gen 4 lanes on it. That's 380 Gb/s of bandwidth off of the CPU. These are all things Apple would need to heavily beef up when they make the Apple Silicon variants of the larger class products. You can't do a Mac Pro replacement with 80 Gb/s of I/O. That'd be a joke. Apple obviously will scale up the I/O, but that costs power, too. It won't be free.


That I/O is relatively inexpensive power-wise from experience, and is totally off when unused.

For those Ryzen parts, the thing that hikes power use a lot is having a separate northbridge in the same package instead of an SoC, with the vast majority of that additional power use being due to that.


> I’m interested to see how Apple can make this scale. Can you get an Mx laptop with 64 GB? Or will that have to be SoC-external memory for the process to scale?

Memory is already external to the SoC on the M1; it's on the same package but it's not a single chip.


Just a performance comparison.


>The M1 is a Zen3 5600X in an ultra-portable form factor.

Well, also without x86 baggage, and with tons of SoC co-processors added, and unified memory.


Unified memory is pretty normal on non-desktop systems and has been for a while.


Not sure which those "non-desktop" systems are.

If you mean "smartphones and tablets", ok. But this is a laptop, not a mobile phone, and it's new development there.

If you mean "laptops" then, no, they didn't have unified memory, much less have it be "pretty normal".


Laptops with iGPUs are pretty normal, no? (And common on desktops too)


I don't know about the latest Intel integrated GPUs, but for the original ones it was not a unified memory arrangement. Rather the integrated GPU used a dedicated chunk of main memory. It was the same banks of memory, but was treated as separate. If you wanted a texture to appear on-screen, then it had to be moved from the normal pool of memory to the one reserved for the GPU.

This reserved pool of memory was even hard-set at boot time, and usually its size was determined by the amount of total memory (for example, at one point MacBooks with 4GiB of memory allocated 1GiB to the GPU, and ones with 8GiB allocated 2GiB).

What it sounds like Apple has done here is heavily blur those lines. One example would be to load a texture into a memory location, make a change with the CPU, then let the GPU do some modifications, check something with the CPU, then apply it to some shape and display it. On current systems that would mean multiple copy operations of that texture back and forth between segments of memory (across the PCI bus on a dedicated card). Here that might all be done with no copies at all (just handoffs).


> On current systems that would mean multiple copy operations of that texture back and forth between segments of memory

This is not the case, as expanded in sibling threads.


iGPU doesn't mean unified memory.


For modern ones it does.


Can you give an example outside the M1 Macbooks?


Intel iGPUs since gen7, AMD iGPUs since Kaveri. Most laptops have one of these.


Yep. AMD called it hUMA from 2013 or so when they started marketing it (eg https://www.tomshardware.com/news/AMD-HSA-hUMA-APU,22324.htm...)


I run my Intel GPU for a message screen. It is an i7-9700K and has unified memory.


> Not sure which those "non-desktop" systems are.

Consoles have used unified memory since last generation I think?

AMD APUs have supported unified memory since Kaveri I think, and Intel iGPUs have supported a limited version of data sharing for a while, but I don't think one can call UMA "pretty normal".


Why not? The vast majority of laptops sold today have iGPUs that support it. Both Intel and AMD docs even refer to it using the term "Unified Memory Architecture", and document use, including things like zero-copy. Is being available in most modern computers by sales volume not good enough?


Is there a distinction between shared graphics memory (an ancient and still very common thing) and "unified memory"?


Yes, they're absolutely nothing alike.

Shared memory means the CPU and the GPU share physical memory in different address space, so 512MB RAM with 64MB shared means the CPU gets 448MB and the GPU gets 64, and data has to be copied from one chunk to the other.

Unified memory means both CPU and GPU not only share physical memory but share the address space, all memory is accessible to both at the same time. This means that e.g. you can create commands or load data in memory on the CPU then just tell the GPU "your stuff is here" with no copy needed anywhere, and likewise the other way around.


> data has to be copied from one chunk to the other.

With iGPUs, it is pretty standard to be able to map "CPU memory" into the GPU's address space and access it directly, without copies.

See section 5.7 here: https://software.intel.com/content/dam/develop/public/us/en/...

I don't think Apple has provided enough information to say if the M1 "unified memory" is actually novel or not. Automatic mirroring between CPU and GPU page tables would be pretty novel, but I don't think there's any info currently available to suggest that happens.


Memory cross addressability/accessibility works with dGPUs too (but there of course accessing memory behind PCIe is slower). IOMMUs are involved. But software/driver stack has been slow to take advantage of iGPU unified memory, operating systems are partly to blame too.


> Linux native is not in the cards but it won't matter, it will work fine virtualized with hardly any performance penalty. If for some reason you insist on native linux, it's time to recognize that Apple hardware is not for you.

How would one run a graphical Linux distro virtualized and by which software?


Install Parallels or VMware (once they have their universal builds ready), install your favorite Linux distro’s ARM build, and you are off to the races.


Parallels and VMWare Fusion would run it. Virtualbox as well.

We can already do this on Intel Macs, Windows PCs, etc. Graphics are not a problem.

Heck, we could since forever also run graphical macOS and Windows on virtualization apps...


Probably not VirtualBox, it's very x86-specific.


Is it enabled in VirtualBox by default?


If you mean running GUIs: in the Intel version, yes.

The ARM/Apple Silicon version not sure if it's ready, might need work to even run cli.


Is there even ever going to be an ARM / Apple Silicon version?

It would be a huge amount of work to add ARM support to VirtualBox. I would be surprised if Oracle were interested in making that investment.


That's up to Oracle. If MS also starts pushing Windows for ARM (as they seem to be intending to do), I think they might make a VirtualBox offering for ARM.

In any case, we have Fusion and Parallels, and Fusion is now free for personal use iirc.


VirtualBox is an acquired product. Originally developed by the German company Innotek, who were bought by Sun, and in turn bought by Oracle. It is useful enough to Oracle for them to keep it alive, but I'm not sure if Oracle would have ever started a project like VirtualBox themselves.

The internals of VirtualBox are very x86-specific. It was not written with the idea of supporting non-x86 CPU architectures. I think trying to add ARM support would be a lot of work and redesign. VirtualBox used to support software-based virtualisation, which has something in common with CPU emulation – it is kind of emulating x86 on x86, and even uses some of QEMU's code for dynamic recompilation. But, that software-based virtualisation was only ever 32bit, it exploited obscure features of the x86 architecture (execution in ring 1), and in the latest release of VirtualBox has been removed entirely.

(I worked for Oracle for almost ten years. Never at the level at which decisions like this were made. But still, I feel like I do have some feel for what kinds of decisions Oracle's leadership are likely to make. They could always surprise me, though.)


The M1 lacks virtualization instructions, right?


No, just the DTK did.


No


> in the 16-inch MacBook

And the 13". We'll probably have something even better for the 16" too.


> the M1 is Zen3 5600X performance in an ultra-portable form factor.

It’s astonishing. I just hope they someday put in a good GPU. Something comparable to an Nvidia 3060 would be great.

I’d love to see Apple go HAM with their ARM chips in an iMac Pro or some such. Give me 12 firestorm cores and bump the GPU cores from 8 to 64.


> I just hope they someday put in a good GPU. Something comparable to an Nvidia 3060 would be great.

The GPU in the M1 is perfectly good for the power targets it's trying to hit.

Either fortunately or unfortunately, depending on your perspective, Apple is unlikely to be able to do with the GPU what they did with the CPU. That is, there's no legacy in either Nvidia nor AMD's GPUs that Apple can cut out to get wins. And similarly, neither Nvidia nor AMD slowed down their progress or is suffering from fab issues like Intel did & is on the CPU side of things.

I don't doubt Apple can hit an RTX 3060 level of performance, but they'll get there with roughly similar to RTX 3060 level of power draw. Which means it ain't fitting in a MacBook Air or MBP 13", either. And judging by the existing MBP 16" cooling solution, not really going to go well in that chassis.


I’m a game developer. I want macOS to not suck for games. Apple and Nvidia have never gotten along for a variety of reasons.

Apple will probably continue to offer shitty GPUs in their MacBooks. That’s fine and maybe even a smart business choice.

But it means mac continues to suck for games. Which is unfortunate imho.

I think we can all agree it would be very interesting to see how much GPU power they can stuff in an iMac Pro and desktop class Mac Pro. If it doesn’t make business sense then so be it.

Realistically I’m hoping in 5 years that Nvidia releases a high-end integrated SoC. Tegra on Switch is fine. Keep it going.


Well Apple has never really tried to target the Mac as a gaming platform, except maybe back in the G4/G5 days they had some effort, marketing Doom 3 running on the new G4.

Can't remember any other times they have really talked about gaming performance.

Gaming is mostly DirectX anyway or consoles with their custom APIs.


What GPU offers the same power efficiency as the one in the M1? Apple is matching the 75W 1050 Ti with less than 5W of power.

The 3060 Ti is a 200W GPU, which is 40x the power draw of the GPU in the M1, but it is only 5x the performance. GPUs might not scale linearly, but it may very well be the case that Apple will blow us away when they ship their high-end GPUs.


> Apple is matching the 75w 1050ti with less than 5w of power.

Who gives a shit about the desktop 1050 Ti here? The M1 comes close but can't match a 1650 Mobile, and the 1650 mobile is both faster than a 1050 Ti and uses less power. Almost like the 1050 Ti was never the pinnacle of anything but being cheap? Then again, the 1050 Ti doesn't use 75w, either ( https://www.techpowerup.com/review/palit-geforce-gtx-1050-ti... ), just like the M1's isn't "less than 5W" ( https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste... - more like ~10-15w for just the GPU)

The M1's good enough it doesn't need nonsensical comparisons with fake numbers to exaggerate things further. Let it be what it is, which is a good integrated GPU not that dissimilar to other recent integrated GPUs

EDIT: Although if you do care about power efficiency, sites like techpowerup have you covered: https://www.techpowerup.com/review/amd-radeon-rx-6800-xt/36.... performance / watt graphs.


I can’t wait to see Apple’s high-end GPU configurations. In either laptop or desktop form factor.

Efficiency is cool. (heh) But if I need X then a highly efficient X/5 still doesn’t deliver X.

I doubt Apple will ever sacrifice their overly thin MacBook ergonomics for power. But I’m hoping they max out their desktop form factor.


>If for some reason you insist on native linux, it's time to recognize that Apple hardware is not for you.

And so, all the people that would provide support for their family and friends due to using the same hardware split paths. And now those Apple users have to go to Apple for support. A double win for the corporation.


>all the people that would provide support for their family and friends due to using the same hardware split paths

All 10 of them?

The subset of people that (a) had Apple laptops, (b) run Linux on them, (c) provided support for family, is probably irrelevant...

>And now those Apple users have to go to Apple for support. A double win for the corporation.

Yes, what a big lucrative market: relatives of people who run Linux on their Macs that dependent on them for support

/s


>The subset of people that (a) had Apple laptops, (b) run Linux on them, ... Yes, what a big lucrative market: relatives of people who run Linux on their Macs t...

Technical people moving away from using Apple's new proprietary hardware that literally can't boot any OS but macOS will mean they can't help their friends and family as easily, due to lack of familiarity.


>Technical people moving away from using Apple's new proprietary hardware

Many technical people seem to be jumping into Apple's new hardware, and reviews are all raving about the M1.

Don't confuse FOSS people / ideologues with "most technical people". The technical people that are big in numbers and give support need to be merely computer savvy people, not hardcore devs or people who'd rather run Linux.

>that literally can't boot any OS but Mac OS

That's wrong. It can boot any OS: https://news.ycombinator.com/item?id=25145119


The benchmark that's blown me away most so far: Marco Arment on Accidental Tech Podcast says the new Macbook Air is significantly faster to compile Overcast (a pretty big iOS app) than his 10 core $6000+ iMac Pro with 64GB RAM and a 160W CPU. https://overcast.fm/+R7DWXqqm0/1:08:11


I/O-bound, maybe? Could be misleading but still impressive


Compilation tends to benefit heavily from bigger caches & faster memory, yes. It's an area that Zen 2 & 3 have been absolutely fantastic at thanks to their huge caches as well.

The iMac Pro's relatively slow 2666MHz RAM can't be saved by the also relatively small (these days) cache in the Xeon-W it uses.


Xeon W is 7 years old at this point, and the Apple M1 is a pretty darn good modern chip.

Today's desktops are as good as servers from 5 years ago. Today's laptops are as good as desktops 5 years ago. Today's phones are as good as laptops 5 years ago.

---------

The Apple M1 makes a leap with a huge L1 cache, very wide execution, at a very low clock rate (2.8 GHz for power-efficiency).

But when you pair such a leap with a 7-year technology jump, yeah, things look quite silly.


Yeah something is fishy here. Could be the new RAM which is way faster


One thing to factor in here is that the MBA isn't cross-compiling. It's a RISC compiler writing RISC code.

To be clear though, it doesn't matter in the end if the _result_ is more productivity. But it is worth noting that under the hood, the comparison is not entirely apples to apples.


Why would it matter whether you emit x86 code or ARM64 code? The code that produces either doesn't get magically faster when executed on its respective arch.


Edit: as has been pointed out here, there is no x86 code generation happening, so this doesn't apply.

Yes, I have seen claims from people along the lines of 'x86 code has more optimization paths in mainstream compilers for historic and other reasons'. Not sure about that; I certainly can't imagine such a significant difference.


But it's emitting ARM64 code in both cases, so optimisation paths aren't relevant here.


You're right, there's no x86 emitting going on here, so it's not relevant.


It's not identical, but there's no obvious reason why emitting ARM64 code from a compiler running on x86 hardware should be more onerous than doing so on ARM64 hardware.

Plus ARM64 is not really RISC in a meaningful way - it'll still be executing microcode under the hood.


I'm looking at getting one. But I wonder what the current compatibility status of the dev stack is: node, git, go, java, nginx, databases (pg, mongo, maria...), editors (macvim, vscode...), iTerm, et al. And the brew family of installs, which was covered as part of yesterday's thread [1].

macOS ships with some interpreters pre-installed as part of Big Sur, including Ruby, Python and Perl. And most source distributions should just compile, but my guess is that it will be buggy given the arch + OS combination is novel.

Firefox (beta) and Chrome/Chromium apparently are available too.

[1] https://news.ycombinator.com/item?id=25132217


There's a detailed issue[1] on the homebrew repo where you can track their progress. The "Major blockers" section lists go, rust, gcc, and others.

[1] https://github.com/Homebrew/brew/issues/7857


What I've heard is that for the most part, even x86 apps run quite well. Exceptions include weird cases like Docker (virtualization) and web browsers (JavaScript JIT), but as you say Firefox/Chromium have native updates available and Docker is working on it. Docker has no ETA though, so if you depend on that regularly it might be a temporary blocker.


For what it's worth, Rosetta 2 does claim to support JITs (which is a pretty neat trick), and the release notes for Firefox 83 claim it should work fine under Rosetta 2 on the M1.


macOS ships with some interpreters pre-installed as part of Big Sur, including Ruby, Python and Perl. And most source distributions should just compile, but my guess is that it will be buggy given the arch + OS combination is novel.

Everything at the Unix level Apple ships is already a universal binary, including Ruby, Python, etc. For example, zsh (and bash) run natively on the M1:

    file /bin/zsh

    /bin/zsh: Mach-O universal binary with 2 architectures: [x86_64:Mach-O 64-bit executable x86_64] [arm64e:Mach-O 64-bit executable arm64e]
    /bin/zsh (for architecture x86_64): Mach-O 64-bit executable x86_64
    /bin/zsh (for architecture arm64e): Mach-O 64-bit executable arm64e
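
Incidentally, here's a quick sketch for checking whether the current shell (or any process) is running natively or under translation; the sysctl key only exists on Apple Silicon:

    sysctl -n sysctl.proc_translated
    # prints 0 for a native arm64 process, 1 if it is running under Rosetta 2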


I think VS Code will get there very quickly given demand and availability on Windows ARM. For nginx etc., many people have moved to Docker or use Vagrant workflows, which may slow them down. Anyone who follows more classic workflows or installs from Brew should be in better shape.


VS Code is targeting an Insiders release for Apple Silicon by end of November. Experimental builds are available now. https://twitter.com/code/status/1326237772860981249?s=20

(on the team)


I would urge VSCode users to give Panic’s new Nova IDE a look. It’s a real native app, even without an M1 it has a much snappier feel to it.

If Docker isn’t ready by the time 16” M1s come around, I’ll probably just resort to cloud instances for a while.


I think it's a little early to get an M1 based machine for development, unless you do iOS or macOS applications. My seven year old Macbook Pro could stand to be replaced, but it's getting Big Sur, and it's fast enough. I'll just wait another year and see how the ecosystem on the M-series CPUs look at that point.


I had my Ruby on Rails / Node dev env set up within a couple of hours - the only weird thing was Redis (via Rosetta) had to run as root for some reason.


Java is mostly ready, Microsoft and Azul have published builds and they're in the process of pushing the patches upstream to OpenJDK.


I just bought a MacBook Air. I don't think it's an exaggeration to say that this redefines how we think about a laptop. Whereas previously a laptop was something that spent the bulk of its time plugged in, and only capable of short bursts of number-crunching/gaming due to thermal and battery limitations, I can literally get a good few hours of gaming untethered on this machine. And this is a base-level MacBook Air, I imagine the Pros are even better.

I've owned a few "gaming" laptops, but until the M1 MBA I haven't had one that could run games for more than an hour while on battery power, and even then it sounded like a jet taking off.

Software support is another matter; though Rosetta works surprisingly well, the general lack of Mac support in games combined with the removal of 32-bit support in Catalina means you're not exactly spoiled for choice.


I can get a few hours of light gaming in on my Zephyrus G14 on battery, and it has a vastly faster GPU than the one in the M1, such that I can also still play demanding games when tethered to a wall (it also has a VRR 120Hz screen). A trick that the MacBooks can no longer pull off at all since eGPU support was killed.

It's a fantastic experience, but in no way has having this laptop for the last 6 months "redefined how I think about a laptop." It still does all the same things a laptop does, it still has all the same limitations a laptop has (like the physically small screen vs. other common gaming choices - be it desktop monitors or TVs). The M1 in the MBA doesn't really change any of that. Especially since the loss of bootcamp & eGPU means the MBA is a rather terrible gaming experience unless something drastically changes between game devs, Apple, and MacOS. The extra battery life is definitely welcome, but it's not game changing revolutionary, either. For some it may now be enough to go all day without charging, which is a key milestone, but many people were already there with the existing set of ultrabooks, too. Beyond that, eh? Kinda like a phone that can last 2 days. Cool trick, but it wouldn't change anything about my day to day life unless there happens to be a power outage overnight.


No idea why you are being downvoted, is HN really a group of Apple shills? I have the Zephyrus G14 also and love it. I'm closely watching the M1 but I really don't think it can compete with an RTX 2060. That matters a lot to me.

It also has an AMD 4900HS which while not an M1 is a fantastic processor.


> I'm closely watching the M1 but I really don't think it can compete with an RTX 2060. That matters a lot to me.

Per anandtech the M1 comes kinda-close sometimes to the 1650 mobile: https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...

It's then soundly wrecked by the 1660 Ti mobile, and the 2060 MaxQ is then faster yet still. They obviously also take a lot more power at peak, but that's also going to be a huge difference in the actual gaming experience, too.


What kind of gaming are we talking about?


The M1 on Mac Mini has a GPU on par with a 1650 ($150) discrete desktop GPU. It's by far the strongest integrated GPU ever.

https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...


On closer inspection, the M1 seems to be about 85% of the 1650 and is closer to the RX 560, a $130 discrete GPU.


Blizzard already released ARM64 builds of WoW. More than likely D3 and Starcraft 2 as well. They were already fantastic supporters of the metal API so I imagine it was an easy transition for them.


khmOverwatchkhm


>the general lack of Mac support in games combined with the removal of 32-bit support in Catalina mean, you're not exactly spoiled for choice.

There are people already running 32-bit Windows apps through Wine on the M1. Where there's a will, there's a way.


I wondered about that, doesn't it get hot and throttle performance?


According to Dave2D's youtube review the Air will throttle after 8 minutes of full load. The Pro never throttles.

https://youtu.be/XQ6vX6nmboU


Wow. That is pretty amazing. I have 15" 2018 mbp and it throttles if I look at it wrong ;-)


shhh. it will hear you.


It does get kind of warm, but it's not uncomfortable, and the throttling is not too bad, after 10 minutes you lose like 10% performance, after which it's solid. Probably the Pro doesn't have that issue.


My Air hasn’t gotten warm yet and I’ve been trying to really hammer it while I’ve been setting up my dev env and running benchmarks.

I have not tried Civ 6 late game benchmarks yet tho ;)


+1. I've had a beast of a Lenovo laptop, and though it had a great GPU, the fans were broken, because of which the FPS was capped to <30.


I feel like this may be the first time that MacBooks might actually be worth their premium from a purely hardware perspective.

Before people jump on this, I mean that for example the previous Intel MB Pro with i5 would cost like $1,500, which is far more than almost any windows laptop running the same specs.


What you are looking at, tho, is not the pure hardware perspective, but the pure hardware numbers perspective, which excludes ergonomics, design, build quality and hardware/software integration as well as stuff like speakers and screens which might not be comparable because no other vendor does them exactly alike.

I always found this strange, when looking at personal computers – or, in fact, most anything in life. As if what can not be checked or compared on a feature list has no meaning. Do people really feel this way about the world or is it just a way to deal with insecurity about what is important?

The most confusing thing about this is that, to me, the human experience is pretty much the opposite: the soft parts beat the feature list every time. The better design wins.


First, I appreciate the experience differential between macOS, Windows and Linux. I've enjoyed each, and have no major complaints about any.

With the right hardware, I've really enjoyed installing and using Linux, while I struggle when drivers for networking have to be procured - I'm not experienced enough in that area to figure it out (or rather, I'm not motivated enough.)

For me, Windows and macOS alike just work and while I've had more issues with Windows, it's because I've used it 100x more often and thoroughly; a price I pay for the ease of getting into games. (I am not speaking about right now, a moment in time, but the decades through which I gamed.)

In my career, I was largely a developer of Microsoft-focused technologies, and the support for tools was always sufficient or better.

Anyway, to my point, the parts of the software and experience that would pain me would be the things I couldn't control, and the things that were meddled with. I avoided pre-configured laptops if I could, and if I couldn't, I would do a clean install of the OS immediately. With desktops, I could select the best performance/experience/price (including reliability, features, etc) compromise for each compatible component, assemble it myself, and have not only a perfectly customized kit suited to my personal tastes and needs, but also the experience of learning and building on my own.

Now all of that is irrelevant if your preference is either just very strongly what comes from Apple, or you're only looking at complete systems and comparing them. Since I can be ambivalent about OS, except for gaming, and the experience was quite good (for me, not for everyone, obviously), I could pick a Windows-compatible laptop that gave me the best "features for the price", or rather, great hardware for the price without too much software meddling (or meddling I could remove.)

Now, does it sound like I don't consider the experience, or that I'm buying hardware because of insecurity? It certainly does not seem so from my perspective. To say a "feature list has no meaning" is to claim that the features do not contribute to the experience, but that isn't accurate. Of course you have to understand how an experience is derived from a feature, or that feature list can't be used to aid in your decision-making process.


What was likely meant by experience is things like the display (P3 retina 3K in the macbook air), the trackpad (still unmatched in any other device), the metal housing that is very refined and doesn’t flex at all, ...

In my experience if you want the level of physical build quality of a macbook, you end up in the price range of a macbook.

Windows vs macos vs linux, that is personal preference. I use all three and they all serve their roles well. I do find that windows and linux require more configuration and maintenance, but YMMV.


Some linux distros have gotten so polished these days that I find they require less maintenance than OSX on my MBP does. Though, most people who use linux want power not it-just-works, so I think there is going to be a preference towards the higher maintenance distros.


I have a 2018 Core i9 $5,000 MBP 15; it is by far the worst experience from a premium device I've ever had - when I turn on Docker and an IDE, people in the office start turning their heads towards me, the fan noise is insane - I need to power-limit the device using third-party software to make it usable. It's ridiculous.

Apple sold the old MacBook Air (the 2015 one?) until 2018? And charged like $1,000 for it? Have you used that crap? My wife has it and I was laughing my ass off at how terrible the device was - the viewing angles on the screen were better on $400 HP devices.

Apple keyboards have long been subpar, and they have no touchscreen or 2-in-1 offering, which is a perfect form factor for my mobile computing (the iPad Pro is nice, but being iOS-only is a deal breaker for almost any professional).

And the beloved macos was super rough in Catalina.

I need to develop for iOS occasionally so I'm forced to use a Mac or swap around, but I'm sorry - Apple laptops were a bad deal for a long time from most perspectives - I would personally get an X1 over any previous MBP.

M1 and pricing change that for the first time in a long long time - I would recommend this laptop to anyone buying a device in this category.


If we're talking about hardware ergonomics, the lack of a del key and the placement of the fn key alone make me never want to buy a MacBook again. I also hate the difference in cmd, alt and ctrl keys from other keyboards.


> I also hate the difference in cmd, alt and ctrl keys from other keyboards.

It's a matter of habit, and super easy to configure from the keyboard preferences in macOS. The trade-off is that Ctrl is almost exclusively used for unix-y things, and all the usual GUI-y things use Cmd. Using a shell is just painful for me in anything other than macOS for that reason alone.

Incidentally, because of the pervasive support for readline shortcuts in all text inputs, del is just ctrl-D.


> Using a shell is just painful for me in anything other than macOS for that reason alone.

On Linux at least you can use Ctrl-Shift-x, Ctrl-Shift-c and Ctrl-Shift-v for cut, copy and paste in the terminal. So again it's just a case of getting used to the shortcut differences.

On macOS Del can also be found on fn-Backspace and I often remap the right Alt key to Ctrl.

It's pretty easy to adapt to either with a little time.
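
If you'd rather script that right-Alt-to-Ctrl remap than click through preferences, hidutil can do it; this is just a sketch using the standard HID usage IDs, and the mapping resets on reboot unless you wrap it in a launch agent:

    # map right Option (0x7000000E6) to right Control (0x7000000E4)
    hidutil property --set '{"UserKeyMapping":[{"HIDKeyboardModifierMappingSrc":0x7000000E6,"HIDKeyboardModifierMappingDst":0x7000000E4}]}'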


The point is that it’s just annoying that the terminal is a special application that has its own shortcuts for everything. I could use shift-ins or ctrl-ins too — but I don’t want to. I want the terminal to behave like any other application.


Some terminals have the feature called "smart copy." You may like it.


To be clear, my comment did not relate to either Apple or MacBooks in particular.

Also, the value of each soft factor will differ from person to person. Good design highly relates to personal needs.


I guess if you redefine every feature someone cares about to be a "soft feature" then you are right. Kind of a pointless distinction though. Listing features matters because everyone cares about different features.


Yes I understand that and I guess I’m a bit salty. But part of my point is that you can make really great hardware and software of great quality, but one or two unfortunate choices can soil the whole thing for someone.

And Apple especially can sometimes be too stubborn to acknowledge people’s personal differences or an established industry standard.


Sometimes? Don't you mean always?


> the lack of a del key and placement of the fn key alone make me never buy a macbook again

What do you mean by these?


MacBooks and Apple keyboards in general lack a physical del key (in IBM keyboard terms). They only have a backspace, which they label delete.

Fn+backspace works as a shortcut, but they’re on completely different ends of the keyboard so you need 2 hands. Ctrl+D also sometimes works but is inconsistent across applications.

Also, unlike all other vendors, Apple places the fn key on the far end of the keyboard, instead of the much more used ctrl which is now in the middle. When typing I find it much easier to quickly hit the furthest key than the middle one so I often hit the wrong one.


Two things:

- Portables in the Mac world do lack a delete key (the delete-forwards key, that is) on the built-in keyboard, but that's more of a space constraint than anything else. Mac keyboards in general do not lack the delete key; the usual full-size keyboard certainly has a delete key.

- Keyboards are a personal thing, and it's very easy to plug in any normal USB keyboard, and then get all the benefits of the [delete] key as usual. There's nothing else to do, just plug in your favourite keyboard.

[Mine came this morning: https://i.imgur.com/at1mGiT.jpg]


Any Windows laptop with comparable display quality, SSD speeds, battery life etc. costs similar or more. And they still have trackpads that suck.


I have both, MacBook Air 2019 and Dell XPS 13 2018.

The MacBook is stunningly smooth: even under heavy CPU load you hit Cmd+Space and Spotlight is there. On my Dell with no CPU load at all, I hit the Windows key - and nothing happens.

What really keeps me at Apple is this overall smoothness. Under Windows/Intel you can pick one spec that is impressive, but as you said, it is the overall experience.


Now this is the case. Before, Dell XPSs were VERY good laptops with the same performance for a cheaper price.

And that's not including thermal throttling, the bullshit Apple pulls with background OS stuff like checking executable hashes online, being restricted in what software you can run, and so on.


Are you sure?

A Dell XPS 13 with 8GB RAM and a 512GB SSD is $1,349[1].

An Apple MacBook Air with 8GB RAM and a 512GB SSD is $1,249[2].

The XPS13 is great, I have one myself with Linux on. They're definitely in the same price range as the Apple laptops though.

I think the other points you've raised have been addressed else where.

There's no restriction on what software you run. You can even replace the OS with Linux if you like by configuring the security.

The hash key transmission seems to be used for malware detection, is optional and Apple have committed to not keeping logs of it when people do use it.

1. https://www.dell.com/en-us/shop/dell-laptops/new-xps-13-touc...

2. https://www.apple.com/shop/buy-mac/macbook-air


I dunno if you are serious or not, but the XPS is a big step above the Air in performance.

Even with current M1 Macs, outside of the battery life, you have sub 2k laptops that blow the M1 out of the water

https://browser.geekbench.com/v4/cpu/15333968


Not sure I’d call that “blowing out of the water”.

I think, for a light weight laptop I’d rather take the single core bump of the air over the marginal gain for multicore of the Asus. That’s just me though.

Geekbench 5:

Asus: 1218 sc, 8031 mc [1]

MacBook Air: 1726 sc, 7417 mc [2]

1. https://browser.geekbench.com/v5/cpu/4854331

2. https://browser.geekbench.com/v5/cpu/4856634


Have there been any sort of long-term load tests? I feel like the chassis limitations are going to result in thermal throttling compared to similar-performance laptops with actual cooling design. Even if it doesn't thermal throttle, one of my main complaints with MacBooks is that you can't actually use them in your lap cause they get uncomfortably hot.

As for being worth it, wait till the next gen mobile chips in 2021. I feel like we will see similar performance fairly soon in cheaper laptops without being locked into the Apple ecosystem.


The M1 MacBook Air reviews all say they don't get hot, no matter what you throw at them, and they at most throttle down 15%. The MacBook Pro and Mini stay cool and quiet and do not throttle, regardless of the workload. Any performance expectations you have from Macs before the transition you should set aside; these are entirely new and come with very different characteristics.


I feel like we will see similar performance fairly soon in cheaper laptops without being locked into the Apple ecosystem.

That's probably not going to happen any time soon.

If the $999 M1-based MacBook Air is already outperforming almost every Intel laptop that's been shipping (including Apple's), there's little chance Intel or AMD are going to surpass that.

Even if they get close to performance, they're not going to have the same battery life. From Matthew Panzarino's review on Techcrunch on the M1 MacBook Pro [1]:

"In fullscreen 4k/60 video playback, the M1 fares even better, clocking an easy 20 hours with fixed 50% brightness. On an earlier test, I left the auto-adjust on and it crossed the 24 hour mark easily. Yeah, a full day. That’s an iOS-like milestone."

It's important to remember: this is Apple's low-end chip for entry-level/consumer level hardware, yet it's more than competitive with a $5000 iMac Pro.

[1]: https://techcrunch.com/2020/11/17/yeah-apples-m1-macbook-pro...


People on car forums comparing a BMW with the same horsepower as a Fiat will agree with you.


Is the Mac comparable to the bmw because both are nearly impossible to work on and fix when they break?


I think it's fairer to compare a BMW 3 Series and a Toyota Supra. Both are good cars, but one is known for its premium finish and quality, and the other for the customization possibilities you can have. :)


This is a funny comparison since BMW actually builds the Supra. It's basically a rebranded Z4.

https://www.caranddriver.com/reviews/comparison-test/a310046...


thanks for the link I didn't know that :D.

I'm not a car enthusiast, but the supra is the first car that came to my mind when I read the parent comment because I saw a lot of youtube videos with supras from the 90s (modified and) producing a lot of power.

Like this one [1], I think the car in the video is a 1999 model and stock it produces at most 325hp (info from wikipedia).

1 : https://www.youtube.com/watch?v=yvh2HYIyBAs


If you meant the older supra then that makes more sense.

The Supra wasn't produced for many years and was recently re-released with this BMW partnership. Many old-school Supra fans feel it's strayed pretty far from its past and don't regard it well.

The car industry is full of comparisons like this. Luxury is a different scale than performance. You can have either or both. Macs exist higher on the luxury scale than most windows laptops and they offer models across the performance scale. Quite like BMW.


Although I’m definitely not convinced enough to make the jump yet, those emulation benchmarks are impressive (and seem to also apply somewhat to general use based on another review I read). Running x86 code faster than native equivalents, using an ARM chip, that will probably be consuming significantly less power doing it is something I did not expect to see in gen 1 of this switch.


What caught me off guard was the possibility, through CodeWeavers, of running even 32-bit Windows software:

https://www.codeweavers.com/blog/jwhite/2020/11/18/okay-im-o...


That is impressive, makes me wonder whether there's a potential future where Apple puts chips like these into servers to run existing software faster/on lower power


> a potential future where Apple puts chips like these into servers

Not to put too fine a point on it, but that potential future is already here. Mac mini is a server and off-the-shelf rack mounts are easy to find. [0]

[0] For example: https://www.sonnettech.com/product/rackmacmini.html


Without 10-gig Ethernet it's really limited in that role.


It's interesting you say "not convinced", curious what you are skeptical of? I'm not upgrading because I just bought a 16" Mbpro 2 months ago, but I'm more than convinced this new architecture is the real deal. Unfortunately for me my 6 year old MacBook just died and I have to have something for work.


I reckon it's worth waiting a few months for longer term testing, and probably a little bit longer for software support to catch up. There's also a decent chance that M2 chips are somewhere on the horizon and those might help answer the question of whether these SOCs can scale to enthusiast/professional desktops.

It's probably not fair for me to say I'm not convinced yet anyway, as I'm still developing on a Linux desktop and got rid of my laptop during lockdowns as there wasn't a need for it. If that need comes back though, these laptops look like they could be some really nice options (which would be a first for ARM laptops).


Usually gen 2 hardware is the ideal time to buy. It might be a few years before we get a gen 2 MBP, so imo no worries.


Not OP but Apple Gen1 products have a horrible track record.


Although I’m definitely not convinced enough to make the jump yet, those emulation benchmarks…

Remember, it's not emulation; Rosetta translates the x86 binary into ARM code and runs it. That's why there's a small pause when launching x86 apps—it translates the app on the fly.


Transpiling is not the same as emulation.


Someone correct me if I'm wrong, but I think this is mainly because Apple does mostly translation, while Windows on Arm is pure emulation.

This, combined with the fact that ARM-based Windows PCs have mostly used mobile chips not optimized for the desktop use-case and form factor, is why the new ARM-based MacBooks leave any ARM-based Windows PCs in the dust in terms of performance.


Also, Apple has tuned the M and A series chips to perform well with Objective-C and Swift as used by iOS and macOS. One example is that they made some object operations 5x faster than on Intel chips. This speeds up loops and garbage collection significantly. Those are very common operations.

Other vendors are using generic chips that were not tuned for the OS.


MacOS / iOS apps are not garbage collected


Reference counting is a form of garbage collection, although not usually referred to that way.


Windows on ARM also runs x86 apps via translation, but in their marketing materials they call it emulation.

x86 emulation works by compiling blocks of x86 instructions into ARM64 instructions with optimizations to improve performance. A service caches these translated blocks of code to reduce the overhead of instruction translation and allow for optimization when the code runs again.

https://docs.microsoft.com/en-us/windows/uwp/porting/apps-on...


Rosetta 2 probably also has a JIT mode to deal with software that JIT-produces x86 instructions, like browsers and the Java VM. And of course you can AOT- or trans-compile static x86 executables. I feel a lot of Rosetta's success comes from the sheer performance of the M1, so even the JIT and transcompile process is fast, and the execution of not-well-optimized code is fast. The Firestorm cores are at least 60% faster than Snapdragon's cores, per Anandtech.


I’ve heard Windows emulation is also cached JIT.


I got the MBP on day one but haven't been able to fully test it yet.

BUT I have been using it (right now) for almost 2 hours reading Reddit and whatnot, and I went from 100% to 96%. On my current MBP (2019, i9, 16GB) 2 hours would kill 50% of my battery.

I don't care if compile time takes 2x. I can take my laptop with me at 8am and come back at 10pm and know I won't need a charger. It is amazing


A battery life of 4 hours doesn't sound okay; my 2013 MBP lasts longer (original battery).


I will be honest, I also don't think it is good. It is a company-issued one so I won't bother too much with it. I remember when the i9 came out, Apple had some issues with it, so maybe the battery usage is still a problem?

But even so, 2 hours of my usage, in pretty much every MBP I owned (and I had a few), always depleted a minimum of 25% of the battery. (Maybe the websites I use are more draining than simple static ones?) And we are talking about 4% now with this one. No matter how you dice it, it is still a big difference.

Quick edit: it has been 2.5 hours now since I unplugged it. Browsing Reddit and Hacker News, plus the WhatsApp and Slack applications. 5% of the battery has been used!


That's normal. My i9 does the same. It is heavily, heavily reliant on turbo boost for performance and regularly boosts up to 80W (!!!).

As soon as the last kinks are ironed out for M1 there's no reason imo to use the Intel Macs ever again.


I wonder what browser is used. Chrome would eat the battery fast. And some ads are very power hungry.


I wouldn't be surprised depending on usage. My Intel MBA 2020 is 4 months old (should have waited ;_;) but CoconutBattery already reports it <90% of design capacity.


I have a late 2019 16" MBP - the battery life is very poor. 4hrs is pushing it.


Yeah I have same-ish battery life. I pretty much live inside Chrome + VSCode though. Bare Node dev server, no Docker.


In my anecdata my 2013 MBP does a lot less than 2 hours depending on usage.


Steve Jobs would be proud. I imagine Apple didn't say anything like that, but I really think he would be stoked with this, especially a cooler computer (no fan) and a leg up on all competitors.


I wouldn't be surprised if he planned this. Lisa Su said she regularly plans 10 years out in advance, because building up hardware takes that long.


Nah, both Jim Keller and Gerard Williams said Steve wasn't interested. Which was definitely the right call at the time, remember that was in 2010.

Now that it's happened, it feels more like a natural progression.


I'm blown away by how well they seem to work. I was about to buy a Lenovo x86 laptop on Black Friday but put it on hold. I presumed the M1 was going to be like a Surface X. With Ryzen laptop chips slowly arriving too, next year should be a good one.


What hope do we have that M1-like chips are available soon in regular non-apple notebooks? I guess the chance is zero.


Not M1, but MS has Windows running on its own ARM chip as well. I presume it'll get a bigger push now. https://www.microsoft.com/en-us/p/surface-pro-x/8QG3BMRHNWHK


The M1 isn't groundbreaking. RISC has been around for a while, and stuff like an SoC with unified memory isn't new. I'd say chances are pretty good.

While Apple will always have the edge in terms of optimization, since they only have to design for specific hardware, the difference won't be massive, and you won't be locked into the ecosystem.


Apple's advantage is in the virtuous cycle of big demand and a big market for their products. And now they have built up the talent and the process to build different chips, including chips that are very large. The M1 is not a small chip; its complexity is comparable to, if not greater than, Intel's, AMD's, or Nvidia's chips. They also established a sustainable relationship with TSMC. I am sure TSMC has exclusive or volume contracts for Apple to be the first adopter of its most dense node. TSMC allocates a lot more wafer capacity to Apple than to other designers. The marginal cost for Apple to design and build a large version of the M1 is pretty cheap.


Look at the ARM chips in Android phones. Compared to Apple Silicon they suck in both performance and power usage. That's what you'll get if vendors try to use similar chips in desktops and laptops. Vendors will instead continue to use AMD and Intel chips and fall further behind.


More performant than any other laptop in the same form factor, with less heat and less power draw than any other laptop. And it only runs macOS. Apple is, once again, two steps ahead of the entire rest of the industry. This is the future: custom hardware and custom software working together as a unit.

(Although for the next few years, I expect PC vendors will just use Snapdragons or Rockchips as a base.)


>This is the future: custom hardware and custom software working together as a unit.

Yes, the M1 is a product of a company with complete control over software and now hardware, including in this case the SOC that combines RAM and CPU. This is unheard of in a mass-marketed personal computer; I'm not sure this has ever happened before. Commodore used the 6502 CPU from subsidiary MOS Technology in its computers from 1977 onward, but I don't think MOS ever produced RAM, and certainly not integrated into the CPU like this. TI used its own CPU in the 99/4A, but not sure if TI ever produced RAM and, again, there was no RAM/CPU integration like M1.


It's in my cart, I'll place the order as soon as CS:GO launches fine on M1. So far it goes to black screen - other Source games work fine though, so I think it's just a matter of time. I already have a 2017 13" MBP but for some reason (dual channel memory? throttling?) the gaming experience is terrible, worse than a 2012 Mini in Bootcamp.

I've wanted a gaming laptop (or better put, a laptop that I can also game on) for years but couldn't stand the crappy quality of ASUS/MSI/Lenovo in sub-2k price range. As I said in another comment, I can't believe it's going to be a MacBook :)


Did you try Razer?


Well done Apple. You’re innovative again on the PC market.


“What's a PC?”


Ah, let me tell you the story of the "IBM" compatible PC.

"What's an IBM?"



John Hodgman was really great in / for those commercials.


Speed is still relative. Yes the benchmark numbers for the M1 are impressive, but I am curious to see non-benchmark, real-world workloads compared. I'm reminded of a YouTube video (link below) in which the comparison was made between an $11,000 Mac Pro and a PC kit that also cost $11,000. In this case the PC was rocking a Ryzen chip with 32 (or was it 64?) cores and smoked the Mac Pro on every benchmark. But when comparing comparable workloads, like doing a final render on a large video project using Final Cut Pro on the Mac and Premiere Pro on the PC, the Mac Pro did narrowly edge the PC. So I would be interested in seeing the M1 MBP do side-by-side, real-life workload tests like final-rendering a large video project, compiling a multi-gigabyte XCode project, applying a complex set of filters to an image, etc. Benchmarking is nice to an extent but I think lacks real-world significance in this context.

https://www.youtube.com/watch?v=jzT0-t-7-PA


TechCrunch's review includes a compile of WebKit, Final Cut export, as well as some traditional benchmarks. It includes performance and battery impact of the tasks.

https://techcrunch.com/2020/11/17/yeah-apples-m1-macbook-pro...


I saw that (snazzy guy video?) as well. Thing is, that video was explaining how a highly integrated system can perform better when software optimizes for it. These Macs will only see better performance as more software is optimized over time.



Awesome -- that's what I'm looking for... but now I wish there was some explanation as to why a $699 machine is doing things that previously you needed a $4000 machine to do. I look forward to learning more!


Anandtech had some theories based on the A14s design. https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


WebKit compile / speed / power:

https://images.macrumors.com/t/xjpsPEkXCCuNsW90kwPvVl0xfC0=/...

Percent battery remaining after compiling WebKit. 16" MacBookPro 2019: 61%, 13" MacBook Pro: 24%. M1 mini: 91%, M1 Air: 91%.

From: https://www.macrumors.com/2020/11/17/apple-silicon-m1-compil...


First, that's absolutely bonkers. Second, I think you've got a copy-paste error somewhere... I thought the mini was mains-powered. What does 91% battery mean there?


MBP, sorry.


Is battery consumption linear though? For example I can use my phone for half a day and battery will go down to 50%, then the remaining 50% lasts for maybe an hour.


M1 mini doesn’t have a battery. I assume you mean the M1 MacBook?


Yep, MBP, sorry.


SPEC benchmarks are your friend.

> non-benchmark, real-world workloads compared

Many (most) benchmarks are designed to reflect real-world workloads.


Sure, but not macOS workloads. What most (all?) of these benchmarks fail to capture is the difference made by the specific optimizations for Apple's software stack.

Things like the five times faster ARC retain/release cycle or the dedicated objc_msgSend branch predictor are a big deal when your software stack hits these hotspots constantly. The SPEC sample workloads won't.


So you want someone to compare a starting-at-$699 M1 Mac to a $10k top-of-the-line pro machine?

That you are even interested in this shows how well the M1 chip performs, but as a comparison this is pretty useless.


No, more like comparing the M1 MacBook Pro to the i7 (or i9?) based MacBook Pro. Comparing the $699 Mac mini to the Mac Pro seems absurd... or at least I thought so until someone else replied with a link to a video where the video-dude is comparing the mini to a $4000 machine:

https://youtu.be/HxH3RabNWfE?t=626


This is the revolution we needed in a technology landscape that promotes waste (x86, Electron), however belittled by the haters. It's real and changes everything.


Why is x86 waste? It was beating everything on the market during the '90s and 2000s, no? It's just that Intel hasn't pushed things forward in 10 years, right?


x86 is wasteful in comparison to these M1 chips, not necessarily in comparison to 1990s chips. It wasn't wasteful back then, but today there are faster and more power efficient chips on the market. The longer Intel keeps making 100W designs that are slower than 10W ARM chips, the more wasteful it becomes.


Power consumption and inefficiency


I received an M1 mini 2 days ago. I noticed the Steam client seems to be a bit laggy in UI responsiveness, but besides that I have no complaints. I got it for use as a server, so I was a bit dismayed to hear that M1 Docker support has hit a roadblock, but hopeful that they figure it out soon.


Don't worry, we (at Docker) have been working on Apple Silicon support for a while. The command-line tools work under Rosetta 2 but the local VM inside Desktop will take a little bit longer to port. Just in case you haven't seen it there's some further info on Docker+M1 in the blog post: https://www.docker.com/blog/apple-silicon-m1-chips-and-docke...


> The command-line tools work under Rosetta 2

Does this mean I can `docker run` using a remote (x86_64) docker host in the meantime?
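
For example, something like this, assuming SSH access to an x86_64 box that already runs the Docker daemon (host name is made up):

    # point the local CLI at a remote x86_64 engine over SSH
    export DOCKER_HOST=ssh://me@x86-build-box
    docker run --rm hello-world   # the container actually runs on the remote host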


Thanks for the reply! I have subscribed to the newsletter and look forward to updates.


Just curious, could you mention one or two of the challenges to getting this working soon?


>I noticed the Steam client seems to be a bit laggy in UI responsiveness

Disabling GPU acceleration may fix this issue.


I have a maxxed out 16” and Steam is still laggy for me.


It's a big terrible WebKit browser with a whole bunch of crap glued on, it performs about how I'd expect it to.


My question going forward is, how far can they extend the memory within the SoC architecture before the chip becomes too large for this to be economical? I am also interested in their GPU solution, but am curious whether they will be as diligent with driver updates as Nvidia and AMD are, as one reason gaming is so strong on Windows is that both providers quickly adjust for new games and issues.

While gaming is not an issue for all users, the Mac world needs to shake this concept of "just get a console or a Windows gaming machine." There should not be an excuse to need two or more machines on separate platforms.


The memory on the M1 is nothing special. It's not part of the M1 die; the M1 still has traditional memory controllers. It's just that they built the logic die and the memory dies into a single package for lower memory power. Increasing the package size is simple, or if you really need big memory, you can do the traditional memory connection on a PCB.

As for gaming, I don't think it's a hardware issue. It's software support. Microsoft now also owns a lot of gaming studios for Xbox exclusives; those will never come to the Mac. Also, big gaming companies are now working with the likes of Microsoft, Amazon, and Google on cloud gaming. That seems like the way forward for making AAA games run everywhere. I doubt these companies will spin up another team to support these games on the Mac platform. It sucks, because the M1 is actually a really good gaming processor.


Once Docker works on ARM, I’m gonna make the switch. I was very very close to trading in my 2020 MBP and eating the loss, especially after seeing the insane WebKit compilation times and battery usage. I loathe the Touch Bar and am very excited about having a MBA that doesn’t suck.

The only concern I have is this: it is my understanding that the ARM instruction set differs slightly from vendor to vendor. Is that true? I'm thinking of what problems I might run into shipping apps destined for AWS Graviton.


ARMv8 instructions should be consistent between vendors. GCC and LLVM are very likely to produce standard ARMv8 code that is portable. At work we use Intel chips to compile ARMv8 Linux executables and libraries and run them on Broadcom ARM chips.
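
As a rough sketch of what that looks like in practice (target triple and paths are illustrative; clang needs a matching sysroot for headers and linking):

    # clang emits standard ARMv8 (aarch64) Linux objects from an x86 host
    clang --target=aarch64-linux-gnu --sysroot=/path/to/aarch64-sysroot -O2 -c hello.c -o hello.o

    # Go side-steps the toolchain setup entirely, e.g. when targeting AWS Graviton
    GOOS=linux GOARCH=arm64 go build -o myapp .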


Nice. Thank you for the response!


Docker on mac runs in a VM anyway, so I would imagine you're still going to run it in x86 mode.


VMs are only expected to run ARM clients. There is no cross-architecture VM support. That might come, but it would require something more like software emulation of x86/64 processors, which is a bigger task.


What limitation exists? Just the fact that right now the M1 doesn't do virtualization?

At some point full x86 virtualization will need to exist on this platform and when that happens you'll be able to run an x86 virtual machine on your mac to back the underlying Docker engine. This is what is happening right now with Hypervisor.framework on a Mac, for instance.

If you develop a lot of ARM for sure run it all in ARM configuration but if you are a dev deploying to the current status quo you would want to continue working with x86 images if possible.


That would be fine since most of my targets will still be amd64 anyway


This may be the fastest MacBook Pro ever, but it will also be the slowest ARM-based MacBook Pro ever. While I've never owned a Mac, I've been Mac-curious for a long time, but I think I will wait a couple of years before I seriously consider switching.

(and with new video cards, and new consoles, there are too many things vying for my money right now anyway)


> iOS apps on Mac—sometimes

Says Full Screen Video works on some apps (MLB) but not HBO.

What about other APIs like Filesystem and Notifications?

Are apps responsive when you resize the window?

Any bigger write ups on this? To me, a single App Store across desktop/tablet/mobile has the most potential for developers from this chip change and it seems generally skipped over by almost all reviews.


I second this.

Most, if not all, of the reviews I have read so far have been 100% focused on the hardware itself. I honestly had to dig deep to even find the information I wanted about Rosetta emulation performance.

For some reason, 80% of the articles out there are about benchmarks, when we're also looking at the single biggest new release of MacOS in arguably nearly 20 years. I get that it's a big hardware transition, but who cares if the software is not up to snuff? Both are equally important.

Catalyst is of huge interest to me, too; especially as an iOS developer.

Any good articles / links on this, HN? :)


Benchmarking whether a 4K video takes 15 or 16 seconds to export is cool, but the universal App Store could be revolutionary and a huge money-making opportunity for developers.

It’s likely why Apple wanted this so bad. The cost savings from Intel is one thing but the additional money earned through apps will be insane.


It is refreshing to see competition actually deliver better products instead of just cutting costs.

The technical innovation has only marginal value for me, and I would love the same SoC on an RPi-like device, for home servers. Time will tell.

Now, as a Linux guy, I’ll just wait until innovation turns commodity, having no interest in macOS whatsoever.


Now, as a Linux guy, I’ll just wait until innovation turns commodity, having no interest in macOS whatsoever.

You may have a long wait.

Apple has been killing Qualcomm and the other ARM licensees in performance for years. Apple shipped the first 64-bit ARM processor at least a year before Qualcomm could do it.

It took 10-years of designing their own ARM SoCs to get to the point of being able to produce the M1.

Apple showed Debian running in a VM on a prototype 6 months ago, so we know that's coming. There's a decent chance Linux running in a VM (or native Docker when that comes) will be much faster than Linux running on similarly priced Intel/AMD hardware.


> Much faster

We can only hope so. I would love to run Linux on my iPad (which is not m1 of course). I would be sad to use macOS though. That’s all early thoughts.


Bought my wife the new M1 MBA as a replacement for her 3yo Macbook. She loves it. It's significantly faster for her applications, the keyboard is a return to one she likes, and she's not run into any issues with it thus far.

If the Docker issue is resolved, I'll likely jump on an M1 MBP when they're available.


So somewhere inside Apple there must have been a Skunkworks project to "put the iPad chip in a laptop" or something right? Someone has to have thought of it. I can't wait to read about the folklore entry for this ten years from now.


I have a 2017 MBP 15". Can't wait to get an M1 Pro, and I don't even care that much for the battery life -- it's all about the better performance on NLE software for me...


Same. Placed my order for the 16GB version. It is about time to replace my 2015 13" MBP. It works, but gets too slow for DaVinci Resolve and PixInsight.


A concern in my University department is that some of the FOSS/small scale software we teach on won't work on our students' shiny new M1 Macs. Should we be worried?


FOSS should be particularly fine, since the code is public; eventually someone will recompile it for ARM (if it's even written in a native language; languages with runtimes will work perfectly fine when the runtime is ported)

If not, everything should run through Rosetta 2. Apple did eventually pull the original Rosetta, so who knows how long they will keep the feature, but in the short term I can't imagine it being a problem.


> Apple did eventually pull the original Rosetta, so who knows how long they will keep the feature, but in the short term I can't imagine it being a problem.

The original Rosetta was only useful for legacy Mac software though; this one will continue to be useful for new x86 software that gets written. I think it will last a lot longer.


The original Rosetta was also a licensed technology that (presumably) cost Apple a pretty penny. If Rosetta 2 is all in-house tech, that probably bodes well for sticking around longer than the original.


I'd say the bigger concern is long term: will Apple even ALLOW you to run non-App Store software, e.g. FOSS?

On iOS they don't.

And with this new direction of macOS & ARM, it's not out of the question the same will happen on the desktop.

EDIT: why the downvotes? Seriously - what's wrong with what I said ... I'm genuinely curious.


>why the downvotes?

Probably because currently macOS just has a slightly more nagging version of Windows' UAC when it comes to unsigned software.

Saying that they will disable it holds about the same amount of weight as saying Microsoft will make Windows Win10 S mode only. Is it possible? Maybe, but neither company is going to shoot themselves in the foot like that in the current market.


"The MLB app, always an iOS exemplar, did allow me to play video in full screen and send it to Picture in Picture. More like this, please!"

MLB here - we stay as true to native Apple HLS as possible including open-source HLS.js where possible. .tv games are a little more complex but we're big fans of Apple's frameworks


I want an M1 MacBook Pro _without_ the Touch Bar


The Touch Bar can be pretty good if you reprogram it, using, e.g., BetterTouchTool. I now use it as a combination of information dashboard and system-level control switches (audio volume, internet proxy, my own low-power mode, ...). BTT's Touch Bar clipboard manager is also phenomenal.


What's wrong with the M1 MacBook Air?


Nothing wrong. But the Pro is slightly better in terms of performance[1] (due to the fan, I guess). As reviewed by John Gruber[2], it looks like it's not really a fan but an "active cooling system" - very quiet. It would be perfect for me if there were a MacBook Pro without the Touch Bar. Otherwise I guess I will go with the M1 MacBook Air.

The Touch Bar might be useful for others but is beyond useless to me.

[1]: https://techcrunch.com/2020/11/17/yeah-apples-m1-macbook-pro...

[2]: https://daringfireball.net/2020/11/the_m1_macs


Instead of selling the M1 to manufacturers, Apple will create additional consumer devices with it, and take over everything. For example, in a few years, a console:

Apple will finally enter the game console market, with such a leap in CPU and GPU, that its premium-pricing will be value-for-money.

The higher price will be acceptable to consumers, via monthly plans (as for iphones) - it will be locked down, so the App store will be the only way to use it.

Technologically, they have great SP, they just need great MP. Same for GPU. They can do it.

Games: ARM-based machines are certainly capable - consider the AAA ports to the Switch. They can port again. iPhone GPUs have been more powerful than the Switch's 500 GFLOPS (undocked) for a while. They also have many games - and developers - on the App Store.


I'm really really excited about the new hardware here. Bummer that the software is pretty meh. If macOS was as great as it was in the 10.6 days I'd be the happiest Apple user ever.


I feel like laptops are way more plugged in and non-mobile than ever now because of the COVID pandemic and the new world of working from home. Battery life means nothing since we're not moving around every day, and likely won't again in the same way as before. Saw a comment somewhere a few days ago that the M1 isn't really important to the laptop world, but rather to the server and phone ends of the spectrum in the bigger picture/longer term.


Reviews (including this one) all say the webcam sucks pretty bad though. Why Apple chose to release a laptop with a subpar webcam in 2020 is a mystery to me. This isn't even hard or expensive. Spend a couple of dollars more on that webcam, Apple, don't be so greedy.


Can't it be because MacBook lids are absurdly thin? How thick are the lids of comparable machines (the XPS etc.)?


The XPS webcam is also very bad, and the webcam on HP's MacBook competitor (Spectre) is the worst webcam I've ever seen. I've yet to find an ultrabook with a good camera.


They should probably add 2 or more cameras there and combine them into a superresolution image. There's lots of space for that. You could do cool 3D effects too.


Apple has no problem with the camera bump on an iPhone, but won’t do it for the MacBook. I wouldn’t even mind it on the laptop.


Ding ding ding


Here's an actual sample of the webcam and mic array.

https://youtu.be/sV7CLgTMXio?t=344

It's aggressively okay. I wouldn't have any issues with this, but it is a bit disappointing, especially considering how important WFH has become recently.


One might argue that they had their priorities in order: "pro grade" microphones and a camera which… shows something? Audio is way more important for WFH, but so far I have not seen any tests done on the mic quality and how true to the claim they are. Though "pro-grade" should be impossible at these dimensions, so I am not sure what they meant by "pro-grade" either.


Honestly, because it's good enough for most people. I mean, yes, they could and absolutely should upgrade the webcam, I just don't see it as a major selling point; the existing one is "fine".


Zoom used to max out at 720p, I don't know if it still does, and it seems the M1 is bandwidth limited so maybe it is because they figure people won't actually care.


Yeah, I'd imagine it's entirely because they wanted to keep the form factor the same as the old Macs, and changing the camera would require a full redesign.


HW sounds good. Now it just needs to run Linux :)


Soooo... If anyone wants the last Intel MacBook Pro with 4 Thunderbolt ports, hit me up :D. I'm really wanting the new Air.


Not a buy for me since they lost Windows bootcamp


It is funny to read the comments full of excitement.

To me the M1 thing fills me with dread.

To me I just see problems for the IT team. I already know our corporate VPN doesn't run on Big Sur. I have no hope for the monitoring app, or the Antivirus.

Apple is such a pain for corporate IT


Corporate IT is often an even bigger pain for its users, since things like VPNs are often bought to meet the needs of corporate IT and not the users :p


Yes, popular meme.

However I have a bunch of users who suddenly upped and left the office in April and still needed access to in house resources, fileshares, intranet, reporting services. Things we haven't finished moving to the cloud. Perhaps you can tell me how else I should do that overnight without a VPN?

Everyone else's job looks so easy, doesn't it?


We did the same and our VPN worked fine then and works fine now on Big Sur. If anything isn't working for you, then I feel like that's your fault considering that 32-bit apps have been deprecated for years and kexts that were incompatible have been deprecated (and messaged) since at least Catalina. You, and whatever vendor you used, have had multiple years to prepare for the changes in Big Sur that would break this stuff.


That seems like a corporate IT failure, not a failure of anything else. We just got an M1 Macbook Pro and nearly half of our machines are already updated to Big Sur and we haven't had any major issues. The only issue has been an old kext from Cisco that can't be approved anymore but it hasn't caused any major issues and Cisco has already let us know that there's an update to that incoming. In my experience, this transition has been way smoother so far (even with the limited input) than our changeover to Windows 10.


How is it a corporate IT failure? When my predecessor chose the VPN provider it ran on all of the systems we used at the time. Corporate IT should have had a crystal ball, I guess.


How long ago was this? It's not like they made that decision before Macs existed...


I stumbled right over the first sentence of the article:

> It slowly became clear that one day Apple’s processors would come for Intel.

Yeah, that became clear sometime back in 2005... but what does that have to do with the matter at hand?!



