Apple's follow-up to M1 chip goes into mass production for Mac (nikkei.com)
519 points by lhoff on April 27, 2021 | 645 comments



Somewhere, the collective who's who of the silicon chip world is shitting their pants.

Apple just showed the world how powerful and efficient processors can be. All that with good design.

Customers are going to demand more from Intel and the like.

Just imagine Apple releasing the Mx chips for server infrastructure. Intel ought to be sweating bullets now.

edit: a word.


> Just imagine Apple releasing the Mx chips for server infrastructure. Intel ought to be sweating bullets now.

As a developer, I'm sweating bullets. Imagine a future where Apple is the only serious platform left on mobile, server and desktop: developers can forget about starting that next disruptive startup and will instead become second-class Apple "employees" with contracts similar to Uber drivers.


Apple isn't harnessing magic here. The opportunity for competition exists, but everyone else is just sitting on their asses while Apple's chip team puts in the work.

If you want to beat Mx, you have to think outside the box, just like how Apple thought outside of x86.

That being said, I do enjoy the shift to ARM as the dominant arch. x86 has far too many decades of cruft behind it. I don't think Apple will take over the ARM space; in fact I think we will see the emergence of new players as ARM isn't as finicky about licensing their tech as Intel is. The only reason Intel even had a competitor (AMD) in the past 20 years is because they licensed x86 to them before it started becoming the de facto choice.


"The opportunity for competition exists" is the most true statement and response to everyone grumbling about Apple dominance. Somewhere, somehow, the ecosystem of competitors has failed to execute, got lazy, etc and now is looking to regulators to bail them out. It makes me a bit sick. Apple is nothing more than a computing stack wrapped in a portfolio of consumer products. They happen to see this future early, invested aggressively in it, from silicon to security and everyone got caught with their pants down.


There are already multiple billion-dollar companies trying to compete and get to where Apple's at. Apple has the lead now but they won't forever. They never do. I don't see what the big deal is here.

Competition has brought us such great processors. We should be thankful for it.


In the consumer space there are only 2 paths for a viable competitor to emerge. 1) a coordinated ecosystem attack or 2) an individual company doubles down.

There are things that lead me to believe 1) can't happen. Google ultimately acquired Fitbit instead of pursuing ecosystem compatibility and integration. It seems like the giants need to acquire the pieces to build their own Apple, rather than partner. Also, Google and Microsoft have complementary businesses but they barely coordinate on this. The closest thing I have seen is Android helping Microsoft with dual-screen Android devices. In most other areas they are set up to compete (ChromeOS vs. Windows, Azure vs. GCP, Stadia vs. Xbox Cloud, GSuite vs. Office, Bing vs. Google Search, etc.).

2) Samsung is the most equipped to lead this.


How far behind Apple are AMD at the moment? If ryzen chips were optimized for low power and manufactured on TSMC’s 5nm process like the M1, what sort of performance difference would we be seeing?


TSMC advertises 5nm as +15% performance OR -30% power consumption. Neither of these alone will be enough to bridge the gap in mobile. Even throwing in another massive 20% IPC improvement wouldn't do the trick.

On the flip side, once you go bigger the game changes. If you increase the M1's power consumption to account for interconnects, more memory lanes, more IO, etc., the M1 seems to offer worse power-to-performance, which shouldn't be too unexpected. ARM just announced they are splitting into V1 (performance above all) and N2 (best power, size, performance combo).


Laptop Zen 3 is already equal in multicore perf to the M1 for the same wattage IIRC. While being on 7nm still. Don't underestimate AMD.


Apple ships a 30w charger with the M1 air. I can't find any zen 3 shipping with less than a 65w charger.

Mac mini uses like 25-27W at the wall. The 15-25W TDP for the 5800U doesn't include RAM (its LPDDR4 at 1.1V will use a lot more power than the M1's LPDDR4X at 0.6V). The 5800U also has a chipset that consumes a few watts.

5800u has some features that the M1 does not (these would likely increase M1 power usage), but still doesn't seem close in power consumption.


Does the charger wattage mean anything? Couldn't it be that they are including a larger supply in order to charge faster?


I don’t have any solid numbers on 5800u, so I didn’t want to make any claims.

4800u isn’t that different as far as power goes (mostly different in performance). A mini-PC with it uses almost 70 watts under load.

https://www.anandtech.com/show/16236/asrock-4x4-box4800u-ren...


No. Ryzen Laptops do consume a lot more power than the 15w the m1 consumes under load.


There's also 3) which is another sea change in computing, something that would lead to current device categories becoming irrelevant, and a currently overlooked company positioned to capitalize on it. To me, that seems more likely than 1 and 2, and due to the acceleration of the rates of consumer adoption, it should happen relatively soon.


Yes, that is a good point. It's also hard to know what this is. But for me, the end game of computing is just computing everywhere: reading the world, writing to our screens, and our touches programming everything around us. Ultimately this is going to be an ecosystem, not a company. Maybe some better standards emerge that can tie everything together. But as far as devices go, it seems to be processors, network, storage, sensors everywhere. It's going to be hard to build a new computing company on top of those things with the current investments in cloud and consumer devices, without just being acquired.


Good thoughts. That’s along the lines of my hopes. Imagine if no one ever thought about buying a computing device or computing again because it was cheap and everywhere. There wouldn’t be any more consumer preference regarding how computers feel, how fun they are, how long they last while mobile, etc. All the preferences would shift to a different class of products that happened to be connected to computing in some way.

I see glimpses of this with some of the cheap commodity hardware available, but transportation and design costs are still too high.

Ability to break in would, I suppose, depend on that cost of design and adoption. The ability of large companies to efficiently acquire, though: not sure what would change that situation.


What do you mean by “doubling down”?


Invest in more Apple like things: vertical integration, cross-device compatibility, app quality, more device types, more accessories, more spinoff platforms (similar to MagSafe, AirTag, etc)


> they won't forever. They never do.

Retina iMacs and MacBook Pros beg to differ.

8-9 years later, still no good alternative.


I can’t disagree. I mean the new iMac has some design issues IMO (I hate the white bezel), but damn, the combination of good display (I’d LOVE to buy a 4.5k standalone 24” monitor, 27-28” 4K just isn’t good enough imo), goodish design and an incredibly powerful processor for $1299 is super compelling.


In a space that requires, quite literally, a decade of good product development plus investing a trillion dollars to make a product that can compete with other companies that have already invested a trillion dollars, it's only natural that there be very few competitors in the market, and they're all competing on different things so they can still make a profit (Apple only builds for their own products, so they can go the console route and make money on the backend; Intel/AMD do general-purpose computing and thus relatively low margins but wide market share; Nvidia only does GPUs and server stuff because they can specialize in it very well).


Yah, people saying the possibility of an Apple-dominated silicon industry is a bad thing must have forgotten the Faustian nightmare that was the last 5-6 years of Intel dominance. Competition is good. Intel and AMD will either adapt or wither. It's not like Apple is cornering the market through Broadcom-esque machinations or shady tactics. They put out a remarkably compelling processor that is efficient, powerful, and relatively easy to get (yah, Macs are expensive, but the Mac mini at $700 comes with the same M1 you get in these new iMacs). I also really love how Apple is only offering two almost identical models of the M1. It's powerful enough for the vast majority of computing tasks and I am so relieved I don't have to spend time figuring out what the fuck Intel is doing with their CPU SKUs. No doubt an M1X or M2 will come for the bigger iMac, but it's super refreshing.


> the ecosystem of competitors has failed to execute

I see what you did there!

It's probably because the ecosystem is multi-threaded and there's lots of deadlocks and unprotected reads/writes...


We might get something on-par from a Samsung fab: https://www.tomshardware.com/news/samsung-exynos-with-amd-rd...

(ryzen APUs are due for a facelift in the graphics department)


It exists, but not everywhere... I think Kirin was close to being on par for a while; it might even have been dangerous, with regards to privacy? Plus there must have been some serious PLA money involved... anyway, now this ARM CPU cannot be fabbed by TSMC because of the sanctions.


If that even happens, chances are the current industry leaders already know about it and are building their own variants to prevent an extinction event. Maybe that technology is more wearables like AR glasses but even that is probably going to be eaten by Apple: https://www.macrumors.com/2020/10/22/apple-glasses-sony-oled...


> Apple isn't harnessing magic here. The opportunity for competition exists, but everyone else is just sitting on their asses while Apple's chip team puts in the work.

> If you want to beat Mx, you have to think outside the box, just like how Apple thought outside of x86.

They didn't even do anything special. People have been telling multiple ARM IP holders to build a wide, powerful chip for ages. Especially for the server space.

Apple is just the first one to actually do it. AMD and Intel have been doing it all along, which is why it's so impressive that Apple's chip is finally within spitting distance of them despite being a fairly normal design with some specific optimizations for particular use cases.


Maybe the magic is just the discipline to somehow keep tuning and optimizing everything across the chip. Year after year they release another power-sipping design with another 10-20% performance. It's not a sudden tsunami, but an ever rising tide.


You would need to think quite far out of the box and hit it to be able to beat a vertically integrated, extremely well-funded, and successful company.


I guess it's not black magic Apple is doing here. From my experience with big companies, Intel just got buried in its processes, approvals, middle management, etc.; they still have the talent, and in recent years there wasn't any serious competitor to them.

The dual wake-up call from AMD and from Apple (ARM), combined with the money Intel has in its pocket will have a serious influence on the cpu market. Unsure if they'll come out ahead, but it will get interesting and probably cheaper, not only for consumers.


There are lots of very rich companies that claim to want to compete but choose not to take the risks Apple did, of investing in their own vertical integration.


Each individual risk Apple takes is small - they didn’t develop the M1 from scratch; they have years of iPhone and iPad processor development under their belt.


The risk of Apple deciding to design their own chips was massive.


Apple started working with ARM in the 80s, eventually used in the Newton. I don't know that they did design directly, but they influenced the changes that became ARM6.


I would speculate that the P.A. Semi (Palo Alto Semiconductor) acquisition is what kicked off this round of the Apple semiconductor (r)evolution.


The individual risks are small now, but they weren’t when Apple was recovering from near death.


Vertical integration is certainly contrary to mainstream management philosophy these days. Today, the more dependencies you can work into your business plan, the smarter you seem. You’re an expert juggler, woo! That free API you were using got retired? Not your fault, who could have predicted?

To their credit, Apple takes the time and invests the money to build the stronger house.


Like IBM circa 1968?


Vertical integration could not last with our generation of management gurus. In fact, the problem is Intel not farming out its fab business (and not recognising that its real asset is x86, etc.). What Apple is smart at is cutting straight to the fab.

Still, IBM back then had no consumer business, whilst Apple now has no enterprise or cloud dominance... still lots of space to play.

But if Windows moved to ARM with the same result, I cannot say Intel would survive.


Well, the world was changing and new opportunities arose. I think it's different now compared to then.

In 1968 there weren't that many computing devices around. When video games and home computing came around, there were massive opportunities and the entrenched players missed some of them.

Same with mobile phones, same with the internet, same with smartphones.


I'm really hoping one of the other chipmakers jumps on the RISC-V bandwagon. There's substantial potential to do to ARM what ARM did to x86 here. If Intel/AMD/Qualcomm/Broadcom/whoever started talking about a significant RISC-V offering I'd be buying as much of their stock as I could.


ARM didn't do anything to x86. One specific team at one company did something. Maybe two if you're super generous and include Graviton.


ARM is destroying x86 across the board, not just at Apple. They've long won the mobile space and are now making significant inroads into the server market, long thought to be the last bastion of Intel/x86.


What's the revenue of ARM server CPUs?


It doesn't matter. The revenue for x86 server CPUs goes down every time Amazon installs a CPU they made themselves instead of buying an Intel chip.


Good luck getting that info from Amazon, though at least a third of their offered instance types are ARM based.


RISC-V doesn't offer the same advantage over ARM that ARM has over x86, so that's unlikely to happen.


I work with RISC-V; it has a lot of features that are not yet well explored. In particular, the instruction set extension mechanism is extremely robust and capable, allowing much more FPGA fabric integration than you currently see on ARM devices. As we move towards more programmable logic it'll be a massive advantage going forward.


At the same time, the software world hates hardware fragmentation, so ISA extensions are the opposite of desired.

RISC-V likely will dominate in microcontrollers and specialized/embedded devices. But in general purpose computers? Highly doubt it. Arm has a giant first mover advantage there now. Great momentum, excellent ecosystem. AWS Graviton2 basically sealed the deal.


This is the really cool part: the ability to add custom instructions is part of the spec, which makes them work through standard compilers. Any compiler that can build a RISC-V program can handle the custom instructions as inline assembly, so all one needs to do is write a library to abstract them. Obviously you don't have the off-the-shelf ability for the compiler to insert the instructions automatically, but this is rarely the goal for small-to-medium projects.

Example:

https://nitish2112.github.io/post/adding-instruction-riscv/


I don’t think the claim was that tooling could not handle these custom instructions; I think the claim was that nobody wants custom instructions in cross-platform software.


That sounds really interesting — what makes RISC-V better than ARM for this?


Partly the design of the core, partly the open source nature. Speaking from experience, it's really hard to push the envelope with ARM chips as they refuse to license them for FPGA deployments so there's not a lot of work being done there.


What stops anyone from implementing ARM ISA on FPGA? If you think of an ISA like an API - that's something that is not copyrightable, so in theory anyone could do ARM on FPGA. So I am guessing that nobody wants to bear the cost of such development and also risking potential lawsuit?


Yeah, that's pretty much it. There's no real upside and you risk a lawsuit. There are plenty of other ISAs that will work and are licensable for FPGA development.


> The opportunity for competition exists

Perhaps, but I guess it would take on the order of a decade for a competitor to build their technology including an ecosystem with customers.

And who says they won't copy Apple's business practices?

> but everyone else is just sitting on their asses while Apple's chip team puts in the work.

That's because the complexity of the semiconductor industry is daunting. You need far more than a team of IC designers, especially if you want your product to become successful in a world where there is one main player.


You mean the business practices wherein Apple is 10x better on privacy than everyone else to the point that Facebook and Google are now being forced to be honest with users, disclose all the user data they are stealing and selling, and disclose all the privacy and security risks inherent in their business models?

Sign me up for more of those kinds of "business practices", thank you very much.


Apple is reportedly beefing up their mobile ads business. It's hard to say privacy is the reason when Apple immediately attempts to benefit directly financially, likely with some privacy hits of its own.


Would be helpful if you could support this with some links and evidence. I monitor this issue a lot, and haven't seen anything that presents added privacy risks, at least not yet.


Might be referring to App Store ads


It depends entirely on how they beef up their ads, doesn't it?


Ad targeting doesn't have to be based on violating the privacy of everyone on the planet. Surveillance Capitalism is just the business model that was chosen by Facebook and Google.

Ad targeting can be based entirely on the content of the page where the ad appears.


The last I read Apple plans to add or expand one more ad slot on the App Store. Is there a bigger ads initiative?


Sure if someone is starting from scratch. We have Microsoft, Android and Linux which are alternative computing platforms, and android + linux are already fully ARM. Apple's chips are still much faster than most Android ARM chips, but Android ARM manufacturers are not starting from zero.


ARM also has some decades of cruft attached to it.


ARMv8, i.e. AArch64, is a new design: entirely 64-bit, with a legacy (32-bit) support mode tacked on. The design was done so that legacy could be removed without another redesign.

To my understanding, this is quite unlike AMD64 (i.e. Intel's current 64-bit ISA, licensed from AMD), which extended x86.


The 32-bit mode is optional; M1 doesn't even support it.


AMD64 is a really beautiful design for the constraints it was operating under.

Does anyone know who(mses) designed the instruction set?


No, the architecture used in the M1 is from 2012.


This is a really cute pep talk. "Think outside the box. Just like apple did." But it is completely devoid of meaning.

On what grounds do you suggest you know anything about how apple accomplished M1?

Or are you just intuiting that it was some sort of "outside the box" thinking?

Maybe it was raw engineering and management mastery combined with elite head hunting and millions of dollars poured into research and design over the course of a decade?

Nah. Apple's too cool for that. Probably some eureka moment during meditation class, right?


Engineering mastery is pretty cool all by itself, right?


> Apple isn't harnessing magic here.


In 2009, the price to prototype a chip was about $1M/year in software licenses and $50K-$100K per fab evaluation, with half a year of delay.

That was 12 years ago. I guess software licenses have gone up, and I am certain that on current processes a chip prototype costs $500K-$1.5M per evaluation.

Yes, you can say everyone sits on their hands and/or asses.

I say that having money in the bank for several dozen thousand chip prototypes certainly helps Apple in the design.


> The opportunity for competition exists

This is so deep and powerful if you really think about it. This is what we were promised. While not perfect, I applaud Apple for at least trying.


I wonder how much they paid the engineers to pull this off. If none of them has become a multi-millionaire by now, the competition would have an easy way to poach them.


With all the respect in the world, I think the complete lack of that happening at all this century should make you pause and reflect on why you think it's even remotely possible.

I'm happy to be wrong, but I think you're buying too much into the SV/media fearmongering on a handful of companies whose main business is actually just attention economics.


Indeed. It's interesting to see how people like to label Apple a monopoly - a company with a 19% market share in smartphones and an 8% share on the desktop.


Is that market share based on total revenue or based on device count?

I'm pretty sure it's device count. Which means we are comparing $90 smart phones to $1200 iphones that will result in thousands of dollars spent through the apple ecosystem.


This is so often overlooked -- Apple's revenue capture of the market is what is astonishing.

In 2019, Apple captured 32% of the global revenue for the mobile phone industry and 66% of all profit in Q3 of 2019 [1]. Asymco used to report on this frequently but I'm having trouble finding recent examples [2].

[1] https://www.forbes.com/sites/johnkoetsier/2019/12/22/global-... [2] http://www.asymco.com/2011/08/02/apple-share-of-phone-revenu...


When people use the words "Apple" and "monopoly" it's more often about the iOS walled garden: because there's no (good) sideloading, Apple has a monopoly over app distribution on their mobile devices.


Android does not have that and we pay less. But so what? It's like a walled garden in a world where nobody knows that you are a dog. And you have alternatives on both desktop and mobile.

Success is not a crime.


In the world, sure, but it's a different story in the US.


About 15% of personal computers in the U.S. now.

https://www.statista.com/statistics/576473/united-states-qua...


Not happening this century is a big statement. I don't know a lot about this industry, but I don't see why Apple couldn't do what IBM did to arrive at "everyone on x86".


I don't know a ton about the history of x86, but I kind of assumed all of that shook out, one way or another, pre-2000.

I was using admittedly vague language for rhetorical effect. I only technically meant the past 21 years.


As a developer, I’m hyperventilating with anticipation of a tool that significantly advances past the previous generation. Finally, something more than an incremental bump.

Nothing gets me thinking about disruption more, than new, faster, cheaper platforms. We’ve had to accept 10% better per generation for way too long.


I feel like you may be being sarcastic. What are your thoughts?


There's no doubt Apple has a lead in consumer-grade chips for the moment, but I expect Apple's lead here will last for 2 or 3 years tops before Intel, AMD, or perhaps other ARM chip OEMs catch up. Apple doesn't have access to special unique wonderful secrets, arcane wisdom that only the elect can learn. Relax, Apple is not going to take over the world; as we speak their competitors are working like mad to close the gap.


If they release the M1 in blade format... and the M1 Mac mini already ships with MDM support... that might be enough for some Mac cloud services.


I guess maybe if Apple makes more changes after all of that happens. 10 years ago Instagram was iOS-only for years. Uber launched on iPhone and later added Android. Clubhouse is iOS-only right now. These companies are choosing to focus on iOS for their startups.

The other day there was an article here about Swift on the server and it was full of replies from people who would love to use it more but lack of investment from Apple makes it often a poor choice. It's doubtful even Apple is using it much server-side in house.


I think companies such as Tenstorrent have a good shot at disrupting the semiconductor industry once again with their AI chips


I'm sure linux will run on these chips as well. If I'm not mistaken linux has already booted on the M1.


For now. What happens when Linux or software that runs on Linux starts competing more broadly with Apple?


I'm unclear what that would mean exactly. Some particular market space where a Linux-only based software package is the dominant solution in a business or consumer space? Do you mean a Chromebook?


Let's say some game producer starts making games that run on their Linux platform that in turn runs on Apple hardware.


You mean how it was with Intel and Microsoft for decades?


Give Mac software a few years to "catch up" and everything will feel just as slow as usual (and I'm not even joking, so far, software was always excellent at undoing any advancements in hardware very quickly).


I feel like things have felt pretty good in the PC world since NVMe SSDs and approximately Sandy Bridge.

But, I do agree that there is a lot of jank in software that we write these days. Over the weekend I started writing an application that uses GLFW + Dear ImGui, running on a 360Hz monitor. The lack of latency was incredible. I think it's something that people don't even know is possible, so they don't even attempt to get it. But once you know, everything else feels kind of shitty. This comment box in a web browser just feels ... off.


Emacs has given me this feeling for a while. Even over a remote connection that is super slow, I am supremely confident in every keystroke getting to the server and doing what I want.

(Except for lsp/company which occasionally just messes up everything).


> I feel like things have felt pretty good in the PC world since NVMe SSDs and approximately Sandy Bridge.

Agree, and I would add that devs on some platforms have not been as thorough about using (or abusing?) all that new power. I've been able to continue using a Sandy Bridge laptop from 2011 - it'll turn 10 years old in a couple of months - because most apps on Linux still have so little complexity that you could run them on a computer from 2006. Electron-based stuff is not the norm. Adding an SSD a few years ago has been my only significant hardware upgrade.

Meanwhile, Macbook Pros that came out two years after my laptop was built can't even be updated to the latest version of macOS, before you even get to the issue of software jank.


Speed is a feature, said well: https://craigmod.com/essays/fast_software/


There are some hard usability limits where it feels like performance has stagnated or is getting worse: when resizing, relayouting needs to be faster than the time for one frame (usually 16ms). On Windows, seemingly no software manages to do that.
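
To make that budget concrete, here's a minimal sketch (browser or Electron renderer context; the 60Hz / ~16.7ms figure is an assumption on my part, not a universal number) of checking whether you're blowing the frame budget:

    // Watch for dropped frames: if much more than ~16.7ms (60Hz) passes between
    // animation frames, layout/paint/script work exceeded the frame budget.
    const FRAME_BUDGET_MS = 1000 / 60; // assumes a 60Hz display

    let last = performance.now();

    function onFrame(now: number): void {
      const delta = now - last;
      last = now;
      if (delta > FRAME_BUDGET_MS * 1.5) {
        console.warn(`Missed frame budget: ${delta.toFixed(1)}ms between frames`);
      }
      requestAnimationFrame(onFrame);
    }

    requestAnimationFrame(onFrame);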


Are NVMe drives noticeable over, say, a SATA SSD?


This is called "What Andy giveth, Bill taketh away."


and this is now called Electron:

trading speed of development for "hardware optimization".

The Mac being a small market, some users are still opinionated/loud enough to reward nicer/native UI vs cross-ish platform lowest-common-denominator UI. Maybe there is hope the ARM Mac stays fast.


I would say that 90% of my pc resources are consumed to execute JavaScript code. If we need dedicated hardware to execute this more efficiently so be it. We do it anyway for so many other domains. Video, graphics, encryption etc.


One of the big deals with electron-et-al is that we're not married to JavaScript, really; we're married to "the DOM and css" as our gui, and things like FF/Chrome/Safari devtools to develop that gui.

Most javascript developers are already using babel-et-al pipelines to build electron apps, which are already transpiling between major variants of javascript, and I wouldn't be at all surprised to see a thing where it gets compiled into WebAssembly rather than interpreting javascript. I also think there's a thing, right now, where it's possible to build electron apps with Rust+WebAssembly; I'm not sure, but I think the main thrust here is it definitely would eliminate a huge chunk of the slowdown.

I guess the main takeaway is just that the development revolution that's happened recently is mainly about how insanely good browser dev tools have become, and not about javascript - javascript was just along for the ride. As an aside - I recently saw a video of someone demoing one of the old Symbolics LISP workstations, and I was shocked to realize how much they had in common with a modern browser console - specifically of being able to inspect all of your gui components, live, and look at all the properties set on them. It's provided a hell of a way for me to explain what the deal was with all the old gurus in the 80s "AI Winter" diaspora who were pretty butthurt about having to move from programming in that environment, to having to write C on DOS (or whatever else paid the bills at the time).
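
For what it's worth, calling into a compiled WebAssembly module from the JS side is already a small amount of glue. A minimal sketch (the "add.wasm" file and its exported add function are hypothetical, e.g. compiled from Rust or C; this runs in a browser or Electron renderer):

    // Load a wasm module and call one of its exports from TypeScript.
    // "add.wasm" is a placeholder; it's assumed to export add(a, b).
    async function loadAdder(): Promise<(a: number, b: number) => number> {
      const { instance } = await WebAssembly.instantiateStreaming(fetch("add.wasm"));
      // Exported wasm functions show up as ordinary callable JS functions.
      return instance.exports.add as (a: number, b: number) => number;
    }

    loadAdder().then((add) => console.log(add(2, 3))); // -> 5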


WebAssembly is just another JVM which is notoriously slow for GUI compared to the native alternatives like Qt.


There basically already is in the M1: https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...

tl;dr it has really sick floating point double performance which directly translates to JS performance


Most JavaScript JITs do math on integers.


Only if using the BigInt type: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

all other numbers are floats


No, this is independent of BigInt. JITs speculate that numbers are integers for performance.


Hmmmm okay! I see what you're saying now.


+ Mac OS checking every binary you run over the internet.


It's slightly more nuanced than that - you can of course just block it from doing so, and there's certainly an argument for it being updated to not need a network call each time, but phrasing it like this makes it sound worse than it actually is.

I'll just quote Jeff Johnson, who's looked into this and written about it - his comment[1] on this post is quite useful:

https://eclecticlight.co/2020/11/25/macos-has-checked-app-si...

>The request to http://crl.apple.com/root.crl is simply checking the revocation status of Apple’s own Developer ID Certification Authority intermediate signing certificate. If you examine the cert in the System Roots keychain, you can see that URL under CRL Distribution Points. This request contains no information specific to third-party developers or apps. In contrast, there’s no CRL for third-party Developer ID leaf certs signed by Apple’s intermediate cert. Their status is only available via OCSP.

Notably, this:

>This request contains no information specific to third-party developers or apps.

https://eclecticlight.co/2020/11/25/macos-has-checked-app-si...


You added detail, yet didn't refute the statement. That it isn't easy to simply and specifically turn off is not acceptable.


No, I'm fairly certain I refuted it.

The parent comment is incorrectly stating that each request to Apple is checking a binary, and it's not. This has been well documented both here on HN and across the web.


That's not the definition of refute. The distinction between a lazy-implied always and merely often is pedantic. The point is that it happens enough to reduce performance, while doing something no end-user asked for and is difficult, if not impossible to disable without reducing other security protections.


There is no measurable performance loss from that one time check - it's not something happening constantly.


Pihole and others can block those Apple queries.


What happens if you block them... do they eventually stop working if they haven't been validated in a long time?


Yep, see the success of the Nova editor.


Funnily enough, I dropped it after the trial expired because the plug-ins - which I believe are written in JavaScript - were incredibly unstable.


It's a success?


I don't know what qualifies as a success, but I saw in Git Tower's developer survey that it has about 5% share. The share of the whole developer industry should be lower, because Tower's audience is interested in premium tools, but I expected it to be struggling to break 1%. https://www.git-tower.com/blog/mac-dev-survey-2021-results


Yes



If you don’t run Electron apps, even an older Mac will fly. My 2015 is still astoundingly fast, it just doesn’t have Chrome bogging it down.


This idea that Electron apps are inherently somehow slow is starting to bug me. While writing the Electron version of our graphics-heavy web application, I noticed that the memory usage and CPU consumption are not a lot higher than some other native applications.

We have taken care to write fast and memory-friendly javascript where possible, avoiding slow features of the language (the list is long, but things like forEach loops, inefficient data structures, etc.) and taking care to profile and optimize.

The result is an application that feels almost as fast as native and doesn't really consume that much memory, even though we are doing canvas-based graphics in realtime.

My suspicion is that many web developers (and thus, those qualified to develop for Electron) just don't have the tenacity or background to write efficient code.

You can write slow-ass molasses javascript code very easily. Just take somebody who has done webdev for maybe 2-3 years and doesn't have any deeper CS background. Watch what kind of memory-inefficient and slow code structures they produce - especially with javascript, where you don't really understand how heavy an operation like map() can be in the worst cases, or where you are creating copies of your data multiple times - and voila, you have a slow-ass memory-hogging Electron application.

Maybe I should do a blog post about how our Electron application is performing, just to show people that you can write fast code using javascript too. But it takes skill and time, and maybe these days what matters is just cranking out builds that work somehow.
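
To illustrate the "copies of your data multiple times" point, a tiny made-up example (the array and the "score" computation are invented for illustration, not from our codebase): the chained version allocates two intermediate million-element arrays, the loop allocates none.

    // Made-up workload over a large array.
    const input = Array.from({ length: 1_000_000 }, (_, i) => i);

    // Idiomatic but wasteful: each map()/filter() allocates a full intermediate array.
    function slowScore(xs: number[]): number {
      return xs
        .map((x) => x * 2)
        .filter((x) => x % 3 === 0)
        .reduce((a, b) => a + b, 0);
    }

    // Single pass, no intermediate allocations, same result.
    function fastScore(xs: number[]): number {
      let sum = 0;
      for (let i = 0; i < xs.length; i++) {
        const x = xs[i] * 2;
        if (x % 3 === 0) sum += x;
      }
      return sum;
    }

    console.log(slowScore(input) === fastScore(input)); // true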


I'm a self-taught JavaScript developer (front- and back-end) and I would love to read such a blog post. I mostly use language features for their expressiveness (map, filter, forEach) and rarely think about their performance implication unless there is reason to believe some piece of code is in a performance-critical path. However with a guide I might reconsider some scenarios where I'd be willing to give up expressiveness for performance.


Thanks for the encouragement. I will look into this more and write a blog post about it; it's actually something I've wanted to do for some time now. I'll make sure to post it to HN as well and report what we have found.

We have been developing this application since 2011, and I also develop in C++ and 3D graphics, so performance has been something I've had to think about.

But yeah, there are many guides already on what to avoid in JS and how to optimize for speed and memory. It is definitely not an easy thing to realize, as it's not inherently visible what can be slow and what can't. Most blog posts these days talk about how to optimize for V8, and I'm not really interested in that so much, as it can be daunting to try to understand how V8 works, for example.



You're not wrong, but at the same time, if 99% of Electron applications are slow, then I have no problem calling Electron (as a movement, not a technology) slow. It is also pretty evident that default Chromium itself is a complete battery hog.

You can make an Electron app not slow/a battery hog, but it requires working backwards from a state of "what on earth is going on under the hood here". This is the inverse of building a native application, where that complexity is often introduced by you. I personally find this significant - you can teach someone how to avoid the latter, but it can require crazy amounts of insight to learn to debug the former.

It is 100% possible to build better Electron apps. I trust almost nobody to actually commit to doing it.

Also, particular yet tangential to this thread: apps like Signal or Discord which still make their Electron apps run under Rosetta 2 on an M1 are a nuisance. I just run Discord in a browser tab at this point.


Yeah, it's true. Of course native code will by default perform faster and be more battery friendly, but V8 is also very optimized and can easily perform at like 80% of native speed if given some thought.

Many people just don't have the time to look into this, and most companies don't really care, as we have so much RAM and CPU power these days. So I can see why, for example, many messaging apps just choose to write their app in Electron instead of trying to figure out how to do it natively cross-platform, which can be more of a PITA, especially when you also have the web as a platform.


>can perform easily like 80%

This feels like... a trap.

I think there's a complexity in doing that which could be summarized as follows: it's possible, but the more complex your codebase, the more difficult it is to reason about how to get to or maintain that 80%.

Even if we assume that 80% is the maximum, if most products (or the notable ones that we all complain about, I suppose) are hitting 50%, then... well, that's a problem.

For full disclosure: I've defended Electron on the merits of "there is sadly no better way to ship cross-platform UI-based software". I criticize it with this in mind.


Yeah, maybe 80% was overestimated. It depends on the use case; I've also seen 50% in many cases, but at least in my experience 70-80% is realistic in some cases.


It isn't just Electron, Node is also plagued by its own low barrier to entry.

One of my former employers had the great idea of hiring hundreds of "senior" JS devs and redoing the entire frontend of the website. When it launched to production the whole system scaled at approximately 1 enterprise-grade server to 1 concurrent user.

While I applaud your efforts to teach people how to write code faster, the majority of JS devs I have found just want to hit feature-complete and go home.


IMO, it’s not just JS devs. Most of the backend PHP devs I work with are similar.

And when nearly every Stack Overflow post asking about performance implications is answered with a litany of "YAGNI" and "premature optimization", it's not hard to see why.

The current comp sci culture seems to discourage this in all but the lower-level languages like C.


Please write the blog post and share it to HN. I would really look forward to reading it and learning from it.


Thanks. I will :)


You can even go back to 2013 and not notice that you're using a 7-8 year old machine. A MacBook Pro from late 2013 runs Big Sur and performs fine for most everyday use cases and any development work that doesn't require an IDE. You just can't run Chrome all that well, which should tell you more about Chrome than about Macs.


Still using a mid-2014 Macbook Pro with 16 gigs of ram and a 2.5 GHz quad-core i7 as my daily driver. Everything performs just fine to me, and macOS has even got faster during this time I've used the laptop, which is something to give credit to Apple for.

I mainly use VSCode and Firefox these days, but even the Unity game engine works just fine on this machine.

Any IDE I also throw at this machine works just fine. Only things that feel slow, are things that need a beefy GPU.


I do all my work (java/postgres etc) on a 2012 i7 Mac Mini. It used to be my home media server but I started using it as a "temporary machine" last year after killing my 2018 15" MBP by spilling water over it. I was planning to replace it with an Apple Silicon machine once they became available, but it's performing so well I'm more than happy to keep using it while waiting for a 15" or 16" M1x/M2. Amazing little machines.


> and any development work that doesn’t require an IDE

Probably depends on the IDE and the language. My impression is that newer computers (especially in the "consumer" / "general public" market) have mostly improved efficiency. They're not much faster, but they last longer on smaller batteries.

My late 2013 15" (2.3 GHz / 16 GB RAM) works great with Pycharm for moderately sized projects. It's even usable for personal Rust projects (with the intellij-rust plugin).

For Rust it's somewhat slower than my i5-8500 desktop but not by much. For incremental compiles, I don't feel like I'm waiting around more. The i5 mainly wins when it can use all six cores.

It's however quite a bit faster than my work HP ProBook with an i5-8250U which is a much newer machine (2018 I'd say).

Aside from battery life, which is somewhat lesser, all in all the mac is a much, much better machine than the HP, especially for "creature comforts": gorgeous screen, no misaligned panels rubbing against my wrists, great trackpad, no random coil whine, inaudible fans unless it starts compiling for a while, no background noise on the headphone out, integrated optical output when I want to use my home stereo.


My early 2011 MacBook Pro definitely isn't fast but it's usable for checking out a couple tabs worth of stuff on the sofa.

My work 2019 16-inch MacBook Pro actually doesn't feel that fast considering how fucking expensive it is.


My 16” had two speeds.

Quiet and cool with about 11 hours of battery life when writing, reading or some simple productivity stuff.

Fast but loud and hot when doing anything else with about 3 hours of battery life. Even the Touch Bar (which I don’t hate, but I also don’t love) is uncomfortable to the touch.

It seems there is no in between. I’m generally happy with the machine, but I’m very interested in what’s next for the big MBP.


My fully-loaded 2019 16-inch MacBook feels VERY underpowered actually - once you try to use all the cores (multiple compilation threads, parallel tests with multiple docker images running), it goes into what I would call a "heat shock". I suspect throttling kicks in and cpu charts in Activity Monitor become close-to-death flatlines. Oh, and the fan noise becomes non-ignorable too.


Yeah, full utilization leads to throttling, guaranteed. Although apparently it can get much worse than what I've had, down to like 1.0 Ghz.

I've also had issues with the laptop waking from sleep, where it'd either hang for up to minutes, or just outright kernel panic. Big Sur seems to have fixed the kernel panics but waking from sleep still feels really slow.


You know, I believe you on the 2019 - I've heard this from more people than I care to admit, and makes me glad I skipped that year.

I think 2016-2019 was a rough era for the MacBook Pro when you factor in price for performance; still great machines, but they didn't feel as polished as the 2015 MBP or the 2020 M1's.

Edit: year banding.


My 2016 MacBook Pro has been an absolute pleasure to use. It's small, light, great battery life, plenty of performance, I consider it the second incarnation of my favorite laptop ever, the Titanium PowerBook G4.

Except for the keyboard. Been replaced three times. It's defective by design. Such a letdown.


I have the opposite experience with my 2016 MacBook Pro. It feels really sluggish to me now. 2 cores are nowhere near enough, and the integrated graphics chip doesn't feel fast enough for everyday tasks if you have external monitors connected. When I'm on a video call in Discord, the machine lags so much it's unusable. Xcode with Swift is basically useless - it lags so much sometimes, doing who knows what, that it drops keystrokes. I had the monitor stop working because of the design fault right before two back-to-back conferences. And the keyboard has been replaced once and is now having issues again. Compiling Rust on the machine is about 8x slower than my new Ryzen-based development machine.

We’re in a golden age of cpu improvements. A dual core 2016 mbp isn’t good enough for me any more.


Interesting. How much RAM do you have? Are you using a 4K external monitor?

Just wondering: don't those cores still have Hyper-Threading or such, resulting in more logical cores than just 2 physical ones?


Ah, I have a quad core with a discrete GPU. Might make all the difference.


Try compiling stuff. There's definitely a difference in speed when I use Xcode on the 15" 2016 MBP, or the 16" 2019 MBP. It's got an additional four cores, and it's noticeable to me.


I actually used a MacBook Air 2011 up until summer 2020 as my main home computer. The only reason I even bought a new MacBook was because I couldn't get the newest version of macOS which I needed for the latest version of Xcode.

Finally sold the 2011 MacBook Air a few months ago for $300. I got so much value out of that laptop.


My old 2013 is still a charming little machine. I made my GF "upgrade" to it from a random ~2017 HP plasticbook and she loves the thing. 4 GB of RAM can still fly if you only run native apps.


This late 2012 iMac is still my main machine (with a new big SSD). Other than gaming and Adobe Premiere, it is perfectly happy with what I throw at it.


Exactly, I have a 2015 MacBook that works perfectly, unless I try to use electron-based apps.


Is there an equivalent to Parkinson's Law ("Work expands to fill the available schedule"), where software expands to fit the available performance?


"Software is a gas; it expands to fill its container." - Nathan Myhrvold

PS: apparently the quote is from 1997, go figure.



An old joke:

c - the speed of light, independent of the speed of the observer.

w - the speed of Windows, independent of the speed of the hardware.


Software is a gas that expands to fill its container. It's just science.


Could be even worse – web devs writing shitty webapps that are fast on their Macs but dog slow on non-Macs


Nah, don't worry, they are using Electron apps to build them as well.


But since so much software these days is cross-platform, apps/sites will still have to work performantly on Intel chips. E.g. Google can't slow down Chrome to take advantage of Mac speed increases, because it would become unusable on Windows.

So I actually think that Mac software will hold the performance edge for a long, long time.


It logically makes sense for Windows to move to ARM as well at this point. They already ported it with Windows RT, and now all they have to do is provide a translation layer like Rosetta and release the full Windows experience (instead of the locked down version RT was).


But there's no M1 chip equivalent for PC's.

It's fine to have an ARM version of Windows but there's no equivalently high-performance ARM chip to run it on. Unless you're talking about Bootcamp on Macs.


Any reason why Intel and AMD can't do an ARM chip?

I have a vague tickle in my memory that both Intel and AMD have dabbled in ARM before.

Even if they don't assign large production volumes to it now, they could still have a design team working on it for a contingency plan.


AMD was developing an ARM core "K12" in parallel with the Zen core. K12 was delayed in order to prioritize Zen, Zen turned out to be a massive hit, so K12 got delayed indefinitely.

https://en.wikipedia.org/wiki/AMD_K12

https://en.wikipedia.org/wiki/Zen_(first_generation_microarc...

Once Apple released M1, rumors of AMD ARM development started again https://www.techspot.com/news/87851-amd-rumored-working-arm-...


Microsoft went with Qualcomm, but indeed it's not-that-high performance. (Even the "new" 8cx gen2, also rebranded as Microsoft SQ2, is still basically an 855… The situation will improve somewhat with an 888 based chip.)


You are right. And I hate it.


Even today the M1 still feels slow occasionally -- hanging for a second or two, browser taking multiple seconds to launch, etc. Granted, that could be due to I/O or power management, but in any case it's clear that software is not making proper use of hardware.


What you just described only happens for me with Chrome on my M1 MBA. Try loading Safari, it's instant.


Maybe for some low-hanging fruit, but compiling code will still be faster, Xcode and VSCode will be a bit snappier, Safari and Messages will be a bit better. These base things need to work on a broad range of products.

The fact that Apple writes much of their software for iOS and macOS at the same time means much of it is designed to run on fairly light hardware.

I know we're all stuck with bloated stuff like Slack, and some of us with MS Office, but just go native where you can and reap the benefits of the platform.


> but compiling code will still be faster

I don't know about that, LLVM is getting slower with each new release too [1]. Same for Xcode: startup is very noticeably slower than a few years ago on the same machine. Starting into a debugging session for the first time is much slower (it used to be instant, now it's multiple seconds). The new build system in Xcode is most definitely slower than the old one, at least for projects with more than a handful compile targets. Etc etc etc... new software is only optimized to a point where performance doesn't hurt too much on the developer's machine.

[1] https://www.npopov.com/2020/05/10/Make-LLVM-fast-again.html


I suppose that’s a fair comment. The whole build system and IDE slow down when working with Swift. It’s possible Swift wouldn’t exist if it weren’t for faster processors.

One nice thing is Swift is slow to build but quick at runtime.


The work done off the back of that article has mostly halted that, I think.


This is so true. My work kept providing the latest Macs at excellent configurations and those Macs kept getting slow very fast :)

Also, Apple’s buggy releases (which is tradition now[0]) didn’t really help.

0. Apple essentially stopped fixing bugs and they just release another buggy OS in a year or so with a new name and changes that break everything.


We don't even need to wait for that. Right now web developers are creating the new generation of web-based apps that will consume everything that M1 can give and more. In fact these apps already existed, they were just largely ignored because they're so power-hungry (just look at electron and similar stuff).


Even worse: developers will get an M-series Mac and write Electron apps that perform okay. They will then ship them as cross-platform apps that will absolutely suck on any other platform.


Where is that inefficiency supposed to come from though? I mean there are four big inefficiencies that came with JS.

1. Garbage collection and minimum runtime

Garbage collection decreases memory efficiency, and browsers need a certain minimum amount of resources to render a page no matter how complicated it is, leading to 500MB-RAM text editors like Atom (and worse once you open files). Similar problems plague Eclipse and IntelliJ, which both often consume 1GB of RAM. The JVM often needs 150MB for an empty or perfectly optimized app.

2. Everything is an object with pointers

This is especially bad in JavaScript, where every object is basically a hashmap. This causes performance slowdowns because even something as simple as looking up a field is now a pointer chase through several layers. Raw numerical data may consume a lot of memory if you are not using typed arrays (see the sketch at the end of this comment). Especially bad with ArrayList<Integer> in Java.

3. JIT

JIT compilers can only spend a certain amount of time on optimizations, which means JIT languages tend to either suffer from slow start-up times, or get faster start-up but fewer optimizations.

4. GUI complexity

Things like having animations and constantly recomputing layouts.

If you designed your processors for these things and made them fast at this, the only further source of slowdown is a lack of caring because you have already exhausted the unavoidable technical reasons. E.g. your processor is so fast, you write a naive algorithm that takes 5 seconds on the fastest computer available but then takes 20 seconds on an old computer.
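
As a rough sketch of point 2 above (the element count is arbitrary): storing numbers as objects means one heap allocation plus a pointer per element, while a typed array is one contiguous buffer.

    const N = 1_000_000;

    // One heap-allocated, hashmap-like object per element, plus a pointer in the
    // array: large memory overhead and pointer chasing on every access.
    const boxed = Array.from({ length: N }, (_, i) => ({ value: i }));

    // One contiguous buffer of 8-byte doubles: compact and cache-friendly.
    const packed = new Float64Array(N);
    for (let i = 0; i < N; i++) packed[i] = i;

    // Same computation over both representations.
    let a = 0;
    for (const o of boxed) a += o.value;

    let b = 0;
    for (let i = 0; i < packed.length; i++) b += packed[i];

    console.log(a === b); // true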


I don't think "animations" fits here. Well-implemented CSS animations are generally fine (though this is in and of itself a high skill to do well, I think). If you're still driving animations with pure JS, you probably need a refresher in modern web dev.

Diffing a VDOM might fit here, but that's not really GUI-specific - just a combination of your earlier points.


> If you designed your processors for these things and made them fast at this

What does that look like?

There is a reason Apple doesn’t do #1 and #3 and is moving away from #2 in their own code.

They are just inefficient ways of doing things. Designing a processor to support an inefficient mechanism will still lose out to a processor which doesn’t have to.


I think the next layer is cross platform bridges.

Look at multi-platform products like MS Office and how slowly they run on a Mac. I suspect it's because there is a translation layer bridging Win32 calls to Mac equivalents. And that seems like it would be point 5 on your list.


No need to wait, just use Chrome to get a taste of the "future".


Amazon's Graviton 2 is the M1 of the server world in my experience. It's faster, cheaper, and overall a better experience than any other instance type for most workloads. You have to get through a bit of a learning curve with multiarch or cross-compiling to ARM64 depending on what your codebase is like, but after that it's smooth sailing.

Azure and Google need to really step up and get some competitive (or even any) ARM instances--fast.


It's pretty crazy how good the Graviton processors are.

What's even more crazy is that you cannot buy them. Such price/perf is literally only available to rent from AWS.

I hope Oxide does something about this soon; it is a sort of dangerous situation to have such a competitive advantage only be available via rental from a single (large and anticompetitive) company.


Ampere Altra is actually faster than Graviton 2 and you can buy it. (But apparently the motherboard is >$5,000!?!) https://store.avantek.co.uk/arm-servers.html


AMD will probably "arm" up quickly, NVidia is acquiring ARM proper, Qualcomm won't sit still.

Intel? Well, maybe 12 months after they catch up to AMD in x86.

AMD's chips I've seen are decently competitive with M1. If ARM is a not-too-bad microcode adaptation and ditching a whole lot of x86 silicon real estate, AMD might be able to come to market quickly.

Intel isn't just getting pantsed in x86 by AMD and process by TSMC, they are seeing their entire ISA getting challenged. Wow are they in trouble.


Isn't the ARM acquisition kinda stuck at the moment?


It has to go through regulatory approval in the UK but this was to be expected. Jensen (Nvidia CEO) said he expected the acquisition to be complete by 2023.


It is, but Nvidia doesn't necessarily need it to compete in the ARM space, since they already make ARM SoCs.


arguably they would proceed on an ARM design with or without the formal company acquisition. It's too good of an opportunity.


Yeah, I kinda wish Amazon would make a Graviton 2 powered laptop or dev machine. It would be really nice to develop directly on the ISA for the instance instead of cross-compiling. An Apple M1 of course works, but more ARM64 laptops need to be available. It's sad that it never got traction in the Windows hardware world.


My concern is data centers, not development systems. One can always just ssh/x11/rdp/whatever into the Graviton machines.

There's also nothing wrong with the $999 M1 Air. Isn't the Surface also ARM? And most Chromebooks?

After you've already got the code, though, it's Amazon's way or the highway.


Packet used to offer bare metal Ampere eMAGs (and old Cavium ThunderX's)… current Equinix Metal website doesn't seem to list any arm instances though :/

Apparently Oracle Cloud is going to offer both VMs and bare metal on Ampere Altras.


Future ARM Surface laptops will probably be faster than Graviton per-core but obviously with fewer cores.


Agreed. I'm busy transitioning everything I can to Graviton2 for the $/perf savings.

For my Java services so far, no change. Aurora was a button click. ECS is the next thing to tackle.


I hope they add support for graviton on elastic beanstalk soon.


As the primary AWS guy in our company I'm pushing for all our developers to get M1 macs ASAP. When that happens we'll start building/testing on ARM systems and I'll transition all of our EC2 stuff over to Graviton processors. I've already moved several systems over to t4g instance types and we are saving $100's per month.

Bonus! This makes performance review time super easy; you've saved the company thousands of dollars this year, how about a raise?!


It's a great EC2 instance for perf/cost, but does it really outperform other x86 instances? The benchmark below between Graviton2, EPYC (Zen 1), and Xeon (Cascade Lake) shows all of them at a similar level. And now C5a instances with EPYC (Zen 2) are available, and Ice Lake Xeons are on sale.

For the M1, it obviously outperforms the competition.

https://www.anandtech.com/show/15578/cloud-clash-amazon-grav...


I think you are exactly right, but we should celebrate that!

This kind of competitive pressure will inspire a response from Intel and other firms.

The result I would predict and hope for in a few years would be better chips from everyone in the market.


I'm not saying what you think is wrong, but your view seems like it may be narrow, and missing the bigger ecosystem.

Intel has certainly dominated consumer computer sales over the past decade, and until 4 years ago they were largely still selling the best chips for most consumer use cases (outside of mobile.) Intel had several missteps, but I don't think their dominant position was the only source of their problems, or simply that they thought they didn't have to try. They legitimately had some bad decisions and engineering problems. While the management is now changing, and that might get their engineering ducks in a row, the replacement of Intel with Apple Silicon in Apple's products is not likely to be some kind of "waking up" moment for Intel, in my opinion. Either they'll figure out their problems and somehow get back on an even keel with other chip designers and fabrication, or they won't.

Meanwhile other competitors in x86 and ARM also have a short-term history of success and failure, again regardless of what Apple is doing. And the timelines for these plans of execution are often measured on the scale of two to three years, and I'm not seeing how Apple successfully designing CPUs would change these roadmaps for competitors.

For everyone involved, particularly those utilizing TSMC, there are benefits over time as processes improve and enable increases in performance and efficiency due to process rather than design, and the increased density will benefit any chip designers that can afford to get on newer processes.

I guess if I'd attempt to summarize, it's not clear who is motivated and able to compete against Apple in ARM design. In other words, is there a clear ARM market outside of iOS/macOS and outside of Android (where chip designers already compete)? And in the Linux/Windows consumer computing space, there's going to be a divide. Those that can accept a transition to macOS and value the incredible efficiency of Apple Silicon will do so. Those that continue buying within their previous ecosystems will continue comparing the options they have (Intel/AMD), where currently chips are getting better. AMD has been executing very well for over four years now, and Intel's latest chips are bringing solid gains in IPC and integrated GPU performance, though they still have process issues to overcome if they wish to catch back up in efficiency, and they may also need to resolve process issues to regain a foothold in HEDT. But even there, where AMD seems pretty clearly superior on most metrics, the shift in market share is slow, and momentum plus capacity give Intel a lot of runway.

The only other consideration is for Windows to transition to ARM, but there's still a bit of a chicken and egg problem there. Will an ARM chip come out with Apple Silicon-like performance, despite the poor x86 emulation software in Windows on ARM? Or will Microsoft create Rosetta-like translation software that eases the transition? I'm not clear on what will drive either of those to happen.


My layman's understanding was that M1 chips had very efficient low-power/idle usage, and better memory + instructions pipelining architecture for jobs composed of diverse (inconsistent) size and "shape". And that's why it blows away consumer workload Intel chips.

In a server high load environment, is this still the case? Continuous load, consistent jobs -- is the M1 still that much better or does its advantages decrease?


I don't think Apple will go back to creating server infrastructure again. In the interim, I know Amazon created an ARM server chip called Graviton, and I'd like to run a kubernetes cluster on that


A lot of other fairly standard cloud-native technologies don't yet work on ARM, notably Istio, even though most implementations of Kubernetes itself will work. This is the major thing preventing me from being able to use a MacBook even for development (at least an M1 MacBook, but that's all I have).



I would do truly terrible things for an updated Xserve RAID (the most beautiful computing device ever made) that supported SATA or NVMe drives.

Right now I've got IDE-to-SATA adapters in mine, which leaves just enough space to also squeeze 2.5" SATA SSDs into the 3.5" drive caddies.


I hope they do, because, ultimately, it will be good for the environment.


Yes, you can run a kubernetes cluster using it.

It's been available for a year: https://aws.amazon.com/about-aws/whats-new/2020/08/amazon-ek...
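The Graviton node groups themselves are only a couple of parameters. A rough boto3 sketch, where the cluster name, node role ARN, and subnets are placeholders that have to exist already:

    import boto3

    eks = boto3.client("eks", region_name="us-east-1")

    eks.create_nodegroup(
        clusterName="my-cluster",
        nodegroupName="graviton-workers",
        scalingConfig={"minSize": 1, "maxSize": 3, "desiredSize": 2},
        subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
        instanceTypes=["m6g.large"],   # Graviton2 instance type
        amiType="AL2_ARM_64",          # arm64 EKS-optimized AMI
        nodeRole="arn:aws:iam::123456789012:role/eksNodeRole",
    )

The catch, as others note in this thread, is that every container image scheduled onto those nodes needs an arm64 variant.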


They should do a cloud. They have some very unique tech, like a FoundationDB-based SQL RDBMS and the M chips, and probably a lot more cool thingies.


The latest Mac Pro does come in a rack mount option, but I'm not sure what market it's targeting.


I think it's mainly for Mac / iOS development CI systems like CircleCI or MacStadium. The hardware just seems too expensive for it to replace the more generic ARM services that AWS is offering.


On-site music and video production. You can rack it in a portable rack with your other racks of music equipment and it fits in the back of a van.


Well, if your portable rack happens to be way deeper than most audio racks…


If you're wheeling around five to six figures worth of audio production equipment often enough to want the rackmount Mac Pro, I think the cost of a new rack is almost a rounding error.


Linus (of Linus Tech Tips) talks about the target market some in his videos about it https://www.youtube.com/watch?v=Bw7Zu3LoeG8

In short, it's for media production, not computing. You put it in a media rack.


Maybe iOS CI/CD build infrastructure?


If we're honest, most customers aren't going to notice the performance differences; it will mostly be people with heavy workloads.


I run a decent chunk of heavy workloads, and tbh the main things I notice on the new Air are all of the other niceties: instantly awake, ludicrous battery life, no fans. I think those aspects are going to be much more noticeable than many expect.


Agreed - most people use their computers as web browsers to access social media and shopping.

The performance boost there is extremely noticeable on new M1 chips.


Anecdotally, I'll say this is unequivocally not the case. I've had a half dozen of my friends upgrade and some of the aspects are absolutely surreal.

The instant turn-on. No fans. Safari alone feels like a completely different beast and a friend and I spent about two hours running through some of the most demanding web sites, side by side against my 2018 15” MBP.

Holy crap. Just...all the subtleties.

I’d actually say it’s easier to notice the immense general speed-up than I notice the difference between a 4min and 2min render time in Final Cut.

Dragging 4K clips around in FCP was infinitely smooth, and it made you immediately able to tell that the UI/UX itself had been dropping frames and needing to catch up to itself sometimes.

These are things you don’t notice until you’ve tried the M1 for the first time.

It truly is an undeniably insane piece of engineering. They killed it.


But they will notice that they left their charging cable at work/home and were still able to work all day without it.


Not true at all. M1 and follow-ups are about single thread performance, the only actual performance most consumers are likely to notice.


> the only actual performance most consumers are likely to notice

Remember when Apple made the Power Mac QUAD, to indicate that the system had FOUR processors, because "men should not live on single-thread performance alone"?

> "With quad-core processing, a new PCI Express architecture and the fastest workstation card from Nvidia, the new Power Mac G5 Quad is the most powerful system we've ever made," Philip Schiller, Apple's senior vice president of worldwide product marketing, said in a statement.

Of course they can't advertise the multi-core performance, because it is lower than comparable systems, just like with the Power Mac they could not advertise that the system was greener, because it wasn't.


The multicore performance is good for a portable, because background threads can run on the efficiency cores without making the whole system thermal throttle. Intel can't do that.


Not right now, but the upcoming Intel thing will combine Atom cores and Core cores (lol) like that.

AMD is the one left without an "efficiency core"… but I wouldn't be that surprised if they somehow manage to squeeze almost the same efficiency out of just low power modes on the same Zen core, which would leave them with better multi-core performance as usual.


I was literally saying the same thing, it's OP that said

> M1 and follow-ups are about single thread performance, the only actual performance most consumers are likely to notice

I beg to differ; multi-core performance will be very much noticeable as well.

Especially in this heavy media era

video encoding, image editing, music production etc. are things that nowadays every kid does

single-core performance is good only for benchmarks and marketing


You can definitely notice the difference between a formerly top of the line Intel MacBook Pro and an M1 Air. The Air is way faster for opening apps, photo library, and other common daily activities.


I work on a 25-year-old research hydrology model written in C. My M1 Mac mini runs the model about 68% faster than my 2019 i9 MacBook Pro. Definitely thinking of trading that in once a 64GB M1-based system is available.


And M1 Macs won't burn a hole through your desk.


That includes anybody who uses Electron or Chrome... :-)


They will notice the battery. Which is performance.


From some of the demos I've seen on YouTube, the Nvidia Jetson AGX would make for a really nice desktop. If Nvidia could release a desktop-oriented ARM machine with its graphics chips integrated, it could make for a really nice workstation.


Most server infrastructure isn't designed to be efficient to the degree that Apple's OS is. I think it won't be until we see a significant increase in datacenter power costs driven by carbon-neutral requirements that we'll see developers focusing more on eking out the most performance per watt. Apple's been doing that for decades and has dragged developers along, kicking and screaming about the upgrade cycle, until they reached today's pinnacle of the M1 chip and macOS and iPadOS. The unixy community has refused for years to use macOS APIs unless commanded to by Apple; only the upgrade to TLS 1.3 finally started compelling app developers to bundle OpenSSL (rather than use SecureTransport), and open source server-type developer adoption of Metal and MLKit and whatever the HTTP request library is (I forgot, sorry) is essentially nonexistent. So the apps that "server infrastructure" might use M1 for are, bluntly, not equipped to perform very well on M1 today.

I’ll know that Apple is a serious contender for datacenters when PHP is updated to use native macOS APIs when running on macOS, because that’s a great litmus test for whether open source is ready to compromise on “generic computing is king” in favor of “high-performance OS APIs for specific hardware platforms”. I’m not holding my breath, and I don’t think most open source projects are willing or able to either decide to take these steps or actually deliver on any such intent. Apple seems content to offer rackmount macOS and that’s more than sufficient for serious businesses that need serious amounts of Apple-accelerated hardware. But I don’t consider most “Internet cloud server” businesses to need that, and I don’t see the developers of our clouds as being willing to give up their dylib freedoms in exchange for higher performance per watt.

EDIT: Think of M1 as a hardware platform dedicated to code that's been ported to Apple’s microkernel, if that helps any. Yeah, it’s not “micro”, in theory; except they run a stripped-down iOS on their HDMI dongle (yes, seriously), so clearly they’re quite capable of “micro” kernel. More on kernels today: https://news.ycombinator.com/item?id=26954547


After the XServe, I wouldn't touch Apple servers with a 10 foot pole.


What makes you think that M1 would be better in servers than Ampere's chips or Graviton for instance? Desktop and server chips have different design goals.


But it's not really the chip; we have powerful ARM CPUs everywhere except on desktops. It's the combination of software and the chip that makes this a huge computing jump. No one wanted to invest in rewriting software to switch away from x86.


Exactly this. Nobody cares about the M1 except for tech enthusiasts and Apple fans.

The problem is that you effectively have to run macOS to take advantage of it and that's a no-go for a wide variety of people. I don't even care if I can ever run Windows or Linux on an M1 because Apple will make it a pain in the ass to do so. They don't even have a fucking bios UI... Imagine enterprise IT shops rolling Windows out on that garbage? It'll never happen.

And I don't want ARM at all unless the systems built around it are as modular and open as the x86 ecosystem.


Apple will never sell its chips standalone. Phone, desktop or server, whatever. I can see them building Mx cloud servers and trying to peddle them for CI pipelines, but not selling actual iron.


Well the problem there is you can't really have the traditional mix and match part selection and get that kind of performance, right?


I think Apple's been making major leaps in hardware but has been slow as a turtle when it comes to software. The iPad is restricted by its software despite having the best hardware in its category. Really wish they would pay some attention to it.


1 - Apple is not going to enter the server market ever again, especially not the generalist cloud infrastructure

2 - 90% of the laptops sold worldwide aren't Apple laptops; most of them are sub-$500 laptops. Amazon's best seller is the ASUS Laptop L210 Ultra Thin Laptop, 11.6" HD Display, Intel Celeron N4020 Processor, 4GB RAM, 64GB Storage, priced at $199

Basically only Mac users who upgraded to M1 are going to notice the difference


> 90% of the laptops sold Worldwide aren't Apple laptops, most of them are sub $500 laptops.

Apple’s share of the total PC shipments is closer to 15% now and their share of the laptop market is even higher.


... in the US. It's a different story worldwide where Apple is around 8% (source: https://9to5mac.com/2021/01/11/mac-huge-q4-growth-49-percent...).


I tried a variety of Google searches, but I can't find information specifically narrowed down to Apple's share of the notebook/laptop market in the U.S. Can you help me?


> 2 - 90% of the laptops sold Worldwide aren't Apple laptops,

This is perhaps the wrong metric to look at. What % of total profit is Apple capturing? Several years ago the accepted wisdom in phones was that Apple was getting ~30% of the unit volume and 90% of the profit.


> What % of total profit is Apple capturing?

Worldwide 9.6%.

I rounded it at 10%.

> Apple achieved a global desktop/laptop market share of 9.6% and ranked fourth behind Lenovo, HP, and Dell. In the United States, Apple accounts for 12.7% of the PC market


I really want the next MacBook Pro to support driving two 4K monitors over thunderbolt and have an option to buy 32 gigs of ram.


I would prefer just one 6K or 8K monitor personally at a size of say 55" with a wide ratio. Simpler setup. No post in the middle. Something like this but bigger with higher resolution, and maybe a bit more height vertically (this one is a bit too wide for its height in my opinion): https://www.samsung.com/ca/monitors/gaming/super-ultra-wide-...

I think that dual 4K is the old way, single 6K/8K ultra wides are the future.

That said I am rocking the dual 4Ks on the Mac Mini M1 and it works great: https://twitter.com/BenHouston3D/status/1384693982249340935


I use a 55" 4K Samsung TV as a monitor. It cold boots quickly and has no latency or subsampling issues in the right mode. I can recommend it as a more cost effective alternative to multiple monitors.

The amount of available screen space at 1:1 is refreshing (not as crazy as imagined though), but it makes me realize how grainy "normal" monitors are. In some years 8K TVs might be cheap enough for an upgrade, but on that time scale I can see VR being a serious contender as well (it's really good already).


One thing about TVs is that they are grainy up close. Being a 55", you probably don't sit very close to it, but how does it feel overall, e.g. in font quality? And I also wonder if it's good for your eyes to focus on something that's constantly far away. I would think a mixture of the two (normal distance to a PC monitor with frequent breaks) would be preferred.


> how does it overall feel e.g. in font quality?

> they are grainy up close

Hmm, maybe it's its "TV-ness", but the individual pixels look pretty sharp to me at a close look (no quantum dots or weird pentile arrangement, that should definitely be looked out for), but I have no other low DPI display to compare it to. My reasoning was always "it's just an LCD panel, why would it be different" and so far I feel proven right.

> I also wonder if it's good for your eyes to focus on something that's constantly far away

My appeal-to-nature-argument would be that the natural human mix would probably have been much heavier towards looking into the distance than it is now (see the massive rise of near-sightedness in cities), so it can only be better than what we're currently doing.


A year ago I was looking to get a big screen for my iMac Pro. I looked at many monitors in the 40" range, then on a whim I put my 55" LG OLED TV on my desk 30" from my face and hooked it up. Wow, it is better and easier on the eyes than any monitor I have had. I had planned to use it beside the iMac screen, but now I just mirror to the TV and have the iMac under my desk. I have used dual monitors for decades but I like one big one better.

I run it at 3840 x 2160 and use a trackball to get around. I like it a lot. LCD TV screens do not work well extremely close up but an OLED is fine even at 12" away. Text in the corner of an LCD TV is dim and poorly defined but on an OLED it is just like it would be on a 55" sheet of paper.

Of course I was concerned that the OLED screen would burn in, but it was an older TV so I took a chance.

Most days the computer and TV are on for about 12 hours. I have run them like this for 14 months now. There is no burn in at all. The overall brightness is slowly declining, but it has been doing that since I got it in 2016. (LG OLED55B6P)


It's best for your eyes to change focal distance. Staring off into infinity, staring at 75cm, and staring at 30cm are all bad for you. Find something else to look at every so often.

There are few differences between a 4K 42" screen and 4 1080P 21" screens: the smaller screens can be angled, they have bezels, and are probably more expensive in aggregate.


I use a 55" curved Samsung 4K TV. The DPI is similar to a 24" 1080p in a 2x2 configuration. The curve provides a similar effect to turning 2 monitors toward the viewer. I don't use the height very often, as I have progressive lenses and have to really look up to see the top. But I can lean back for watching a full screen video quite comfortably. IMHO it's fantastic. For those that can still see a difference with Hi-DPI, an 8K 55" would be for them. I really don't need it though, and this thing costs about $600 at Walmart.


> One thing about TV's is that they are grainy up close.

Most TV's have a "gaming" setting that doesn't try to "improve" the video feed and just passes it through.


How far back do you sit from this monitor? Do you use resolution scaling?

The last time I tried a large format display, I couldn't move back far enough to make it work but I really loved working in 4k without scaling.


I use a 4k 43" Dell monitor without scaling. I just measured and my "eyeball to glass" distance is 30 inches. Works great and gives me the real estate of about 4 smaller screens.


I have the same setup, no resolution scaling on macOS. And also, like the parent you replied to, sit about 70 cm (28 inches) from my display.

It's so big that I actually don't sit in the middle but to the left of the display. I use the right of the display as a second monitor.

As an aside, I have the Philips Momentum (model 436M6VBPAB) which is sold as a monitor but can double as a TV. It has HDR, which is very nice in a light office. I suspect it is no longer produced in any significant quantities. The price has been going up.


I'd be curious about that too, once I sat in front of a 30" monitor and really hated it due to proximity (60-80cm) and size. A single 27" @4k is the sweet spot for me.


Currently around 70cm due to the size of my desk, but I'd like to eventually move back to something around 120cm, as that's what provides maximal comfort.


I have the exact same setup as you (samsung 55) and my monitor is 76 cm from the front of my desk.


Are you saying it's a 55" display?


How do you think you would feel about 65" vs 55"? I've been using dual 39" 4k monitors (no longer available on the market), and when I replace them I want to go to a single 55" or 65". The current area in square inches is almost identical to a single 55", so at 65" I think I need to lower the screen in order to have full visibility.


I do the same thing in my "living room" setup. I have the entertainment center with the television. Then a small desk in front of that.

I can slide the desk out of the way when I want things back to normal in my living room.


I've always wondered if a far-sighted person wouldn't be better off with a huge monitor at a distance.


Mind linking to the exact model you use?


>I think that dual 4K is the old way, single 6K/8K ultra wides are the future.

In the near future 16:9 8ks are going to be standard (too bad it won't be 16:10) but right now you can get a decent triple 4k monitor setup for half of what an 8k monitor costs.

IMO ultra-wide is great for productivity in a situational sense but it sucks for gaming and media consumption. Also getting proper support for ultra-wide to be standard is probably going to take another decade.

I think there's something to be said for each monitor being a separate entity. I think some people will still prefer multiple monitors over ultra-wide even when support for ultra-wide is a solved problem.

My personal setup now is a 28" 4k@60hz, a 32" 1440P@240hz as a primary and a 24" 1440p@144hz. I have my work computer, my desktop, a mac-mini, a switch, and a PS4 running off of them so having separate monitors is ideal for me.

Ultra-wides and super-ultrawides are cool, but IMO they aren't as practical yet.


I rock a 38" dell ultrawide. It's not HiDPI but it works fantastic for me. It's just about big enough to make me not miss dual screens, and it's awesome for games.


I've been playing games on an ultrawide since 2015, so I'm going to have to disagree hard with you. In the beginning ultrawide support was hit or miss, but at this point I'd safely say 90-95% of games support ultrawide. There are also plenty of gaming-specific ultrawide monitor options. I still have a second 16:9 monitor on the side because I do agree with your point about monitors being a separate entity. There are programs for dividing the space of monitors, but if you're used to multiple monitors I don't think switching to a single monitor of any size will be a good replacement. I think it's also worth throwing in that watching movies on my ultrawide is my favorite way to do it, as the 21:9 aspect ratio means that movies stretch to the entire screen. It's definitely an amazing experience.


>In the beginning ultrawide support was hit or miss but at this point I’d safely say 90-95% of games support ultrawide.

There's support, and then there's that support being so flawless that it's the same quality of experience you would get with a standard aspect ratio. Each of us has our own standard of what's good enough. It's working out for you, but the people I game with who were using ultrawide monitors switched back due to the number of issues they were having, as recently as last year. I did some research myself when I was upgrading my monitor and some of the games I played would have had issues, so for me personally it wasn't good enough.

Another thing to consider is that a lot of game content is designed with standard aspect ratios in mind, so whether expanding the viewpoint makes it better is going to be a personal standard. It will be interesting to see, if UW monitors do become standard in a couple of decades, whether game developers start specifically making content that utilizes the extra space to offer something that isn't possible with existing games.


That's a fair point because there are definitely certain developers, Blizzard is one I can think of, where most games don't support UW and they don't seem to have an interest in it. Even still, in the games where there are issues it's easy enough to switch to 16:9 and not miss out on anything by having the UW. I'm not sure what issues they were having that caused them to switch back from having one, but I can't imagine ever going away from one as the advantages of them are just so great. If you don't mind me asking, what games do you specifically play and what issues did you read about?


Do you have periodic weirdness on the thunderbolt monitor? Every so often, my m1 mini wakes or boots so that there’s a vertical misalignment by a pixel or two about halfway across. It’s like the desktop was cut in half and taped back together slightly crooked.


I have this problem! For me it's a strip about 2 inches wide, down the middle, one pixel higher than the rest of the image. I am using an LG 5K monitor, and I have been unsure if it's a problem with the monitor or the machine. I'd love to know what kind of monitor you are using; this gives me hope that it might be an Apple bug that will get fixed eventually.


I also have this problem--also an LG 5K. Disconnecting and reconnecting seems to fix it for now.

No idea if it's the monitor or the laptop, but it's the exact same problem, and hopefully hardware doesn't glitch in such a consistent way and it can be fixed in software.


I have an LG 5k, too! And it only happens on that monitor - my LG 4k over HDMI is fine.


It's because the 5K monitor is actually two monitors taped together ("multi-stream transport").


It is two streams but it actually isn’t MST. It’s two independent HBR2 links; MST would have been two streams over one HBR2 link, which doesn’t have the bandwidth for 5k.


YES -- it's not just me! At first I thought my monitors were somehow broken (which is unfortunate as I paid a bit extra for name brand LG). I suspected something, as each monitor is plugged into an M1 computer (one's a Mac mini M1 and the other a MacBook Air m1). Both exhibit the visual problem you describe on a random basis.


pairing this with another comment by myself (about multi-monitor support beyond only one external monitor). I wonder if this is the main reason that Apple didn't support more than one external. Does the issue exhibit itself even more when additional monitors are connected? To be clear it never occurs with my main MacBook Air screen (only the external).


Just as a data point, I have an MBP M1 and an LG external monitor and this has never happened to me in almost 6 months usage.


I didn't mention this but it didn't happen initially, only after a couple months did the random screen errors happen. And power cycling, disconnecting/reconnecting/changing cables, etc to the monitor changes nothing.


I have an Acer Predator that has this issue (Whether using my Mac or PC attached.) Power cycling the monitor makes it go away. Basically the left side of the screen is shifted one pixel higher than the right side of the screen, making a vertical line down the middle where the shift changes.


in my case power cycling does not change anything, could be a separate issue for you. In my case it is also retaining a "ghost" of previous pages I've had open. It does eventually go away (and most of the time is not there).


I have zero weirdness at all. It just works perfectly all the time.

Neither monitor supports Thunderbolt. But I run off of a USB-C to Display Port cable and one is using the HDMI port. Both run at 60Hz as it is HDMI 2.


I run from a DisplayPort to USB-C adapter -- I wonder if it is my adapter, which is a simple Cable Matters adapter (and gets somewhat warm). It'd be nice if you shared what adapter you're using for comparison.


I don't have an M1 machine, but I have had periodic weirdness on external monitors (of every flavor) when waking from sleep on all the Mac laptops I've owned over the last 10 years.


that is ... bad luck. Or I have really good luck (having used Macs for over 20 years). This situation is a first for me (and in my case happens during normal usage, i.e. not upon waking) - though given Apple's recent track record it screams buggy software


I have this happen on my Windows PC every now and again. It’s on the monitor that’s hooked up via DisplayPort.


I have run into poor quality DisplayPort cables in the past. I now only buy ones I know are brand name.


8k is 4x the resolution of 4k at a 16:9 aspect ratio. It would require supporting 4x 4k displays.

Example: https://i.pcmag.com/imagery/articles/07toBDd6lpyucCyM0xWrcQv...
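Worked out, since the "4x" sometimes surprises people (assuming the standard UHD resolutions):

    uhd_4k = 3840 * 2160     # 8,294,400 pixels
    uhd_8k = 7680 * 4320     # 33,177,600 pixels
    print(uhd_8k / uhd_4k)   # 4.0 -- 8K is four 4K panels' worth of pixels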


1000x this. For several years I've been eyeing the 49" ultrawides as a replacement for my current 34" ultrawide. It would definitely be a big upgrade, but I keep thinking that something high DPI must be around the corner in that format. The years just keep rolling by and it seems like display tech barely advances.


I purchased the Dell 49” ultra wide to replace a Thunderbolt Display and 27” 4K. It is effectively the same resolution as I was using. It is nice not having the break and being able to better center windows. Split window views are much easier to work with when being able to see the entire line of code.


> I would prefer just one 6K or 8K monitor personally

Sure but the Air and MBP already support 6k external displays.

They don’t support 2x4K.


I do agree. But what I am trying to say is that we should be pushing for 6K and 8K ultra wide monitors rather than the old hack of 2 monitors to one computer.


There are times I actually prefer 2 separate monitors. Allowing one monitor to quickly go full screen while continuing to use the other monitor is quite useful in my workflows.


Virtual desktops are also more useful with two separate displays, or at least lend themselves to different use cases.


it's not a hack though. i prefer 2x4k. i hate the curve (no straight lines) and UW is too wide to be comfortable (eye and neck angle). 2x4k, one straight on and one angled, is ideal. i spent 2 weeks and $8000 on monitors to test this out. 34UW at normal desk distance is perfect, however 2x4k is better overall. I actually use 3, as I use the macbook screen as the 3rd.

that said, i have no issue with your choice. for some applications (eg video editing is the default choice for ads of UW monitors) it is really better.

i do have issue with you denigrating 2 monitors. it is not at all "an old hack".


Out of curiosity, when did you last use a single monitor instead of your current dual monitor setup?

I’ve been using an ultrawide since 2015 but have almost always had a second side monitor with it. The period where I didn’t have a second monitor was short as having a separate monitor has always come in handy. All of the extra space on an ultrawide is great but when you’re doing something that takes the whole screen, it still leaves you in the same position you would be with a single 16:9 display.


Not the GP, and I really don't agree with their position, but I did switch from two monitors to a single UW last year.

TBF my old setup was a decade old so it was merely a 24" (1920x1200) and a 19" (1280x1024 in portrait).

Though my new display has a much larger logical size, the inability to put something fullscreen on one of the monitors (whether the small one or the large one, depending on the content) is definitely one of the drawbacks I had a hard time getting used to. Not only can it be inconvenient, it actually wastes space, as I can't get rid of window decorations, which putting e.g. a video player or browser window in fullscreen allowed.


I much prefer two monitors. I run each one as separate desktops, which means I can have "left screen apps" and "right screen apps" and cycle them independently. It also means I can switch desktops on my left screen while my apps on my right screen stay where they are.

Also, with two screens, I can adjust the angle between them to be more or less severe.


Yeah, I even actually kind of like the "post" between monitors that the GP mentioned (as long as the bezels are thin :) - it helps as a kind of mental division between types of work.

Also, 2 monitors means more choice in layout; for example, my 2nd monitor is in portrait orientation.


Only if I can have a nice tiling WM.

I work on mac all day and on my 1440p widescreen I am constantly trying to put stuff so I can use everything I want to at the same time. `[Code|App|Docs]` is so common we ought to be able to split a wide screen 3 ways the way we do a normal 2.


Try out Rectangle.app. It saved my bacon mentally and can tile windows how you want. Just not automatically like a full tiling window manager.

https://rectangleapp.com/


+1 on Rectangle recommendation. Learn a few hot-keys (or rely on the menubar dropdown) and you can very easily get any layout of windows you want.

I was using Spectacle before, but I think the project died, and Rectangle is what I use now.


Rectangle was mentioned, but I'm currently using Amethyst: https://github.com/ianyh/Amethyst It was recommended to me here on HN, and it's been very, very nice.


Here's an old picture of how I use 2 4k monitors at home https://i.imgur.com/Wwr6G42.jpg (I've upgraded my desk and keyboard since)

I'd strongly consider getting a 6k monitor, or maybe a 5k2k ultrawide, if I didn't already have my two Dell 27-inch 4ks.


> No post in the middle.

I personally prefer the visual break as I find it useful for creating fixed 'work areas': terminals/xterms, browser, mail, etc.


With a single screen, you can add any virtual (stripe of pixels) or physical divider (piece of tape) that you like. With two screens, there's no way to remove the gap.


I'd prefer two distinct monitors. I use one more expensive colour-calibrated one, and then a standard one for work where colour isn't as critical. A large colour-calibrated screen would be prohibitively expensive.


How do you deal with full screen video, games, etc on a single monitor?


Something like this... but a lot cheaper please. Like <500€ please.


The current RAM restrictions in the M1 are dumb limits resulting from the fact that you can't get LPDDR4X chips larger than 8GB (the M1 has two on-SoC LPDDR4X chips).

This year we should see a lot of CPU manufacturers change to (LP)DDR5 for memory - high-end Samsung and Qualcomm SoCs are already using LPDDR5. It's a safe bet Apple is also switching to LPDDR5, which would put the upper limit at 64GB for the M2. This is notably still lower than the 128GB you can put in the current 27" iMac (for an eye-watering $2600 extra), but it is an amount far more pro users can live with.


Samsung makes a very large 12GB module. I'm sure that offering would be taken advantage of by a lot of people if they'd just put it on the table.

The real issue is not shipping the M1 with LPDDR5 support. We already have tons of Qualcomm phones and even Intel processors shipping with it since last year. If Apple had done the same, we wouldn't be talking about this today.


> The real issue is not shipping the M1 with LPDDR5 support.

I brought this up before but it was pointed out to me that the only manufacturer of LPDDR5 last year was Samsung, which was producing not-completely-standardized LPDDR5 at the time and probably didn't have enough spare volume for Apple anyway. Having 12GB LPDDR4X modules from one vendor (24GB total) probably is not enough reason for Apple to switch up their supply chain either, not for the M1 at least.

And, to be fair, I think Apple did get away with shipping low memory computers by targeting only their cheapest computers.


16GB is not "low memory" for the vast majority of users.


It’s a deal with the devil (especially on the pro machines) where you trade 50% faster CPU for 50% less RAM.

Everyone raves about how the 8GB machines feel, but the massive SSD usage and pitiful lifespan show that there's no free lunch here. 16GB (or a 24GB option) goes a long way toward preventing an early death. I actually suspect they'll be launching a free SSD replacement program in the next couple of years.

I’ve also heard claims from a couple devs that side-by-side comparisons with their x86 macs show a massive increase in RAM usage for the same app for whatever reason. I’d guess that’s solvable, but could contribute even more to the SSD problem. On the bright side, all this seems to indicate a great pre-fetch and caching algorithm.


> I’ve also heard claims from a couple devs that side-by-side comparisons with their x86 macs show a massive increase in RAM usage for the same app for whatever reason.

Partly because the GPU works differently, partly because some inaccurate methods of counting memory (Activity Monitor) have cosmetic issues displaying Rosetta processes.


> I actually suspect they’ll be launching a free SSD replacement program in the next couple years.

Not even the most alarmist estimates suggest that the SSDs on the 8GB machines will start dying after two years!

At the moment the SSD lifetime concerns are little more than FUD. Everything we know is perfectly consistent with these machines having SSD lifetimes of a decade, even with heavy usage.


I was going to say! People seem to get all paranoid whenever swap is brought up on SSD machines, but modern SSD lifespans are pretty awesome.

https://techreport.com/review/27909/the-ssd-endurance-experi...

700TB was the minimum that any drive in the above link managed. If you used 100gigs of swap per day it would take you two decades to hit that level.
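Rough math on that, using the article's worst-case endurance figure and a deliberately heavy hypothetical swap load:

    endurance_tb = 700          # lowest endurance any drive in the test managed
    swap_gb_per_day = 100       # hypothetical heavy daily swap writes

    days = endurance_tb * 1000 / swap_gb_per_day
    print(days / 365)           # ~19.2 years to write 700 TB at 100 GB/day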


You'd be surprised how quickly a regular user can eat that these days with electron apps and a browser. My MIL would pretty routinely get out of memory errors at 16GB and she's hardly a power user. Somehow her Facebook tab would clock in at over a GB alone.


I pretty regularly have Slack running while compiling Erlang in the background, running a Docker container, several dozen Firefox tabs, and a Microsoft Teams call going, and I do not have problems with 16GB on x86. Perhaps ARM Macs use more memory or something. My next MacBook will definitely have more than 16GB if it’s available, but that’s more for future need rather than present.


She has problems with 16GB on x86. A single docker container and compiling erlang really don't use that much. It's many tabs of office 365, google docs, and social media, and how every app is an electron app now that eats memory.

I think we as programmers have lost sight of how a single gif in memory is about the same size as a whole docker container.


I don’t think that’s accurate, because Docker on macOS involves virtualization. And Teams is the worst resource hog I’ve used. Though I don’t use Office or social media much (Twitter occasionally) so maybe that’s it.


> because Docker on macOS involves virtualization

Go check out the resource usage. Yes, it involves virtualization, but the stripped-down virtualized kernel uses very few resources. Getting that information is a bit complicated by the fact that it's difficult to see the difference between the virtualized kernel and its applications in a way that's consistent with the host OS, but it's really in the dozens-of-megabytes range of overhead for something like Docker on Mac.


I guess I should say I don't disbelieve that your mother in law was having problems with the memory in her computer, I just find it baffling because I use mine extremely heavily and my RAM usage rarely peaks like 12GB.

Just to check, I opened simultaneous projects in Logic Pro and Ableton Live, I have a Rust project open in VSCode (Electron), compiling Erlang again, I have Slack, Discord, Signal, and Hey running (all Electron apps), and I have Firefox open with Twitter, Hacker News, Facebook (a dozen tabs, just for good measure), Instagram, FBM, Techmeme, NYTimes, CNN, and MSNBC. This is a 15" 2018 MBP with a 6-core i7 and 16GB of RAM, and the CPU is working pretty hard but RAM usage is currently at just over 12GB.


Unless you turned off swap, out of memory actually means out of swap, so probably the machine was out of disk space.

Safari shows a memory warning when a tab uses ~2GB memory, and can be too aggressive about it.


Activity monitor showed that RAM was full. She did not use Safari, but instead Firefox.


Well that’s fine, RAM is supposed to be full. You’re wasting it if it’s not full. If you’re really using too much, the system will either get horribly slow or you’ll get a dialog asking you to force quit apps. (which tends to come up way too late, but whatever)


> If you’re really using too much, the system will either get horribly slow or you’ll get a dialog asking you to force quit apps.

Yes, she was getting both. To the point that I asked her to keep activity monitor running in the background so we could see who the problem applications were.

I'm not sure why this is in question.


Because "physical memory" being high in Activity Monitor doesn't actually tell you about the things that cause that dialog to appear. (Unfortunately.) That, and a 1GB Firefox tab isn't quite enough to do it.

If "compressed" or "swap space" approach 32GB+/64GB+ (depends on amount of physical memory installed) that can do it, but running out of disk space is a more common problem.

Of course, quitting the top app in the memory list probably is going to fix the issue.


I literally just confirmed that she was getting force quit messages.

I had to have her leave activity monitor open in the corner because it was running so slow it wouldn't open.


Just to side note, the 27" iMac RAM is the rare component that is upgradable by the user. You should be able to get 128GB for under $700 if you do not buy the RAM from Apple.


My current, almost-2-year-old 16" already has 64GB of RAM. Is this new version faster despite supporting less memory?


It’s faster ram, and a faster CPU. Memory only really gives you “number of active things that the CPU can switch to” (meaning, open idle programs) and cache.

If the nvme drive is fast enough, the filesystem cache is not as noticeably effective, and if you have aggressive “suspend” of processes (like iOS) then lack of RAM capacity does not really impact performance at all. But memory latency, speed and the CPU do.


very cool thank you!

for me video cache would be really helpful, but after effects and especially premiere right now seem absurdly slow.

premiere especially i can hit play on a non-edited clip in preview and it takes multiple seconds to start... maybe i have a wrong setting somewhere


That could make an M2 Mac Mini into a scary on prem machine.


I suspect that the MacBook Pro 16 will probably be one of the first M2 offerings, and it already offers up to 64GB of RAM, so hopefully you'll get your wish.


This is exactly what I've been waiting for!


> have an option to buy 32 gigs of ram.

I think it's quite likely that M2 will have a 128-bit lpddr5 interface. Using widely available modules, this would allow them up to 64GB ram.


I would love that! I got an M1 MacBook Air - unfortunately it was so good I found it could run the games I like perfectly well, which I hadn't planned on doing. It ran everything just fine except for Cities: Skylines. I have way too many mods and assets for 16GB of RAM to be reasonable, so it was with much reluctance that I took it back on the last day of my return window. Being able to drop my Windows gaming PC and consolidate everything onto one machine will be very nice though! And I may spring for a larger screen size if I am going to have to abandon the MBA class of machine.

Returning it still hurts. I was really hoping for something that would take more than 32GB of RAM but am not surprised that it's still going to be later this year. The guts in the new iMac would be perfect for my Mom if they just offered a 27" replacement too.

Oh well. A few more months won't kill me.


Really hoping for 128-256gb ram limit, perhaps on a separate ddr. E.g., any sort of serious data science work is now impossible on m1s simply because of ram limits.


You would probably need to wait for the 3rd tier of M SoCs predicted for the Mac Pro. The M1 is for the lowest performance tier of machines like the Air, the low-end iMac, and the low-end MacBook Pro. The next chip, M2/M1X, is for the middle tier like the 16" MacBook Pro and the 27"/30" iMac. It will probably take a third tier to handle the large RAM and GPU needs of the Mac Pro.


Yep, that's exactly what I've heard as well, and it makes perfect sense. I'm actually silently hoping 128gb would fall into the 'mid tier' - like it currently does with the old Intel iMac 27''. You don't always need a $15k cheese grater when all you're looking for is just a bit more memory on a Mac platform...


In a laptop?


Maybe the commenter has a very interesting use for it, but why would you buy a 256 GB RAM machine (which isn't a lot of memory for this arena either) to develop the model, instead of using something smaller to work on it and leasing a big cluster for the minutes to hours of the day that you'll need to train it on the actual dataset?


I don't do this kind of thing any more, but back when I did, the one thing that consistently bit me was exploratory analysis requiring one-hot encoding of categorical data where you might have thousands upon thousands of categories. Take something like the Walmart shopper segmentation challenge on Kaggle that a hobbyist might want to take a shot at. That's just exploratory analysis, not model training. Having to do that in the cloud would be quite annoying when your feedback loop involves updating plots that you would really like to have in-memory on a machine connected to your monitor. Granted, you can forward a Jupyter server from an EC2, but also the high-memory EC2s are extremely expensive for hobbyists, way more than just buying your own RAM if you're going to do it often.


I think there are studies showing that one hot encodings are not as efficient as an embedding, so maybe you would want to reduce the dimensions before attempting the analysis.


Kicking training runs off on a cluster is surely a thing as well. And in some fields, as you correctly mentioned, memory requirements may be measured in terabytes. It's more of a 'production use case' though - what I meant is the 'dev use case'. For instance, playing with mid/high-frequency market data, plotting various things, just quickly looking at the data and various metrics, and trying and testing things out often requires up to 100GB of memory at your disposal at any time. It's definitely not only about model training. And the 'something smaller to work on' principle doesn't always work in these cases. If the whole thing fits in my local RAM, I would of course prefer to work on it locally until I actually need cluster resources.

(But seriously though... what is 16gb these days? Looking at process monitor, I think my firefox takes over 5gb now since I have over a thousand tabs in treestyletab, clion + pycharm take another 5gb, parallels vm for some work stuff is another 10gb; if doing any local data science work that's usually at least a dozen gb or a few dozen or more)


>Maybe the commenter has a very interesting use for it, but why would you buy a 256 GB RAM machine (which isn't a lot of memory for this arena either) to develop the model, instead of using something smaller to work on it and leasing a big cluster for the minutes to hours of the day that you'll need to train it on the actual dataset?

128GB of RAM in a consumer PC is about $750 (desktop anyway; a laptop may be more?). That's less than a single high-end consumer gaming GPU, or a fraction of a Quadro GPU.

So to the extent that developers ever run things on their local hardware (CPU, GPU, whatever) 128GB of RAM is not much of a leap. Or 256GB for Threadripper. It's in the ballpark of having a high-end consumer GPU.


I have 128GB in my desktop, which is the same amount of ram that our cluster compute nodes have. I use it for Python/pandas forecasting work.

The extra ram in my machine means I can work on the same datasets we use in prod locally, which is a huge productivity boost. Most of our developer time is actually spent building datasets. loading a random sample doesn’t really work when doing time series or spatial transforms and using a time or space limited subset makes description (graphs, maps) a chore. More ram is a huge productivity enabler when working with in memory tools like pandas.


The same M chip is used in 4 product lines so I'm going to assume a Pro version of the iMac or Mac Mini is what the parent means, but if you need that much memory, setting up a VM should be worth it. Same if you need a GPU.


I really want Apple to actually support external monitor properly.

Here's an incomplete and continuously updated list of monitors (not) working with Apple hardware: https://tonsky.me/blog/monitors-mac/


I finally bought an external graphics card bay for my i9 15", and now it's obsolete. =)


I'm no silicon architect but isn't this why performance is so good? They can cut out all those extra pcie lanes and extraneous stuff and focus on performance. If you start adding back big I/O won't you just be back down to Earth with everyone else?


Not really. The main reason the M1 doesn't have good IO is that it's a beefed up ipad CPU. This was the first generation where they were putting the chips in systems that can use IO, so they probably are just behind on the design for those parts of the chip.


What do you mean the M1 doesn't have good IO?

A 1TB Mini's SSD benchmarked at:

Write: 2,983 MB/sec, Read: 2,710 MB/sec

That's as fast or faster than Intel Macs, and way higher than most PC SSDs.


Chip I/O as in having lots of memory channels and lots of PCIe lanes.

That said, Intel mobile CPUs don't have fantastic I/O either, this will matter more when Apple has to scale up to iMac Pro/Mac Pro territory.

"SSD performance" is a very high-level view of I/O, and depends more on the size and configuration of flash chips used than the CPU design.


> The main reason the M1 doesn't have good IO is that it's a beefed up ipad CPU. This was the first generation where they were putting the chips in systems that can use IO

Could you explain what you mean by this? I don't think I understand.


All of Apple's prior ARM experience comes from iOS devices. They had no need to develop I/O capabilities for iPhones and iPads with a single port. Now they've put the chips in computers, but most likely haven't yet fully developed extended I/O capabilities.


Hmm, but wouldn't features like wifi, all the cameras, the touchscreen, Cellular data, NFC, touch ID, speakers, bluetooth, and so on all come in via IO as well? You're right that they only have one port, but they still connect to a lot of different components.


The problem with this analogy is that NFC, Touch ID, speakers, and Bluetooth are all in the MB/s-or-less range. On the desktop side, you have to deal with 10Gb/s Ethernet and 50-100GB/s for RAM. It's just a whole different ballgame.


Ah, that makes sense. Do iPhones connect to their RAM differently? I know it’s physically different than desktop RAM (for obvious reasons), but I don’t really know enough about hardware to know how it connects.


Those usually connect over serial-esque busses, and they connect via a security chip. (Akin to the t2 chip but much lighter)

A far cry from the kind of tolerances and sustained high throughput of PCIe.


Does it support two 4K monitors now?

I have an old Macbook from 2013 that runs Linux now with an Intel integrated GPU and it supports dual 4K monitors at 30Hz. I use the Thunderbolt ports as Mini DisplayPort outputs and connect to two 4K monitors with DisplayPort and it works fine.


M1 laptops can only support one external display at 6k and 60Hz. The other video goes to the laptop's own display. Even if you use the laptop closed in clamshell mode you can't connect or daisy chain to more than one external display.

The mini can support two external displays. One via thunderbolt at that same 6k and 60Hz and the other a 4k at 60Hz through the HDMI port.


Note that the restriction on external monitors can be worked around by using a DisplayLink compatible dongle/dock, since it uses a custom driver (I assume it does something in software that would otherwise be limited in hardware).

I use the Dell D6000 and I run three (1080p) external monitors in addition to the built in monitor.


I've been trying to find a DisplayLink dock that can output video via USB-C or Thunderbolt. Everybody's shared solutions always involve HDMI connections, never USB-C.

I have two Thunderbolt/USB-C monitors that I was hoping to daisy chain with one wire from my Mac. Alas it's not possible.

My hope is power into a dock. Thunderbolt from dock to laptop to power laptop. Thunderbolt/USB-C from dock into first monitor. Second Thunderbolt/USB-C from dock using the DisplayLink tech to second monitor.


You won’t find a Displaylink adapter that supports Thunderbolt monitors. It just won’t work from a technical aspect.


Three 1080p displays add up to 3/4 the bandwidth of one 4k display, at the same framerate.
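The pixel math (and hence bandwidth, at equal refresh rate and bit depth):

    px_1080p = 1920 * 1080      # 2,073,600 pixels
    px_4k = 3840 * 2160         # 8,294,400 pixels
    print(3 * px_1080p / px_4k) # 0.75 -- three 1080p panels = 3/4 of one 4K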


I use a Wavlink 4K dock to drive 2 4K monitors on my Mac.


I've got a Caldigit TS3


Sansibit MM4 for me


Compared to M1? Because my Intel MBP has done that for years.

I now use an M1 mac mini with two 4k monitors (not from one cable though)


I have a 2018 Mac Mini driving two 4K displays with Display Port over USB-C cables. I previously was driving 2 x 24" 2K displays (same cables) and had no performance problems. Once I upgraded to the 4K displays I've observed very noticeable display draw performance issues. It's not enough for me to give up the 4K displays but I'm wondering if this is improved on the current M1 Mac mini?


I haven’t noticed any display lag on my end. But I don’t do latency sensitive things like gaming.

1 monitor (Dell P2715Q) is hooked up using DisplayPort. The other (LG 27” IPS 4K) is using HDMI 2.0. Both are running 4K retina at 60Hz without any issues.

I had so much trouble finding this out before I bought it which is why I am vocal about the fact that it works.


Thanks for the information! I am tempted to upgrade my 2018 when the next revision (M1X or M2?) Mac mini comes out to improve the display refresh rate issues.


The M1 MacBooks can only drive 1 external monitor because they're already driving the internal display.


> buy 32 gigs of ram

Even their Mac mini doesn't offer 32 gigs of RAM. One of the reasons I had to go with the Intel version instead of the M1 is that I could at least upgrade to 32GB of RAM myself (plus it was cheaper). There is no reason why they can't offer 32 gigs of RAM on the non-laptop lines.


or 16GB high-speed on-die and another pool of DDR4 memory outside.


My current MBP drives 3 4K monitors and has 64 gigs of ram. Perhaps I’m not clear on the point you’re trying to make.


They're talking about M1 or better. My M1 13" MBP with 8GB RAM is far superior to my 2015 MBP (with more RAM) when it comes to video editing, etc. First in my wishlist would be re-adding two external displays, then more RAM.


Me too


You can achieve that and even more if you do the right thing and stay with the x86-64.


The latest Macbook Pro 16" supports 32GB or 64GB RAM [1]. It's been that way for a few generations. I have the 2018 model and it has 32GB. I think that might have been the first generation where that much RAM was supported.

[1] https://www.apple.com/macbook-pro-16/specs/

edit: oh right, those aren't M1. Brain fart.


Those are intel based Macs. This being an article about the M1 going into mass production, it's probably fair to say the parent comment was referring to those specs in an M1 Mac, not an intel Mac.


Yeah, but that one uses an Intel CPU. The problem with the M1 is that it only supports up to 16GB right now. And no eGPU. :/


yeah but they want that with apple silicon


This would be quite an accelerated timeline if Apple ships its second-generation M-series chip only eight months after the first. Typically, they’ve followed a sort of six-month tick-tock pattern for the A-series, launching a new major revision in the fall with the new iPhone, and launching an “X” revision in the spring with new iPads.

I think most observers have been expecting an “M1X” for Apple's first pro-oriented ARM Macs, so an M2 already would be a surprise.


> "This would be quite an accelerated timeline if Apple ships its second-generation M-series chip only eight months after the first."

The M1 was available on Nov. 17th, 2020. The article states that the chip is entering mass production, and due for release sometime in 2H. This could easily be released a year after the M1, if not 13 months later.


We're in a brave new world. The early prognosticators thought the M1 was more a proof of concept (shove it into existing designs to get it out there).

But now we know that it was always intended to be a real player (it's in the iMac and iPad Pro).

So this news is interesting to me because now it seems to cut back the other way that maybe the M1 was designed for a short shelf life.

In a world where Apple is controlling so much of its silo, "normal" design and building deadlines will be upended.


> The early prognosticators thought the M1 was more a proof of concept (shove it into existing designs to get it out there).

Which is a weird take when you consider the thermal issues that Intel macs were plagued with. It's almost like the chassis was designed with 10w of dissipation in mind which Intel couldn't operate within, but the M1 could easily.

I had assumed that Apple designed for the M1 and then fit Intel chips into those designs.


My private conspiracy theory (supported by nothing) is that Intel promised Apple good 10W processors back in 2014-ish, and Apple designed the 2016 MBP based on that promise. When Intel didn't deliver, they shipped the Macs anyway, and either started working on the M1 or cleared up any doubt about whether they should continue working on it.


that’s not just your (conspiracy) theory, it’s exactly what happened (something i’ve also noted before). intel screwed apple years ago and apple decided to move on. it just took many years of chip development to get to this point.


Honestly wouldn't be surprised if it came out that they started working (at least theoretically) on Apple Silicon when they transitioned to Intel in the first place and it just took this many years for all the pieces to be ready and lined up.


Not only plausible, I'd say this is the most likely way it played out.

At the time of the Intel transition, Apple had already gone through the process once before with 68k to PPC. It had to be clear to the long-game thinkers at Apple that this cycle would keep repeating itself until Apple found a way to bring that critical part of its platform under its own control. Intel was riding high in 2006, but so had IBM in 1994.

Within two years of the Intel transition, Apple acquired P.A. Semi. The iPhone had barely been out for a year at that point, and still represented a fraction of the company's Mac revenue – and while it looked to us outsiders like the acquisition was all about the iPhone and iPad, in retrospect, a long-term replacement for Intel was almost certainly the endgame all along.


possible, but as outsiders, it's hard to be sure of that sequence of events with those sets of facts, to draw that conclusion definitively. perhaps that was a backup plan that quickly became the primary plan.

but with the 2016 line of macs, it was obvious that apple was expecting faster, smaller, cooler, more power efficient 10nm chips from intel, and intel fell flat on their face delivering. it's not clear how far before that that apple knew intel was flubbing, but 2014 seems a reasonable assumption given product development timelines. as intel's downward trajectory became clearer over the following months, along with the robustly upward trajectory of apple silicon, the transition became realizable, and eventually inevitable.

as an aside, i'm using a beat up 2015 macbook pro and eagerly awaiting the m2 version as its replacement, seeking to skip this whole intel misstep entirely.


I think Apple would have been perfectly happy buying CPUs from Intel as long as Intel kept their end of the bargain up.

After the PowerPC fiasco and IBM leaving Apple high and dry, I have zero doubt that there was a contingency plan under way before the ink even dried on the PA Semi acquisition, but it probably wasn't a concrete strategy until about the third time in a row Intel left Apple high and dry on a bed of empty promises.

Apple has so much experience with processor transitions they don't have to stay on ARM either. And they have the capital to move somewhere else if it makes enough sense to them. I find it highly unlikely - but if it made sense it would be highly probable :)


I actually wonder if the end-game isn't going to be an Intel/M1 hybrid on the high end... a system with multiple chips and architectures.


That sounds terribly complicated.


Fascinating. It's amazing the 3D chess these companies have to play effectively.


> "I think most observers have been expecting an “M1X” for Apple's first pro-oriented ARM Macs"

I'm pretty sure that's what this is, rather than a next-generation ("M2") chip. It will likely have the same cpu core designs as the M1, just more of them. And possibly paired with the new, long rumored "desktop class" Apple GPU.


Given the timing, I doubt it. Apple has established a fairly stable cadence of core improvements every 12 months. Enough time has elapsed between the M1 and this new chip that I'd expect it to have more in common with the "Apple A15" generation SOC than the A14/M1.

As for Apple's choice of marketing name, that's entirely arbitrary. (For what it's worth, my guess is they're ditching "X" suffixes and will designate higher-spec variants with other prefix letters, e.g. a "P1" chip.)


Considering the M1 doesn't support LPDDR5, and the A15 will arrive in a similar time frame, I would not be surprised if it's an M2 (based on the A15), or more likely an M2X.


I'm not sure about that, because the M1 cores are, in essence, quite old at this point; they're pretty similar to the A12 cores. Apple usually does quite a major microarchitecture refresh every few years; it's probably about time.


> I think most observers have been expecting an “M1X”

An M2 name implies some architectural differences like extra cores or more external bandwidth. I'd be totally happy with an M1x with some tweaks like more external connectivity and more memory.

Which, for me, would be quite perfect. The only reason I'm holding back this purchase is the 16 GB memory limit.


Same here. I want a replacement for my 27in iMac and would have held my nose at the slightly smaller screen, but really want more memory than 16GiB (docker, etc...).

So Apple will just have to wait to get my money until fall (or whenever they announce the successor to the 27in iMac).


I'm excited for the 27" (or maybe it will be 29") variant.


I'm not sure you'll see a true second generation chip, I would be expecting it to be mostly the same thing, but with more cores and some solution to providing more RAM.

Having said that, Apple does have something of a history of pushing out v1 of a product that sets a high bar for everyone else to try and catch up with, then immediately pushing out a v2 that raises the bar well above where everyone else was aiming.

Overall though, it's awesome that the Macs now get to benefit from the vast investment that goes into making faster/better CPUs every year for hundreds of millions of new iPhones.


Apple forecast a 2 year transition a year ago at WWDC. That means they need a processor for their base laptop/ consumer desktops. One for their high end laptops and many of their pro desktops. And arguably one to replace the Xeon in the iMac Pro and Mac Pro.

Unless they are going to use this same CPU for the Mac Pro, this is right on schedule.


I think getting out the M2 before the fabled "M1X" actually makes sense. This could explain the decision to put the M1 into the new iPad Pros, to re-use the M1 chips elsewhere once the new M2 becomes available.

The main reason being that the M1 was more of a proof of concept and rushed out (despite being as good as it turned out to be). The M2 will be a more refined M1 but with notable improvements such as LPDDR5 support - akin to AMD's Zen 1 and Zen+ releases.

On the other hand, there could be an M1X being readied for release at the upcoming June WWDC. It may be architecturally older than the M2 but still deliver superior performance on the strength of more big cores; e.g. the M1 only has 4 big cores and 4 small cores, so an M1X just needs more big cores to be notably more performant.

All highly speculative of course, will have to find out in about a month.


When they switched to Intel they released the first Macbook Pros in January 2006 (32-bit Core) and in October 2006 shipped 64-bit Core 2 Duos.


Imagine they put Macbooks on a yearly upgrade cycle like the iPhone - OMG, that would be impressive.


I don't know if you're being serious but, given the lack of improvements in chip design lately, that would indeed be impressive.

I don't mind upgrading every other year. I just want the upgrades to be meaningful.


It’s not that you, as a user, need to upgrade, but Apple could upgrade the SOC in their machines each year. It’s like the phone, small incremental updates each year. If you wait a few years to buy a new one, the change feels dramatic.


Some people (a lot?) are now waiting for the new Pro devices and won't buy new until then


I'd buy a MacBook Air in a heartbeat if I could get 32GB of RAM in it. RAM is the only thing causing me to turn up my nose at the M2.

If they had released the new iMac with a bigger panel, so there were options for the 27" as well as the former 21", then my mother would be rocking a new iMac next month.

I know they said up to two years for the transition but I want to transition now :)


Not really, because M1 was probably meant as a stop-gap, and it's mostly a rehash of A12.

M2 is probably based on the Arm v9 ISA and has been in design for years.


The M1 is no stop gap. When you have people criticizing it because it only bests 90% of the current PC market but not all of it...

Well, if that is indeed a stop gap then I can't wait to see their first "real" chip :)


All very impressive, but here's my question: what are they going to do about graphics cards? Will they find a way to connect existing graphics cards to their CPU? Will they make their own ARM-based graphics cards? Will AMD or Nvidia?


Nvidia? Ha, never in a million years.

Support for one of the recent Radeons was recently added to macOS, so it's a possibility. No reason the M1 can't do PCIe, as far as I know the only thing keeping eGPUs from working on the M1 right now is software support. It could also be that the driver was added because of the extensibility of the Pro, though.

My expectation is that they'll keep the GPU on the same level, which is "good enough for most users", and focus on hardware acceleration for tasks like video and audio encoding and decoding instead. With an ML chip and fast audiovisual processing, most consumers don't need a beefy GPU at all, as long as you stick to Apple's proprietary standards. Seems like a win-win for Apple if they don't add in an external GPU.


Yeah I imagine the Radeon support was for the Pro and the existing Intel Macs (though I don’t know if those Radeon GPUs are really supported via eGPU. Are there enclosures where they fit?)

Still I can’t see Apple only developing one integrated GPU per year unless they somehow figure out how to magically make them somewhat approach Nvidia and AMDs modern chips. What would the ARM Mac Pro use?

It seems that Apple has put a lot of development resources into getting Octane (and maybe Redshift and other GPU-accelerated 3D renderers) to support Metal (to the point where it sounds like there may have been Apple Metal engineers basically working at Otoy to help develop Octane for Metal), and I can’t imagine that happening just to support the Apple Silicon GPUs. I wouldn’t be surprised if we see eGPU support announced for ARM Macs at WWDC (and maybe even the iPad Pros that support Thunderbolt. Yeah, the idea of plugging your iPad into an eGPU enclosure is funny, but if it’s not too hard to implement, why not?)


>It seems that Apple has put a lot of development resources into getting Octane to support Metal...and I can’t imagine that happening just to support the Apple Silicon GPUs.

At the start there will still be a lot more Mac Pros running AMD hardware that must be supported.

It may not be obvious, but Apple has repair work to do in the pro community. Four years ago this month, Apple unusually disclosed that it was "completely rethinking the Mac Pro." [1]

This new Mac Pro design wasn't announced until June of 2019 and didn't hit the market until December 10th of 2019. That's just _six months_ prior to the Apple Silicon announcement.

So, unless Apple simultaneously was trying to honor pro users while also laying plans to abandon them, it is hard to imagine that Apple spent 2017-2019 designing a Mac Pro that they would not carry forward with Apple Silicon hardware. Keep in mind, the company had just gotten through a major failure with the Gen 2 cylindrical Mac Pro design.

The current, Gen 3 2019 Mac Pro design has the Mac Pro Expansion Module (MPX). This is intended to be a plug-and-play system for graphics and storage upgrades. [2]

While the Apple Silicon SoC can handle some GPU tasks, it does not seem to make sense for the type of work that big discrete cards have generally been deployed for.

There is already a living example of a custom Apple-designed discrete accelerator card: Afterburner, a custom card targeted at video editing, released with the gen 3 Mac Pro in 2019.

Afterburner has attributes of the new Apple Silicon design in that it is proprietary to Apple and fanless. [3]

It seems implausible Apple created the Afterburner product for a single release without plans to continue to upgrade and extend the product concept using Apple Silicon.

So, I think the question isn't if discrete Apple Silicon GPUs will be supported but how many types and in and what configurations.

I think the Mac mini will retain its shape and size, and that alongside internal discrete GPUs for the Pro, Apple may release something akin to the Blackmagic eGPU products they collaborated on for the RX 580 and Vega 56.

While possibly not big sellers, Apple Silicon eGPUs would serve generations of new AS notebooks and minis. This creates a whole additional use case. The biggest problem I see with this being a cohesive ecosystem is the lack of a mid-market Apple display. [4]

[1] https://daringfireball.net/2017/04/the_mac_pro_lives

[2] https://www.apple.com/newsroom/2019/06/apple-unveils-powerfu...

[3] https://www.youtube.com/watch?v=33ywFqY5o1E

[4] https://forums.macrumors.com/threads/wishful-thinking-wwdc-d...


Nit: Afterburner is built on FPGAs, they are architecturally different from the M-series chips and GPUs.


I'm not sure what you mean by nit.

Apple designed and released custom hardware that used a new slot to accelerate compute. My point is that Afterburner, as a product, shows a clear direction for Apple to put Apple Silicon into discrete graphics or other compute accelerators in the Mac Pro.


> So, unless Apple simultaneously was trying to honor pro users while also laying plans to abandon them...

You say that like Apple doesn’t do stuff like that all the time.


Stuff like what? Can you give examples where you’ve known the company’s plans and intent?


> Still I can’t see Apple only developing one integrated GPU per year unless they somehow figure out how to magically make them somewhat approach Nvidia and AMDs modern chips. What would the ARM Mac Pro use?

What do mac users need a beefy gpu for?

AFAICT apple just need a GPU that's good enough for most users not to complain, integrated Intel-GPU style.


What I said before: 3D rendering (and video processing and anything else you might want a powerful GPU for).


Do people do 3D rendering on Macs? (There are no GPUs available with hardware ray tracing support...)

Most people I know doing 3D rendering nowadays just connect their UI to a remote rendering farm. So a Macbook Air would be more than enough for that.

For video processing you don't need 3D rendering capabilities, just hardware acceleration for the video formats you are using.


And are you thinking the solution for people who do need a powerful GPU is eGPUs and Mac Pros?


I don't think Apple cares much for those people, they can buy the Mac Pro or a PC if they really need the GPU power.

eGPUs can be a nice addition, but I doubt Apple will release an official eGPU system. You're already limited to AMD GPUs after the clusterfuck of a fight Apple and Nvidia had, and I doubt Intel's Xe lineup will receive much love from Apple right after the Intel CPUs have been cut from Apple's products.

Honestly, for the kind of work that does need an arbitrary amount of GPU horsepower, you're barking up the wrong tree if you buy Apple. Get yourself a MacBook and a console or game streaming service if you want to play video games, and get yourself a workstation if you want to do CAD work.

I don't think the work Apple would need to put into a GPU solution would be worth it, financially speaking.


> I doubt Apple will release an official eGPU system

They already have one[1], and you can even buy eGPUs from the Apple Store[2].

1. https://support.apple.com/en-us/HT208544

2. https://www.apple.com/sg/shop/product/HM8Y2B/A/blackmagic-eg...


That's a Radeon Pro 580, AFAIK this eGPU offering hasn't been updated in several years.


That specific one is, yes, but you can also buy third party Thunderbolt 3 sleds (eg Razer makes one) and use more recent cards.


How would you fit Apple's AR/VR ambitions into this perspective? (I.e., given AR/VR has steeper GPU requirements, both on the consumption and creation side.)


Well unless Apple can pull an M1 and do with their GPUs what they did with their CPUs and start to embarrass Nvidia and AMD with lower power, higher performance GPUs.


Kinda feels like Apple's choice at the moment is just their own integrated GPUs. eGPU is also a possibility.


Will probably not be great for battery life


To make a Mac Pro-scale system with real gains, they would roughly need the equivalent of 9x the number of performance cores of an M1 (~36 to 48 cores). If they were to scale the GPU in the same way, you are looking at a 72-core GPU with over 23 TFlops (FP32). They could also find room in clock speeds and 5nm+ to get an additional 10% out of it, I imagine. In general that would be enough for many, but I wouldn't be too surprised to see them do something more exotic with their own GPU.
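
A rough back-of-the-envelope version of that scaling, assuming the commonly cited M1 figures of 4 performance cores, an 8-core GPU and about 2.6 TFLOPS of FP32 throughput (ballpark assumptions, not official specs):

    # scale the ballpark M1 figures by the 9x factor suggested above
    m1_perf_cores = 4
    m1_gpu_cores = 8
    m1_gpu_tflops_fp32 = 2.6   # approximate FP32 throughput of the 8-core M1 GPU

    scale = 9
    print(m1_perf_cores * scale)       # 36 performance cores
    print(m1_gpu_cores * scale)        # 72 GPU cores
    print(m1_gpu_tflops_fp32 * scale)  # ~23.4 TFLOPS FP32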


They are R&D’ing their own GPUs to vertically integrate according to some rumors from my Apple friends.


Why would Apple build fancy graphics cards? They have no meaningful gaming market and haven't cared about it for years. For machine learning?


They don't need to build them, but they need their machines to be able to use them (for the same reasons their current pro machines use them).


They already have. Their integrated graphics now rival that of discrete gaming laptops.


Rival how?

A Surface Book 3 with an Intel processor and an outdated Nvidia 1650 Ti runs laps around the M1 in games. Almost 2x the performance. I'm not even going to compare it to laptops with modern GPUs.

https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...


Option 1 - yes, they will

Option 2 - no, but does it matter? It's not like the previous gen Macs had great GPUs and no one is gaming on a Mac anyway.

Option 2.5 - bring back eGPU


> no, but does it matter? It's not like the previous gen Macs had great GPUs and no one is gaming on a Mac anyway.

True, but previous Macs were never really competitive with PC alternatives on the hardware side, since they all used the same chips, just with a higher price tag. With the M1, that's starting to change, and Apple has the opportunity to attract a much larger customer base for the Mac than it ever has.

And of course, they're much more interested in gaming nowadays thanks to iOS. Maybe not interested enough to suck up their pride and apologize to Nvidia for giving them the finger, but probably enough to at least stick a beefier GPU into macs.


Even putting aside the performance issue, Apple and gaming have never worked together quite well.

Apple's modus operandi of quickly and frequently deprecating old architectures and requiring app developers to constantly keep up goes down very badly with the traditional video game development model - of spending time and budget finishing up one game, releasing a few patches, then moving on to the next game with little further upkeep or maintenance of the now-done game. (Yes, games as a service is more common nowadays, but a huge number of games still go by this old model.) This model relies on long-term compatibility of old binaries on the platform being pretty stable, which is fairly true for consoles and Windows, but Apple platforms are anything but.

There are massive piles upon piles of only slightly old games that are not just unsupported but simply refuse to run on both the iOS App Store and Steam for Mac (including Valve's own back catalog!), due to the abandonment of 32-bit binary support a few versions back. And even if the developer is willing to do bare minimum upkeep work to recompile an old game and make it run on current hardware, chances are that between the time of release and now, lots of new mandatory hoops (eg. natively support a certain screen size) have been added to the app store checklist so that passing store certification requires tons more work than a simple recompile, further deterring the dev.

Perhaps you could chalk it up to the dev being lazy for not doing regular maintenance of the game, but the rest of the game industry doesn't force you to do that, while only Apple does.


You also need GPUs for rendering video, which people do use Macs for.


I hate to say it, but I am likely going to buy an M2 Mac. I don't like Apple and their anti-competitive tactics, but I admit they won their spot for now. However, as soon as a good PC competitor comes along, I'll drop Apple like a hot potato.


Apple's second version of everything is always the one to get. The first version is always exciting, but usually comes at some large expense.

- The second iPhone (the 3G) had 3G and was so much better than the first
- iPad 2 was about 3x slimmer/lighter than the first

Lots of other examples if I thought about it.


Obviously, we won't know if the M1X/M2 has a similar big advantage over the M1 until it ships, but...

You can also look at it this way: The M1 is not the first version. It is (IIRC) an enhanced version of the A14 SoC family, and a clear evolution of the work Apple has been doing on its silicon for the past decade.


What, in your opinion, is the drawback of the M1 product series?

I haven’t read many negative reviews.


The most commonly cited limitation I've heard is a max of 16GB RAM on the current M1 Macbooks. The limitation of a single external monitor is probably the second most common.

A lot of users would like to run multiple external monitors and have > 16GB of RAM. I know I'm in that group.


The RAM is limited to 16GB, and IIRC the max I/O throughput is also somewhat limited, so you have to compromise on the number of USB4 ports, 10Gb LAN, etc.


* Limited I/O
* Max 16GB memory (at unjustified cost)
* Limited multi-monitor support
* No eGPU support (as of now)
* Only about 50 to 55% of all software is M1-ready (looking at some tracking sites). Technically this is not an M1 flaw, but you need something to push the market and iron out the bugs. By the time the M2 gets introduced, you may be at 60 or 70%, i.e. less likely to run into issues, as the M1 users are the real beta testers. Even Parallels only recently got good M1 support (with massive speed increases).

As of this moment, buying an M1 laptop is less of a beta-tester experience than it was several months ago. If you ended up buying in Nov/Dec, you spent a lot of time under Rosetta 2 or dealing with issues.

> I haven’t read many negative reviews.

The above was kind of glossed over by a lot of the reviews, as the YouTubers mostly looked at their own use case of video editing etc., and in that area most software was on point very early in the release cycle.

You see a lot of reviews in Jan/Feb from people going back to Windows laptops after a month, not because of the CPU being bad but because they ran into software issues.

In the meantime the software situation has evolved a lot more, but the progress of software being made M1-ready has also slowed down a lot.

As a PC user I want an M1-like laptop that has long battery life, is silent, and is still powerful (unlike a lot of Windows laptops, where it's always the old saying: pick 2, you can never have all 3).

But I'd prefer one with 8 performance cores, double the iGPU cores (preferably with DDR5) for light gaming, and 16GB as standard. So some MacBook Pro 16 or whatever, if the price is not insane. We shall see what Apple introduces...

So far the new offerings from AMD and Intel are fast but still power-hungry and heat-generating (aka fan noise!). AMD is only going big.LITTLE at 3nm.

Intel's Alder Lake may be an M1 competitor (for battery life under light loads), but it's again a first-generation product, so expect to be a beta tester for a long time until Windows gets fine-tuned to properly use the little cores! For heavy loads... well, 10nm is 10nm, no matter how many +++ you add.


Memory and ports. That might be more of a product-level issue, but it's a blocker for me.


In the case of CPU architecture switches, Apple's gone with the same strategy every time so far: switch out the guts, keep the outside design. So maybe not negative reviews regarding the CPU, just a bit boring design.

I disagree with OP though, not second, but third generations have been the one to get for me: iPod 3rd gen, iPhone 4, Apple Watch Series 3. OK, iPad 3 was a bit of a failure, but still, first retina.


The new iMac diverges dramatically from this model. It's a completely new design, one that could only have been done after a switch off Intel.


It would be fine for 98% of the things I do, but I still need to do that 2%. With the x86 CPUs I always had virtualization with USB pass-through as a final workaround, but with the M1 there are things I absolutely can't do.


Don’t get me wrong, it looks great. But so did iPad 1 and iPhone 1, until v2 came out.

The main things I would hope for though are more RAM, and probably a much beefier GPU


It’s got a better GPU than anything which uses less than 35W, probably even higher than that.


Yeah it's great. I've recommended 2 people get these laptops and i probably would have got one too if there was a 15 inch one.

I'm just hoping there are some more surprises in store with that slightly bigger battery.


Dedicated graphics cards don't work. No Linux support.


> ”No Linux support”

It’s coming:

https://arstechnica.com/gadgets/2021/04/apple-m1-hardware-su...

(Virtualized Linux is already well-supported on M1 Macs)


multi monitor support. You can have two monitors max.


The original Intel MacBook comes to mind as well:

- released with the (32-bit) Core Duo processor in May 2006
- replaced with the (64-bit) Core 2 Duo model in November 2006

It was only supported through 10.6 Snow Leopard, as 10.7 Lion went 64-bit only.


I got shafted with a similar 6 month duff product - the iPad 3. Now that was a bad decision. The iPad 4 came out later that year, with about 4X better performance and a switch to the lightning connector too.

I was so burned by that experience that I only bought an iPad again a month ago.


Yes! Good example. I was thinking about that too but couldn’t remember my history enough to explain.

AirPods VS AirPods Pro is another I just remembered

I think watch v2 was a big improvement too.


My parents owned (still have) an OG Intel Mac Mini - the box said "Core Solo". Seems like that was one of the few devices sold with that chip.


I still have an iPad 2. I use it in the kitchen to browse recipes.


I still have another one near bed for reading eBooks


I have a first-gen iPad Mini that still sees light use. Approximately the same internals as the iPad 2, AFAIK.


I'm really hoping Alder Lake or a similar AMD product gets PC closer to M1+ performance and battery consumption.

The M1 chip is amazing but I'm a tiling window manager man.


This will obviously not be comparable to a tiling window manager, but I've been pretty happy with Rectangle [1] on my mac. The keyboard mappings are pretty easy to configure and i've found it to work well even in multi monitor setups.

[1] https://github.com/rxhanson/Rectangle


+1 for Rectangle; I've been using it ever since it was Spectacle. There's nothing I really miss about a proper tiling window manager (though I'm sure hardcore users would disagree)


> There's nothing I really miss about a proper tiling window manager (though I'm sure hardcore users would disagree)

Agreed. I mostly just needed the keyboard-driven window snapping I was used to on GNOME, and Rectangle has filled that need 100%.


This is Mac OS:

- https://www.reddit.com/r/unixporn/comments/jupmda/aquayabai_...

- https://www.reddit.com/r/unixporn/comments/mvuplf/yabaimacos...

It's called yabai (+ skhd): https://github.com/koekeishiya/yabai

That is, you can have a tiling WM today with all the advantages of running macOS.


From your third link:

   System Integrity Protection needs to be (partially) disabled
   for yabai to inject a scripting addition into Dock.app for
   controlling windows with functions that require elevated
   privileges. This enables control of the window server, which
   is the sole owner of all window connections, and enables
   additional features of yabai.
The risk Apple kills yabai after you're adjusted to it is real.


> The risk Apple kills yabai after you're adjusted to it is real.

This holds for anything in the Apple ecosystem, up to Fortnite.

Yabai has been going for a long time, and every issue could be worked around relatively painlessly.


> This holds for anything in the Apple ecosystem.

Holds for anything in almost any ecosystem. With that said, Apple has stated over and over the Mac will remain developer friendly. Being able to disable SIP, IMO is part of that.


I use yabai without disabling sip. You get most of the features. It's the first tiling WM I've used, so it's possible I'm missing something critical without disabling sip, but so far I'm quite happy with it despite whatever features are missing. ymmv, of course.


FWIW, I run yabai without having disabled SIP and it works great. There is probably some subset of functionality I am missing out on, but it does what I need it to.


It's not exactly a tiling window manager, but if you can program some simple Lua then Hammerspoon is a godsend. You can program anything any of the other window managers for Mac (like Rectangle, Spectacle, etc.) can do and have complete freedom to set up your own keyboard shortcuts for anything.

I have some predefined layouts[1] for my most common usage. So, one keyboard shortcut arranges the screen how I want, and I have other keyboard shortcuts[2] (along with using Karabiner Elements for a 'hyper' key) to open or switch to common apps.

[1] https://github.com/kbd/setup/blob/1a05e5df545db0133cf7b6f1bc...

[2] https://github.com/kbd/setup/blob/1a05e5df545db0133cf7b6f1bc...


Alder Lake will get closer because of the big.LITTLE structure, but I don't know if we will really see a contender from Intel until Meteor Lake. Lithography size gets too much attention for its nomenclature, but it actually matters for battery consumption and thermal management. Intel must execute flawlessly on 7nm and spend generously on capex to keep the ball rolling.


It seems like Apple is TSMC's priority customer, so I suspect they will always be a node ahead of AMD.

Doesn't matter if Apple does mobile and low power stuff exclusively, but if they can scale this design into a higher TDP/core count it's going to get interesting.



You’ve gotten plenty of recommendations, but I’ll add one for Magnet [1]. I’ve been using it for years, and I love it. One of the best software purchases I’ve ever made - small, lightweight, does exactly one thing and does it very well, and made by a small developer to boot.

[1] https://magnet.crowdcafe.com/index.html



I'm also a tiling window manager man and tried that a few years back (as well as everything else on the market) when I had a Mac from work. Unfortunately, without real support from the OS, these are all just poor man's window managers and can't compare to the real thing. I gave up trying to use any of them and ended up installing a virtual machine where I could actually use one.


I'm a GNOME addict. If you ever asked yourself "who are they building this monstrosity for?", that would be me; it fits my exact workflow preferences out of the box. Unity irritates me to no end, and my wife's Mac is almost unusable for me, which she greatly appreciates.


Not sure why you're being down voted, I use Amethyst and love it.


Zen 4 will probably at least match the M1, but it will be a while before those chips come out and Apple will soon improve even more.


I have a feeling that Apple has pulled ahead and will stay ahead for a LONG time. They are extending their moat. And by applying Mx chips to VR they will create a new moat.


They started pulling ahead in the A6-A7 days and never looked back. Amazing progression.


I'm curious to see if Zen 4 manages actually. It'll be an interesting datapoint as to what actually makes the M1 better.


Well, here's hoping that both Linux and Windows work flawlessly on their hardware at some point.


Not even macOS works flawlessly on the hardware, so why would Windows and Linux? But as far as Windows goes, not having every other update completely break my system would be a welcome change.


> Not even macOS works flawlessly on the hardware, so why would Windows and Linux?

This is just pedantry and needless nitpicking. Replace "work flawlessly" with "work well" in my previous comment.


Because with Linux excitement can change things. What are you gonna do if you miss something in iOS/macOS? The right people can in principle make anything work in Linux but with macOS you are left praying Apple decides your use case is their business case.

Imagine what would happen if the compute/$ M1 laptops performed better in some respect with Linux than with macOS. Things may get out of hand when huge chunks of the Linux community get involved.


Of all the operating systems, I'm finding macOS to be the least annoying. So far Apple has not made a single computer suitable for me, but if they were to release something like a Mac Pro Cheap Edition (or whatever, I just want workstation-level specs for $2k), I'd switch to it. I don't really care about M1 or Intel; I think any modern CPU is fast enough for any task.


No need to hate. Nowadays people in general do make necessity out of convenience and virtue out of necessity.


I almost went for the M1 but "1.0" kept me sane. I will definitely go for the M2.


"Antitrust is okay if you have a fast processor"


"It's not my personal responsibility to attempt to enforce antitrust against companies by boycotting them at significant personal expense".


Exactly. There are bodies that are supposed to protect consumers from that behaviour; unfortunately, they have failed everyone massively. That in itself begs for an inquiry into how those institutions actually work and whether it is worth spending taxpayer money on them if they consistently fail to deliver.


...and if that exact kind of navel-gazing is why they don't work? What then?


Charge people for failing to deliver, dispense huge fines and jail time. Then rebuild it in a way that avoids the mistakes that made the previous solution fail.


Additionally, it is still (for the moment) possible to engage with Apple computer hardware without using any of their bullshit services (like the App Store) into which all their anticompetitive behavior has so far been constrained.

This is of course not so on Apple mobile devices: it's dox yourself to the App Store or GTFO over there.


For CPUs Apple is actually upending the AMD Intel duopoly, isn’t that good for competition? Furthermore, AMD only recently broke back into the market, which Intel had a stranglehold on. This is the most competitive the CPU market has been since the early 00s.


What antitrust are we talking about.


It would seem to me there is fairly healthy competition between apple and windows laptops?


The thing is, every competitor is going to upgrade to these, and if you stay with inferior Intel products, you give yourself a competitive disadvantage. Unfortunately this is the current state of play. If I can't achieve something at the same speed a competitor can, I put myself in a bad spot. I try not to mix emotions and business; however, for a personal machine, I am rocking a 5950X and it is awesome.


> I am rocking 5950X

Irrelevant from a mass market perspective. That chip is still out of stock at Best Buy and the Amazon third party version is marked up 54% versus MSRP.

Incidentally the 11900k is also out of stock at many retailers, but it's so much cheaper. You can still buy a pre-binned version that clocks 5.1GHz; even with markup that costs 30% less than the aforementioned third party 5950x.

Availability and price matter. My take on AMD's heavy focus on the enterprise segment right now is that they have no choice. If your enterprise partners get faced with scarcity issues, they will lose faith in your supply chain. You can tell a retail customer to wait patiently for 6 months, and even an OEM that makes retail devices (eg Lenovo) may forgive some shortfalls as long as there's a substitute available, but Microsoft and Google aren't going to wait around in line like that.


Mass market isn't buying individual PC parts and assembling the PC, they're buying prebuilts or using whatever their office mass-purchased. Go on dell.com right now and you can order a PC with a 5950x and RTX 3080. Good luck buying either of those individually without writing a web bot.


I just did. The fastest delivery (express) for the ALIENWARE AURORA RYZEN™ EDITION R10 GAMING DESKTOP with base specs aside from the 5950X and the cheapest liquid cooler (required by Dell for that CPU) is May 26. Some of the higher-end default/featured configurations would deliver in late June. Not sure what's up with that.

Honestly the price/performance ratio there is pretty nice in the eyes of a Mac user like me, but I don't know what office is buying Alienware, and a bulk order would no doubt take longer to deliver. Those are the only machines popping up when you filter for AMD 5000 series on dell.com.

Considering that 5950x is AM4 compatible, folks who had bought a pre-built machine and want to upgrade are also part of the mass market. And I think you can't discredit the homebuilt PC crowd for a high-end desktop chip. The people who care enough to want this chip can probably figure out how to clip it into a motherboard and tighten a few screws and connectors here and there.


While this news is about "a rumor according to sources familiar with the matter," it's obvious that Apple will be doing this at some point. Whether it's the M2 or a new letter designator (X-series silicon for eXtreme performance? Apple X1?), I am very interested to see what the performance numbers will be for an ARM-powered workstation rocking 16 to 32 high-power cores. Aside from the Ampere eMAG, is a 16+ core ARM-powered workstation even a thing yet? (I don't count Amazon's Graviton chips in this discussion because I cannot own a machine on my desktop powered by one.)


M2 seems very unlikely to me because that would easily create product confusion. Imagine the following:

M1 -- 4 A14 cores

M2 -- 8 A14 cores

M3 -- 4 A15 cores

That "third generation" sounds better, but is actually inferior to the second generation. X1 or M1x seem much more likely. It's the same reason why they made an A12x instead of calling it A13.

They probably need 3 designs, but economies of scale begin to be a problem. A14 and M1 are now selling in the millions, so they get a huge benefit. More importantly, they are in devices that get replaced a bit more frequently.

M1x (8-16 cores) will be in bigger iMac and laptops. They don't sell nearly as many of these. In addition, yields on the larger chip will be lower. Finally, people with these devices tend to keep their machines longer than phones, tablets, or cheaper laptops.

The 16-64-core chips are a major potential headache. If they go chiplet, then no big problem (actually, the M1x as two M1 chiplets seems like a desirable direction to head). If it is monolithic, the very limited sales and production will drive prices much higher. A logical way to offset this would be bigger orders with the excess being sold to Amazon (or others) to replace their mini cloud, but that's not been mentioned publicly.


I always assumed the "M" in the M1 designation meant "mobile" and that higher-powered chips (in terms of processing, electricity, and heat dissipation) would be coming later and have a different letter designator. Either that or we'll get Air/Pro suffixes to the chip numbers (eg: M1 Pro, M2 Air...)


...I always thought it was for "Mac".

Though the fact that they've just labeled the chip in the latest iPad Pro as such does add a bit of confusion to that.


M for mobile could make sense given they've just put an M1 in an iPad Pro. I assumed it was Mac before that and we'd get an M1X or something for higher end models but that seems wrong now.


...but they also just put it in the iMac...


The variant of it that uses laptop parts, but fair point. Mobile does seem a stretch for iMac. Let's just call it Apple's unique naming convention :)


The iMac has long used laptop grade parts.


Another possibility is that they skip the M1x altogether; if the Pro machines are coming in the autumn and not at WWDC, then the timeframe will be closer to the iPhones' and it would make sense for them to use that later tech.

M1 (winter 2020) — 4x A14 cores

M2x (autumn 2021) — 8x A15 cores

M2 (winter/spring 2022) — 4x A15 cores

etc.

There’s really no reason for the naming and timing of Pro machines to lag behind consumer machines just because of arcane numbering. And there’s precedent, Intel Core Duo Macs were released before Intel Core “Solo” Macs.

But if they’re actually ready for WWDC, then no, it’ll just be called M1x.

As for the Mac Pro… well, it's definitely going to be weird. I think the existence of the Afterburner card proves that Apple sees a future in highly specialized modular add-ons providing most of the differentiation, but buyers would still want a bigger chip than in the 16" laptops, so who knows… of course nobody even knows how an M1 would perform with proper cooling and a higher clock!

[edit] also making M2x ahead of M2 will give them the benefits of “binning” some chips


There are a couple of good reasons for an M1x using A14 cores.

With new nodes, it takes time to get things right. A smaller chip is generally preferred as easier to work with and test. It also gives the node more time to stabilize so your big chips get better yields.

Addressable market is another factor. There are over a billion iPhone users replacing devices every 1-3 years on average. There’s around 100 million Mac users as of 2018 replacing their computers every 5-8 years and that being split between 2 and 3 major market segments.

That means making over 250 million A14 chips, but only around 15 million M1 chips (aside from iPads which are there to improve economies of scale by reducing two designs to one). The M1x probably addresses around 4.5 million computers and the M1p or whatever they call it probably addresses under a half million machines.
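
As a rough sanity check on that Mac-side figure, using the user base and replacement cycle mentioned above (ballpark assumptions, not Apple data):

    # ~100M Mac users replacing machines every 5-8 years implies roughly 15M Macs/yr
    mac_users = 100_000_000
    replacement_years = 6.5                     # midpoint of the 5-8 year range
    print(mac_users / replacement_years / 1e6)  # ~15.4 million Macs per year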

A lot of people wanting the performance boost or fearing outdated, unsupported x86 machines will probably cause an increase in M1 demand, but that will be followed by a decreased demand for M2 and maybe even as far as M3 or M4 while demand catches up with the altered purchase cycle. Apple is no doubt well aware of this.

In any case, it’s best to try a new node with a smaller part where you know you need a million wafers anyway. Once you’ve optimized and refined a couple steppings, you move to the larger chip with smaller demand where yields are worse in general.

Personally, I think they would greatly benefit from a unified chiplet design starting with something close to the A14 then scaling up.


There is something available for building workstations. Not sure about performance though. https://www.solid-run.com/arm-servers-networking-platforms/h...


Here's an ARM workstation: https://store.avantek.co.uk/ampere-altra-64bit-arm-workstati... "Unfortunately" the minimum CPU is 64-core.


I haven't yet used an M1 mac but based on what I've read about it I have fully bought into the hype train.

Hoping my next laptop will be a M2-powered MBP, assuming they can increase the maximum RAM to at least 32GB.


I replaced my fully specced-out Core i9 MacBook Pro with 64GB of RAM with an M1 MacBook Pro with 16GB of RAM, and I can tell you my computing experience is better! Especially the fact that my computer works on battery!


I wouldn’t get too hung up on the RAM. I went from an XPS with 64GB that used about 16-20GB in my day-to-day; I'm still able to use the same workflows, and memory is handled fine behind the scenes on my M1 Air with 16GB. Maybe get your hands on one from a friend and play around with it. I would imagine RAM-heavy tasks like rendering etc. would choke, but I just run many workflows/builds simultaneously, which works fine.


16GB might be passable right now, but I am regularly using 20-25GB at a time on my work laptop, a 2019 15" MBP, occasionally bursting to ~30GB when I'm really pushing certain dev environments.

I keep my personal laptops for a long time, and I don't want to be stuck with 16GB when it's getting close to not cutting it now based on what I do. I can't imagine being stuck with 16GB in 8-10 years.

My current personal laptop, a 2015 13" MBP, has 16GB of RAM. I can't imagine NOT upgrading.


All fair points, but the deal is these M1 MacBooks make more out of the RAM they have. And I can't imagine not upgrading a laptop within 8-10 years; I still have a T430s, but it can't keep up with any reasonable workflow today.


There are rumours of a supposed M1X that may hit before the M2, so you may be waiting a little longer than you’d think. :)

Of course, the Apple rumour mill; grain of salt, etc - but I wouldn’t be surprised if we saw an M1X that could, for instance - support 32GB of RAM by the end of the year - (which is the only blocker from me buying an M1) - and pop out the M2 next year maybe with an Apple-CPU-powered Mac Pro?

Food for thought. :)


Yeah I guess 'M1X' vs 'M2' doesn't matter so much. As long as they've had time to work out any kinks from the first gen and increase the RAM, I'm all in.


There are missing features (like the number of USB ports) but no real kinks that I've come across. Although I don't need it, I tried Parallels + ARM Windows + x86 Windows apps on a lark and it worked without any noticeable performance issues.


Whoa. That's excellent to hear.

Since they are non-upgradeable, I will certainly be waiting until a unit with at least 32GB RAM (ideally 64GB) before I'd upgrade at all and consider it future-proofed, but this is great to know!


Yes, I would expect them to have yearly M1, M2, M3, etc consumer-grade SoCs, and then alongside that yearly or maybe less-than yearly X-versions of the current SoC, with features like more RAM, more IO, etc.

So 6 months from now you'll probably have the choice between an M2-based MacBook Air with better single core performance or an M1X-based MacBook Pro with more cores and more max. ram - unless they decide to do the reasonable thing and shift the regular and X versions to the same release cycle.


Why do you want 32GB? With my 8GB m1 Mac mini I was surprised to get away with as little memory as I did. I felt that the M1 needs far less memory than x86 to feel snappy.


I am glad that you found something that works for you, but I am confident 8GB will not work for the type of work I do. 16GB might be passable right now, but I am regularly using 20-25GB at a time on my 15" MBP now with my workloads, occasionally bursting to close to 30GB when I'm really pushing certain dev environments.

I keep my personal laptops for a long time, and I don't want to be stuck with 16GB when it's getting close to not cutting it now based on what I do. I can't imagine being stuck with 16GB in 8-10 years.


FTA: "the latest semiconductor production technology, known as 5-nanometer plus, or N5P. Producing such advanced chipsets takes at least three months”

I know next to nothing about this process, but I can't imagine the latency from silicon wafer to finished product is 3 months. I also can't imagine some inherent start-up delay for producing the first chip (but as I said: I know next to nothing about this process), so where do those 3 months go? Is it a matter of "you have to be extremely lucky to hit so few minor issues that it only takes 3 months to have a first working product"?


Latency for a chip is 4 weeks at the low end of complexity, and 12 weeks (3 months) at the high end of complexity.

My mind was blown when I first found that out


Thanks. Also good to hear that I’m not the only one who finds that surprising.


Fellow mind blown friend here!


Just a side rant here... I'm really frustrated I can't monitor the Neural Engine's usage in the M1 in my MacBook Air. Apparently Apple did not build an API for extracting data points from these 16 cores, so I can only watch what the CPU and GPU are doing when running and optimizing TensorFlow models, while the NE remains a black box.


If Apple is having this kind of success, it seems they should look to compete in the data center with this or the following generation of chips. I wonder if it is a good time to invest in Apple.


What's in it for Apple? I'm not trying to be glib here, but unless there were some Mac-only server functionality, nobody would buy an Apple ARM-powered datacentre machine.


With a sufficiently wide lead in energy efficiency, just selling hardware without any follow up lock-in harvest can be attractive even for a company as spoiled as Apple. They'd likely want to make the modules offered sufficiently big to make them unattractive for desktop use or else they'd risk cannibalizing their lock-in market.


They get a gigantic boost in internal datacenter performance if they can jam a bunch of Apple Silicon chips into a rack-mounted server and boot Linux on it. If they can get a similar boost in performance, at lower power, from the chip that is going into the Mac Pro, then taking that chip and putting it on a board with Ethernet and power wouldn't be a ton of engineering cost, and they could massively reduce the power consumption and cooling costs of their datacenters.

And then they could resell the box they designed as a server, either with Linux support out of the box (unlikely, but since in this mystical scenario they'd have to write kernel patches for Linux to get it to boot...) or with a build of macOS that could be set up headlessly, in order to recoup the development costs. Apple shops that want machines to run Compressor or Xcode's build server would eat that up.


Eh, the Asahi linux people already have linux running on this chip.

What's in it for Apple is money and better economies of scale for chips. But I don't really think it fits Apple's MO so I doubt they'll do it.


MO is my thought, too. Getting back into server hardware would require supporting a vastly different kind of client, and possibly require them to start doing some things that they, as a company, might find distasteful. Supporting equipment they stopped making a long time ago, for example. Including maintaining stockpiles of replacement parts.


> Eh, the Asahi linux people already have linux running on this chip

More specifically, people are running Linux on the CPU cores.

The M1 is a system-on-chip, and according to the floorplan [1], the CPUs are maybe 1/5th of the chip. There are many other features that aren't unlocked, such as GPU (which is a brand new architecture) or power management. The latter is key to exploiting the chip to its full performance envelope.

I don't expect Asahi to get anywhere further than proof-of-concept before it becomes obsolete by the march of the silicon industry.

[1] https://images.anandtech.com/doci/16226/M1.png


I think it depends on how much changes between generations. So far it seems like most of this isn't really new, but almost exactly what's been on the iDevices for a long time. If they don't re-architect substantially between generations I can see the Asahi project keeping up.

The GPU stuff is new, but it seems like good progress is being made: https://rosenzweig.io/blog/asahi-gpu-part-3.html

For data centers, it helps that GPU and so on is just less important. It's wasted Silicon, but the CPU is competitive even before considering the added features so that's not the end of the world. There's a good chance that Apple can use that to their advantage too, by using chips with broken GPUs/NNPUs in the DC... or designing a smaller chip for data centers... or one with more cores... or so on.


If Linux was supported, it would be an interesting competitor to AWS's graviton instances.

As for what's in it for Apple, it would be the profit from selling a hopefully large number of chips, but adding official Linux support and also committing to an entirely new market for a minimum of three years is probably far higher a cost in focus than any potential profits.


What if Rosetta 2 was that Mac only server functionality? I don't know that they'd do it, but from M1 Mac reviews it sounds like M1 + Rosetta 2 will run at least some x86 code faster and more power efficiently than any competitor.

I don't know how feasible it is to scale that up to a datacenter though, and I expect MacOS licensing costs would wipe away any power efficiency gains. But I do wonder if they could hypothetically scale up and beat the best Intel/AMD have to offer just using Rosetta 2 to translate x86 code.


First of all, Apple could save a huge amount of money replacing Intel-based servers with their own chips. Both on the CPU price (especially the Xeons, which are really expensive) and on electricity consumption, probably the largest running cost of data centers.

Then there are the gains of scale: making a CPU just for the Mac Pro would mean production numbers that are too low, but data center usage would drive those up - especially if Apple also sold it to other customers, e.g. by bringing back the Xserve. For the OS they could run Linux virtualized, or they could give the native Linux-on-Mac developers a hand.


The amount of money that Apple would save by putting a bunch of 2U ARM servers into racks is dwarfed by the cost of building such systems. Seriously, nobody (to a first approximation) is going to buy a macOS server, and so there's no reason for Apple to do this.


If they were to (re)enter this market they'd have to support Linux, which I just don't see happening.

What's interesting to me is to see if they'll use M-series chips in their own datacenters. They already run Linux there apparently.


Do you mean that they should start making and selling servers? It's unlikely.

Or do you mean that they'll start selling parts (SoCs)? Not in a million years :-)


I am just trying to forecast how bright Apple's future is. Seems like they have options. So, is there going to be a shift towards ARM/RISC in general or not. If so, where do I put my money?


Well, speaking of Apple: 99% of what Apple sells is consumer products. So look for big consumer-product markets which they haven't entered. Cars would be one of those markets, for example.

AR/VR/more wearables would be another.

Home appliances/electronics would be another.


I think they will soon, but only for a specific use case: macOS and iOS development. DevOps in the cloud is expected these days for development, and the only offerings available for it are basically Mac minis jammed into a rack, often running grey-area virtual machines. A new Xserve model and an Xserve cloud service would be great!


How would they be successful in the DC? Designing a product for consumers is very different from designing for servers. On top of that, add terrible server support in macOS.


They only need to support a Linux kernel. They've used Google Cloud, Azure, and now AWS. The contract is worth billions and will end in '23 or '24. It's very likely they'll at least run their own cloud completely, and maybe they'll compete as a public cloud later.


I'm really curious how this will play out. Data centers haven't been Apple's market for a long time, and the requirements of data center customers are kinda anti-Apple these days.

More likely I would see Apple making an exclusive server chip licensing arrangement with ARM or something similar.


High throughput at relatively low power. To me, it seems like a match made in heaven. There is the practicality of building rock solid containerization on these CPUs. I don't know where that stands, but it seems like an obvious fit.


It would be great, but do you see Apple making commodity datacenter hardware for running Linux/Windows?


I doubt that they would bother with the 2nd coming of macOS Server for anything other than Apple shops.


I think if they ever released the servers, they would want a total control over what you run on them, so you couldn't just upload your service, Apple would have to approve it first.


How does that even make sense? You can run anything on Macs, and these are one level less enterprisey.


Here's hoping this chipset will support 32GB+ RAM and more than 2 displays!


Is anything known about the chip? The deal breaker of M1 (for me) as it currently stands is the amount of RAM it can handle (16 GB).

Edit: Mistyped 16 as 1, sorry about the confusion


Also the relatively low number of cores.


I've only read of very rare workloads actually being constrained by 16GB on the M1. What use case do you have that you know will be hampered on an M1?


My current laptop has 64 GB. I could probably be fine with 32, but I'm seldom under 20 GB in usage. I run certain services locally in Docker, and then have 5+ instances of IntelliJ open for various stuff, some of them running a Java server or Node building the frontend, and others on my team may be running the Android and iOS simulators at the same time as well.

I could alter my usage patterns and work with less. But people not used to having loads of RAM don't know what they're missing out on.


As a developer I went from a 64GB XPS 15, utilizing ~10-20GB during general workloads, and I can get the same work done on my new 16GB M1 MacBook Air without a hitch. Unless you are reading and writing to RAM incredibly quickly, or need a specific massive amount of data in memory as for rendering-type tasks, running general applications beyond the 16GB point is totally fine, and the OS will figure out the rest.

I'm curious to know if it'd work for you; do you have access to an M1 to try out your workflow on? The max-RAM 'issues' seem to be way overblown.
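On the "the OS will figure out the rest" point: a lot of that is the memory compressor and swap. If you want to watch how hard macOS is leaning on compression rather than raw capacity, the Mach host statistics expose it; a rough sketch in C (assumes macOS; compressor_page_count is the number of pages currently held compressed):

    #include <stdio.h>
    #include <mach/mach.h>
    #include <mach/mach_host.h>

    int main(void) {
        vm_statistics64_data_t vm;
        mach_msg_type_number_t count = HOST_VM_INFO64_COUNT;
        vm_size_t page_size = 0;

        host_page_size(mach_host_self(), &page_size);
        if (host_statistics64(mach_host_self(), HOST_VM_INFO64,
                              (host_info64_t)&vm, &count) != KERN_SUCCESS) {
            fprintf(stderr, "host_statistics64 failed\n");
            return 1;
        }

        /* Pages currently held by the memory compressor. */
        printf("compressed: %.2f GB\n",
               (double)vm.compressor_page_count * page_size / 1e9);
        /* Compressions and swapouts since boot give a feel for memory pressure. */
        printf("compressions: %llu, swapouts: %llu\n",
               (unsigned long long)vm.compressions,
               (unsigned long long)vm.swapouts);
        return 0;
    }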


Side question: isn't the iOS Simulator running natively on M1? That would mean it consumes less RAM than on x86. If that's true, it should be possible to fit an Android+iOS workflow.

As a data point: I am running a node + iOS Simulator (along with Xcode + VSCode + Chrome with 50+ tabs) setup on an M1 with 16GB and it works fine; I also keep them running while I take a break to play a LoL match. Works great for me.


Simulators run natively on x86 too; they simulate the API/ABI as opposed to emulating the processor.
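In other words, the Simulator and the apps inside it are built for the host architecture, so on an M1 they are arm64 processes rather than emulated x86. A rough way to see which slice of a universal binary actually executes (a sketch, assuming the Xcode command line tools are installed):

    /* arch_check.c: prints which architecture slice the process is running as.
       Build a universal binary with:
         clang -arch arm64 -arch x86_64 arch_check.c -o arch_check */
    #include <stdio.h>

    int main(void) {
    #if defined(__arm64__)
        /* Native Apple Silicon slice: no instruction-set emulation involved. */
        printf("running the arm64 slice\n");
    #elif defined(__x86_64__)
        /* Intel slice: on an M1 this slice would only run under Rosetta 2. */
        printf("running the x86_64 slice\n");
    #else
        printf("unknown architecture\n");
    #endif
        return 0;
    }

On an M1, running it directly takes the arm64 path, while launching it via the arch command with the x86_64 flag forces the Intel slice through Rosetta.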


My Ableton starter template project takes about 22GB of RAM to open. Music production is a pretty common use case that can be very heavy on RAM.


I think you mean 8GB?


I think he meant 16GB.


I own an M1 Macbook Air with 16GB of RAM


Yep. Unfortunately the M1 MacBook Pro currently only goes up to 8GB.


I am typing from an M1 MacBook Pro with 16GB.

edit: you have to select the SSD size then you can choose the RAM.


Ah, you are right! Thanks for the edit. I didn't click through on the presented options to see if they offered additional upgrades.


I think you mean 16GB?



To think a 36-year-old architecture that Apple seeded is finally coming to fruition. I give them props for playing the long game, for sure.


I’m reading somewhat incompatible reactions in the top level comments e.g. [1] > Somewhere, the collective whos who of the silicon chip world is shitting their pants. Apple just showed to the world how powerful and efficient processors can be. All that with good design. Customers are going to demand more from Intel and the likes.

Another [2]: > I really want the next MacBook Pro to support driving two 4K monitors over thunderbolt and have an option to buy 32 gigs of ram.

Meanwhile the last Intel MacBook Pro supports driving four (4!) 4K displays [4]. Apple Silicon is far ahead in benchmarks, but how do speeds and feeds translate into what customers actually want?

Battery life is impressive but unfortunately not the usual differentiator during a worldwide pandemic. The M1 Macs are quite quiet (the first MacBook Air without a fan, in 2020!); meanwhile the Intel Surface Book was fanless in 2017. We shot the messenger of the recent Intel ads attacking Apple [5], but the message is still worth reading. I bought an M1 MBA and realized the speed didn't make a difference for my use as a consumer computer. For the first time in decades I'm not sure Apple provides the most pleasurable experience.

[1] https://news.ycombinator.com/item?id=26956336

[2] https://news.ycombinator.com/item?id=26955682

[4] https://support.apple.com/en-ca/HT210754

[5] https://www.macrumors.com/2021/03/17/justin-long-get-a-mac-i...


How are the reactions incompatible? People like me, who don't need more than 16 GB of RAM and one monitor, are happy with the M1. Other people are waiting on the M1X/M2 chip to bring what they need.

> meanwhile the Intel Surface Book was fanless in 2017

The MacBook was fanless in 2015 and, like many other fanless designs using Intel chips, it was slow.


> Other people are waiting on the M1X/M2 chip to bring what they need.

Well, those people must have been bullish on Apple Silicon and 'not just' the M1. They think it's worth skipping the M1 rather than going all-in on a first-gen product which, at the time, had primitive support for most mainstream software, especially for developers.

Maybe Apple knew that the M1 could not drive more than one external monitor on the MacBook Air and in fact left that limitation in, with a small disclaimer.

Perhaps they will announce this capability in the M2 Macs.


> How are the reactions incompatible? People like me, who don't need more than 16 GB of RAM and one monitor, are happy with the M1. Other people are waiting on the M1X/M2 chip to bring what they need.

I agree with your nuance.

> The MacBook was fanless in 2015 and, like many other fanless designs using Intel chips, it was slow.

The Surface Book 2/3 and M1 MacBook Air are not slow (hence the point of my comparison)


Arm-based laptops that are competitive with Apple M1+ could arrive as early as 2023, powered by a Qualcomm SoC based on their $1.4B acquisition of Nuvia, led by a team that designed Apple's M1, iPad, and iPhone SoCs. This would likely run Windows for ARM and Linux. Hopefully it will also include hardware virtualization support, like the M1.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


Is Apple going to be affected by the chip shortages we have been hearing about?


Apple tends to buy fab capacity in very large chunks. IIRC, they bought all of TSMC's 5nm capacity for a year.


Nikkei says they have already postponed some Mac and iPad production. Not sure how reliable that story is, but so far it doesn't look like customer orders are at all delayed. I bought a nonstandard one when this article came out, and they delivered it a week ahead of schedule.

https://asia.nikkei.com/Business/Tech/Semiconductors/MacBook...


I would expect that Apple's chips are probably some of the more profitable chips made by TSMC. Apple has historically had relatively large margins, so they can probably afford to pay a bit more.


Probably not since they likely didn’t scale back their orders in 2020.


Taiwan also has a severe water shortage. I'd assume that's still a threat to production for Apple. https://www.taipeitimes.com/News/feat/archives/2021/04/22/20...


I had to wait a month to get my Macbook Air M1 here in Norway.

Was quite a painful wait (as my 12" MacBook got water damage, with the SMC limiting it to 1 GHz).


I know a lot of people waiting for this one, myself included. Here's hoping Asahi Linux is ready around then!

I'm guessing the 2021 MacBook Pro is going to be the fastest laptop ever made.


Every generation is "the fastest ever made". The question is more: will this one be the Pro version?


Surely not every MBP has been the fastest laptop ever made. Has that in fact been true at any given point in time?

This could be it, though, but there are probably some current-gen laptops with desktop-TDP chips out there.


I didn't mean just Apple. I just meant the fastest laptop money can buy.


The fastest laptop money can buy has always been available. I guess this is the first time you are considering buying one?


Sometimes I fantasize about printing a t-shirt that simply says "shut up, you know what I meant".


I’ll buy two


Maybe their two requirements for a laptop were it being (1) an Apple product and (2) the fastest laptop money can buy.


I expect every mac user at my job to get one.


Besides the radio modem (Qualcomm) that Apple is quickly replacing with their own, are there any other tech/chips inside the Apple SoC that they don't design themselves?


Oh I think _they are_ getting into the radio chip business.

https://www.apple.com/newsroom/2021/04/apple-commits-430-bil...


When I was young, every time I got a new computer (which wasn't often, but it happened a few times), the new machine was absurd amounts faster than the previous one. Like 10 times more processing power and memory.

I'm glad we can live that feeling again, even if just for a short while.


How is this possible in this time of shortage of IC production capacity?


Apple didn't go cheap and pre-bought production capacity.


Apple booked the production capacity at TSMC years in advance.


My wallet is ready for the next line of macbook pros.


Think the new chip will have a better RAM solution, ultimately allowing for more RAM?



People can be legitimately impressed by the power and efficiency of Apple's first desktop-class processor while also understanding that certain more niche features were out of scope for a first version. I'm certainly expecting this to be fixed by the second generation, and if it's still missing I won't be quite as understanding.


> People can be legitimately impressed by the power and efficiency of Apple's first desktop-class processor while also understanding that certain more niche features were out of scope for a first version. I'm certainly expecting this to be fixed by the second generation, and if it's still missing I won't be quite as understanding.

I'm responding to an HN commenter who was not just impressed by the power of the M1 but hyperbolically asserts that it is better than everything else, yet the next top-voted HN comment demonstrates otherwise with a demand for a downgraded feature. The tenor of those reactions is opposed, and my aim is to reflect the nuance.


This is not an oxymoron. You can both feel that the M1 chip is superior to previous designs in most aspects, and admit that it is lacking in others.


I consider the ability to drive more than one external display to be directly related to the power and design of the chip.


I have an M1 MacBook Air, and I’m blown away. I cannot wait for a 16 inch MacBook Pro with whatever madness they have planned.

I love the direction Apple is headed with their hardware.


people wanting more cores, more memory, eGPU support, and I'm here just wanting them to have multiple colours...


So, M(n)+ or M(n+1) ?

"tentatively known as the M2"

Blasphemy! Plus then N+1!


I might consider buying one when it is able to run proper Linux. And even then it's probably going to be limited to ARM Linux only.


"Limited to the CPU type that is installed"... how is that a "limit"?


For my workloads that is a limitation that must be considered when purchasing hardware. It's the same reason why I don't buy ARM Chromebooks.

It's going to take years to run proper Linux on the M1, and even more for the ecosystem to catch up to x86-64.

For me it's reasonable to keep using AMD Ryzen 5000, which is faster than the M1 on my multithreaded workloads anyway despite using 7nm. Plus it has a better GPU, more memory, more storage, more ports, and supports multiple monitors.

Sure it is more expensive, but that's just because my segment is geared to pros with higher requirements. Apple currently has no laptop offering on this tier.


Soldered RAM and SSD, coupled with SSD wear issues leading to a less-than-three-year lifespan for a laptop, makes all of this a hard pass for me, and it should be for any sensible person too.


Even if true (it isn't; SSD issues appear to be mostly related to as-yet non-native software), a 3-year lifespan for the price of 6 years' worth of a half-speed laptop makes sense.


Not sure why you are downvoted, but this is true, is it not? It's even worse with Apple Silicon machines since, if the SSD dies, the whole thing is bricked, unlike the Intel Macs.

It seems the Mac aficionados (especially the M1 fanatics) are in denial about the degree of lock-in with the Mac as it gradually descends into becoming nearly as locked in as an iPhone.

I'd give it 0.1 out of 10 for repairability. At least with the latest Surface line-up the SSD can be upgraded.


You can wear out any SSD. There's no evidence that Apple SSDs are any worse than others. You need to have backups. You need to understand that Apple products are sealed and disposable and only buy them if your use case can accommodate that.


Cool! So now (or at least soon-ish) I can get my hands on some dirt cheap, practically never used M1 hardware on eBay to play around with?

I wonder if Apple is familiar with the Osborne effect[1].

[1] https://en.wikipedia.org/wiki/Osborne_effect


This is a rumor, Apple didn't make this announcement. It is not an example of the Osborne Effect.


I don't think Apple will put the new processor in the existing M1 products, except for the 13" MacBook Pro.


Why a new SoC? Isn't the M1 basically maxing out what can be done on an SoC, and what's missing is a version with external memory and GPU?

They can refresh the cores in the M1 of course, and I expect they will do that yearly like the AXX cores, but it would be weird to go even two generations of the SoC without addressing the pro CPU.


Apple could easily fit 2x-4x the performance on an SoC so that's what people expect the M1X and M1Z to be. Note that it's still an SoC if it has "external" memory (Apple's "unified memory" isn't what people think).


> it's still an SoC if it has "external" memory

I meant "soldered ram next to SoC with cpu and gpu cores", i.e. not DIMMs and 200W PCIe GPU. For a pro chip in a desktop form factor I think DIMMs and PCIe graphics are inevitable, and that's the interesting storyline in the evolution of the M1. We know they'll produce better M1's, but the integration with 3rd party graphics, the choice of DIMM type and so on is interesting.


They may never have DIMMs. Apple is crazy enough to solder 256GB and call it a day. People would complain... and then they'd still buy it.


Even earlier than expected. Perhaps my decision to skip the M1 and go for the M2 was completely right.

It doesn't hurt to wait a little, or perhaps skip the first-gen versions given the extremely early software support at the time, rather than getting last year's model (Nov 2020) and suffering through the workarounds and hacks to get your tools or software working on Apple Silicon.

Unlike most M1 early adopters, I won't have a wobbly software transition. Afterwards, I'll most certainly be skipping the M1 for something higher (likely the M2).

Like I said before, wait for WWDC and don't end up like this guy. [0] Those on Intel (especially 2020 models): there's no need to run for the M1; just skip it and go for the M2 or higher.

Downvoters: here's another foot-gun Apple hid behind the hype squad. [1] For the iPad Pro, if you bought the 2020 version, your keyboard is incompatible with the M1 version, meaning you have to fork over another $350 for a new one that works.

At the time of the MacBook Air M1 launch (Nov 2020), there were tons of software issues; even the recovery system fell apart for many people on the M1. Upgrading to this on launch day with those issues was an instant no-deal.

Once again, Intel Mac users: plenty of time to migrate to something even better (M2 or higher).

[0] https://www.zdnet.com/article/i-sold-my-old-ipad-pro-to-back...

[1] https://arstechnica.com/gadgets/2021/04/new-12-9-inch-ipad-p...


I've been extremely happy with my M1 Air. I waited until Golang and Docker were available (around Jan 2021), but I haven't suffered any workarounds or hacks.

To be honest, it has all been much smoother than I expected, but YMMV.


That link is just an ad for a service called Backflip. What does that have to do with this?


I have an M1 and I love it. Sure, there were some early issues like chrome crashing all the time and some packages not working but I haven't run into any issues as of late.



