Why are all performance measurements of the M1 done against stuff that is far below state of the art?
So it's faster than a three-generations-old budget card that doesn't run Nvidia-optimized drivers, presumably over Thunderbolt. So?
So it's faster than the last MacBook Air, which was old, thermally constrained, and had an Intel chip that has since been overtaken by AMD.
Every test is single-core, but guess what: modern computers have multiple cores and hyper-threading, and that matters.
Apple's presentation was full of weasel words like "in its class" and "compared to previous models". Fine, that's marketing, but can we please get some real, fair benchmarks against the best the competition has to offer before we conclude that Apple's new silicon is a gift from god to computing?
If you are going to convince me, show me how the CPU stacks up to a top-of-the-line Ryzen/Threadripper and run Cinebench. If you want to convince me about the graphics/ML capabilities, compare it to a 3090 RTX running Vulkan/CUDA.
I'm sure you understand the performance differences between a 10W part with integrated graphics designed for a fanless laptop and a desktop part with active cooling and discrete graphics.
This article from Anandtech on the M1 is helpful in understanding why the M1 is so impressive.
I think Apple brought this on themselves when they announced it would be faster than 98%[1] of the existing laptops on the market. They didn't caveat it with "fanless laptops" or "laptops with 20hr of battery life", it's just supposedly faster than all but 2% of laptops you can buy today.
You say something like that about a low power fanless design and every tech nerd's first reaction is "bullshit". And now they want to call you on your bullshit.
"And in MacBook Air, M1 is faster than the chips in 98 percent of PC laptops sold in the past year.1"
There is a subtle difference between "98 percent of laptops sold" and your rephrasing as "2% of laptops you can buy today".
If you doubt the meaning, check out the footnote which refers to "publicly available sales data". You only need sales data if sales volume is a factor in the calculation.
I also don't doubt that "every tech nerd's reaction is 'bullshit'", but only because supremely confident proclamations of universal truth that are soon proven wrong is pretty much the defining trait among that community (c. f. various proclamations that solar power is useless, CSI-style super-resolution image enhancement is "impossible" because "the information was lost", "512k should be enough...", "less space than a Nomad..", and everything Paul Graham has said, ever).
> I also don't doubt that "every tech nerd's reaction is 'bullshit'", but only because supremely confident proclamations of universal truth that are soon proven wrong is pretty much the defining trait among that community (c. f. various proclamations that solar power is useless, CSI-style super-resolution image enhancement is "impossible" because "the information was lost", "512k should be enough...", "less space than a Nomad..", and everything Paul Graham has said, ever).
Notwithstanding the fact that you do have a point here, polemically phrased as it may somewhat ironically be, I just want to point out that the Paul Graham reference is probably not the best example of the "tech nerd community" trait you're describing. At least this particular community doesn't quite believe that everything Paul Graham says is true; a couple of examples:
I could share a lot more HN discussions, and some of his essays where he pretty much describes the trait you're taking issue with here -- but I'm already dangerously close to inadvertently becoming an example of a tech nerd who believes "everything Paul Graham has said, ever" is absolutely true ;) I don't, and I know for a fact that he doesn't think so either (there's an essay about that too).
Grandparent doesn't understand information theory. True superresolution is impossible. ML hallucination is a guess, not actual information recovery. Recovering the information from nowhere breaks the First Law of Thermodynamics. If grandparent can do it, he/she will be immediately awarded the Shannon Award, the Turing Award, and the Nobel Prize for Physics.
True superresolution is impossible, but a heck of a lot of resolution is hidden in video, without resorting to guesses and hallucination.
Tiny camera shakes on a static scene give away more information about that scene; it's effectively multisampling of the static scene. (If I had to hazard a guess, any regular video can "easily" be upscaled 50% without resorting to interpolation or hallucination.)
Our wetware image processing does the same - look at a movie shot at the regular 24fps where people walk around. Their faces look normal. But pause any given frame, and it's likely a blur. (But our wetware image processing likely does hallucination too, so it's maybe not a fair comparison.)
It's not temporal interpolation. It's using data from different frame(s) to fill in the current frame. It's not interpolation at all. It's using a different accurate data source to augment another.
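For anyone curious what "using data from different frames" looks like mechanically, here's a minimal shift-and-add sketch (my own illustration, not any particular product's algorithm). It assumes the sub-pixel shift of each low-res frame is already known; a real pipeline would estimate those shifts by registration first.

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """frames: list of HxW arrays; shifts: matching list of (dy, dx) sub-pixel
    offsets in low-res pixels; returns a (H*scale, W*scale) estimate."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        for y in range(h):
            for x in range(w):
                # Drop each real low-res sample onto the nearest high-res cell.
                hy = int(round((y + dy) * scale))
                hx = int(round((x + dx) * scale))
                if 0 <= hy < h * scale and 0 <= hx < w * scale:
                    acc[hy, hx] += frame[y, x]
                    weight[hy, hx] += 1
    weight[weight == 0] = 1  # cells no sample landed on stay zero
    return acc / weight
```

With enough slightly shifted frames, most high-res cells end up with at least one real sample - that's the "multisampling" above. Nothing is invented; existing measurements are just placed on a finer grid.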
Super resolution can and does work in some circumstances.
By introducing a new source of information (the memory of the NN) it can reconstruct things it has seen before, and generalise this to new data.
In some cases this means hallucinations, true. But at other times (e.g. text where the NN has seen the font) it is reconstructing what that font is from memory.
But the thing is, in that case the information contained in the images was actually much less than what we are led to believe.
So if we are reconstructing letters from a known font we essentially are extracting 8 bits of information from the image. I'm pretty certain that if you distort the image to an SNR equivalent of below 8 bits you will not be able to extract the information.
Lossy image compression creates artifacts, which are in a way a form of falsely reconstructed information - information which wasn't there in the original image. Lossless compression algorithms work by reducing redundancy, but don't create information where there was none (thus being very different from super-resolution algorithms).
Not if it’s written text and you are selecting between 26 different letters. It’s a probabilistic reconstruction, but that’s very different to a hallucination.
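To put a rough number on "selecting between 26 different letters" (a back-of-envelope aside, not a figure from the thread):

```python
import math

# Identifying one of 26 equally likely letters takes about:
print(math.log2(26))   # ~4.70 bits
# Even allowing case plus some punctuation (say 64 symbols), it's only:
print(math.log2(64))   # 6.0 bits
```

Either way it's a handful of bits per character, which is why a known-font reconstruction can survive far more image degradation than a scene where every pixel carries independent information - the remaining signal only has to clear that tiny information budget.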
You're both right, but they're more right because the subtle difference you mention is the problem they're highlighting: Apple went out of their way to be unclear and create subtle differences in interpretation that would be favorable to Apple, as a company should.
After the odd graphs/numbers from the event, I was worried it was going to be an awful ~2 year period of jumping to conclusions based on:
- Geekbench scores
- "It's slow because Rosetta"
- HW comparisons that compare against ancient hardware because "[the more powerful PC equivalent] uses _too_ much power" implying that "[M1] against 4 year old HW is just the right amount of power", erasing the tradeoff between powerfulness and power consumption
The people claiming this blows Intel/AMD out of the water need to have stronger evidence than comparing against parts launched years ago for budget consumers, then waving away any other alternative based on power consumption.[1]
Trading off powerfulness for power consumption is an inherent property of chip design; refusing to consider other chipsets because they have a different set of tradeoffs means you're talking about power consumption alone, not the chip design.
[1] n.b. this is irrational because the 4 year old part is likely to use more power than the current part. So, why is it more valid to compare against the 4 year old part? Following this logic, we need to find a low power GPU, not throw out the current part and bring in a 4 year old budget part.
Hate to draw a sweeping conclusion like this without any facts and without elaborating, but I'm late for dinner :( It's an absolute _nightmare_ of an article, leaving me little hope we'll avoid a year or two of bickering on this.
IMHO it's much more likely UI people will remember it kinda sucks to have 20-hour battery life with a bunch of touch apps than that we'll clear up the gish gallop of Apple's charts and the publications rushing to provide an intellectual basis for them without having _any_ access to the chip under discussion. So they substitute iPhone/iPad chips, which can completely trade off thermal concerns for long enough for a subset of benchmarks to run, making it look like the powerfulness/power consumption tradeoff doesn't exist, though it was formerly "basic" to me.
My quote was from Apple's MacBook Air page[1], including the footnote.
Your quote is just not very specific. On its face, it could mean every PC laptop ever produced. I'm somewhat certain that every Apple computing device ever has beaten that standard. Even AirPods and the MacBook chargers might be getting close these days.
2019 laptop sales are 9.3% of laptop sales for the years 2010-2019 (https://www.statista.com/statistics/272595/global-shipments-...). The phrase "Faster than 98% of PC laptops" is in itself very general, and it's fair to assume that it means e.g. all laptops currently in use, or ever made - in part since this is a sales pitch rather than a technical specification item. If we add 1990-2009 laptops to the statistic above, the share of purportedly modern laptops will just shrink, and substantially at that.
Then you have to add on top of that the consideration of which laptops people are actually buying and using. I can assure you that neither enterprise clients nor anyone outside a select, relatively spoiled group of people concentrated in just a few areas is regularly buying top-of-the-line workstation or gaming laptops.
The parent is very much right, amongst techies there's an unhealthy tendency for self-righteous adjudication of the narrative or context.
>I think Apple brought this on themselves when they announced it would be faster than 98%[1] of the existing laptops on the market. They didn't caveat it with "fanless laptops" or "laptops with 20hr of battery life", it's just supposedly faster than all but 2% of laptops you can buy today.
Exactly. It's a meaningless number.
They also conspicuously avoid posting any GHz information of any kind. My assumption is that it's a fine laptop, but a bullshit performance claim.
The clock speed of ARM chips that have big.LITTLE cores is not very meaningful. The LITTLE cores can run at lower frequencies than the big cores. The Apple A series (and M series, I will say with some confidence) support heterogeneous operation, so both sets of cores can be active at once. The big cores can also be powered off if there are no high-priority/high-power tasks.
The cores and frequency can scale up and down quickly so there's not a meaningful clock speed measure. The best number you'll get is the maximum frequency and that's still not useful for comparing ARM chips to x86.
Even Intel's chips don't have frequency measures that are all that useful. Some workloads support TurboBoost but only within a certain thermal envelope. Even the max conventional clock for a chip is variable depending on thermals.
I don't think it's worthwhile faulting Apple for not harping on the M series clock speeds since they're a vector instead of a scalar value.
Nobody is “faulting” Apple. They tend to avoid saying things for weird PR reasons.
In this case, they make impressive sounding, but almost completely meaningless assertions about performance. If anything, they are underselling an impressive achievement.
Intel does provide performance ranges in thermally constrained packages that are meaningful.
The frequency matters because it would give a far better insight into expected boost behavior & power consumption.
For example when you look up the 15w i5-1035G1 and see 1ghz base, 3.6ghz boost, you can figure out that no, those geekbench results are not representative of the chip running at 15w. It's pulling way, way more because it's in the turbo window, and the gap between base & turbo is huuuuuge.
So right now when Apple claims the M1 is 10w, we really have no context for that. Is it actually capped at 10w? Or is it 10w sustained like Intel's TDP measurements? How much faster is the M1 over the A14? How different is the M1's performance in the fanless Air vs. the fan MBP 13"?
Frequency would give insights into most all of that. It's not useless.
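A rough rule of thumb for why the base/boost gap matters for power (the voltages below are made-up illustrative values, not measurements of the i5-1035G1 or anything else): dynamic power scales roughly with frequency times voltage squared.

```python
# P ~ C * V^2 * f, so relative to an arbitrary 1 GHz / 0.7 V baseline:
def relative_power(freq_ghz, volts, base_freq=1.0, base_volts=0.7):
    return (freq_ghz / base_freq) * (volts / base_volts) ** 2

print(relative_power(1.0, 0.70))  # 1.0  at the base clock
print(relative_power(3.6, 1.05))  # ~8.1 at a 3.6 GHz boost clock
```

That multiple is the point above: a part rated at 15W for its base clock can transiently pull several times that while it sits in its turbo window.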
The GeekBench tests have revealed that the clock frequency of the M1 cores is 3.2 GHz, both in MB Air and in MB Pro.
Therefore the M1 manages slightly higher single-thread performance (between 3% and 8%) than Intel Tiger Lake and AMD Zen 3 at only about 2/3 of their clock frequency.
The single-thread speed advantage of the M1 is not large enough to be noticeable in practice, but what counts is that reaching the same performance at high IPC and low frequency, instead of low IPC and high frequency, means matching the competitors at much lower power consumption.
This is the real achievement of Apple and not the ridiculous claims about M1 being 3 times faster than obsolete and slow older products.
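A quick back-of-envelope version of that claim (the 3.2 GHz figure is from the Geekbench reports mentioned above; the ~4.8 GHz boost clock for the x86 parts is an assumption used purely for illustration):

```python
# If single-thread scores are roughly equal, then score ~ IPC * clock implies:
m1_clock_ghz  = 3.2   # reported M1 frequency
x86_clock_ghz = 4.8   # assumed boost clock for a Tiger Lake / Zen 3 class part
print(f"implied IPC advantage: ~{x86_clock_ghz / m1_clock_ghz:.2f}x")  # ~1.50x
```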
Geekbench doesn't monitor the frequency over time. We don't know what the M1 actually ran at during the test, nor what it can boost to. Or if it has a boost behavior at all even.
It is good to be skeptical of that "faster" and that 2% measurement, because there are lots of opportunities to make them ambiguous. But clock frequencies are hardly useful for comparisons between AMD and Intel. They'd be even more useless across architectures.
Benchmarks are as good as it gets. Aggregate benchmarks used for marketing slides are less than ideal, but the problem with marketing is that it has to apply to everyone. Better workload focused benchmarks might come out later... In their defense most people won't look at these, because most people don't really stress their CPUs anyway.
> They also conspicuously avoid posting any GHz information of any kind
Is that information actually of any real use when dealing with a machine with asymmetric cores plus various other bits on the chip dedicated to specific tasks [1]?
What does GHz have to do with performance across chips? I'm reminded of the old days when AMD chips ran at a lower GHz and outperformed Intel chips running much faster. Intel had marketed that GHz === faster, so AMD had to work to get around existing biases.
Even Intel had to fight against it when they transitioned from the P4 designs to Core. They began releasing much lower clocked chips in the same product matrix and it took a while for people to finally accept that frequency is a poor metric for comparing across even the same company’s designs. And I think Apple also significantly contributed to this problem in the mid-late PPC days with “megahertz myth” marketing coupled with increasingly obvious misleading performance claims.
GHz is also a fancy, meaningless number that was spawned by Intel's marketing machine. The Ryzen 5000 chips cannot hit the 5GHz mark that Intel ones can, and yet the Ryzen 5000 chips thrashed the Intel ones in every benchmark done by everyone on YouTube.
And that's when both chips are on the same x86 platform. Differences in architecture can never, and should never, be compared with GHz as any kind of basis.
The only place where GHz is useful for comparison is between products running the same chip.
Why is the clock speed relevant? Yes, it's lower than the highest-end x86 chips, but the performance per clock is drastically higher. They don't want clueless consumers thinking clock speed is what determines performance.
I don't know if you've actually done the numbers, but most laptops on the market have low to mediocre specs. It would surprise me if more than 2% are pro/enthusiast.
Apple didn't specify if they're counting by model or total sales, but virtually everything in the Gamer Laptop category is going to be faster in virtually every measure.
As a Joe Schmoe it's hard to get good figures, but it appears the total laptop market is about $161.952B[1] with the "gaming" laptop segment selling about $10.96B[2]. Since gaming laptops are more expensive this undercounts cheap laptops, but there are other classes of laptop that are going to outperform this mac, like business workstations.
There might be one way to massage the numbers to pull out that statistic somehow, but it is at best misleading.
>Apple didn't specify if they're counting by model or total sales,
If it were by model they would be spinning it. But they said "sold in the past year". I don't know how anyone else would interpret it, but in finance and analytics that very clearly implies units sold.
Your [1] is laptops plus tablets; the total laptop market is about $100B, although this year we might see a sharp increase because of the pandemic.
Let's say the gaming laptop market is about $10B, so 10% of the market by value goes to gaming laptops. The total laptop market includes Chromebooks, so if you do ASP averaging I would expect at least a 3x (if not 4x or higher) difference between gaming laptops and the rest of the laptop market. So your hypothesis that all gaming laptops would be faster than the M1 gives you roughly 3.3% of the market share by units. Not too far off 2%.
And all of that is before we put any performance numbers into the comparison.
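Spelling that estimate out (the 3x ASP ratio is the assumption above, not measured data):

```python
gaming_revenue_share = 10e9 / 100e9    # gaming laptops: ~10% of laptop revenue
asp_ratio = 3.0                        # assumed ~3x the ASP of the average laptop
gaming_unit_share = gaming_revenue_share / asp_ratio
print(f"~{gaming_unit_share:.1%} of units sold")   # ~3.3%
```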
On the fact that discrete gaming laptops have higher power requirements and better cooling solutions, in turn allowing much faster CPUs to run in them.
That's the most meaningful constraint for mobile CPUs today, after all.
If you're not going to compare Apples to Apples, i.e. if power, cooling and size is a constraint you're not going to care about at all, you might as well count desktop PCs as well.
Apple's measurement comes pretty close to comparing "laptops that most people would actually buy". Not sure why it's meaningful that a laptop maker can put out a model that's as thick as a college textbook, has the very top bins of all parts, sounds like a jet engine when running at max speed, and is purchased by the 1% of most avid gamers.
Oh, and if someone puts out a second model that adds an RGB backlit keyboard but is otherwise equivalent, that should somehow count against Apple's achievements, because for some reason counting by number of models is meaningful regardless of how many copies that model sold o_O
So no data except a view that more power and more cooling automatically leads to better performance independent of process, architecture and any other factors?
> Apple didn't specify if they're counting by model or total sales, but virtually everything in the Gamer Laptop category is going to be faster in virtually every measure.
This is what I don't get.. why would you ever assume they meant counting by model? That's a nearly meaningless measurement. How do you even distinguish between models in that measurement? Where do you set the thresholds? The supercharged gaming laptops are absolutely going to dominate that statistic no matter what, because there's a huge number of models, many of which differ mostly by cosmetic changes. The margins are likely higher, so they don't need to sell as many of a given model to make it worthwhile to put one out. Does a laptop maker even have to actually sell a single unit of a model for it to count? How many do they have to sell for it to count? Does it make sense to count models where every part is picked from the best-performing bins, so that you're guaranteed the model couldn't account for more than a fraction of sales?
Counting by number of laptops actually sold is the only meaningful measurement, at least you have a decent chance of finding an objective way to measure that.
And I thought it was 100% obvious from Apple's marketing material what they meant, so I really don't get why anyone is confused about this.
> Apple didn't specify if they're counting by model or total sales, but virtually everything in the Gamer Laptop category is going to be faster in virtually every measure.
Not a single one will beat it in single core performance.
I suspect they're not technically saying that, since they're saying "PC laptops." But as Coldtea notes, it's pretty clear the M1-based laptops embarrass all current Intel-based Mac laptops. I'm just not going to fault Apple too much for failing to explicitly say "so this $999 MacBook Air just smokes our $2800 MacBook Pro."
I just had the thought that the figures could be skewed by education departments all over the country making bulk orders for cheap laptops for students doing remote learning.
It beats Apple's own top-of-the-line Intel laptop at a fraction of the cost, according to the tests which have surfaced. That ought to count for something.
Apple has not made pure fantasy claims in the past so why should they now? The trend has been clear. Their ARM chips have made really rapid performance gains.
We don’t even have a significant quantity of people with hardware in hand yet so I’d like to reserve judgement.
At best we have some trite stats about single core performance but I’m interested to see whether or not this maps to reality on some much harder workloads that run a long time. Cinebench is an important one for me...
Every true tech nerd knows this is fucking impressive ever since iPhones started being benchmarked against Intel and every true tech nerd can tell that this is only going to get better. :)
Yep, people who actually care about tech and hardware are applauding this. Anti-Apple and x86 fanboys are the ones doing everything they can to discount it.
>I think Apple brought this on themselves when they announced it would be faster than 98%[1] of the existing laptops on the market. They didn't caveat it with "fanless laptops" or "laptops with 20hr of battery life", it's just supposedly faster than all but 2% of laptops you can buy today.
No, it's faster than all those laptops, not just "fanless" ones. It's actually faster than several fan-using i9 models. And it has already been compared to those.
The grandparent talks about comparisons for GPUs...
> I'm sure you understand the performance differences between a 10W part with integrated graphics designed for a fanless laptop and a desktop part with active cooling and discrete graphics.
The problem here is that the obvious competition is Zen 3, but AMD has released the desktop part and not the laptop part while Apple has released the laptop part and not the desktop part. (Technically the Mini is a desktop, but you know what I mean.)
However, the extra TDP has minimal effect on single thread performance because a single thread won't use the whole desktop TDP. Compare the laptop vs. desktop Zen 2 APUs:
Around 5% apart on single thread due to slightly higher boost clock, sometimes not even that. Much larger differences in the multi-threaded tests because that's where the higher TDP gets you a much higher base clock.
So comparing the single thread performance to desktop parts isn't that unreasonable. The real problem is that we don't have any real-world benchmarks yet, just synthetic dreck like geekbench.
I'm not so sure the fellow does understand that difference.
Their take reminds me of the Far Side cartoon, where the dog is mowing the lawn, a little irregularly, and a guy is yelling at him, "You call that mowing the lawn?"[1]
This is true, but I think a lot of folk are assuming that a future M2 or M3 will be able to scale up to higher wattage and match state-of-the-art enthusiast-class chips. That assumption is very much yet to be proven.
> This is true, but I think a lot of folk are assuming that a future M2 or M3 will be able to scale up to higher wattage and match state-of-the-art enthusiast-class chips.
Apple wouldn't go down this path if they weren't confident that their designs would scale and keep them in the performance lead for a long time.
Look, the Mac grossed $9 billion last quarter, more than the iPad ($6.7 billion) and more than Apple Watch ($7.8 billion). They've no doubt invested a lot of time and money into this; there's no way, now that they've jettisoned Intel, they haven't gamed this entire thing out. There's too much riding on this.
Yes, Apple's entry level laptops smoke much more expensive Intel-based laptops. But wait until the replacements for the 16-inch MacBook Pro and the iMac and iMac Pros are released.
By then, the geek world would have gone through all phases of grief—we seem deep into denial right now, with some anger creeping in.
> Apple wouldn't go down this path if they weren't confident that their designs would scale and keep them in the performance lead for a long time.
There are two reasons to think this might not be the case. The first is that they could justify continuing to do this to their shareholders based solely on the cost savings from not paying margins to Intel, even if the performance is only the same and not better. Their customers might not appreciate having the transition dumped on them in that case, but Apple has a specific relationship with their customers.
And the second is that these things have a long lead time. They made the call to do this at a time when Intel was at once stagnant and the holder of the performance crown. Intel is still stagnant but now they have to contend with AMD. And with whatever Intel's response to AMD is going to be now that they've finally got an existential fire lit under them again.
So it was reasonable for them to expect to beat Intel's 14nm++++++ with TSMC's 5nm, but what happens now that AMD is no longer dead and is using TSMC too?
> The first is that they could justify continuing to do this to their shareholders based solely on the cost savings from not paying margins to Intel, even if the performance is only the same and not better.
You know Apple's market capitalization is a little over $2 trillion, right? And Apple's gross margins have been in the 30-35% range for many years. This isn't a shareholder issue. They are by far the most profitable computer/gadget manufacturer around.
> So it was reasonable for them to expect to beat Intel's 14nm++++++ with TSMC's 5nm, but what happens now that AMD is no longer dead and is using TSMC too?
No matter what AMD does in the short term, they're not going to beat the performance per watt of the M1, let alone the graphics, the Neural Engine and the rest of components of Apple's SoC. It's not just 14nm vs. 5nm; it's also ARM’s architecture vs. x86-64.
Apple has scaled A series production for more than a decade and nobody has caught them yet in iPhone/iPad performance. There were 64-bit iPhones for at least a year before Qualcomm and other ARM licensees could catch up.
There's no evidence or reason to believe it'll be any different with the M series in laptops and desktops.
> You know Apple’s market capitalization is a little over $2 trillion dollars, right?
That's the issue. Shareholders always want to see growth, but when you're that big, how do you do that? There isn't much uncaptured customer base left while they're charging premium prices, but offering lower-priced macOS/iOS devices would cannibalize the margins on existing sales. Solution: Increase the margins on existing sales without changing prices so that profitability increases at the same sales volume.
> No matter what AMD does in the short term, they're not going to beat the performance per watt of the M1
Zen 3 mobile APUs aren't out yet, but multiply the performance of the Zen 2-based Ryzen 7 4800U by the 20% gain from Zen 3 and the multi-threaded performance (i.e. the thing power efficiency is relevant to) is already there, and that's with Zen 3 on 7nm while Apple is using 5nm.
> it's also ARM’s architecture vs. x86-64.
The architecture is basically irrelevant. ARM architecture devices were traditionally designed to prioritize low power consumption over performance whereas x86-64 devices the opposite, but that isn't a characteristic of the ISA, it's just the design considerations of the target market.
And that distinction is disappearing now that everything is moving toward high core counts where the name of the game is performance per watt, because that's how you get more performance into the same power envelope. Epyc 7702 has a 200W TDP but that's what allows it to have 64 cores; it's only ~3W/core.
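For concreteness, the two bits of arithmetic behind that (the 20% Zen 3 uplift is an assumption applied to Zen 2 numbers, not a benchmark result):

```python
# Per-core power budget of the Epyc 7702 mentioned above:
print(200 / 64)            # ~3.1 W per core

# Hypothetical Zen 3 mobile estimate: Zen 2 Ryzen 7 4800U scaled by +20%.
zen2_score = 1.00          # normalize the 4800U to 1.0
print(zen2_score * 1.20)   # 1.2 - the "already there" claim rests on this scaling
```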
> Apple has scaled A series production for more than a decade and nobody has caught them yet in iPhone/iPad performance.
Ryzen 7 4800U has eight large cores vs 4 large plus 4 small in the M1 and even with your (hypothetical) 20% uplift multicore is just about matching M1. Single core is nowhere near as good.
"Architecture is basically irrelevant" - not the biggest factor, perhaps, but not irrelevant: x64 still has to support all those legacy modes and has a more complex front end.
You're working very hard to try to deny that Apple has passed AMD and Intel in this bit of the market. We'll have to see what happens at higher TDPs but they clearly have the architecture and process access to do very well.
> Ryzen 7 4800U has eight large cores vs 4 large plus 4 small in the M1 and even with your (hypothetical) 20% uplift multicore is just about matching M1.
We're talking about performance per watt. The little cores aren't a disadvantage there -- that's what they're designed for. They use less power than the big cores, allowing the big cores to consume more than half of the power budget and run at higher clocks, but the little cores still exist and do work at high power efficiency. It would actually be a credit to AMD to reach similar efficiency with entirely big cores and on an older process.
> Single core is nowhere near as good.
Geekbench shows Zen 3 as >25% faster than Zen 2 for single thread. Basically everything else shows it as ~20% faster. Geekbench is ridiculous.
> 'Architecture is basically irrelevant' not the biggest factor but not irrelevant - x64 still has to support all those legacy modes and has more complex front end.
This is the same argument people were making twenty years ago about why RISC architectures would overtake x86. They didn't. The transistors dedicated to those aspects of instruction decoding are a smaller percentage of the die today than they were in those days.
> No idea what Qualcomm has to do with this.
The claim was made that Apple has kept ahead of Qualcomm. But Intel and AMD have kept ahead of Qualcomm too, so that isn't saying much.
> You're working very hard to try to deny that Apple has passed AMD and Intel in this bit of the market.
People are working very hard to try to assert that Apple has passed AMD and Intel in this bit of the market. We still don't have any decent benchmarks to know one way or the other.
Half the reason I'm expecting this to be over-hyped is that we keep getting synthetic Geekbench results and not real results from real benchmarks of applications people actually use, which you would think Apple would be touting left and right if they were favorable.
We'll find out soon enough how things stand but just to point out that your first comment on small vs large cores really doesn't work - the benchmarks being quoted are absolute performance not performance per watt benchmarks. Small cores are more power efficient but they do less in a given period of time and hence benchmark lower.
AMD should easily beat Apple in graphics, all they have to do is switch to the latest Navi/RDNA2 microarchitecture. They are collaborating with Samsung on bringing Radeon into mobile devices, surely that will translate into efficiency improvements for laptops too.
> ARM’s architecture vs. x86-64
x86 will always need more power spent on instruction decode, sure, but it's not a huge amount.
Perhaps you haven't read the Anandtech article? [1]
> Intel has stagnated itself out of the market, and has lost a major customer today. AMD has shown lots of progress lately, however it’ll be incredibly hard to catch up to Apple’s power efficiency. If Apple’s performance trajectory continues at this pace, the x86 performance crown might never be regained.
I suspect the opposite. It is possible that Apple has decided that 640 KB is enough for everyone — that the performance level of a tablet is what most people need from a computer. Which is not entirely untrue, as a decent PC from 10 years ago is still suitable for most common tasks today if you are not into gaming, or virtual machines building other virtual machines. Most people don't really use all their cores and gigabytes. Also, consumers got used to smartphone limitations, so a computer can now be presented as an overall "bigger and better" mobile device with a better keyboard, a bigger drive, infinite battery, etc.
If they wanted raw performance in general code, they would stay with what they already had. The switch means that their goal was different.
I guess we'll see hordes of fans defending that decision quite soon.
> I suspect the opposite. It is possible that Apple has decided that 640 KB is enough for everyone — that the performance level of a tablet is what most people need from a computer.
Perhaps you haven't been paying attention but this is Apple's third processor transition: 68K to PowerPC to Intel to ARM. Each time was to push the envelope of performance and to have a roadmap that wouldn't limit them in the future.
When the Power Mac G4 shipped, it was so fast compared to what was available at the time that it couldn't be exported to North Korea, China or Iran; it was classified as a type of weapon [1].
The fact the PowerMac G4 was too fast to export at the time was even part of Apple's advertising in the 90s [2].
It's always been part of Apple's DNA to stay on the leading edge, especially with performance.
Apple's strategy has never been to settle for good enough. If that were the case, they wouldn't have spent the last 10+ years designing their own processors and ASICs. Dell, HP, Acer, etc. just put commodity parts in a case and ship lowest-common-denominator hardware. It shouldn't be a surprise that the M1 MacBook Air blows these guys out of the water.
Anyone paying attention saw this coming a mile away.
I have a quad-core 3.4 GHz Intel iMac and it's pretty clear the MacBook Pro with the M1 is going to be noticeably faster for some, if not all, of the common things I do as a web developer.
We know the M2 and the M3 are in development; I suspect 2021 will really be the year of shock and awe when the desktops start shipping.
There seems to be no evidence that Intel will be able to keep up with Apple. The early Geekbench results show the M1 laptops beating even the high-end Intel Mac ones. And that's with their most thermally constrained chip.
Apple will be releasing something like a M1X next, which will probably have way more cores and some other differences. But this M1 is incredibly impressive for this class of device. Intel has nothing near it to compete in this space.
The bigger question is how well does Apple keep up with AMD and Nvidia for GPUs and will they allow discrete GPUs.
Indeed, but given they are on TSMC 5nm and the apparent strength of the architecture and their team I think most will be inclined to give them the benefit of the doubt for the moment.
Actually the biggest worry might be the economics - given their low volumes at the highest end (Mac Pro etc.), how do they have the volumes to justify investing in building these CPUs?
I suspect the plan is to redefine computing with applications that integrate GPU (aka massively parallel vector math), plain old Intel-style integer and floating point, and some form of ML acceleration.
So multiple superfast cores are less important for - say - audio/video if much of the processing is being handled by the GPU, or even by the ML system.
This is a difference in kind not a difference in speed, because plain old x86/64 etc isn't optimal for this.
It's a little like the next level of the multimedia PC of the mid-90s. Instead of playing video and sound the goal is to create new kinds of smart immersive experiences.
Nvidia and AMD are kinda sorta playing around the edges of the same space, but I think Apple is going to try to own it. And it's a conscious long-term goal, while the competition is still thinking of specific hardware steppings and isn't quite putting the pieces together.
Good point. Apple dominates a unique workload mix brought on by the convergence of mobile and portable computing. They can benchmark this workload mix through very different system designs.
Probably nothing to stop them running linux on M series chips. I'd be a bit surprised actually - suspect we'll see something like a 32 Core CPU which will go into the higher end machines (maybe 2 in the Mac Pros).
The point of a computer as a workstation is it goes vroom. Computer that does not go vroom will not be effective for use cases where computer has to go vroom. It doesn't matter if battery life is longer or case is thinner. That won't decrease compile times or improve render performance.
> The point of a computer as a workstation is it goes vroom.
The M1 is not currently in any workstation class computer.
It is in a budget desktop computer, a throw-it-in-your-bag travel computer, and a low-end laptop.
When an M series chip can't perform in a workstation class computer, then your argument will be valid. But you're trying to compare a VW bug with a Porsche because they look similar.
The "low-end laptop" starts at $1300, is labeled a Macbook Pro, and their marketing material states:
"The 8-core CPU, when paired with the MacBook Pro’s active cooling system, is up to 2.8x faster than the previous generation, delivering game-changing performance when compiling code, transcoding video, editing high-resolution photos, and more"
> It is in a budget desktop computer, a throw-it-in-your-bag travel computer, and a low-end laptop.
I took "budget desktop computer" to be the Mac Mini, "throw-it-in-your-bag travel computer" to be the Macbook Pro, and "a low-end laptop" to be the Macbook Air.
But I agree - the 13" is billed as a workstation and used as such by a huge portion of the tech industry, to say nothing of any others.
None of those are traditional Mac workstation workloads. No mention of rendering audio/video projects, for example. These are not the workloads Apple mentions when it wants to emphasize industry-leading power. (I mean, really, color grading?)
This MBP13 is a replacement for the previous MBP13; but the previous MBP13 was not a workstation either. It was a slightly-less-thermally-constrained thin-and-light. It existed almost entirely to be “the Air, but with cooling bolted on until it achieves the performance Intel originally promised us we could achieve in the Air’s thermal envelope.”
Note that, now that Apple are mostly free of that thermal constraint, the MBA and MBP13 are near-identical. Very likely the MBP13 is going away, and this release was just to satisfy corporate-leasing upgrade paths.
"workstation class" is a made up marketing word. Previous generation macbooks were all used for various workloads and absolutely used as portable workstations. Youre moving the goalposts.
Ah but according to the Official Category Consortium you’ve just eliminated several products[1] which would presumably be included if the “mobile workstation” moniker was designated based on workload capabilities.
[1]: including the 16” MBP, but certainly not limited to it
Laptop used to be a form factor (it fits on your lap), while very light, very small laptops were in the notebook and subnotebook (or ultraportable) category.
I usually think of "subnotebook" as implying the keyboard is undersized; a thin and light machine that is still wide enough for a standard keyboard layout is something else.
I think we should bring back the term "luggable" for those mobile workstations and gaming notebooks that are hot, heavy, have 2hr or less of battery life.
Docked laptop is. With a benefit that if you want to work on the road you can take it without having to think about replicating your setup and copying data over.
Then what is a macbook for? Expensive web browsing? I've been told for a long time that macbooks are for work. Programmers all over use them, surely. Suddenly now none of that applies? To get proper performance you have to buy the mac desktop for triple the price?
Probably because the announced hardware is clearly entry level. The only model line that gets replaced is the MacBook Air, which has been, frankly, cheap-ish and underpowered for a long time.
So you have a platform that is (currently) embodied by entry level systems that appear to be noticeably faster than their predecessors. Apple has said publicly that they plan to finish this transition in 2 years. So more models are coming - and they'll have to be more performant again.
It seems pretty clear that the play here runs: "Look, here's our entry level. It's better than anyone else's entry level and could be comparable to mid-level from anyone else. After taking crap for being underpowered while waiting for Intel to deliver, we can now say that this is the new bar for performance at the entry level in these price brackets."
It would be interesting to see the comparison to a Ryzen 7 PRO 4750U; you can find that in a ThinkPad P14s for $60 less than the cheapest MacBook Air (same amount of RAM and the same SSD size), so that seems like a fair comparison.
Assuming that geekbench is reflective of actual perf (I'm not yet convinced) there is also the GPU, and the fact that AMD is sitting on a 20% IPC uplift and is still on 7nm.
So if they release a newer U part in the next few months it will likely best this device even on 7nm. An AMD part with a edram probably wouldn't hurt either.
It seems to me that Apple hasn't proven anything yet, only that they are in the game. Let's revisit this conversation in a few years to see if they made the right decision from a technical rather than business perspective.
The business perspective seems clear, they have likely saved considerably on the processor vs paying a ransom to intel.
edit: for the downvoters, google Cezanne, because it's likely due in very early 2021 and some of the parts are Zen 3. So Apple has maybe 3 months before another set of 10-15W AMD parts drop.
That'll mean an 8c/16t will catch up to a 4+4 core
Apple will have a 8+4 core out soon, and likely much larger after that. Since they're so much more power efficient, they can utilize cores better at any TDP.
Sad to see downvotes on this: it's like there's a set of people hellbent on echoing marketing claims, in ignorance of (what I formerly perceived as basic) chip physics - first one to the next process gets to declare a 20-30% bump, and in the age of TSMC and contract fabs, that's no longer an _actual_ differentiation, the way it was in the 90s.
I'm as sceptical of Apple's marketing claims as anyone but if you're comparing actual performance of laptops that you will be able to buy next week against hypothetical performance of a cpu that may or may not be available next year (or the year after) then the first has a lot more credibility.
PS last I checked AMD was not moving Zen to 5nm until 2022 - so maybe a year plus wait is a differentiation.
Regardless, this competition is great for us consumers! I’m excited to see ARM finally hit its stride and take on the x64 monopoly in general purpose computing.
You're making completely the wrong comparison. On the left you have Geekbench 5 scores for the A12Z in the Apple DTK, and on the right you have Geekbench 4 scores for the Ryzen.
The M1 has leaked scores of ~1700 single core and ~7500 multicore on Geekbench 5, versus 1200 and 6000 for the Ryzen 4750U.
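Taking those leaked numbers at face value (a big caveat, as the rest of the thread notes), the implied gaps are:

```python
m1_single, m1_multi       = 1700, 7500   # leaked M1 Geekbench 5 scores quoted above
ryzen_single, ryzen_multi = 1200, 6000   # Ryzen 7 4750U figures from the comment
print(f"single-core: {m1_single / ryzen_single:.2f}x")  # ~1.42x
print(f"multi-core:  {m1_multi / ryzen_multi:.2f}x")    # 1.25x
```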
How can you tell it's only 6k for the Ryzen 4750U on the GB5 tests? There are so many pages and pages of tests, I can't sift through all of that to confirm.
> It seems pretty clear that the play here runs "Look here's our entry level
Not quite. The play here is, "Look, here is our 98% of sales laptop". That it's entry level is only an incidental factor. 98% of sales volume is at this price point, and so they get the maximum bang for their buck, the maximum impact, by attacking that one first. Not just because it's the slowest or entriest.
Had they started at the fastest possible one, sure it would have grabbed some great headlines. But wouldn't have had the same sales impact. (And it's icing that the slowest part is probably easiest to engineer.)
> Probably because the announced hardware is clearly entry level.
Yes, but why compare it to an entry card that was released 4 years ago instead of an entry card that's been released in the past 12 months? When the 1050 Ti was released, Donald Trump was trailing Hillary Clinton in the polls. Meanwhile, the 1650 (released April 2020, retails for ~$150) is significantly faster than the 1050 Ti (released October 2016, retailed for $140, but can't be purchased new anymore).
The 1050 is still a desktop class card. The M1 is in tiny notebooks and the Mac Mini, none of which even have the space or thermals to house such a card.
NVIDIA doesn't make small GPUs very often. The 1050 uses a die that's 135mm^2, and the smallest GeForce die they've introduced since then seems to be 200mm^2. That 135mm^2 may be old, but it's still current within NVIDIA's product lineup, having been rebranded/re-released earlier this year.
Here are all the differences between the M1 Air and M1 Pro, from [1]:
Brighter screen (500 nits vs. 400)
Bigger battery (58 watt-hours vs. 50)
Fan (increasing thermal headroom)
Better speakers and microphones
Touch Bar
0.2 pounds of weight (3.0 pounds vs. 2.8 — not much)
The SoC is the same (although the entry-level Air gets 7-core GPUs, that’s probably chip binning). The screen is the same (Retina, P3 color gamut), both have 2 USB-C ports, both have a fingerprint reader.
Competitors have 4K displays on their thin-and-lights. They also don't use macOS, which has problems running at resolutions other than native or pixel-doubled. The suggested scalings are 1680 by 1050, 1440 by 900, and 1024 by 640. None of those are even fractions of the native resolution, so the interface looks blurry and shimmers. Also, all the suggested scalings are quite small, so there isn't much screen real estate.
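A rough illustration of the "not even fractions" point, assuming the commonly described HiDPI behavior (scaled modes are rendered at 2x the "looks like" size, then resampled to the panel) and the 2560-pixel-wide panel of the 13" M1 machines - background assumptions on my part, not figures from this thread:

```python
panel_w = 2560                      # assumed native panel width (M1 Air / 13" Pro)
for looks_like_w in (1680, 1440, 1024):
    backing_w = looks_like_w * 2    # HiDPI backing store rendered at 2x
    print(looks_like_w, backing_w / panel_w)
# 1680 -> 1.3125, 1440 -> 1.125, 1024 -> 0.8: none integral, so the final
# image is resampled at a non-integer ratio, hence the slight blur/shimmer.
```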
No it's not. It was designed to be extremely light with many compromises to make that happen. Yeah it got outdated, but that doesn't mean it was entry level.
You think a fair comparison is a passively heated MacBook Air compared to a top spec PC cpu?
It absolutely destroys its competition in performance (low-end light notebooks) - why would you think comparing it to a 3090 RTX is anything like fair?
Like calling a Dell XPS 13 a piece of shit because it can't keep up with a Threadripper/Titan desktop PC.
The fact you’re even in a place (where the benchmarks destroy in class competition) to complain about how it’s not being put up against top spec pc hardware is testament to how powerful it is.
> Woot? So you are buying a new Fiat Punto and compare it to the latest spec of a Koenigsegg? What are you even doing?
Under any other circumstance I'd agree with you. I think we can agree this is not the assertion Apple are trying to push in their marketing of the M1.
If Fiat are going to claim their entry-level Punto is, in real-world terms, faster than 98% of all cars sold in the last year, they're inviting a lot of (fair) comparison.
The 1050 is a budget chip from 3 generations ago. Even in the GPU space, Nvidia are claiming that their mid-range GPU (the RTX 3080) outpaces their previous-generation top-end GPU (the RTX 2080 Ti).
But the 1050 is a specialized GPU vs the general-purpose M1, and besides, the 1050 is from only three years ago. So what do you think the relationship between an XTX6080 and an M4 will look like three years in the future?
I’d like to see comparisons of Tensorflow-gpu operations. Kind of like how Apple used to compare Photoshop filter or Final Cut performance across computers.
Is there a Tesla that lasted 20 years and after 500 thousand Kms is still functioning with little or no maintenance?
Switching to Apple has a cost, switching to Apple with Apple silicon has an even higher cost.
It all depends what you use your computer for. If you buy a Tesla you probably don't depend on your car; people buying entry-level hardware are people who don't need something fancy, they need a tool, and good enough is enough.
To reverse your analogy: at the same price, I'd take a computer that I can upgrade and actually own over an Apple.
Do you see a lot of laptops with threadripper CPUs and 3090 RTX graphics cards?
I sure don't.
"Best-in-class" isn't a weasel word – it's recognition that no, this $1000 laptop is not going to be the fastest computer of all time. Just faster than other products similar to it.
> If you want to convince me about the graphics/ML capabilities, compare it to a 3090 RTX running Vulkan/CUDA.
You can't really compare a laptop SoC with a dedicated graphics card like the 3090 RTX. One is running off a battery in a laptop and the other is plugged into a power source with dedicated cooling.
Yes, Apple is claiming this is a better solution but that's mostly for laptops. While they did release a Mac mini, they still haven't released an Apple SoC Mac Pro or iMac. Those would be fair game for such comparison.
> If you are going to convince me, show me how the CPU stacks up to a top-of-the-line Ryzen/Threadripper and run Cinebench. If you want to convince me about the graphics/ML capabilities, compare it to a 3090 RTX running Vulkan/CUDA.
These are chips for their low end, entry level products. Why on earth would you think that would be an appropriate comparison? That's absolutely absurd. They aren't putting these in iMacs or Mac Pros.
They aren't putting these in their high end MacBook Pros (they put it in their low end 13" pro, which has always been only a minor step up from the Air - the higher end 13" pro was more powerful, which they haven't replaced yet.)
Apple's "faster than 98% of laptops sold" line is obviously nonsense, like I'm sure it's technically correct - most laptops sold are cheap, low end things - but that's not a particularly huge achievement. I'm not sure why this line in particular is inviting people to make such ridiculous statements in response though.
> If you want to convince me about the graphics/ML capabilities, compare it to a 3090 RTX running Vulkan/CUDA.
You're seriously suggesting we should compare the integrated graphics in a fanless bottom-of-their-line laptop that starts at $999 to a $1,500 graphics card?
(Let's be brutally clear - that graphics card can do precisely squat on its own. No CPU, no RAM, no PSU, no display, no keyboard, etc. etc. etc.)
Right answer here. I am pretty excited about a passively cooled, power-efficient device that performs on par with, or better than, a 3-year-old, super-heavy gamer laptop like an Alienware r2/3. Those machines are still capable of running relatively impressive PC VR, way ahead of what we see today in the standalone Oculus Quest 2. Stick one (or preferably two) of these things in a VR headset, please, Apple.
Hey, take a breather if it helps. No one here is responsible for convincing you of anything. But you sound a little upset.
If it helps, take into consideration performance:power ratios. None of your scenarios are fair otherwise, and I personally haven't seen anyone here claim the M1 will outperform everything. Hence, "in its class."
Maybe you saw some errant comments on PCMag.com claiming the M1 was the be all end all of computing?
> So its faster then a 3 generations old budget card, that doesn't run nVidia optimized drivers
This is a key point. Nvidia GPUs have not been supported in macOS since Mojave, so this seems like an apples-oranges comparison. Unless the benchmark for M1 was also run on Mojave (unlikely), then there's 3 years worth of software optimization potentially unaccounted for.
That said still an impressive showing given the TDP and the fact it's an integrated GPU vs. a dedicated GPU. It seems to hint that with enough GPU cores and a better cooling solution, it's not unreasonable to see these replacing the AMD Radeon 5500M/5600M next year in the MBP 16 and iMac lineups.
The 1050 Ti was the premium discrete GPU option in the XPS 15 in 2018. That only got upgraded to the 1650 last year. Maybe that's as much a ding on Dell as anything else, but either way, lots of us are still rocking those laptops and they're hardly "below entry level".
Nah. I have this laptop too, and at the time the 1050 Ti was considered underwhelming, but "well, this laptop isn't for gaming, it's a business laptop". The contemporary Surface Book 2 had a 1060 with almost double the performance, and people were kind of pissed.
The comparison could be against either an MX250/MX350 if they wanted to compare to Nvidia (however, that's not integrated in a SoC-like manner), or the AMD Vega on Renoir, or Intel graphics on Ice/Tiger Lake (I honestly lost track of what Intel calls their iGPUs these days; they went back and forth between confusing naming conventions, but it's the CPU gen/model that's important anyway).
The memory bandwidth, number of shader units, TMUs, fill rate, etc. are different (lower) on the MX350. While they are surely the same architecture, the MX350 is a lower tier than the 1050.
And in this case, that would make for a larger difference between Apple's GPU and the Nvidia one. But then again, the M1 is a mobile SoC and the 1050 is a desktop GPU, so we shouldn't even be comparing them to begin with.
>Because M1 is their entry-level, laptop class offering.
It makes zero sense to compare them to high-end desktop CPUs and GPUs.
Sure, it makes zero sense to compare them to high-end desktops or laptops, since the M1 is used in devices lacking the main attribute of a personal computer - the ability to control it and install an OS of your choice.
Therefore it falls into another category, like phones, iPads and other toys, just with an attached keyboard.
I’m sure I’ll get downvoted too, probably more than you, probably even attracting sympathetic upvotes for your own comment but... I really cannot express how much I don’t care. I’ve booted multiple OSes for learning, for fun, and for software support when the software wasn’t available for my preferred OS. But I use my computer for work (and some web browsing). Other than games, macOS has all the software I could think to need. I don’t like rebooting anyway. I can’t think of a scenario where I would ever need a device with another OS that wouldn’t be supplied by an employer. Some freedoms are sacrosanct (and if it ever comes to pass that macOS absolutely prevents installing unblessed end-user apps which deal only with public and supported APIs, that would be my deal breaker), but some freedoms seem so abstract and theoretical in their impact that I just can’t give them weight beyond a thought exercise. “You can’t install any other OS besides this one we offer that meets all your needs” is just... hardly even a thing I would even give much thought.
Half the battery on a higher-powered CPU (the 4800H is 45W and the 4900HS is 35W, if I recall correctly), plus a discrete GPU (on top of the iGPU), active cooling vs. passive cooling, plus a 120Hz display on the G14 (and Apple's is probably 60Hz).
So a more power-hungry notebooks battery lasts half as long, but still in the double digits. Not quite a surprise there.
If it will only be available in phones then it should clearly be compared to phone SoCs. If it will be in laptops too it should be compared to laptops/PC too of course.
I don’t know of any laptops with an RTX 3090 GPU or a Threadripper, let alone in a MacBook Air thin form factor running on 10 watts with 15 hour battery life.
Even in phones, even those phones offered by a single vendor, there’s a pretty wide performance range targeting a generally accepted (if evolving) set of market/form factor/use case categories. “Phone” is so broad a category as to be meaningless for comparison. “Laptop” as well. It’s just as silly to compare an ultraportable to a 17” 7lb gaming laptop as to compare an upscale ultraportable to a budget Chromebook.
The M1 is clearly the low-end Apple Silicon chip, given the computers that Apple is putting it in: the MacBook Air, the Mac mini, and the two-port MBP. Why in the world should we demand that this chip be able to blow the doors off a "top of the line Ryzen"? This is a mobile CPU that's holding it own against CPUs like that in single-core performance. "Yeah, but I bet a 64-core Ryzen would just blow it away in multi-core, so who is Apple kidding? Pshaw." Really?
How is it relevant how it stacks upto a thread ripper and 3090 rtx? You are comparing an ultra book with a large high performance workstation pc then. That makes absolutely no sense.
I don’t know why this particular comparison is noteworthy, but this is not a top-of-the-line CPU or GPU, is not intended to be, and those comparisons would be meaningless. It’s a 10W part for lower-end devices.
The point seems pretty obviously to be showing which mainstream GPU it is closest to in performance. Especially for the first benchmark, the figures are suspiciously close.
It seems amazing that Apple can make its own desktop chips now - imagine if the original Apple had had an Apple chip instead of a 6502! OTOH, everyone used to design their own CPUs, like Sun and Acorn. Wozniak too.
Come on man, a 1050TI was a reasonably powerful graphics card. It's amazing that we now have an integrated graphics processor in a low power chip that can match its performance.
I'd like to see someone run our framework benchmarks project [1] on M1 versus something like an Asus G14 (Ryzen 4900HS). It's not graphics, but the ability to run web frameworks is of interest to me.
So when the Apple M1 goes toe to toe with top CPUs you all scream "apples to oranges", but when it comes to an INTEGRATED GPU ON 10 WATTS OF POWER it suddenly makes total sense for you to compare it to 350-watt dedicated products?
Like.. what the hell were you thinking typing this comment?
And I'm not scared, I'm sad. I would love to have 3 or more open competitive desktop/laptop platforms (Windows, Linux and MacOS) but my view is that the release of the M1 makes that not very likely.
The M1 is almost identical to the A14, a great chip that was designed for mobile devices. It is designed to run one application at a time, not do much compute or graphics, work with a fixed amount of memory, and have no hardware attachments. It does that brilliantly.
The problem is that a chip designed for a computer has different priorities. The need for low power consumption is less important. Even in a laptop, beyond 10 hours, I'd rather take performance than more battery time. In a computer you want more threads, expandable memory, PCIe Gen 4.0, discrete graphics, GPU compute, raytracing, support for fast networking (the Mac Mini used to have 10Gig LAN), multiple display support, expandable storage, and in a stationary computer you preferably want to be able to upgrade parts.
If we look at the M1 and compare it to the A14, we can see that Apple has made almost no modifications to the architecture to make it fit in a computer rather than a mobile device. They didn't add more cores, they didn't add support for PCI, GPUs, Ethernet, or anything else that makes it more like a computer CPU.
Maybe this is beside the point, but to me it was very telling that they didn't update the physical design of any of the 3 computers. The thermal envelope of the M1 should make it possible to create some much sexier designs than the old ones we were given. It tells me a lot about how Apple allocates its resources.
This, paired with Apple's messaging, tells me that Apple is not interested in making Macs that are competitive with PCs on the things that make you choose a computer over a mobile device. It feels to me like Apple has moved the Mac to Apple Silicon because it's convenient for them to have a single platform, rather than it being the right design for the products.
Apple has neglected the Mac, and especially the pro segment, for a long time. There was always a chance they would get their act together and build reasonably priced, top-specced machines that could compete with Linux and Windows boxes, but with this move, I can't see them putting the resources behind their silicon to compete with AMD, Nvidia, and Intel on performance and features.
So, no I'm not scared, I'm sad that Apple has taken themselves out of the running.
If you used to run a Mac, then you'd know that they didn't redesign the outside of any Macs when they switched to Intel. It signals continuity of the platform. And besides, they just redesigned the Air two years ago. What were you expecting?
And yes, they did add more cores. There's PCIe 4. There's support for Ethernet. There's also Thunderbolt and HDMI.
You're extrapolating the three lowest-end Macs up through the entire product line. They aren't going to keep the higher-end Intel Mac mini available if they don't think it (and its specs) serves a purpose. They aren't going to invest everything they did in the new Mac Pro just to drop it after a year.
And if the performance of Apple's chips over the last decade makes you think they won't put the resources behind their silicon to compete, I don't know what to tell you.
This doesn't exclude you from being a Mac hater. In fact, judging from the comments you read on every post on HN that is remotely related to Apple, that seems more likely than the typical blind hatred you see on the internet broadly.
> And I'm not scared, I'm sad. I would love to have 3 or more open competitive desktop/laptop platforms (Windows, Linux and MacOS) but my view is that the release of the M1 makes that not very likely.
Based on what, exactly? Very few people have their hands on an M1 Mac and there haven't been any reviews of the systems from third parties yet.
> If we look at the M1 and compare it to the A14, we can see that Apple has made almost no modifications to the architecture to make it fit in a computer rather than a mobile device. They didn't add more cores, they didn't add support for PCI, GPUs, Ethernet, or anything else that makes it more like a computer CPU.
You don't know what they did or didn't do to adapt the M1 for computers. There has to be some degree of PCI-E support since they are using thunderbolt ports. There is ethernet on the mac mini with the M1 chip. It remains to be seen whether or not supporting discrete GPUs will be a thing and that'll largely depend on how their GPU scales. The computers that use discrete GPUs in macs right now haven't released an Apple silicon variant yet.
> Maybe this is beside the point, but to me it was very telling that they didn't update the physical design of any of the 3 computers. The thermal envelope of the M1 should make it possible to create some much sexier designs than the old ones we were given. It tells me a lot about how Apple allocates its resources.
I think it is beside the point. There's no reason to delay new chips just for the sake of being in sync with a new redesign of the chassis. It may or may not happen next year with the iMac or Macbook Pro 16, we don't know that yet.
> Apple has neglected the Mac, and especially the pro segment, for a long time. There was always a chance they would get their act together and build reasonably priced, top-specced machines that could compete with Linux and Windows boxes, but with this move, I can't see them putting the resources behind their silicon to compete with AMD, Nvidia, and Intel on performance and features.
They only released their entry level computers. You have nothing to base this argument on. There have only been leaked benchmarks all of which seem favorable towards Apple. We won't know what it means until people start getting them and testing them.
You don't know what they did or didn't do to adapt the M1 for computers. There has to be some degree of PCI-E support since they are using thunderbolt ports. There is ethernet on the mac mini with the M1 chip.
Also, from the Docker issue that was posted here yesterday, we know that the A14 does not have support for virtualization and M1 does (I presume the ARM equivalent of VT-X, etc.).
You completely sidestep the fact that benchmarks show the Air outperforming all the other laptops Apple sells today with Intel chips. How can you not see that as an achievement?
None of the computers getting the M1 today previously let you put in PCI cards or do any of the stuff you claim is needed other than expand the memory.
The claim that the A14 is only made to do one task at a time is just not true. iOS itself runs multiple tasks, and writing software utilizing multiple threads is something you do all the time on iOS. I have written more multithreaded code on iOS than on a desktop. Why? Because using Grand Central Dispatch is a must when dealing with all sorts of low-latency web APIs you access. I don't have that issue when building desktop apps.
These Apple laptops have always had few ports. Don’t blame M1 for that.
I'm genuinely puzzled as to what Apple would have to have done to convince you with the M1, given the machines it was designed for. It's an 8-core chip that runs in a fanless laptop and outperforms all of the competition in that form factor.
We haven't seen what the desktop versions with much higher TDPs and discrete graphics will look like yet but you've written them off?
What was “old” about the previous MacBook Air? It was Ice Lake. Also, the new one is even more “thermally constrained” because it doesn’t have a fan. What’s your point?
It seems odd to be so cynical about a drastic performance increase yoy and a breakthrough in performance per watt. I bet if it wasn’t Apple you would be a little more excited.
I think that’s the important point to consider. Nvidia is persona non grata with Apple, and that rift happened way, way before Metal. These drivers have no chance of performing representatively.
Try changing it to compare the M1 against a 2080. This is only one generation behind yes? If I could just paste a pic in I would, but I will let you do the work yourself. It is similar. Amazing IMO, and I am no longer an apple fan.
Apple could nearly buy Nvidia with their cash in hand alone, is it that surprising? I'd be more surprised if they fucked it up completely.
That they have actually chosen to do it is impressive, although highly worrying to me from a software freedom perspective. What are the odds of Apple releasing drivers for any of their hardware?
Exactly, but I am not surprised. There has always been this deep-seated suspicion and denial of Apple's achievements by Apple haters. I remember debating with someone claiming the iPhone was not an innovation by Apple because they had not made the touch sensor. It was like “they just got lucky”.
Considering that there’s no difference between the 1050ti in the OP and the 5500M that PragmaticPulp posted I’m inclined to say this test sucks. Userbenchmark.com shows there should be a substantial (38%) improvement between those two. Take these early results with a HUGE grain of salt because they smell fishy.
1: Userbenchmark.com is terrible and nobody should use it for anything. At least their CPU side of things is hopelessly bought & paid for by Intel (and even within the Intel lineup they give terrible & wrong advice), maybe the GPU side is better but I wouldn't count on it.
2: The real question there isn't "why is the 1050 Ti not faster?" it's "how did you run a 1050 Ti on MacOS in the first place, since Nvidia doesn't make MacOS drivers anymore and hasn't for a long time?"
> Userbenchmark.com is terrible and nobody should use it for anything. At least their CPU side of things is hopelessly bought & paid for by Intel (and even within the Intel lineup they give terrible & wrong advice), maybe the GPU side is better but I wouldn't count on it.
To provide some elaboration on this: Their overall CPU score used to be 30% single, 60% quad, 10% multicore. Last year around the launch of zen 2 they gave it an update. Which makes sense; the increasing ability of programs to actually scale beyond four cores means that multicore should get more importance. And so they changed the influence of multicore numbers from 10% to... 2%. Not only was it a blatant and ridiculous move to hurt the scores of AMD chips, you got results like this, an i3 beating an i9 https://cdn.mos.cms.futurecdn.net/jDJP8prZywSyLPesLtrak4-970...
And there was some suspicious dropping of zen 3 scores a week ago, too, it looks like.
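To make the weighting change concrete, here's a toy calculation with made-up sub-scores. None of these numbers are real UserBenchmark data, and the exact post-change single/quad weights are my assumption; only the 10% to 2% multicore drop comes from the comment above.

    # Hypothetical sub-scores, just to show how demoting multicore from 10% to 2%
    # lets a modest quad-core "beat" a chip with far more cores.
    def composite(scores, weights):
        return sum(s * w for s, w in zip(scores, weights))

    old_weights = (0.30, 0.60, 0.10)   # 30% single / 60% quad / 10% multicore
    new_weights = (0.40, 0.58, 0.02)   # multicore cut to 2% (other weights assumed)

    sixteen_core = (100, 100, 300)     # (single, quad, multi) -- strong multicore chip
    quad_core    = (105, 105, 110)     # slightly higher clocks, few cores

    for name, chip in [("16-core", sixteen_core), ("quad-core", quad_core)]:
        print(name, "old:", composite(chip, old_weights),
                    "new:", composite(chip, new_weights))
    # old weighting: 16-core wins (120.0 vs 105.5)
    # new weighting: quad-core edges ahead (105.1 vs 104.0), i.e. an "i3 beats an i9"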
I don’t see that as evidence of blatant bias for Intel. The site is just aimed at helping the average consumer pick out a part, and I think the weighting makes sense.
Most applications can only make use of a few CPU-heavy threads at a time, and these systems with with 18 cores will not make any difference for the average user. In fact, the 18 core behemoth might actually feel slower for regular desktop usage since it’s clocked lower.
If you are a pro with a CPU-heavy workflow that scales well with more threads, then you probably don’t need some consumer benchmark website to tell you that you need a CPU with more cores.
But lots of things do use more than 4 cores, with games especially growing in core use over time. Even more so if you want to stream to your friends or have browsers and such open in the background. To suddenly set that to almost zero weight, when it was already a pretty low fraction, right when zen 2 came out, is clear bias.
> In fact, the 18 core behemoth might actually feel slower for regular desktop usage since it’s clocked lower.
The number of processes running on a Windows OS reached 'ludicrous speed' many years ago. Most of these are invisible to the user, doing things like telemetry, hardware interaction, and low-level and mid-level OS services.
A quick inspection of the details tab in my task manager shows around 200 processes, only half of which are browser.
And starting a web browser with one page results in around half a dozen processes
Regarding 2. I think that none of those benchmarks were run on MacOS. Their benchmark tool seems to be Windows-only https://www.userbenchmark.com/ (click on "free download" and the MacOS save dialogue will inform you that the file you are about to download is a Windows executable).
1. Today I learned something new. Still, can’t let great be the enemy of good. It may be imperfect but it’s the source I used. Do you have a better source I can replace it with?
2. That’s a good question and I don’t have an answer for that.
Sure, great is the enemy of good etc., but the allegation here and in other threads is that these benchmarks are bad. Or, worse, inherently and deliberately biased.
As for a better source, I don't know with the M1 being so new, but that's no reason to accept bad data, if this benchmark actually is as bad as others here are saying.
It's not the advertised 4-6x improvement in raw graphics, but at 10W vs the Xe Max's 25W for just the GPU, with the M1 getting 50% more fps, that's still 3.75x in perf/watt.
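For what it's worth, the arithmetic behind that 3.75x figure, using the numbers quoted in this thread rather than any measurements of my own:

    # Perf/watt ratio from the figures above: 10 W (M1 GPU) vs 25 W (Xe Max GPU
    # alone), with the M1 delivering roughly 50% more fps.
    m1_watts, xe_watts = 10, 25
    fps_ratio = 1.5                              # M1 renders ~50% more frames

    perf_per_watt_ratio = fps_ratio * (xe_watts / m1_watts)
    print(perf_per_watt_ratio)                   # 3.75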
Also, let's really not forget that this is an HBM-type architecture. As a complete package it seems awesome, but we can argue for ages about the performance of GPU cores with no end result.
You'd think so, but seems most people[1] think it's just LPDDR wired to the SoC using the standard protocol, inside the same package. (Though it might use > 1 channel I guess?)
Which would be the same width as dual channel DDR4 – 8x16 == 2x64 :)
Also it's completely ridiculous that some people think it might be HBM. The presentation slide (shown in both anandtech articles) very obviously shows DDR style chips, with their own plastic packaging. That is not how HBM looks :) Also HBM would noticeably raise the price.
Unfortunately it’s the GPU that causes the issue, not the CPU.
Whatever bus video runs over is wired through the dedicated GPU, so integrated is not an option with external monitors connected. That by itself would be fine, except for whatever reason, driving monitors with mismatched resolutions maxes the memory clock on the 5300M and 5500M. This causes a persistent 20W power draw from the GPU, which results in a lot of heat and near-constant fan noise, even while idle. As there isn’t a monitor in the world that matches the resolution of the built-in screen, this means the clocks are always maxed (unless you close the laptop and run in clamshell mode).
The 5600M uses HBM2 memory and doesn’t suffer from the issue, but a £600 upgrade to work around the issue is lame, especially when I don’t actually need the GPU, you just can’t buy a 16” without one.
Disabling turbo boost does help a little, but it doesn’t come close to solving it.
My memory is hazy on this but I did come across an explanation for this behaviour. At mismatched resolutions or fractional scaling (and mismatched resolutions are effectively fractional scaling) macOS renders the entire display to a virtual canvas first. This effectively requires the dGPU.
Your best bet is to run external displays at the MBP's resolution, and because that is not possible/ideal you are left with the choice of running at 1080p/4K/5K. macOS no longer renders crisply at 1080p, so 3840x2160 is the last remaining widely available and affordable choice, while 5K is still very expensive.
Hardly anyone makes 5K displays - I have a pair of HP Z27q monitors made in late 2015 that are fantastic, but I had to get them used off eBay because HP and Dell both discontinued their 5K line (Dell did replace it with an 8K, but that was way out of my budget).
Part of the reason for 5K’s low popularity was its limited compatibility: they required 2xDP1.2 connections in MST mode. Apple’s single-cable 5K and 6K monitors both depend on Thunderbolt’s support for 2x DP streams to work. I’m not aware of them being PC-compatible monitors at native resolution yet.
I love 5K - but given a bandwidth boost I’d prefer 5K @ 120Hz instead of 8K @ 60Hz.
I am a bit curious to know why this specific problem has been appearing in various Nvidia lineups in the beginning of the decade, and is reappearing now.
Offscreen benchmarks just mean that they are run at a fixed resolution and not limited by vsync. These benchmarks are better for comparing one GPU to another. Onscreen can be artificially limited to 60fps and/or run at the native resolution of the device which can hugely skew the results (A laptop might show double the benchmark speed just because it has a garbage 1366x768 display).
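A toy illustration of that skew, with all numbers invented: a fast GPU capped by 60 Hz vsync on a high-resolution panel and a weak GPU driving a 1366x768 panel can post similar onscreen scores, while the fixed-resolution offscreen run shows the real gap.

    def onscreen_fps(raw_fps_at_native_res, vsync_hz=60):
        # Onscreen results can't exceed the display refresh rate.
        return min(raw_fps_at_native_res, vsync_hz)

    fast_gpu = {"offscreen_1080p": 240, "raw_at_native": 180}   # high-res laptop panel
    weak_gpu = {"offscreen_1080p": 70,  "raw_at_native": 110}   # cheap 1366x768 panel

    for name, gpu in [("fast GPU", fast_gpu), ("weak GPU", weak_gpu)]:
        print(name,
              "onscreen:", onscreen_fps(gpu["raw_at_native"]),     # both report 60
              "offscreen 1080p:", gpu["offscreen_1080p"])          # 240 vs 70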
This is significant because the NVIDIA 1050Ti, while way slower than the state of the art, is still the meaningful practical minimum discrete GPU. There’s no 2050 or 3050 - if you go about looking for the cheapest PC suitable for gaming, you might buy a 1050Ti. It is head and shoulders above the integrated GPUs (yes, even the AMD Vega ones), and comparable in power to last-gen (as of last week) consoles, the PS4 and Xbox One S.
This in turn means that every PC and console game created up until now, and probably for the next year or so, will comfortably run on a 1050Ti, even with compromises (“medium” settings, middling resolution like 1080p).
For the first generation of Apple's Mac GPU, running at 10W, this is a great result. This means, Apple's idiosyncrasies aside, there will be no technical problems in porting games to macOS. (Metal is less of a problem than you'd think - if you use Unreal, Unity, DirectX 11/12 or Vulkan for graphics, the work needed will be on the order of a few man-months, which is small change in game development budgets.)
Really, I find that surprising. I use a GTX 1070 and it's still capable of playing a decent number of recent games at 4K with the settings turned down. Seems like the GTX 1050 Ti should still cut it for some moderate 1080p gaming.
Seriously? I run a GTX 770 and get by fine with most games I want to play. I’ll upgrade eventually but don’t feel the need right now. Maybe CPU is the bottleneck for you?
I do have one question. How did they manage to get a 1050 Ti working with the Metal API? As far as I know there are no Nvidia drivers for macOS Big Sur or macOS Catalina.
Given the power draw difference, the performance is impressive. The version of the 1050 sold as a discrete GPU for laptops draws several times more power than the entire M1.
>The power consumption of the GeForce GTX 1050 is roughly on par with the old GTX 960M, which would mean around 40-50 Watts
In comparison, the mobile version of the 1080 drew between 150 and 200 Watts, with the lower clocked Max Q version getting the power draw down to 90 to 110 Watts.
It's definitely impressive performance for an ultrabook - my concern right now is external monitor support.
I use an eGPU for the sole purpose of running triple monitors. It's a docking station, and the whole setup works brilliantly with my 13" Macbook. Productive workstation setup at home, super nimble laptop on the go.
Apparently these M1 chips don't support eGPUs and don't even support two external monitors, let alone three. I know it's asked a lot, but what's "Pro" about that?
Completely negates any of the benefits of M1, which is a shame because I really wanted one.
edit: You know, if they don't resolve this then the final round of Intel Macbooks with the non-butterfly keyboards are going to be the holy grail of laptops to a lot of people for a long time, just like the pre-butterfly ones with the ports still are.
Multiple external monitors, eGPUs and bootcamp... that's a lot to lose, man.
The M1 MB Pro is only the replacement for the 2 port MB Pro. One should assume that there will be 4 port MB Pro coming too featuring a variant/successor of the M1. That probably would be supporting more screens, more memory and storage.
It brings a fan, which makes a huge difference if you have sustained loads. It also has a larger battery than the Air. Basically, it is the Air for those who need more sustained compute power. The 4 port version will certainly be much more capable, but also more expensive.
Pro apparently means it has an extra 100 nits and forced airflow to minimize processor throttling. It only has two serial ports and a small screen, so this was made for field use, though it does have support for a single 6K display when you get (or are confined to) home.
Might be better to go for a System76 or Dell if you can find all the apps you need for them.
Those are really good points and thank you for making them.
I actually bought one of the last generation of MacBook Pros so I can continue to build an x86 based server application. Maybe these machines will command a premium on the second hand market in due course?
The multiple monitor thing is weird. The Mac Mini supports more than one, and the one display it DOES support is the Pro Display XDR, which has more pixels than two 4K monitors.
The M1 Mac Mini supports one external display on the Thunderbolt port and a second one on the HDMI display. There are probably limitations on simultaneous resolution and refresh rates. This iteration of the M1 was not designed to handle large amounts of I/O. I would expect a more advanced version (M1X/M2) sometime in the next 6-12 months. That iteration will likely support 4 thunderbolt ports, multiple external monitors and larger RAM limits. That’s the one that Apple would use in iMacs and the other MacBook Pros.
I do wish that Apple would go ahead and release a 27” 5K display for the current machines. The 6K is way overkill and there are no good options for monitors >4K. They are probably waiting for the new iMacs with a new design language to put out a matching monitor.
Absolutely! For me it feels as if Mac OS repeats the errors of MS Windows Vista and 10. For instance, the horrible telemetry as well as the suboptimal "one interface for touch and mouse pointer" paradigm.
I feel the macOS UI is still a lot better than both the Win32 UI and the new "modern" UI in UWP apps. The macOS UI is functional, easy to use, and has enough information density. The animation, styling and graphics look great on a high-DPI screen, and there is cohesive design and animation across the system and the apps. Apple's design is beautiful, fluid to use and functional. Microsoft's design? It's still non-existent. Even their app design is just... like, the Teams UI is clunky to use, has huge title and side bars, and low information density.
The Win32 UI just looks 90s, and some of it looks broken on high-DPI screens, while Microsoft's new UI is just broken for desktop use. Text size is different between UWP apps like Mail and Win32 apps like Word, even though they are from the same company and the same product team.
I don't think the Mac will ever be touch-based. It shares a design style with iOS, but it's going to be optimized for mouse and keyboard. And I love how you can search for commands in an app on the Mac in the Help menu. It lets you access app features just by typing what you want; you can use the GUI like a command line.
Can you expand on "horrible telemetry" please? And since no Macs have a touch screen, I'm not sure what you mean by the "one interface" thing. Big Sur may resemble iOS but it is not the same interface at all.
When installing Windows, I had to unplug the ethernet during a particular setup screen to avoid having every login checked against my cloud account. It put Candy Crush and Farmville ads in my start menu without consent. I remember having to spend effort to get Cortana to go away and (maybe) not send my searches to the cloud.
In MacOS, we've recently seen: pushy Siri, search results being sent to the cloud, and yesterday the OCSP failure made it obvious they were sending logs of every app launch to the cloud :/ . It's the same direction, even if they aren't yet quite as lost as Microsoft.
I know, right? What's with all these people, expecting their personal computers to respect their privacy? If you want cool features, just be quiet and let Apple send whatever data they want to their servers. It's fine!
Did you miss the part where the parent reply said "opted in enabling Siri"?
I mean, if you enable a completely optional feature that requires giving up a bit of privacy for its literal intended functionality, how is it Apple's fault? And unlike Cortana on Windows 10, you can disable Siri feature just with a click of a button, or you can just click a button to not enable it in the first place. When you start your new Mac for the first time, it asks you very explicitly if you want it enabled or not.
I complained about Siri being _pushy_. It is. That's Apple's fault. If you opt out of Siri, that should be the end of it. It should not constantly nag you to enable it with an update-available style box and it should not drop a button in the button bar right above the delete key where it is guaranteed to get accidental presses.
I feel like there's some over-reaction here. You pretty much have to either:
1. allow blanket access to all executables
2. perform some sort of validation/verification
Most people are not as technical as the people on this site, and we know that blanket access to all executables is not a great idea. MacOS is not immune to malware.
Having executables be signed means the signature has to be checked and the certificate has to be checked to see if it hasn't been revoked. I don't see how to do that without "phoning home" and checking, tbh.
I mean yes Apple could enable some expert setting or something for developer/expert types where you can say "trust me I know what I'm doing" but a lot of people who don't know what they're doing will enable that and then malware will run rampant again.
It's a shitty situation. But I'm not going to go down the paranoia rabbit-hole of assuming this is done to spy on me. At least not yet.
They could use bloom filters and inherited trust to avoid having to send the signature of every executable to the internet. And there really should be a switch to turn this off for people who don’t want to be treated like children and will accept the risk of malware. Make it something on the command line and I guarantee no regular user will enable it.
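A minimal sketch of that idea, to show the shape of it; this is the commenter's suggestion, not how Apple's notarization/OCSP pipeline actually works. Keep a periodically synced Bloom filter of revoked certificate serials on disk, and only go out to the network when the filter reports a possible hit.

    import hashlib

    class BloomFilter:
        def __init__(self, size_bits=1 << 20, num_hashes=4):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, item: bytes):
            for i in range(self.num_hashes):
                digest = hashlib.sha256(bytes([i]) + item).digest()
                yield int.from_bytes(digest[:8], "big") % self.size

        def add(self, item: bytes):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, item: bytes) -> bool:
            # False means definitely not revoked; True means "check authoritatively".
            return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

    revoked = BloomFilter()
    revoked.add(b"serial-of-a-revoked-cert")          # synced from the vendor periodically

    def needs_online_ocsp_check(cert_serial: bytes) -> bool:
        return revoked.might_contain(cert_serial)

    print(needs_online_ocsp_check(b"serial-of-a-revoked-cert"))    # True  -> phone home
    print(needs_online_ocsp_check(b"serial-of-an-ordinary-cert"))  # False -> launch locally

Of course, a filter like this is only as fresh as its last sync, which is the trade-off discussed further down the thread.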
There's probably a middle ground, but you know that within a day of making it optional, the various "Clean my Mac" utilities will have an option to disable it, and soon enough instead of 99% of people having it enabled, 75% will have it disabled. And then some malware hits and spreads like wildfire, and the same people who were so adamant on disabling it are now complaining that Apple isn't doing enough to protect them and woowee, Macs are just as insecure as Windows.
Speaking of Windows, they also moved to a "we know what's best for you" model with Windows 10.
Definitely a "damned if you do, damned if you don't" situation.
I understand why "power users" feel frustrated but also understand the company's POV. A story like "Macs are invulnerable to this latest ransomware attack" looks pretty good to investors; the random complaints of nerds and power users go mostly unnoticed.
I bet $100 this latest scandal will not affect Apple's bottom line nor will anyone care within a month - there'll be other reasons to be outraged over on Twitter.
I don't think I ever opted in to Siri. How can I turn it off? I've disabled it from the menu bar but it's still on the touchbar, just waiting for me to slightly miss the delete key
You then drag and drop items on and off the Touch Bar. It is a totally inane, unintuitive interface and it took me forever to find it. Also, I couldn't figure out how to change it because the option DISAPPEARS if you're trying to customize in clamshell mode. The Touch Bar has to be open to customize it.
I can't tell if it's deliberately bad UX, but I spent months being asked if I wanted to turn on Siri typing on this keyboard...
I feel like macOS asks me about Siri and privacy at login after every major update, with an unskippable setup window, but at least after account creation. Open System Preferences > Siri and disable Ask Siri. You can edit the Touch Bar via a menu in Finder.
Disabling OCSP (I believe you have a typo in your hosts file suggestion, by the way) presumably doesn't actually disable notarization, just the OCSP (checking for revoked certificates) part of that process.
Ever tried to opt out of Siri? I have. It nags you constantly. You search, looking for a gist to nuke it, but what you find doesn't work. Finally, you give in, just to get the damn thing to shut up.
This sounds like you may have a corrupt preference file somewhere - perhaps deleting ~/Library/Preferences/co. On a normal install, you'll only be asked to enable on major OS releases. Since you're also on an old version of Catalina it'd be a good idea to install the security updates, too.
You could try to delete all of the preference files:
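For example, something along these lines would at least list the likely candidates first. The plist name patterns are guesses on my part, and it only prints paths rather than deleting anything:

    from pathlib import Path

    prefs = Path.home() / "Library" / "Preferences"
    patterns = ["com.apple.Siri*", "com.apple.assistant*"]   # assumed preference domains

    for pattern in patterns:
        for plist in prefs.glob(pattern):
            print(plist)   # review these before removing them (and back them up first)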
Phoning home on every executable launch. Both because it's bad for privacy, and because the implementation of it is absolutely horrific such that when Apple's servers went down it basically locked up everyones computers at the same time.
Is it even really telemetry as we'd consider it? It's OCSP digital signature verification... to check if an app signature wasn't revoked. (or a website cert, or anything really)
I think the thought is since the tap targets all got so much bigger for Big Sur it no longer feels like an OS designed for a pointer, more one designed for a finger.
>By the standards of modern disk and network, couldn't they download revocation caches the way they do with malware?
The whole point is to check if a cert has been revoked. If you have an out of date cache, you'll falsely approve a cert that should be revoked. I'm not defending the system as a whole, but if you care about revoking authentication – which they clearly do – then a cache directly undermines that goal.
A malware hash doesn't get revoked, new ones just get added.
It’s designed to unify the Mac and iOS experiences.
Why? Because once everyone starts using iOS apps on their Mac (e.g. Netflix, Outlook), Mac-only apps will slowly disappear. Hence you will need a look and feel that works on touch.
The Android tablet scene is almost non existent. With the ipad pro and magic keyboard you could realistically use an ipad as a laptop if the software you needed was on it.
I imagine that eventually devs will target pro software for the ipad and have it come to macos for free.
They just struck me as absurdly large for mouse interaction but sensible if you plan to introduce touchscreen at some point. Ditto for all the control center controls.
Exactly this. The idea of seriously powerful machines with great battery life is awesome. An even more proprietary, locked down system with software that keeps getting worse? Not at all.
I understand Apple making it super secure for non tech people, but it should provide a way to disable all that stuff for power users. And I mean some setting in the preferences, not hacking the hosts file.
I agree with your premise (a way to disable), but isn’t it better they make it hard to do? Kinda like a competency test to make sure you know what you’re doing.
I do agree. You don't want your grandma or teenager kid changing that setting. But it should be an official way of doing it, not a series of hacks on the OS.
Exactly. This thing would be a killer dual-boot linux machine. But they won't release any drivers, and I think secure boot will probably only do MacOS.
I'm an ancient IT guy (35+ years). I used Linux on desktop/laptop/server from 1995 onwards. I got a MacBook Pro 5 years ago (before the keyboard debacle) because I wanted quality hardware.
I'm running Big Sur and have been since the public beta. I also run iTerm2 and Macports and Firefox and Thunderbird. My editor is Neovim.
MacOS is still a BSD Unix underneath. I can't think of anything I want to run on Linux that I can't run on MacOS.
I also get a nice UI, native first class apps for all my various WFH chat/video clients, MS Office for work etc.
Why exactly would I want to dual boot into Linux? What runs on Linux that doesn't on MacOS?
Personally, I like the UI on linux. I can customize it to my hearts content, and run whatever style I want. I can switch from my i3 setup, a tiling wm, to GNOME in an instant. MacOS is alright, its UI is just a tad bloated and some things I don't like. But I can change and choose my UI in linux.
MacOS just doesn't run great on my 2015 MBP (dual-core) either. I have a 4k monitor, and launching 5 chrome/safari tabs, zoom, and VSCode creates noticeable lag. Window dragging being 24fps, choppy scrolling, etc. I'm sure linux would run better on the same hardware. Its not what doesn't run, its how it runs.
Some pieces of software are Linux-only, like Singularity (scientists aren’t allowed to run Docker). And getting some scientific packages working can be a hassle due to compilers, but I can get 99.99% of very niche packages working on a Mac.
The MBP is a killer machine because it lets you install Unix software AND you don’t have to know what a driver is. That’s the selling point for me. Every time I’m seduced by the XPS 13’s slick design, I read a couple of horror stories from users trying to troubleshoot driver issues on Ubuntu. It’s just not worth it.
>I've never been so excited and so turned off at the same time.
Oh yes. Apple hardware is still in great shape from a high-level overview. But software and services.....
Despite both having their own sets of problems, they are still industry-leading. Which always makes me unsure what to make of it: is Apple really that good? Or do Microsoft and Google just not know how to compete? Or, more likely, a bit of both.
Every time I read comments like this I’m wondering what’s going on. I’ve used MS DOS, Windows 98, Ubuntu since 8.04...And never have I been so happy with an OS as on Mac.
Sounds like someone is regurgitating Apple's marketing speech.
How do you define "Most advanced"? because, for me, an OS that you can't use to run apps because Apple's servers were down is anything but advanced to me.
> It is by far the most advanced operating system which serves newbies and professionals alike. I know people who use it to browse Facebook and use Messages/Music/Safari etc and those who use it to manage servers, build apps, and more.
Advanced according to what metric exactly? And your second sentence can be said about Windows or any modern OS really.
It's true many problems end up being solved, but you don't see a problem shipping broken software on which millions depend to work?
I started using macOS back on Panther and I don't trust Apple to ship a reliable update anymore. I'm still on Mojave because even today Catalina is broken for a lot of people.
I've been running Big Sur since it went into public beta. There were a few issues, VPNs were disabled for a bit, they didn't load the SMB drivers, a few of my apps had to release updates (Karabiner, Bartender), but otherwise everything works.
It seems like Apple may err on the side of just letting it get painfully slow instead of crisping your legs. The primary differentiator between the Air and Pro is that the Pro comes with a fan and a $300 premium.
Don't forget the Touch Bar - that weird uncle in every family that we just have to put up with - the feature nobody asked for and every coder I know would so much rather be without.
It annoys me and my coworkers (we use 2017/2018 15" MBPs) every work day to no end. I hate it when I'm accidentally logged out by feathering the 'lock' icon.
I'm aware I could move it - but ideally there are times that I'd want to use the feature as intended.
The lack of haptic feedback for the Touch Bar is a bizarre and unusual oversight for a company usually so detail-obsessed as Apple. We are not allowed to install anything but specifically approved software, so any third-party solutions are a no-go here. It should be built-in, it's a no-brainer.
I've personally taken to using a bluetooth Apple keyboard half the time with my work laptop just to avoid the Touch Bar as it honestly feels infinitely more comfortable for coding to not have it there.
No, it shows it performs better? Go look again at the numbers in the linked comparison. Also calling it low-end is misleading, the 1050Ti was solidly mid-tier, and also a dedicated graphics card used in desktop machines. This is comparing it to integrated graphics in a lightweight laptop (the M1 in the recently announced MacBook Air).
If you really are a graphics engineer you'll know that the person you're replying to is absolutely right. The performance on this integrated gpu is outstanding and outperforms a dedicated desktop gpu that's ~2 years old on 10W of power.
We are talking about a half-inch thick ultrabook with 18 hours of battery life here.
The idea that this can do the same as what historically required a thick and heavy gaming laptop that got 4 hours of battery life in use is very impressive. It firmly destroys all SKUs of the 15 inch MBP.
You can run Witcher 3 on a 1050 Ti with a reasonably useful graphics level and framerate. You can't really do that with any other laptop integrated CPU/GPU.
I started using a Pi for some personal projects and I concur. There are things that aren't supported or have to be built from source, and that is if there's support at all.
I'm sure the M1 is going to light a fire in getting ARM support on most projects, but it's not there yet.
I’ve gotten update notices for a large percentage of my apps where they call out that they are now compiled for M1 Macs. I think it likely that most actively maintained apps will publish M1-native versions in short order.
The idea of changing my 2015 MBP for something else is a tough one to accept, my biggest beef with the recent Air being the heat issues: the fan on the previous Air is so useless it's laughable[1] and they completely removed it on the new one, I remain skeptical.
I just upgraded my 2015 MBP to a 16" 2019 MBP. I specifically bought it now so that I have an Intel chip because I don't want to deal with the headache while things slowly switch over.
I hadn't realized how much faster computers had gotten since 2015, this new machine runs circles around the old one. The keyboard is great and I actually like the touch bar. 0 regrets, it's an upgrade in every single way.
If your needs require high CPU utilization for extended time periods, you might be better off with the new MacBook Pro. The Air is intended for more casual use cases.
Ouch, that's a pretty steep price for an additional 8GB of RAM. How does it compare to other devices in the same category? I'm out of the loop when it comes to laptops; for desktops, IIRC you can get a 32GB stick of RAM for cheaper than that.
It's definitely more expensive than simply buying a DIMM, but keep in mind that is "unified" memory and is part of the SoC, and is likely much faster than any old stick of RAM you'd normally put in a laptop. That, plus the Apple Premium.
This is exactly the camp I'm in. I won't touch the new MBPro because of the touchbar (and keyboard woes), but I'm willing to concede the peripheral inputs if it means the thing is much lighter/slimmer and has better performance.
The 1050 Ti is not supported on modern macOS running Metal. See the difference when the card is on Windows or Linux running real Nvidia drivers, as the card was built for.
Seems to blow it out of the water. Those are pretty compelling numbers, so much so that once the second iteration is out, it might be worthwhile to replace my 2019 MBP 13".
These numbers are quite good if you realize that the M1 iGPU is comparable to a discrete GPU. Also the power consumption is for the entire package not just the discrete GPU alone.
A 1050Ti is a three generation old card and was bottom of the line when it came out, so not too bad for integrated graphics.
It's plenty for everything but modern gaming, and since those games aren't likely to be ported to ARM on Mac anytime soon it's not a huge problem. Apple has always had something of a rocky relationship with game publishers, at least on the Mac. Lots of older games will probably work fine, assuming the driver situation isn't a nightmare. Apple is somewhat notorious for neglecting graphics card drivers unfortunately.
Saying that it's plenty for everything but modern gaming isn't saying much either. Every other integrated graphics solution has been fine for everything but modern gaming.
It looks like World of Warcraft at the very least is getting a day-one Apple ARM build, which suggests that porting isn't too bad. It should be a relatively easy transition for any game that already ran on macOS or iOS.
GTX 1050 2GB was the bottom of the line when the 1050 Ti came out. Both were released in Oct 2016, there's also GT 1030 which was released few months later.
Yup, whereas the CPU in the SoC is roughly a little better in multicore than a desktop Ryzen 3600X, and its single-core performance plays right at the top of the Zen 3 line.
It is actually worse than a 1050 Ti, but it is better than a 1050 Ti running on macOS. It is a meaningless comparison when the GPU is half as fast in the OS tested compared to the same GPU on Windows. In other words: people are cherry-picking the results they like.
The GTX 1050 Ti was a lower-mid-range gaming card a couple generations ago. AFAIK it can still run most new games, possibly with compromises for smooth performance (low graphical settings, 30 FPS, or sub-1080p resolution).
Considering that the devices that were replaced were integrated GPUs only, the fact that these devices now run close to current gen discrete GPUs is a big jump.
The bigger question to be answered is whether this is a baseline that will be surpassed handily by the higher-end releases coming later, or whether this is about as good as it gets for now.
I'm assuming that the reason these were released separately is that the later-arriving devices have significantly different SoCs with even better performance, and maybe even discrete GPUs with variable, scaling performance.
> the fact that these devices now run close to current gen discrete GPUs is a big jump.
The 1050 Ti (4 years old) has about the same performance as a GTX 680. A card from 2012.
This comparison makes absolutely no sense. You'd want to compare the M1 against either the current generation integrated, such as the Vega 8 or 11 in the Ryzen 4xxx mobile CPUs or the Intel Xe-LP in the current tiger lake CPUs, or you'd want to compare it against last gen integrated.
Comparing it against a discrete card from 4 years ago with the performance of a card from 8 years ago is just... weird?
I was trying to say 'close to a recent gen discrete GPU', not 'close (in performance) to a current gen GPU.'
The 1050 is just one generation removed from most current discrete GPUs; I don't believe the 3000 series is out yet on laptops, and the 2080 just came out last year.
I suspect (and this is just speculation) the M1 is very closely related to their iPad architecture which was never designed for more than one external display, as such the architecture wasn't really suitable and between building a 'computer' based on that chip and re-architecting the design it just wasn't feasible or required to support it on the first release.
Thats exactly my thought too. Makes me wonder if it will support two monitors in clamshell mode since the mac mini supports two. I guess we'll have to wait and see. Maybe it's something they can patch in.
I'm also curious as to how the graphics connections / routing works; would they have an internal bus and then split and mux it or something? Or perhaps an eDP or LVDS bus that can't be switched out to some HDMI or DP transceiver in clamshell?
Let's see if iFixit can get us some nice high-res pictures. Heck, someone might leak schematics and any normal users can probably show a IOTree for the system. Always nice to explore those on new machines.
Very much agree, I wonder as well. In other news, are there any adaptors out there that let me plug in two 4K displays and pretend to be a double wide single 4K display?
Agreed. If you think about what’s common between all the types of “professionals” that Apple targets, from coding to music production to whatever... one of them is wanting multiple screens to have everything laid out in front of them.
That’s not really the market for these models. The Air and the low end MacBook Pro only have two ports and are not really well suited to working with multiple external monitors. I suspect that, when Apple update the higher end machines, that they will be able to support multiple external monitors (and more RAM). The M1 is more of an MVP at this point.
Technically the M1 can support 2 displays too, 1 @ 6k/60Hz and 1 @ 4k/60Hz. This is done eg. for the Mac mini. However, for the laptops, one of those is consumed by the built in screen.
Which is weird because every laptop I've had for 15 years has had a mux on the RAMDAC outputs so you could connect as many external screens as you can have max outputs and just turn off the builtin screen.
It's been a long time since I've actually followed graphics benchmarks at all - where is this on the scale from intel intergrated to modern discrete laptop gpu? E.g. a radeon 5500 or similar?
[edit: to be clear I mean modern feature set type stuff, not just raw framerate]
This makes me very curious about the performance of their future 16" MacBook Pros once they move away from Intel. I assume they'd still have a discrete GPU, but maybe as an option instead of as standard?
But isn't this just another thing people download to run benchmarks and then the result is uploaded?
Anyway, answering my own question: multiple results have been uploaded in the past two days and uploaded by anonymous users. Maybe tech reviewers running it with devices they received early?
No, that's because some of the people don't have first principles and trust at all and instead of validating they just make stuff up ;-)
I was trying to point out that disputing things is fine, but the whole basis of a website where benchmarks are uploaded is trust-and-reputation-over-time to the point where enough other people can re-run the same tests on their machines (one they get them) to validate the results. Heck, you might almost call that science!
Right now, there aren't additional results and you can't easily reproduce them because the machines aren't wide spread or available. But we can take the track record and reputation of the site and application and use that to value the integrity of the published benchmarks to be 'likely correct'.
The old 1050 Ti only has 2.1 TFLOPS. Compare the M1 against a real beast like the RTX 3090 with 35 TFLOPS - that's 17 times more than the 1050 Ti. 2.1 TFLOPS doesn't impress me at all.
So you don't think 320W and a massive cooler that weighs more than an entire MacBook Air is kind of a pointless comparison to a <20W SoC that can even be passively cooled?
I'm not an Apple fan at all but that is some mighty impressive performance in such a small package and considering the thermal- and power envelope.
Why is everyone surprised by this? Fudging numbers, stealing credit, and blatantly lying is what Apple has always done. Of course they put a beautiful marketing layer on top of it all so its slightly less obvious.
I do as well. Which is why I was sad to see mostly comments supporting the new chip without backing it up with more meaningful benchmarks, or being surprised that Apple would put out benchmarks that are half truths in order to sell a product. Neither of these are "news".
Without directly commenting on the performance of the M1 chip, I still believe the biggest hurdle is software compatibility. Apple's "universal binary" seems dubious, and I don't believe Rosetta 2 is anything more than emulation software which will have performance ramifications.
Microsoft has faced this same problem themselves. Releasing the Surface Pro X is a great example of a machine that is limited by software.
As others have noted in other threads, Apple's ability to run iOS apps natively on the M1 chip seems like a great mechanism to lower the switching costs, though I maintain the chasm left to cross is software.
This is all of course, notwithstanding the "locked down OS" concerns from the front page for the past day or so. Does an M1 Macbook Air with BigSur make a competent development machine?
"universal binary" is just a binary with multiple executables for multiple CPUs inside. Hardly unusual, it's been used for 32+64 bit before, and to package multiple flavors of arm.
Rosetta 2 is emulation in one sense, but it precompiles applications into M1 code for speed.
Running all x86-64 mac apps plus iOS apps plus any updated / universal apps is pretty far from a software shortage.
There is absolutely nothing even remotely dubious about universal binaries. They were used during the PPC->Intel transition and again for x86->amd64. You can create them right now.
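To see that there really is no magic involved, here's a rough sketch that reads the standard Mach-O fat header and lists which CPU slices a universal binary contains. It ignores the 64-bit fat variant and single-architecture files, and the comment about the example path is just an illustration:

    import struct
    import sys

    FAT_MAGIC = 0xCAFEBABE                      # 32-bit fat header magic, big-endian
    CPU_TYPES = {0x01000007: "x86_64", 0x0100000C: "arm64"}

    def list_slices(path):
        with open(path, "rb") as f:
            magic, nfat_arch = struct.unpack(">II", f.read(8))
            if magic != FAT_MAGIC:
                print("not a (32-bit) fat binary")
                return
            for _ in range(nfat_arch):
                cputype, cpusubtype, offset, size, align = struct.unpack(">5I", f.read(20))
                print(CPU_TYPES.get(cputype, hex(cputype)), "offset:", offset, "size:", size)

    if __name__ == "__main__":
        list_slices(sys.argv[1])                # e.g. any app's main executable

`lipo -info` and `file` will report the same architecture list from the command line.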