
Apple usually massively exaggerates their tech spec comparisons - is it REALLY half the power use across the board (so we'll get double the battery life) or is it half the power use in some scenarios (so we'll get like... 15% more battery life total)?



IME Apple has always been the most honest when it makes performance claims. Like when they said a MacBook Air would last 10+ hours and third-party reviewers would get 8-9+ hours. All the while, Dell or HP would claim 19 hours and you'd be lucky to get 2, e.g. [1].

As for CPU power use, of course that doesn't translate into doubling battery life because there are other components. And yes, it seems the OLED display uses more power so, all in all, battery life seems to be about the same.

I'm interested to see an M3 vs M4 performance comparison in the real world. IIRC the M3 was a questionable upgrade. Some things were better but some weren't.

Overall the M-series SoCs have been an excellent product however.

[1]: https://www.laptopmag.com/features/laptop-battery-life-claim...

EDIT: added link


> IME Apple has always been the most honest when it makes performance claims.

Okay, but your example was about battery life:

> Like when they said a MacBook Air would last 10+ hours and third-party reviewers would get 8-9+ hours. All the while, Dell or HP would claim 19 hours and you'd be lucky to get 2, e.g. [1]

And even then, they exaggerated their claims. And your link doesn't say anything about HP or Dell claiming 19 hour battery life.

Apple has definitely exaggerated their performance claims over and over again. The Apple silicon parts are fast and low power indeed, but they've made ridiculous claims, like comparing their chips to an nVidia RTX 3090 with completely misleading graphs.

Even the Mac sites have admitted that the nVidia 3090 comparison was completely wrong and designed to be misleading: https://9to5mac.com/2022/03/31/m1-ultra-gpu-comparison-with-...

This is why you have to take everything they say with a huge grain of salt. Their chip may be "twice" as power efficient in some carefully chosen unique scenario that only exists in an artificial setting, but how does it fare in the real world? That's the question that matters, and you're not going to get an honest answer from Apple's marketing team.


The M1 Ultra did benchmark close to a 3090 in some synthetic gaming tests. The claim was not outlandish, just largely irrelevant for any reasonable purpose.

Apple does usually explain their testing methodology and they don’t cheat on benchmarks like some other companies. It’s just that the results are still marketing and should be treated as such.

Outlandish claims notwithstanding, I don’t think anyone can deny the progress they achieved with their CPU and especially GPU IP. Improving performance on complex workloads by 30–50% in a single year is very impressive.


It did not get anywhere close to a 3090 in any test when the 3090 was running at full power. They were only comparable at specific power usage thresholds.


Different chips are generally compared at similar power levels, IME. If you ran 400 watts through an M1 Ultra and somehow avoided instantly vaporizing the chip in the process, I'm sure it wouldn't be far behind the 3090.


Ok, but that doesn't matter if you can't actually run 400 watts through an M1 Ultra. If you want to compare how efficient a chip is, sure, that's a great way to test. But you can't make the claim that your chip is as good as a 3090 if the end user is never going to see the performance of an actual 3090.


You're right, it's not 19 hours claimed. It was even more than that.

> HP gave the 13-inch HP Spectre x360 an absurd 22.5 hours of estimated battery life, while our real-world test results showed that the laptop could last for 12 hours and 7 minutes.


The absurdity was the difference between claimed battery life and actual battery life. 19 vs 2 is more absurd than 22.5 vs 12.

> Speaking of the ThinkPad P72, here are the top three laptops with the most, er, far out battery life claims of all our analyzed products: the Lenovo ThinkPad P72, the Dell Latitude 7400 2-in-1 and the Acer TravelMate P6 P614. The three fell short of their advertised battery life by 821 minutes (13 hours and 41 mins), 818 minutes (13 hours and 38 minutes) and 746 minutes (12 hours and 26 minutes), respectively.

Dell did manage to be one of the top 3 most absurd claims though.


You’re working hard to miss the point there.

Dell and IBM were lying about battery life before OS X was even a thing and normal people started buying MacBooks. Dell and IBM will be lying about battery life when the sun goes red giant.

Reviewers and individuals like me have always been able to get 90% of Apple’s official battery times without jumping through hoops to do so. “If you were very careful” makes sense for an 11% difference. A ten hour difference is fucking bullshit.


So you are saying that a Dell with an Intel CPU could get longer battery life than a Mac with an M1? What does that say about the quality of Apple engineering? Their marketeering is certainly second to none.


Maybe for battery life, but definitely not when it comes to CPU/GPU performance. Tbf, no chip company is, but Apple is particularly egregious. Their charts assume best case multi-core performance when users rarely ever use all cores at once. They'd have you thinking it's the equivalent of a 3090 or that you get double the frames you did before when the reality is more like 10% gains.


They are pretty honest when it comes to battery life claims; they're less honest when it comes to benchmark graphs.


I don't think "less honest" covers it, and I can't believe anything their marketing says after the 3090 claims. Maybe it's true, maybe not. We'll see from the reviews. Well, assuming the reviewers weren't paid off with an "evaluation unit".


> Like when they said a MacBook Air would last 10+ hours and third-party reviewers would get 8-9+ hours.

For literal YEARS, Apple battery life claims were a running joke on how inaccurate and overinflated they were.


I’ve never known a time when Dell, IBM, Sony, Toshiba, Fujitsu, or Alienware weren’t lying through their teeth about battery times.

What time period are you thinking about for Apple? I’ve been using their laptops since the last G4 which is twenty years. They’ve always been substantially more accurate about battery times.


The problem with arguing about battery life this way is that it's highly dependent on usage patterns.

For example, I would be surprised if there is any laptop that is sufficiently fast for my usage and whose battery life is more than 2-3 hours tops. Heck, I have several laptops and all of them die in one to one and a half hours. But of course, I never optimized for battery life, so who knows. So in my case, all of them are lying equally. I haven't even checked battery life for 15 years now. It's a useless metric for me, because all of them are shit.

But of course, for people who don't need to use VMs, run several "micro"services at once, have constant internet transfer, and have 5+ IntelliJ projects open at the same time caching several million LOC while a gazillion web pages are open, maybe there is a difference. For me it doesn't matter whether it's one hour or one and a half.


You should try a MacBook Pro someday. It would still last all day with that workload. I had an XPS at work and it would last 1.5 hrs. My Apple laptop with the same workload lasts 6-8 hours easily. I never undocked the Dell because of the performance issues. I undock the Mac all the time because I can trust it to last.


I have a 2-year-old, high-spec MacBook Pro with less load than the GP and rarely can get > 3 hours out of it.


I'm curious, what do you do with it?


Nothing too crazy, I don't think. A bunch of standard Electron applications, a browser, a terminal - that's pretty much it. Sometimes Docker, but I always kill it when I'm done.


> IME Apple has always been the most honest when it makes performance claims.

In nearly every single release, their claims are well above actual performance.


Controlling the OS is probably a big help there. At least, I saw lots of complaints about my Zenbook model’s battery not hitting the spec. It was easy to hit or exceed it in Linux, but you have to tell it not to randomly spin up the CPU.


I had to work my ass off on my Fujitsu Lifebook to get 90% of the estimate, even on Linux. I even worked on a kernel patch for the Transmeta CPU, based on unexploited settings in the CPU documentation, but it came to no or negligible difference in power draw, which I suppose is why Linus didn’t do it in the first place.


BTW I get 19 hours from my Dell XPS and Latitude. It's Linux with a custom DE and Vim as IDE, though.


I get about 21 hours from mine, it's running Windows but powered off.


This is why Apple can be slightly more honest about their battery specs: they don’t have the OS working against them. Unfortunately most Dell XPS machines will be running Windows, so it is still misleading to provide specs based on what the hardware could do if not sabotaged.


I wonder if it’s like webpages. The numbers are calculated before marketing adds the crapware and ruins all of your hard work.


can you share more details about your setup?


Arch Linux, mitigations (Spectre and the like) off, X11, Openbox, bmpanel with only a CPU/IO indicator. Light theme everywhere. Opera in power-save mode. `powertop --auto-tune` and `echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo`. Current laptop is a Latitude 7390.
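For anyone who wants to try the same thing, here's a minimal sketch consolidating those two commands into a script. Assumptions not stated above: the intel_pstate driver is in use and powertop is installed; run it as root.

    #!/bin/sh
    # Low-power tweaks from the comment above; run as root.

    # Apply powertop's recommended power-saving tunables.
    powertop --auto-tune

    # Disable turbo boost so the CPU stays at or below its base clock.
    # (intel_pstate driver; the path differs with other cpufreq drivers)
    echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo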


Right, so you are disabling all performance features and effectively turning your CPU into a low-end, low-power SKU. Of course you’d get better battery life. It’s not the same thing though.


> echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo

Isn't that going to torch performance? My i9-9900 has a base frequency of 3.6 GHz and a turbo of 5.0 GHz. Disabling the turbo would create a 28% drop in peak performance.

I suppose if everything else on the system is configured to use as little power as possible, then it won't even be noticed. But seeing as CPUs underclock when idle (I've seen my i9 go as low as 1.2 GHz), I'm not sure disabling turbo makes a significant impact except when your CPU is being pegged.
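If you want to see what toggling turbo actually does on a given machine, the live clocks are easy to watch. A quick sketch (assumes Linux with the intel_pstate driver; the sysfs path differs with other drivers):

    # 1 = turbo disabled, 0 = turbo enabled
    cat /sys/devices/system/cpu/intel_pstate/no_turbo

    # Watch per-core clocks update every second while you load the machine
    watch -n1 'grep "cpu MHz" /proc/cpuinfo'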


That's the point. I have no performance bottleneck with no_turbo. My i5 tends to turn on turbo mode and increase power demand (and heat) even when it's not needed. For example, with no_turbo the laptop is always cold and the fan basically stays silent. With turbo it easily gets 40°C warm while watching YT or doing my developer stuff, building Docker containers and so on.


I get 20 minutes from my Dell (not the XPS), with Vim. When it was brand new, I got 40 minutes. A piece of hot garbage, with an energy-inefficient Intel CPU.


Frankly that sounds like you got a lemon. Even the most inefficient gaming laptops get over an hour under a full gaming workload.


> IME Apple has always been the most honest when it makes performance claims

Yes and no. They'll always be honest with the claim, but the scenario for the claimed improvement will always be chosen to make the claim as large as possible, sometimes with laughable results.

Typically something like "watch videos for 3x longer <small>when viewing 4k h265 video</small>" (which means they adapted the previous gen's silicon which could only handle h264).


> IME Apple has always been the most honest when it makes performance claims

That's just laughable, sorry. No one is particularly honest in marketing copy, but Apple is for sure one of the worst, historically. Even more so when you go back to the PPC days. I still remember Jobs on stage talking about how the G4 was the fastest CPU in the world when I knew damn well that it was half the speed of the P3 on my desk.


I worked in an engineering lab at the time of the G4 introduction and I can attest that the G4 was a very, very fast CPU for scientific workloads.

Confirmed here: https://computer.howstuffworks.com/question299.htm (and elsewhere.)

A year later I was doing bonkers (for the time) Photoshop work on very large compressed TIFF files, and my G4 laptop running at 400 MHz was more than 2x as fast as the PIIIs on my bench.

Was it faster all around? I don't know how to tell. Was Apple as honest as I am in this commentary about how it mattered what you were doing? No. Was it a CPU that was able to do some things very fast vs others? I know it was.


Funny you mention that machine; I still have one of those lying around. It was a very cool machine indeed with a very capable graphics card, but that's about it. It did some things better/faster than a Pentium III PC, but only if you went for the bottom-of-the-barrel unit and crippled the software support (MMX, just like another reply mentioned).

On top of that, Intel increased frequency faster than Apple could keep up with. And after the release of the Pentium 4, the G4s became uncompetitive so fast that one would question what could save Apple (later, down the road, it turned out to be Intel).

They tried to salvage it with the G5s, but those came with so many issues that even the dual-processor, water-cooled models were just not keeping up. I briefly owned one of those after repairing it for "free" out of 3 of them, supposedly dead; the only thing worth a damn in it was the GPU. Extremely good hardware in many ways, but also so weak at so many things that it had to be used only for very specific tasks; otherwise a cheap Intel PC was much better.

Which is precisely why, right after that, they went with Intel, after years of subpar performance on laptops because they were stuck on the G4 (and not even at high frequency).

Now I know from your other comments that you are a very strong believer, and I'll admit that there were many reasons to use a Mac (software related), but please stop pretending they were performance competitive, because that's just bonkers. If they were, the Intel switch would never have happened in the first place...


It's just amazing that this kind of nonsense persists. There were no significant benchmarks, "scientific" or otherwise, at the time or since showing that kind of behavior. The G4 was a dud. Apple rushed out some apples/oranges comparisons at launch (the one you link appears to be the bit where they compared a SIMD-optimized tool on PPC to generic compiled C on x86, though I'm too lazy to try to dig out the specifics from stale links), and the reality distortion field did the rest.


While certainly misleading, there were situations where the G4 was incredibly fast for the time. I remember being able to edit video in iMovie on a 12" G4 laptop. At that time there was no equivalent x86 machine.


Have any examples from the past decade? Especially in the context of how exaggerated the claims are from PC and Android brands they are competing with?


Apple recently claimed that RAM in their MacBooks is equivalent to 2x the RAM in any other machine, in defense of the 8GB starting point.

In my experience, I can confirm that this is just not true. The secret is heavy reliance on swap. It's still the case that 1GB = 1GB.


Sure, and they were widely criticized for this. Again, the assertion I was responding to is that Apple does this ”laughably” more than competitors.

Is an occasional statement that they get pushback on really worse than what other brands do?

As an example from a competitor, take a look at the recent firestorm over Intel’s outlandish anti-AMD marketing:

https://wccftech.com/intel-calls-out-amd-using-old-cores-in-...


> Sure, and they were widely criticized for this. Again, the assertion I was responding to is that Apple does this ”laughably” more than competitors.

FWIW: the language upthread was that it was laughable to say Apple was the most honest. And I stand by that.


Fair point. Based on their first sentence, I mischaracterized how “laughable” was used.

Though the author also made clear in their second sentence that they think Apple is one of the worst when it comes to marketing claims, so I don’t think your characterization is totally accurate either.


Yeah, that was hilarious; my basic workload borders on the 8GB limit without even pushing it. They have fast swap, but nothing beats real RAM in the end, and considering their storage pricing is as stupid as their RAM pricing, it really makes no difference.

If you go for the base model, you are in for a bad time: 256GB with heavy swap and no dedicated GPU memory (making the 8GB even worse) is just plain stupid.

This is what the Apple fanboys don't seem to get: their base models at a somewhat affordable price are deeply compromised, and if you start to load them up the pricing just does not make a lot of sense...


> If you go for the base model, you are in for a bad time: 256GB with heavy swap and no dedicated GPU memory (making the 8GB even worse) is just plain stupid ... their base models at a somewhat affordable price are deeply compromised

I got the base model M1 Air a couple of years back and whilst I don't do much gaming I do do C#, Python, Go, Rails, local Postgres, and more. I also have a (new last year) Lenovo 13th gen i7 with 16GB RAM running Windows 11 and the performance with the same load is night and day - the M1 walks all over it whilst easily lasting 10hrs+.

Note that I'm not a fanboy; I run both by choice. Also both iPhone and Android.

The Windows laptop often gets sluggish and hot. The M1 never slows down and stays cold. There's just no comparison (though the Air keyboard remains poor).

I don't much care about the technical details, and I know 8GB isn't a lot. I care about the experience and the underspecced Mac wins.


I don't know about your Lenovo and how your particular workload is handled by Windows.

And I agree that in pure performance, the Apple Silicon Macs will kill it; however, I am really skeptical that an 8GB model would give you a better experience overall. Faster for long compute operations, sure, but then you have to deal with all the small slowdowns from constant swapping. Unless you stick to a very small number of apps and a very small number of tabs at the same time (which is rather limiting), I don't know how you do it. I don't want to call you a liar, but maybe you are too emotionally attached (just like I am sometimes) to the device to realize it, or maybe the various advantages of the Mac make you ignore the serious limitations that come with it.

Everyone has their own set of tradeoffs, but my argument is that if you can deal with the 8GB Apple Silicon devices, you are very likely to be well served by a much cheaper device anyway (like half the price).


All I can say is I have both and I use both most days. In addition to work-issued Windows laptops, so I have a reasonable and very regular comparison. And the comparative experience is exactly as I described. Always. Every time.

> you have to deal with all the small slowdowns from constant swapping

That just doesn't happen. As I responded to another post, though, I don't do Docker or LLMs on the M1 otherwise you'd probably be right.

> Unless you stick to a very small number of apps and a very small number of tabs at the same time

It's really common to have 50+ tabs open at once. And using Word is often accompanied by VS Code, Excel, Affinity Designer, DotNet, Python, and others due to the nature of what I'm doing. No slowdown.

> maybe you are too emotionally attached

I am emotionally attached to the device. Though as a long-time Mac, Windows, and Linux user I'm neither blinkered nor tribal - the attachment is driven by the experience and not the other way around.

> maybe the various advantages of the Mac make you ignore the serious limitations that come with it

There are indeed limitations. 8GB is too small. The fact that for what I do it has no impact doesn't mean I don't see that.

> if you can deal with the 8GB Apple Silicon devices, you are very likely to be well served by a much cheaper device anyway (like half the price)

I already have better Windows laptops than that, and I know that going for a Windows laptop that's half the price of the entry-level Air would be nothing like as nice, because the more expensive ones already aren't (the Lenovo was dearer than the Air).

---

To conclude, you have to use the right tool for the job. If the nature of the task intrinsically needs lots of RAM then 8GB is not good enough. But when it is enough it runs rings around equivalent (and often 'better') Windows machines.


None of that seems to be high load or stuff that needs a lot of RAM.


Not individually, no. Though it's often done simultaneously.

That said you're right about lots of RAM in that I wouldn't bother using the 8GB M1 Air for Docker or running LLMs (it can run SD for images though, but very slowly). Partly that's why I have the Lenovo. You need to pick the right machine for the job at hand.


You know that the RAM in these machines is more different than the same as the "RAM" in a standard PC? Apple's SoC RAM is more or less part of the CPU/GPU package and is super fast. And for obvious reasons it cannot be added to.

Anyway, I manage a few M1 and M3 machines with 256/8 configs and they all run just as fast as 16GB and 32GB machines EXCEPT for workloads that need more than 8GB for a process (virtualization) or workloads that need lots of video memory (Lightroom can KILL an 8GB machine that isn't doing anything else...)

The "8GB is stupid" discussion isn't wrong in the general case, but it is wrong for maybe 80% of users.


> EXCEPT for workloads that need more than 8GB for a process

Isn't that exactly the upthread contention? Apple's magic compressed swap management is still swap management that replaces O(1), fast(-ish) DRAM access with page decompression operations costing thousands of cycles or more. It may be faster than storage, but it's still extremely slow relative to a DRAM fetch. And once your working set gets beyond your available RAM you start thrashing, just like VAXen did on 4BSD.


Exactly! Load a 4GB file and welcome the beach ball spinner any time you need to context switch to another app. I don't know how they don't realize that, because it's not really hard to get there. But when I was enamored with Apple stuff in my formative years, I would gladly ignore that or brush it off, so I can see where they're coming from, I guess.


It's not as different as the marketing would like you to think. In fact, for the low-end models even the bandwidth/speed isn't as big of a deal as they make it out to be, especially considering that the bandwidth has to be shared with the GPU.

And if you go up in specs, the bandwidth of Apple Silicon has to be compared to the bandwidth of a CPU plus dedicated GPU combo. The bandwidth of dedicated GPUs is very high and usually higher than what Apple Silicon gives you, especially once you account for the RAM bandwidth the CPU needs.

It's a bit more complicated but that's marketing for you. When it comes to speed Apple RAM isn't faster than what can be found in high-end laptops (or desktops for that matter).


There is also memory compression and their insane swap speed due to SoC memory and the SSD.


Every modern operating system now does memory compression


Some of them do it better than others though.


Apple uses Magic Compression.


Not sure what Windows does, but the popular method on e.g. Fedora is to carve out part of memory as a compressed swap device (zram). It could be more efficient the way Apple does it, by not having to partition main memory.
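If you're curious what that looks like in practice, a stock Fedora-style setup exposes it directly. A quick sketch (assumes the default zram-based swap with a device named zram0):

    # Active swap devices; the zram entry is the compressed, in-RAM one
    swapon --show

    # The DATA vs COMPR columns give you the effective compression ratio
    zramctl

    # Which compression algorithm the device is using (e.g. lzo-rle, zstd)
    cat /sys/block/zram0/comp_algorithm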


This is a revolution


Citation needed?


Don't know if I'm allowed to. It's not that special though.


> The secret is heavy reliance on swap

You are entirely (100%) wrong, but, sadly, NDA...


I do admit the "reliance on swap" thing is speculation on my part :)

My experience is that I can still tell when the OS is unhappy when I demand more RAM than it can give. macOS is still relatively responsive around this range, which I just attributed to super fast swapping. (I'd assume memory compression too, but I usually run into this trouble when working with large amounts of poorly-compressible data.)

In either case, I know it's frustrating when someone is confidently wrong but you can't properly correct them, so you have my apologies


Memory compression isn't magic and isn't exclusive to macOS.


I suggest you go and look at HOW it is done in Apple Silicon Macs, and then think long and hard about why this might make a huge difference. Maybe the Asahi Linux guys can explain it to you ;)


I understand that it can make a difference to performance (which is already baked into the benchmarks we look at), but I don't see how it can make a difference to compression ratios; if anything, in similar implementations (e.g. console APUs) it tends to lead to worse compression ratios.

If there's any publicly available data to the contrary I'd love to read it. Anecdotally I haven't seen a significant difference between zswap on Linux and macOS memory compression in terms of compression ratios, and on the workloads I've tested zswap tends to be faster than no memory compression on x86 for many-core machines.
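For anyone who wants to eyeball those ratios themselves, both systems expose the raw counters. A rough sketch (the Linux side assumes zswap is enabled with debugfs mounted and needs root; a 4 KiB page size is assumed in the ratio comment):

    # Linux/zswap: logical pages stored vs. bytes of compressed pool used
    sudo cat /sys/kernel/debug/zswap/stored_pages
    sudo cat /sys/kernel/debug/zswap/pool_total_size
    # ratio ~= stored_pages * 4096 / pool_total_size

    # macOS: "Pages stored in compressor" / "Pages occupied by compressor"
    # approximates the effective compression ratio
    vm_stat | grep -i compressor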


How convenient :)


Regardless of what you can't tell, he's absolutely right regarding Apple's claims: saying that an 8GB Mac is as good as a 16GB non-Mac is laughable.


My entry-level 8GB M1 MacBook Air beats my 64GB 10-core Intel iMac in my day-to-day dev work.


That was never said. They said an 8GB Mac is similar to a 16GB non-Mac.


If someone is claiming “‹foo› has always ‹barred›”, then I don't think it's fair to demand a 10 year cutoff on counter-evidence.


For “always” to be true, the behavior needs to extend to the present date. Otherwise, it’s only true to say “used to”.


Clearly it isn’t the case that Apple has always been more honest than their competition, because there were some years before Apple was founded.


Interesting, by what benchmark did you compare the G4 and the P3?

I don't have a horse in this race; Jobs lied or bent the truth all the time, so it wouldn't surprise me. I'm just curious.


I remember that Apple used to wave around these SIMD benchmarks showing their PowerPC chips trouncing Intel chips. In the fine print, you'd see that the benchmark was built to use AltiVec on PowerPC, but without MMX or SSE on Intel.


Ah so the way Intel advertises their chips. Got it.


Yeah, and we rightfully criticize Intel for the same and we distrust their benchmarks


You can claim Apple is dishonest for a few reasons.

1) Graphs are often unannotated.

2) Comparisons are rarely against latest-generation products (their argument for that has been that they do not expect people to upgrade yearly, so it's showing the difference across their intended upgrade path).

3) They have conflated performance with performance per watt.

However, when it comes to battery life, performance (for a task) or specification of their components (screens, ability to use external displays up to 6k, port speed etc) there are almost no hidden gotchas and they have tended to be trustworthy.

The first wave of M1 announcements were met with similar suspicion as you have shown here; but it was swiftly dispelled once people actually got their hands on them.

*EDIT:* Blaming a guy who's been dead for 13 years for something he said 50 years ago, and primarily, it seems, for internal use, is weird. I had to look up the context, but it seems it was more about internal motivation in the '70s than anything relating to today, especially when referring to concrete claims.


"This thing is incredible," Jobs said. "It's the first supercomputer on a chip.... We think it's going to set the industry on fire."

"The G4 chip is nearly three times faster than the fastest Pentium III"

- Steve Jobs (1999) [1]

[1] https://www.wired.com/1999/08/lavish-debut-for-apples-g4/


That's cool, but literally last millennium.

And again, the guy has been dead for the better part of this millennium.

What have they shown of any product currently on the market, especially when backed with any concrete claim, that has been proven untrue?

EDIT: After reading your article and this one: https://lowendmac.com/2006/twice-as-fast-did-apple-lie-or-ju... it looks like it was true in floating point workloads.


The G4 was a really good chip if you used Photoshop. It took Intel a while to catch up.


If you have to go back 20+ years for an example…


Apple marketed their PPC systems as "a supercomputer on your desk", but it was nowhere near the performance of a supercomputer of that age. Maybe similar performance to a supercomputer from the 1970s, but that was their marketing angle from the 1990s.


From https://512pixels.net/2013/07/power-mac-g4/: the ad was based on the fact that Apple was forbidden to export the G4 to many countries due to its “supercomputer” classification by the US government.


It seems that the US government was buying too much into tech hype at the turn of the millennium. Around the same period, PS2 exports were also restricted [1].

[1] https://www.latimes.com/archives/la-xpm-2000-apr-17-fi-20482...


The PS2 was used in supercomputing clusters.


Blaming a company TODAY for marketing from the 1990s is crazy.


Except they still do the same kind of bullshit marketing today.


> Apple marketed their PPC systems as "a supercomputer on your desk"

It's certainly fair to say that twenty years ago Apple was marketing some of its PPC systems as "the first supercomputer on a chip"[^1].

> but it was nowhere near the performance of a supercomputer of that age.

That was not the claim. Apple did not argue that the G4's performance was commensurate with the state of the art in supercomputing. (If you'll forgive me: like, fucking obviously? The entire reason they made the claim is precisely because the latest room-sized supercomputers with leapfrog performance gains were in the news very often.)

The claim was that the G4 was capable of sustained gigaflop performance, and therefore met the narrow technical definition of a supercomputer.

You'll see in the aforelinked marketing page that Apple compared the G4 chip to UC Irvine’s Aeneas Project, which in ~2000 was delivering 1.9 gigaflop performance.

This chart[^2] shows the trailing average of various subsets of super computers, for context.

This narrow definition is also why the machine could not be exported to many countries, which Apple leaned into.[^3]

> Maybe similar performance to a supercomputer from the 1970's

What am I missing here? Picking perhaps the most famous supercomputer of the mid-1970s, the Cray-1,[^4] we can see performance of 160 MFLOPS, which is 160 million floating point operations per second (with an 80 MHz processor!).

The G4 was capable of delivering ~1 GFLOP performance, which is a billion floating point operations per second.

Are you perhaps thinking of a different decade?

[^1]: https://web.archive.org/web/20000510163142/http://www.apple....

[^2]: https://en.wikipedia.org/wiki/History_of_supercomputing#/med...

[^3]: https://web.archive.org/web/20020418022430/https://www.cnn.c...

[^4]: https://en.wikipedia.org/wiki/Cray-1#Performance


>That was not the claim. Apple did not argue that the G4's performance was commensurate with the state of the art in supercomputing.

This is marketing we're talking about, people see "supercomputer on a chip" and they get hyped up by it. Apple was 100% using the "supercomputer" claim to make their luddite audience think they had a performance advantage, which they did not.

> The entire reason they made the claim is

The reason they marketed it that way was to get people to part with their money. Full stop.

In the first link you added, there's a photo of a Cray supercomputer, which makes the viewer equate Apple = Supercomputer = I am a computing god if I buy this product. Apple's marketing has always been a bit shady that way.

And soon after that period Apple jumped off the PPC architecture and onto the x86 bandwagon. Gimmicks like "supercomputer on a chip" don't last long when the competition is far ahead.


I can't believe Apple is marketing their products in a way to get people to part with their money.

If I had some pearls I would be clutching them right now.


> This is marketing we're talking about, people see "supercomputer on a chip" and they get hyped up by it.

That is also not in dispute. I am disputing your specific claim that Apple somehow suggested that the G4 was of commensurate performance to a modern supercomputer, which does not seem to be true.

> Apple was 100% using the "supercomputer" claim to make their luddite audience think they had a performance advantage, which they did not.

This is why context is important (and why I'd appreciate clarity on whether you genuinely believe a supercomputer from the 1970s was anywhere near as powerful as a G4).

In the late twentieth and early twenty-first century, megapixels were a proxy for camera quality, and megahertz were a proxy for processor performance. More MHz = more capable processor.

This created a problem for Apple, because the MHz numbers undersold the G4: its SPECfp_95 (floating point) benchmarks crushed the Pentium III at lower clock speeds.

PPC G4 500 MHz - 22.6

PPC G4 450 MHz - 20.4

PPC G4 400 MHz - 18.36

Pentium III 600 MHz – 15.9

For both floating point and integer benchmarks, the G3 and G4 outgunned comparable Pentium II/III processors.

You can question how this translates to real world use cases – the Photoshop filters on stage were real, but others have pointed out in this thread that it wasn't an apples-to-apples comparison vs. Wintel – but it is inarguable that the G4 had some performance advantages over Pentium at launch, and that it met the (inane) definition of a supercomputer.

> The reason they marketed it that way was to get people to part with their money. Full stop.

Yes, marketing exists to convince people to buy one product over another. That's why companies do marketing. IMO that's a self-evidently inane thing to say in a nested discussion of microprocessor architecture on a technical forum – especially when your interlocutor is establishing the historical context you may be unaware of (judging by your comment about supercomputers from the 1970s, which I am surprised you have not addressed).

I didn't say "The reason Apple markets its computers," I said "The entire reason they made the claim [about supercomputer performance]…"

Both of us appear to know that companies do marketing, but only you appear to be confused about the specific claims Apple made – given that you proactively raised them, and got them wrong – and the historical backdrop against which they were made.

> In the first link you added, there's a photo of a Cray supercomputer

That's right. It looks like a stylized rendering of a Cray-1 to me – what do you think?

> which makes the viewer equate Apple = Supercomputer = I am a computing god if I buy this product

The Cray-1's compute, as measured in GFLOPS, was approximately 6.5x lower than the G4 processor.

I'm therefore not sure what your argument is: you started by claiming that Apple deliberately suggested that the G4 had comparable performance to a modern supercomputer. That isn't the case, and the page you're referring to contains imagery of a much less performant supercomputer, as well as a lot of information relating to the history of supercomputers (and a link to a Forbes article).

> Apple's marketing has always been a bit shady that way.

All companies make tradeoffs they think are right for their shareholders and customers. They accentuate the positives in marketing and gloss over the drawbacks.

Note, too, that Adobe's CEO has been duped on the page you link to. Despite your emphatic claim:

> Apple was 100% using the "supercomputer" claim to make their luddite audience think they had a performance advantage, which they did not.

The CEO of Adobe is quoted as saying:

> “Currently, the G4 is significantly faster than any platform we’ve seen running Photoshop 5.5,” said John E. Warnock, chairman and CEO of Adobe.

How is what you are doing materially different to what you accuse Apple of doing?

> And soon after that period Apple jumped off the PPC architecture and onto the x86 bandwagon.

They did so when Intel's roadmap introduced Core Duo, which was significantly more energy-efficient than Pentium 4. I don't have benchmarks to hand, but I suspect that a PowerBook G5 would have given the Core Duo a run for its money (despite the G5 being significantly older), but only for about fifteen seconds before thermal throttling and draining the battery entirely in minutes.


My iBook G4 was absolutely crushed by my friends' Wintel laptops that they bought for half as much. Granted, it was more portable and had somewhat better battery life (it needed it, given how much longer everything took), but really, performance was not a good reason to go with Apple hardware, and that still holds true as far as I'm concerned.


The G4 was 1999, Core Duo was 2006; 7 years isn't bad.


That is a long time – bet it felt even longer to the poor PowerBook DRI at Apple who had to keep explaining to Steve Jobs why a G5 PowerBook wasn't viable!


Ya, I really wanted a G5 but power and thermals weren’t going to work and IBM/Moto weren’t interested in making a mobile version.


Indeed. Have we already forgotten about the RDF?


No, it was just always a meaningless term...


It was simply a phrase to acknowledge that Jobs was better at giving demos than anyone who ever lived.


Didn’t he have to use two PPC procs to get the equivalent perf you’d get on a P3?

Just add them up, it’s the same number of Hertz!

But Steve that’s two procs vs one!

I think this was when Adobe was optimizing for Windows/Intel and was single-threaded, but Steve put out some graphs showing better perf on the Mac.


> IME Apple has always been the most honest when it makes performance claims.

I guess you weren't around during the PowerPC days... Because that's a laughable statement.


I have no idea who's downvoting you. They were lying through their teeth about CPU performance back then.

A PC half the price was smoking their top of the line stuff.


It's funny you say that, because this is precisely the time I started buying Macs (I was gifted a Pismo PowerBook G3 and then bought an iBook G4). And my experience was that, for sure, if you put as much money into a PC as into a Mac you would get MUCH better performance.

What made it worth it at the time (I felt) was the software. Today I really don't think so; software has improved overall in the industry, and there are not a lot of "Mac specific" things that make it a clear-cut choice.

As for the performance, I can't believe all the Apple Silicon hype. Sure, it gets good battery life provided you use strictly Apple software (or software heavily optimized for it), but in mixed-workload situations it's not that impressive.

Using a friend's M2 MacBook Pro, I figured I could get maybe 4-5 hours out of it in the best-case scenario, which is better than the 2-3 hours you would get from a PC laptop, but also not that great considering the price difference.

And when it comes to performance it is extremely uneven and very lackluster for many things. Like, there is more lag launching Activity Monitor on a $2K+ MacBook Pro than launching Task Manager on a $500 PC. This is a small, somewhat stupid example, but it does tell the overall story.

They talk a big game but in reality, their stuff isn't that performant in the real world.

And they still market games when one of their $2K laptops plays Dota 2 (a very old, relatively resource-efficient game) worse than a cheapo PC.


> Using a friend's M2 MacBook Pro, I figured I could get maybe 4-5 hours out of it in the best-case scenario, which is better than the 2-3 hours you would get from a PC laptop, but also not that great considering the price difference.

Any electron apps on it?


Yes, but I stopped caring about Electron apps some time ago. You can't just drop or ignore useful software to satisfy Apple marketing. Just like you can't just ignore Chrome for Safari to satisfy the battery life claims, because Chrome is much more useful and better at quite a few things.

I went the way of only Apple and Apple-optimized software for quite a while, but I just can't be bothered anymore, considering the price of the hardware and nowadays the price of subscription software.

And this is exactly my argument: if you use the hardware in a very specific way, you get there, but it is very limiting, annoying, and unacceptable considering the pricing.

It's like saying that a small city car gets better gas mileage when what one actually needs is a capable truck. It's not strictly wrong, but also not very helpful.

I think the Apple Silicon laptops are very nice if you can work within the limitations, but the moment you start pushing on those you realize they are not really worth the money. Just like the new iPad Pro they released: completely awesome hardware, but how many people can actually work within the limitations of iPadOS so that the price doesn't look like a complete ripoff? Very few, I would argue.


Or VMs. They should be getting way better battery life out of that machine.


Apple switched to Intel chips 20 years ago. Who fucking cares about PowerPC?

Today, Apple Silicon is smoking all but the top end Intel chips, while using a fraction of the power.


Oh those megahertz myths! Their marketing department is pretty amazing at their spin control. This one was right up there with "it's not a bug; it's a feature" type of spin.


All I remember is tanks in the commercials.

We need more tanks in commercials.


Before NextStep became macOS, Apple was practically a different company. I’ve been using Apple hardware for 21 years, since they got a real operating system. Even the G4 did better than the laptop it replaced.


Apple is always honest but they know how to make you believe something that isn’t true.


Yeah, the assumption seems to be that using less battery by one component means that the power will just magically go unused. As with everything else in life, as soon as something stops using a resource something else fills the vacuum to take advantage of the resource.


Quickly looking at the press release, it seems to have the same comparisons as in the video. None of Apple's comparisons today are between the M3 and M4; they are ALL comparing the M2 and M4. Why? It's frustrating, but today Apple replaced a product with an M2 with a product with an M4. Apple always compares product to product, never component to component when it comes to processors. So those specs look far more impressive than they would if we had numbers comparing the M3 and M4.


Didn't they cherry-pick their test scenarios so they could show the M1 beating a 3090 (or the M2 a 4090, I can't remember)?

Gave me quite a laugh when Apple users started to claim they'd be able to play Cyberpunk 2077 maxed out with maxed out raytracing.


I'll give you that Apple's comparisons are sometimes inscrutable. I vividly remember that one.

https://www.theverge.com/2022/3/17/22982915/apple-m1-ultra-r...

Apple was comparing the power envelope (already a complicated concept) of their GPU against a 3090. Apple wanted to show that the peak of their GPU's performance was reached with a fraction of the power of a 3090. What was terrible was that Apple was cropping their chart at the point where the 3090 was pulling ahead in pure compute by throwing more watts at the problem. So their GPU was not as powerful as a 3090, but a quick glance at the chart would completely tell you otherwise.

Ultimately we didn't see one of those charts today, just a mention about the GPU being 50% more efficient than the competition. I think those charts are beloved by Johny Srouji and no one else. They're not getting the message across.


Plenty of people on HN thought that the M1 GPU was as powerful as a 3090, so I think the message worked very well for Apple.

They really love those kinds of comparisons - e.g. they also compared M1s against really old Intel CPUs to make the numbers look better, knowing that news headlines won't care about the details.


> not component to component

that's honestly kind of stupid when discussing things like 'new CPU!' like this thread.

I'm not saying the M4 isn't a great platform, but holy cow the corporate tripe people gobble up.


They compared against really old intel CPUs because those were the last ones they used in their own computers! Apple likes to compare device to device, not component to component.


You say that like it's not a marketing gimmick meant to mislead and obscure facts.

It's not some virtue that causes them to do this.


It's funny because your comment is meant to mislead and obscure facts.

Apple compared against Intel to encourage their previous customers to upgrade.

There is nothing insidious about this and is in fact standard business practice.


Apple's the ONLY tech company that doesn't compare products to their competitors.

The intensity of the reality distortion field and hubris is mind boggling.

Turns out, you fell for it.


No, they compared it because it made them look way better for naive people. They have no qualms comparing to other competition when it suits them.

Your explanation is a really baffling case of corporate white knighting.


Yes, I can't remember the precise combo either; there was a solid year or two of latent misunderstandings.

I eventually made a visual showing it was the same as claiming your iPhone was 3x the speed of a Core i9: Sure, if you limit the power draw of your PC to a battery the size of a post it pad.

Similar issues when on-device LLMs happened; thankfully, it's quieted since then (the last egregious thing I saw was stonk-related wishcasting that Apple was obviously turning its Xcode CI service into a full-blown AWS competitor that'd wipe the floor with any cloud service, given the 2x performance).


It’s an iPad event and there were no M3 iPads.

That’s all. They’re trying to convince iPad users to upgrade.

We’ll see what they do when they get to computers later this year.


I have a Samsung Galaxy Tab S7 FE tablet, and I can't think of any use case where I might need more power.

I agree that the iPad has more interesting software than Android for use cases like video or music editing, but I don't do those on a tablet anyway.

I just can't imagine anyone upgrading their M2 iPad for this except a tiny niche that really wants that extra power.


I don't know who would prefer to do music or video editing on a smaller display, without a keyboard for shortcuts, without a proper file system, and with problematic connectivity to external hardware. Sure, it's possible, but why? OK, maybe there's some use case on the road where every gram counts, but that seems niche.


The A series was good enough.

I’m vaguely considering this but entirely for the screen. The chip has been irrelevant to me for years, it’s long past the point where I don’t notice it.


The A series was definitely not good enough. It really depends on what you're using it for. Netflix and web? Sure. But any old HDR tablet that can maintain 24Hz is good enough for that.

These are 2048x2732, 120Hz displays that support 6K external displays. Gaming and art apps push them pretty hard. For the iPad user in my house, going from the 2020 non-M* iPad to a 2023 M2 iPad made a huge difference for the drawing apps. Lower latency is always better for drawing, and complex brushes (especially newer ones), selections, etc. would get fairly unusable.

For gaming, it was pretty trivial to dip well below 60Hz with a non-M* iPad with some of the higher-demand games like Fortnite, Minecraft (high view distance), Roblox (it ain't what it used to be), etc.

But, the apps will always gravitate to the performance of the average user. A step function in performance won't show up in the apps until the adoption follows, years down the line. Not pushing the average to higher performance is how you stagnate the future software of the devices.


You’re right, it’s good enough for me. That’s what I meant but I didn’t make that clear at all. I suspect a ton of people are in a similar position.

I just don’t push it at all. The few games I play are not complicated in graphics or CPU needs. I don’t draw, 3D model, use Logic or Final Cut or anything like that.

I agree the extra power is useful to some people. But even there we have the M1 (what I’ve got) and the M2 models. But I bet there are plenty of people like me who mostly bought the pro models for the better screen and not the additional grunt.


The AX series, which is what iPads were using before the M series, was precisely the chip family that got rebranded as the M1, M2, etc.

The iPads always had a lot of power; people simply started paying more attention when the chip family was ported to PCs.


Yeah. I was just using the A to M chip name transition as an easy landmark to compare against.


AI on the device may be the real reason for an M4.


Previous iPads have had that for a long time. Since the A12 in 2018. The phones had it even earlier with the A11.

Sure, this is faster, but is it enough to make people care?

It may depend heavily on what they announce is in the next version of iOS/iPadOS.


That’s my point - if there’s a real on-device LLM it may be much more usable with the latest chip.


That's because the previous iPad Pros came with M2, not M3. They are comparing the performance with the previous generation of the same product.


> They are ALL comparing the M2 and M4. Why?

Well, the obvious answer is that those with older machines are more likely to upgrade than those with newer machines. The market for insta-upgraders is tiny.

edit: And perhaps an even more obvious answer: there are no iPads that contained the M3, so the comparison would be even less useful. The M4 was just launched today, exclusively in iPads.


Because the previous iPad was M2. So "remember how fast your previous iPad was"; well, this one is N better.


They know that anyone who has bought an M3 is good on computers for a long while. They're targeting people who have M2 or older Macs. People who own an M3 are basically going to buy anything that comes down the pipe, because who needs an M3 over an M2 or even an M1 today?


I’m starting to worry that I’m missing out on some huge gains (M1 Air user.) But as a programmer who’s not making games or anything intensive, I think I’m still good for another year or two?


You're not going to be missing out on much. I had the first M1 Air and recently upgraded to an M3 Air. The M1 Air has years of useful life left and my upgrade was for reasons not performance related.

The M3 Air performs better than the M1 in raw numbers but outside of some truly CPU or GPU limited tasks you're not likely to actually notice the difference. The day to day behavior between the two is pretty similar.

If your current M1 works you're not missing out on anything. For the power/size/battery envelope the M1 Air was pretty awesome, it hasn't really gotten any worse over time. If it does what you need then you're good until it doesn't do what you need.


I have a 2018 15" MBP, and an M1 Air and honestly they both perform about the same. The only noticeable difference is the MBP takes ~3 seconds to wake from sleep and the M1 is instant.


I have an M1 Air and I test drove a friend's recent M3 Air. It's not very different performance-wise for what I do (programming, watching video, editing small memory-constrained GIS models, etc)


I wanted to upgrade my M1 because it was going to swap a lot with only 8 gigs of RAM and because I wanted a machine that could run big LLMs locally. Ended up going from an 8GB MacBook Air M1 to a 64GB MacBook Pro M1. My other reasoning was that it would speed up compilation, which it has, but not by too much.

The M1 Air is a very fast machine and is perfect for anyone doing normal things on the computer.


It doesn't seem plausible to me that Apple would release an "M3 variant" that can drive "tandem OLED" displays. So it's probably logical to package whatever chip progress they have (including process improvements) into the "M4".

And it can signal "we are serious about the iPad as a computer" by using their latest chip.

A logical alignment with progress in engineering (and manufacturing), packaged smartly to generate marketing capital for sales and brand value creation.

I wonder how the newer Macs will use these "tandem OLED" capabilities of the M4.


The iPads skipped the M3 so they’re comparing your old iPad to the new one.


I like the comparison between much older hardware and brand new to highlight how far we've come.


> I like the comparison between much older hardware and brand new to highlight how far we've come.

That's ok, but why skip the previous iteration then? Isn't the M2 only two generations behind? It's not that much older. It's also a marketing blurb, not a reproducible benchmark. Why leave out comparisons with the previous iteration even when you're just hand-waving over your own data?


In this specific case, it's because the iPads never got the M3. They're literally comparing it with the previous model of iPad.

There were some disingenuous comparisons throughout the presentation going back to A11 for the first Neural Engine and some comparisons to M1, but the M2 comparison actually makes sense.


I wouldn't call the comparison to the A11 disingenuous; they were very clear they were talking about how far their neural engines have come, in the context of the competition just starting to put NPUs in their stuff.

I mean, they compared the new iPad Pro to an iPod Nano, that's just using your own history to make a point.


Fair point—I just get a little annoyed when the marketing speak confuses the average consumer and felt as though some of the jargon they used could trip less informed customers up.


Personally, I think this is the comparison most people want. The M3 had a lot of compromises compared to the M2.

That aside, the M4 is about the Neural Engine upgrades more than anything (which probably should have been compared to the M3).


What are those compromises? I may buy an M3 MBP, so I'd like to hear more.


The M3 Pro had some downgrades compared to the M2 Pro: fewer performance cores and lower memory bandwidth. This did not apply to the M3 and M3 Max.


Yes, kinda annoying. But on the other hand, given that Apple releases a new chip every 12 months, we can cut them some slack here, since from AMD, Intel, or Nvidia we usually see a 2-year cadence.


There are probably easier problems to solve in the ARM space than in x86, considering the amount of money and time spent on x86.

That’s not to say that any of these problems are easy, just that there’s probably more low-hanging fruit in ARM land.


And yet they seem to be the only people picking the apparently "low-hanging fruit" in ARM land. We'll see about Qualcomm's Nuvia-based stuff, but that's been "nearly released" for what feels like years now, and you still can't buy one to actually test.

And don't underestimate the investment Apple made - it's likely at a similar level to the big x86 incumbents. I mean AMD's entire Zen development team cost was likely a blip on the balance sheet for Apple.


They don't care as much about the ARM stuff because software development investment vastly outweighs the chip development costs.

Sure, maybe they could do better, but at what cost and for what? The only thing Apple does truly better is performance per watt, which is not something that is relevant for a large part of the market.

x86 stuff is still competitive performance-wise, especially in the GPU department, where Apple's attempts are rather weak compared to what the competition offers. The Apple Silicon switch cost a large amount of developer effort for optimization, and in the process a lot of software compatibility was lost; it took a long time for even the most popular software to get properly optimized, and some software houses even gave up on supporting macOS because it just wasn't worth the man-hour investment considering the tiny market.

This is why I am very skeptical about the Qualcomm ARM stuff: it needs to be priced extremely well to have a chance. If consumers do not pick it up in droves, no software ports are going to happen in a timely manner and it will stay irrelevant. Considering the only thing much better than the current x86 offering is the performance per watt, I do not have a lot of hope, but I may be pleasantly surprised.

Apple aficionados keep raving about battery life, but it's not really something a lot of people care about (apart from smartphones, where Apple isn't doing any better than the rest of the industry).


> Qualcomm's Nuvia-based stuff, but that's been "nearly released" for what feels like years now

Launching at Computex in 2 weeks, https://www.windowscentral.com/hardware/laptops/next-gen-ai-...


Good to know that it's finally seeing the light of day. I thought they were still in a legal dispute with ARM about Nuvia's designs?


Not privy to details, but some legal disputes can be resolved by licensing price negotiations, motivated by customer launch deadlines.


Speaking of which, whatever happened to Qualcomm's bizarre assertion that ARM was pulling a sneaky move in all its new licensing deals to outlaw third-party IP entirely and force ARM-IP-only designs?

There was one quiet "we haven't got anything like that in the contract we're signing with ARM" from someone else, and then radio silence. You'd really think that would be major news, because it's massively impactful on pretty much everyone, since one of the major use cases of ARM is as a base SoC to bolt your own custom accelerators onto...

It seemed like obvious bullshit at the time from a company trying to "publicly renegotiate" a licensing agreement they had probably broken...


Again, not saying that they are easy (or cheap!) problems to solve, but that there are more relatively easy problems in the ARM space than the x86 space.

That's why Apple can release a meaningfully new chip every year, whereas it takes several years for the x86 manufacturers.


> We'll see about Qualcomm's Nuvia-based stuff, but that's been "nearly released" for what feels like years now, but you still can't buy one to actually test.

That's more bound by legal than technical reasons...


Maybe for GPUs, but for CPUs both Intel and AMD release on a yearly cadence. Even when Intel has nothing new to release, the generation number gets bumped.


> Apple always compares product to product, never component to component when it comes to processors.

I don't think this is true. When they launched the M3, they compared it primarily to the M1 to make it look better.


From their specs page, battery life is unchanged. I think they donated the chip power savings to offset the increased consumption of the tandem OLED


I’ve not seen discussion that Apple likely scales performance of chips to match the use profile of the specific device it’s used in. An M2 in an iPad Air is very likely not the same as an M2 in an MBP or Mac Studio.


The GeekBench [1,2] benchmarks for M2 are:

Single core:

  iPad Pro (M2):     2539
  MacBook Air (M2):  2596
  MacBook Pro (M2):  2645

Multi core:

  iPad Pro (M2, 8-core):     9631
  MacBook Air (M2, 8-core):  9654
  MacBook Pro (M2, 8-core):  9642

So, it appears to be almost the same performance (until it throttles due to heat, of course).

1. https://browser.geekbench.com/ios-benchmarks
2. https://browser.geekbench.com/mac-benchmarks


Surprisingly, I think it is the same: I was going to comment to that effect here, then checked Geekbench, and the single-core scores match for the M2 iPad, MacBook Pro, etc. at the same clock speed. I.e. M2 "base" = M2 "base"; the core counts differ, and with the desktops/laptops you get options for the M2 Ultra Max SE, bla bla.


A Ryzen 7840U in a gaming handheld is not (configured) the same as a Ryzen 7840U in a laptop, for that matter, so Apple is hardly unique here.


The manufacturer often targets a TDP that is reasonable for the thermals and battery life, but the CPU package is typically the same.


Yeah, but the difference is that you usually don't get people arguing that it's the same thing or that it can be performance-competitive in the long run. When it comes to Apple stuff, people say some irrational things that are totally bonkers...


Likely there is also a smaller battery as the iPad Pro is quite a bit thinner


I don't know, but the M3 MBP I got from work already gives the impression of using barely any power at all. I'm really impressed by Apple Silicon, and I'm seriously reconsidering my decision from years ago to never ever buy Apple again. Why doesn't everybody else use chips like these?


I have an M3 for my personal laptop and an M2 for my work laptop. I get ~8 hours if I'm lucky on my work laptop, but I have attributed most of that battery loss to all the "protection" software they put on my work laptop that is always showing up under the "Apps Using Significant Power" category in the battery dropdown.

I can have my laptop with nothing on screen, and the battery still points to TrendMicro and others as the cause of heavy battery drain while my laptop seemingly idles.

I recently upgraded my personal laptop to the M3 MacBook Pro and the difference is astonishing. I almost never use it plugged in because I genuinely get close to that 20-hour reported battery life. Last weekend I played a AAA video game through Xbox Cloud Gaming (awesome for Mac gamers, btw) at essentially max graphics (rendered elsewhere and streamed to me, of course). I got sucked into the game for about 5 hours and lost only 8% of my battery during that time, while playing a top-tier video game! It really blew my mind. I also use the GoLand IDE on there and have managed to get a full day of development done using only about 25-30% of the battery.

So yeah, whatever Apple is doing, they are doing it right. Performance without all the spyware that your work gives you makes a huge difference too.


Over the weekend, I accidentally left my work M3 unplugged with caffeinate running (so it doesn't sleep). It wasn't running anything particularly heavy, but still, on Monday, 80% charge left.

That's mindblowing. Especially since my personal laptop is a Thinkpad X1 Extreme. I can't leave that unplugged at all.
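
For anyone unfamiliar: caffeinate is the built-in macOS utility that holds a sleep assertion while it, or a command it wraps, is running. A minimal sketch of the wrapped-command form, driven from Python; "long_job.py" is just a hypothetical placeholder:

    import subprocess

    # caffeinate -i prevents idle sleep for as long as the wrapped command runs;
    # add -d to keep the display awake as well.
    # "long_job.py" is a hypothetical placeholder for whatever you actually run.
    subprocess.run(["caffeinate", "-i", "python3", "long_job.py"], check=True)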


Apple quotes 18h of Apple TV playback or 12h of web browsing, so I will call a large amount of bullshit on that.

Even taking the marketing at face value, in the best case you would be looking at between 27 and 41% battery consumption for 5h of runtime. The actual number will be higher than that, because you probably don't use the MBP at the low brightness they use for their marketing benchmarks, and game streaming constantly requires power for the Wi-Fi chip (regular video playback can buffer, hence its lower consumption).
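
For reference, that range is just hours of use divided by Apple's rated figures; a quick back-of-the-envelope check (assuming, generously, that drain scales linearly with runtime):

    # Apple's rated figures for the laptop: 18h video playback, 12h web browsing.
    session_hours = 5
    for label, rated_hours in [("video playback", 18), ("web browsing", 12)]:
        drain = session_hours / rated_hours          # assumes linear drain
        print(f"{label}: ~{drain:.1%} of the battery for a 5h session")
    # -> video playback: ~27.8%, web browsing: ~41.7%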

There is no way to say this nicely, but can you stop lying?


For the AAA video game example, I mean, it is interesting how far that kind of tech has come… but really that's just video streaming (maybe slightly more difficult because latency matters?) from the point of view of the laptop, right? The quality of the graphics there has more or less nothing to do with the battery.


I think the market will move to using chips like this, or at least have additional options. The new Snapdragon SoC is interesting, and I would suspect we could see Google and Microsoft play in this space at some point soon.


> is it REALLY half the power use of all times (so we'll get double the battery life)

I'm not sure what you mean by "of all times", but halving the processor's power use definitely doesn't translate into double the battery life, since the processor is not the only thing consuming power.
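
To put rough, made-up numbers on that: if the SoC draws, say, 40% of total system power for a given workload, halving its draw only cuts total power by 20%, which buys roughly 25% more battery life, not 2x. A quick sketch of that arithmetic (the 40% share is purely an assumption for illustration):

    # Illustrative only: the 40% SoC share of total system power is an assumption.
    soc_share = 0.40
    new_total = (1 - soc_share) + soc_share / 2   # halve only the SoC's draw
    battery_gain = 1 / new_total - 1              # relative battery-life gain
    print(f"system power: {new_total:.0%} of before, battery life +{battery_gain:.0%}")
    # -> system power: 80% of before, battery life +25%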


Any product that uses this chip is more than just the chip, so you won't get a proportional change in battery life.


Sure, but I also remember them comparing the M1-series chips to the RTX 3090, and my MacBook with the M1 Pro doesn't really run games well.

So I've become really suspicious about any claims about performance done by Apple.


It's not just games. There is in fact not a lot of stuff that Apple Silicon runs well. In theory you get great battery life, but only to run software nobody wants to use, or to spend longer running stuff that doesn't run well.

The problem is two-fold: first, the marketing bullshit does not match the reality; second, the Apple converts will lie without even thinking about it to justify the outrageous price.

There are a lot of things I like about Apple hardware, but the reality is that they can charge so much because there is a lot of mythology around their products, and it just doesn't add up.

Now if only they could be bothered to actually make the software great (and not simplistic copies of what already exists), there would be an actual valid reason to unequivocally recommend their stuff, but they can't be bothered since they already make too much money as it is.


I mean, I remember Apple comparing the M1 Ultra to Nvidia's RTX 3090. While that chart was definitely putting a spin on things to say the least, and we can argue from now until tomorrow about whether power consumption should or should not be equalised, I have no idea why anyone would expect the M1 Pro (an explicitly much weaker chip) to perform anywhere near the same.

Also, what games are you trying to play on it? All my M-series MacBooks have run games more than well enough at reasonable settings (and where they haven't, that has had a lot more to do with OS bugs and the constraints of the form factor than with the chipset itself).


They compared them in terms of perf/watt, which did hold up, but the chart obviously implied higher performance overall.


That is the fault of the devs: optimization for dedicated graphics cards is either integrated into the game engine, or they just ship a separate version for RTX users.


Apple might use simplified and opaque plots to drive their point home, but they all too often undersell the differences. Independent reviews, for example, find that they not only hit the mark Apple mentions for things like battery life but often do slightly better...


Well, the battery is drained by other things too, right? Especially by that tandem OLED screen. "Best ever" in every keynote makes me laugh at this point, but it doesn't mean they're not improving their power envelope.


> so we'll get double the battery life

This is an absurd interpretation. Nobody hears that and says "they made the screen use half the energy".


You wouldn't necessarily get twice the battery life. It could be less than that due to the thinner body causing more heat, a screen that uses more energy, etc.


The CPU is not the only place power is consumed in a portable device. It is a large fraction, but you also have the display and the radios.


Isn't 15% more battery life a huge improvement on a device already well known for long battery life?


Apple is one of the few companies that underpromise and overdeliver rather than exaggerate.

Compared to the competition, I'd trust Apple much more than the Windows laptop OEMs.


If there is any dishonesty, I would wager it is a case of "it can double the battery life in low-power scenarios": it could go twice as long when doing word processing, for instance, since the chip can potentially idle a lot lower.



