
IME Apple has always been the most honest when it makes performance claims. Like when they said a MacBook Air would last 10+ hours and third-party reviewers would get 8-9+ hours. All the while, Dell or HP would claim 19 hours and you'd be lucky to get 2, e.g. [1].

As for CPU power use, of course that doesn't translate into doubling battery life because there are other components. And yes, it seems the OLED display uses more power so, all in all, battery life seems to be about the same.

I'm interested to see an M3 vs M4 performance comparison in the real world. IIRC the M3 was a questionable upgrade. Some things were better but some weren't.

Overall the M-series SoCs have been an excellent product however.

[1]: https://www.laptopmag.com/features/laptop-battery-life-claim...

EDIT: added link




> IME Apple has always been the most honest when it makes performance claims.

Okay, but your example was about battery life:

> Like when they said a MacBook Air would last 10+ hours and third-party reviewers would get 8-9+ hours. All the while, Dell or HP would claim 19 hours and you'd be lucky to get 2, e.g. [1]

And even then, they exaggerated their claims. And your link doesn't say anything about HP or Dell claiming 19 hour battery life.

Apple has definitely exaggerated their performance claims over and over again. The Apple silicon parts are fast and low power indeed, but they've made ridiculous claims like comparing their chips to an nVidia RTX 3090 with completely misleading graphs

Even the Mac sites have admitted that the nVidia 3090 comparison was completely wrong and designed to be misleading: https://9to5mac.com/2022/03/31/m1-ultra-gpu-comparison-with-...

This is why you have to take everything they say with a huge grain of salt. Their chip may be "twice" as power efficient in some carefully chosen unique scenario that only exists in an artificial setting, but how does it fare in the real world? That's the question that matters, and you're not going to get an honest answer from Apple's marketing team.


M1 Ultra did benchmark close to 3090 in some synthetic gaming tests. The claim was not outlandish, just largely irrelevant for any reasonable purpose.

Apple does usually explain their testing methodology and they don’t cheat on benchmarks like some other companies. It’s just that the results are still marketing and should be treated as such.

Outlandish claims notwithstanding, I don’t think anyone can deny the progress they achieved with their CPU and especially GPU IP. Improving performance on complex workloads by 30–50% in a single year is very impressive.


It did not get anywhere close to a 3090 in any test when the 3090 was running at full power. They were only comparable at specific power usage thresholds.


Different chips are generally compared at similar power levels, ime. If you ran 400 watts through an M1 Ultra and somehow avoided instantly vaporizing the chip in the process, I'm sure it wouldn't be far behind the 3090.


Ok but that doesn't matter if you can't actually run 400 watts through an M1 Ultra. If you wanna compare how efficient a chip is, sure, that's a great way to test. But you can't make the claim that your chip is as good as a 3090 if the end user is never going to see the performance of an actual 3090


You're right, it's not 19 hours claimed. It was even more than that.

> HP gave the 13-inch HP Spectre x360 an absurd 22.5 hours of estimated battery life, while our real-world test results showed that the laptop could last for 12 hours and 7 minutes.


The absurdity was the difference between claimed battery life and actual battery life. 19 vs 2 is more absurd than 22.5 vs 12.

> Speaking of the ThinkPad P72, here are the top three laptops with the most, er, far out battery life claims of all our analyzed products: the Lenovo ThinkPad P72, the Dell Latitude 7400 2-in-1 and the Acer TravelMate P6 P614. The three fell short of their advertised battery life by 821 minutes (13 hours and 41 mins), 818 minutes (13 hours and 38 minutes) and 746 minutes (12 hours and 26 minutes), respectively.

Dell did manage to be one of the top 3 most absurd claims though.


You’re working hard to miss the point there.

Dell and IBM were lying about battery life before OS X was even a thing and normal people started buying MacBooks. Dell and IBM will be lying about battery life when the sun goes red giant.

Reviewers and individuals like me have always been able to get 90% of Apple’s official battery times without jumping through hoops to do so. “If you were very careful” makes sense for an 11% difference. A ten hour difference is fucking bullshit.


So you are saying that a Dell with an Intel CPU could get longer battery life than a Mac with an M1? What does that say about the quality of Apple engineering? Their marketeering is certainly second to none.


Maybe for battery life, but definitely not when it comes to CPU/GPU performance. Tbf, no chip company is, but Apple is particularly egregious. Their charts assume best case multi-core performance when users rarely ever use all cores at once. They'd have you thinking it's the equivalent of a 3090 or that you get double the frames you did before when the reality is more like 10% gains.


They are pretty honest when it comes to battery life claims, they’re less honest when it comes to benchmark graphs


I don't think "less honest" covers it, and I can't believe anything their marketing says after the 3090 claims. Maybe it's true, maybe not. We'll see from the reviews. Well, assuming the reviewers weren't paid off with an "evaluation unit".


> Like when they said a MacBook Air would last 10+ hours and third-party reviewers would get 8-9+ hours.

For literal YEARS, Apple battery life claims were a running joke on how inaccurate and overinflated they were.


I’ve never known a time when Dell, IBM, Sony, Toshiba, Fujitsu, Alienware weren’t lying through their teeth about battery times.

What time period are you thinking about for Apple? I’ve been using their laptops since the last G4 which is twenty years. They’ve always been substantially more accurate about battery times.


The problem with arguing about battery life this way is that it's highly dependent on usage patterns.

For example, I would be surprised if there is any laptop which is sufficiently fast for my usage whose battery life is more than 2-3 hours tops. Heck, I have several laptops and all of them die in one to one and a half hours. But of course, I never optimized for battery life, so who knows. So in my case, all of them are lying equally. I haven't even checked battery life for 15 years now. It's a useless metric for me, because all of them are shit.

But of course for people who don't need to use VMs, run several "micro"services at once, have constant internet transfer, and have 5+ IntelliJ projects open at the same time caching several million LOC while a gazillion web pages are open, maybe there is a difference; for me it doesn't matter whether it's one or one and a half hours.


You should try a MacBook Pro someday. It would still last all day with that workload. I had an XPS at work and it would last 1.5 hrs. My Apple laptop with the same workload lasts 6-8 hours easily. I never undocked the Dell because of the performance issues. I undock the Mac all the time because I can trust it to last.


I have a 2-year-old, high-spec MacBook Pro with less load than the GP and can rarely get > 3 hours out of it.


I'm curious, what do you do with it?


Nothing too crazy, I don't think. A bunch of standard Electron applications, a browser, a terminal - that's pretty much it. Sometimes Docker, but I always kill it when I'm done.


> IME Apple has always been the most honest when it makes performance claims.

In nearly every single release, their claims are well above actual performance.


Controlling the OS is probably a big help there. At least, I saw lots of complaints about my ZenBook model's battery not hitting the spec. It was easy to hit or exceed it in Linux, but you have to tell it not to randomly spin up the CPU.


I had to work my ass off on my Fujitsu Lifebook to get 90% of the estimate, even on Linux. I even worked on a kernel patch for the Transmeta CPU, based on unexploited settings in the CPU documentation, but it came to no or negligible difference in power draw, which I suppose is why Linus didn’t do it in the first place.


BTW I get 19 hours from Dell XPS and Latitude machines. It's Linux with a custom DE and Vim as the IDE though.


I get about 21 hours from mine, it's running Windows but powered off.


This is why Apple can be slightly more honest about their battery specs: they don't have the OS working against them. Unfortunately most Dell XPS machines will be running Windows, so it is still misleading to provide specs based on what the hardware could do if not sabotaged.


I wonder if it’s like webpages. The numbers are calculated before marketing adds the crapware and ruins all of your hard work.


can you share more details about your setup?


Arch Linux, mitigations (Spectre and the like) off, X11, Openbox, bmpanel with only a CPU/IO indicator. Light theme everywhere. Opera in power save mode. `powertop --auto-tune` and `echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo`. Current laptop is a Latitude 7390.
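For anyone wanting to replicate this, those two power tweaks as a minimal sketch (the script name is mine; the no_turbo path assumes an Intel CPU on the intel_pstate driver, and neither setting survives a reboot, so it has to be re-run or hooked into a boot-time service):

    #!/bin/sh
    # powersave.sh - sketch of the setup described above (name assumed)
    # Apply powertop's suggested tunables (runtime PM, SATA link power, etc.)
    sudo powertop --auto-tune
    # Pin the CPU to its base clock by disabling turbo (intel_pstate only)
    echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo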


Right, so you are disabling all performance features and effectively turning your CPU into a low-end, low-power SKU. Of course you'd get better battery life. It's not the same thing though.


> echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo

Isn't that going to torch performance? My i9-9900 has a base frequency of 3.6 GHz and a turbo of 5.0 GHz. Disabling the turbo would create a 28% drop in peak performance.

I suppose if everything else on the system is configured to use as little power as possible, then it won't even be noticed. But seeing as CPUs underclock when idle (I've seen my i9 go as low as 1.2 GHz), I'm not sure disabling turbo makes a significant impact except when your CPU is being pegged.
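If anyone wants to see the effect on their own machine, a quick sketch (intel_pstate paths, the same one used in the comment above; the 3.6 GHz figure is the i9-9900 base clock mentioned here):

    # 1 = turbo disabled, 0 = turbo allowed
    cat /sys/devices/system/cpu/intel_pstate/no_turbo
    # current per-core clocks; with no_turbo=1 these should stay at or
    # below the base frequency (3.6 GHz here) even under load
    grep MHz /proc/cpuinfo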


That's the point. I have no performance bottleneck with no_turbo. My i5 tends to turn on turbo mode and increase power demand (and heat) even when it's not needed. For example, with no_turbo the laptop is always cold and the fan basically stays silent. With turbo it easily gets 40°C warm while watching YT or doing my developer stuff, building Docker containers and so on.


I get 20 minutes from my Dell (not the XPS), with Vim. When it was brand-new, I got 40 minutes. A piece of hot garbage, with an energy-inefficient Intel CPU.


Frankly that sounds like you got a lemon. Even the most inefficient gaming laptops get over an hour under a full gaming workload.


> IME Apple has always been the most honest when it makes performance claims

Yes and no. They'll always be honest with the claim, but the scenario for the claimed improvement will always be chosen to make the claim as large as possible, sometimes with laughable results.

Typically something like "watch videos for 3x longer <small>when viewing 4k h265 video</small>" (which really means they added h265 hardware decoding that the previous gen's silicon lacked, since it could only handle h264).


> IME Apple has always been the most honest when it makes performance claims

That's just laughable, sorry. No one is particularly honest in marketing copy, but Apple is for sure one of the worst, historically. Even more so when you go back to the PPC days. I still remember Jobs on stage talking about how the G4 was the fastest CPU in the world when I knew damn well that it was half the speed of the P3 on my desk.


I worked in an engineering lab at the time of the G4 introduction and I can attest that the G4 was a very, very fast CPU for scientific workloads.

Confirmed here: https://computer.howstuffworks.com/question299.htm (and elsewhere.)

A year later I was doing bonkers (for the time) Photoshop work on very large compressed TIFF files, and my G4 laptop running at 400 MHz was more than 2x as fast as the PIIIs on my bench.

Was it faster all around? I don't know how to tell. Was Apple as honest as I am in this commentary about how it mattered what you were doing? No. Was it a CPU that was able to do some things very fast vs others? I know it was.


Funny you mention that machine; I still have one of those lying around. It was a very cool machine indeed with a very capable graphics card, but that's about it. It did some things better/faster than a Pentium III PC, but only if you went for a bottom-of-the-barrel unit and crippled the software support (no MMX, just like another reply mentioned).

On top of that, Intel increased frequency faster than Apple could handle. And after the release of the Pentium 4, the G4s became very noncompetitive so fast that one would question what could save Apple (later, down the road, it turned out to be Intel).

They tried to salvage it with the G5s, but those came with so many issues that even their dual-processor, water-cooled models were just not keeping up. I briefly owned one of those after repairing it for "free" using 3 supposedly dead units; the only thing worth a damn in it was the GPU. Extremely good hardware in many ways, but also very weak for so many things that it had to be used only for very specific tasks; otherwise a cheap Intel PC was much better.

Which is precisely why, right after that, they went with Intel, after years of subpar laptop performance because they were stuck on the G4 (and not even at high frequencies).

Now I know from your other comments that you are a very strong believer, and I'll admit that there were many reasons to use a Mac (software related), but please stop pretending they were performance competitive, because that's just bonkers. If they were, the Intel switch would never have happened in the first place...


It's just amazing that this kind of nonsense persists. There were no significant benchmarks, "scientific" or otherwise, at the time or since showing that kind of behavior. The G4 was a dud. Apple rushed out some apples/oranges comparisons at launch (the one you link appears to be the bit where they compared a SIMD-optimized tool on PPC to generic compiled C on x86, though I'm too lazy to try to dig out the specifics from stale links), and the reality distortion field did the rest.


While certainly misleading, there were situations where the G4 was incredibly fast for the time. I remember being able to edit video in iMovie on a 12" G4 laptop. At that time there was no equivalent x86 machine.


Have any examples from the past decade? Especially in the context of how exaggerated the claims are from PC and Android brands they are competing with?


Apple recently claimed that RAM in their MacBooks is equivalent to 2x the RAM in any other machine, in defense of the 8GB starting point.

In my experience, I can confirm that this is just not true. The secret is heavy reliance on swap. It's still the case that 1GB = 1GB.


Sure, and they were widely criticized for this. Again, the assertion I was responding to is that Apple does this "laughably" more than competitors.

Is an occasional statement that they get pushback on really worse than what other brands do?

As an example from a competitor, take a look at the recent firestorm over Intel’s outlandish anti-AMD marketing:

https://wccftech.com/intel-calls-out-amd-using-old-cores-in-...


> Sure, and they were widely criticized for this. Again, the assertion I was responding to is that Apple does this "laughably" more than competitors.

FWIW: the language upthread was that it was laughable to say Apple was the most honest. And I stand by that.


Fair point. Based on their first sentence, I mischaracterized how “laughable” was used.

Though the author also made clear in their second sentence that they think Apple is one of the worst when it comes to marketing claims, so I don’t think your characterization is totally accurate either.


Yeah, that was hilarious; my basic workload borders on the 8GB limit without even pushing it. They have fast swap, but nothing beats real RAM in the end, and considering their storage pricing is as stupid as their RAM pricing, it really makes no difference.

If you go for the base model, you are in for a bad time: 256GB with heavy swap and no dedicated GPU memory (making the 8GB even worse) is just plain stupid.

This is what the Apple fanboys don't seem to get: their base models at a somewhat affordable price are deeply incompetent, and if you start to load them up the pricing just does not make a lot of sense...


> If you go for the base model, you are in for a bad time: 256GB with heavy swap and no dedicated GPU memory (making the 8GB even worse) is just plain stupid ... their base models at a somewhat affordable price are deeply incompetent

I got the base model M1 Air a couple of years back and whilst I don't do much gaming I do do C#, Python, Go, Rails, local Postgres, and more. I also have a (new last year) Lenovo 13th gen i7 with 16GB RAM running Windows 11 and the performance with the same load is night and day - the M1 walks all over it whilst easily lasting 10hrs+.

Note that I'm not a fanboy; I run both by choice. Also both iPhone and Android.

The Windows laptop often gets sluggish and hot. The M1 never slows down and stays cold. There's just no comparison (though the Air keyboard remains poor).

I don't much care about the technical details, and I know 8GB isn't a lot. I care about the experience and the underspecced Mac wins.


I don't know about your Lenovo and how your particular workload is handled by Windows.

And I agree that in pure performance the Apple Silicon Macs will kill it; however, I am really skeptical that an 8GB model would give you a better experience overall. Faster for long compute operations, sure, but then you have to deal with all the small slowdowns from constant swapping. Unless you stick to a very small number of apps and tabs at the same time (which is rather limiting) I don't know how you do it. I don't want to call you a liar, but maybe you are emotionally attached (just like I am sometimes) and too close to the device to realize it, or maybe the various advantages of the Mac make you ignore the serious limitations that come with it.

Everyone has their own set of tradeoffs, but my argument is that if you can deal with an 8GB Apple Silicon device, you are very likely to be well served by a much cheaper device anyway (like half the price).


All I can say is I have both and I use both most days. In addition to work-issued Windows laptops, so I have a reasonable and very regular comparison. And the comparative experience is exactly as I described. Always. Every time.

> you have to deal with all the small slowdowns from constant swapping

That just doesn't happen. As I responded to another post, though, I don't do Docker or LLMs on the M1 otherwise you'd probably be right.

> Unless you stick to a very small number of apps and tabs at the same time

It's really common to have 50+ tabs open at once. And using Word is often accompanied by VS Code, Excel, Affinity Designer, DotNet, Python, and others due to the nature of what I'm doing. No slowdown.

> maybe you are emotionally attached

I am emotionally attached to the device. Though as a long-time Mac, Windows, and Linux user I'm neither blinkered nor tribal - the attachment is driven by the experience and not the other way around.

> maybe the various advantages of the Mac make you ignore the serious limitations that come with it

There are indeed limitations. 8GB is too small. The fact that for what I do it has no impact doesn't mean I don't see that.

> if you can deal with an 8GB Apple Silicon device, you are very likely to be well served by a much cheaper device anyway (like half the price)

I already have better Windows laptops than that, and I know that going for a Windows laptop at half the price of the entry-level Air would be nothing like as nice, because the more expensive ones already aren't (the Lenovo was dearer than the Air).

---

To conclude, you have to use the right tool for the job. If the nature of the task intrinsically needs lots of RAM then 8GB is not good enough. But when it is enough it runs rings around equivalent (and often 'better') Windows machines.


None of that seems to be high loads or stuff that needs a lot of RAM.


Not individually, no. Though it's often done simultaneously.

That said you're right about lots of RAM in that I wouldn't bother using the 8GB M1 Air for Docker or running LLMs (it can run SD for images though, but very slowly). Partly that's why I have the Lenovo. You need to pick the right machine for the job at hand.


You know that the RAM in these machines has more differences than similarities with "RAM" in a standard PC? Apple's SoC RAM is more or less part of the CPU/GPU package and is super fast. And for obvious reasons it cannot be added to.

Anyway, I manage a few M1 and M3 machines with 256/8 configs and they all run just as fast as 16GB and 32GB machines EXCEPT for workloads that need more than 8GB for a process (virtualization) or workloads that need lots of video memory (Lightroom can KILL an 8GB machine that isn't doing anything else...)

The "8GB is stupid" discussion isn't "wrong" in the general case, but it is wrong for maybe 80% of users.


> EXCEPT for workloads that need more than 8GB for a process

Isn't that exactly the upthread contention: Apple's magic compressed swap management is still swap management that replaces O(1) fast(-ish) DRAM access with thousands+ cycle page decompression operations. It may be faster than storage, but it's still extremely slow relative to a DRAM fetch. And once your working set gets beyond your available RAM you start thrashing just like VAXen did on 4BSD.
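Back-of-envelope, with rough figures that are my own assumptions rather than measurements, just to size the gap being described:

    DRAM fetch:                              ~100 ns   (~300 cycles at 3 GHz)
    Decompressing a 16 KB page at ~4 GB/s:   ~4 µs     (~12,000 cycles)
    NVMe random read:                        ~100 µs

So compressed swap lands roughly an order of magnitude or two closer to DRAM than disk-backed swap, but it is still nowhere near a plain memory access, which is the point being made.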


Exactly! Load a 4GB file and welcome the spinning beach ball any time you need to context switch to another app. I don't know how they don't realize that, because it's not really hard to get there. But when I was enamored with Apple stuff in my formative years, I would gladly ignore that or brush it off, so I can see where they're coming from, I guess.


It's not as different as the marketing would like you to think. In fact, for the low-end models even the bandwidth/speed isn't as big of a deal as they make it out to be, especially considering that the bandwidth has to be shared with the GPU.

And if you go up in specs, the bandwidth of Apple Silicon has to be compared to the bandwidth of a combo with a dedicated GPU. The bandwidth of dedicated GPUs is very high, and usually higher than what Apple Silicon gives you if you consider the RAM bandwidth for the CPU.

It's a bit more complicated but that's marketing for you. When it comes to speed Apple RAM isn't faster than what can be found in high-end laptops (or desktops for that matter).


There is also memory compression and their insane swap speed due to SoC memory and the SSD.


Every modern operating system now does memory compression


Some of them do it better than others though.


Apple uses Magic Compression.


Not sure what Windows does, but the popular method on e.g. Fedora is to split memory into main and swap and then compress the swap. It could be more efficient the way Apple does it, by not having to partition main memory.
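For the curious, a rough sketch of that setup done by hand on Linux (Fedora normally configures this automatically via zram-generator; the 4G size and zstd compressor below are arbitrary choices of mine, not anything from the thread):

    sudo modprobe zram
    # pick a compressor, then carve out a compressed, RAM-backed swap device
    echo zstd | sudo tee /sys/block/zram0/comp_algorithm
    echo 4G | sudo tee /sys/block/zram0/disksize
    sudo mkswap /dev/zram0
    sudo swapon -p 100 /dev/zram0   # prefer it over any disk-backed swap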


This is a revolution


Citation needed?


Don't know if I'm allowed to. It's not that special though.


> The secret is heavy reliance on swap

You are entirely (100%) wrong, but, sadly, NDA...


I do admit the "reliance on swap" thing is speculation on my part :)

My experience is that I can still tell when the OS is unhappy when I demand more RAM than it can give. macOS is still relatively responsive around this range, which I just attributed to super fast swapping. (I'd assume memory compression too, but I usually run into this trouble when working with large amounts of poorly-compressible data.)

In either case, I know it's frustrating when someone is confidently wrong but you can't properly correct them, so you have my apologies


Memory compression isn't magic and isn't exclusive to macOS.


I suggest you go and look at HOW it is done in Apple Silicon Macs, and then think long and hard about why this might make a huge difference. Maybe the Asahi Linux guys can explain it to you ;)


I understand that it can make a difference to performance (which is already baked into the benchmarks we look at), I don't see how it can make a difference to compression ratios, if anything in similar implementations (ex: console APUs) it tends to lead to worse compression ratios.

If there's any publicly available data to the contrary I'd love to read it. Anecdotally I haven't seen a significant difference between zswap on Linux and macOS memory compression in terms of compression ratios, and on the workloads I've tested zswap tends to be faster than no memory compression on x86 for many-core machines.
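For anyone who wants to eyeball the ratios themselves, a sketch of where the counters live (the zswap stats need root and debugfs mounted; the macOS numbers are the compressor lines reported by vm_stat):

    # Linux zswap: ratio ~= stored_pages * page_size / pool_total_size
    sudo grep . /sys/kernel/debug/zswap/stored_pages \
                /sys/kernel/debug/zswap/pool_total_size
    # macOS: compare pages stored in vs. occupied by the compressor
    vm_stat | grep -i compressor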


How convenient :)


Regardless of what you can't tell, he's absolutely right regarding Apple's claims: saying that an 8GB Mac is as good as a 16GB non-Mac is laughable.


My entry-level 8GB M1 MacBook Air beats my 64GB 10-core Intel iMac in my day-to-day dev work.


That was never said. They said an 8GB Mac is similar to a 16GB non-Mac.


If someone is claiming “‹foo› has always ‹barred›”, then I don't think it's fair to demand a 10 year cutoff on counter-evidence.


For “always” to be true, the behavior needs to extend to the present date. Otherwise, it’s only true to say “used to”.


Clearly it isn’t the case that Apple has always been more honest than their competition, because there were some years before Apple was founded.


Interesting, by what benchmark did you compare the G4 and the P3?

I don't have a horse in this race, Jobs lied or bent the truth all the time so it wouldn't surprise me, I'm just curious.


I remember that Apple used to wave around these SIMD benchmarks showing their PowerPC chips trouncing Intel chips. In the fine print, you'd see that the benchmark was built to use AltiVec on PowerPC, but without MMX or SSE on Intel.


Ah so the way Intel advertises their chips. Got it.


Yeah, and we rightfully criticize Intel for the same and we distrust their benchmarks


You can claim Apple is dishonest for a few reasons.

1) Graphs are often unannotated.

2) Comparisons are rarely against the latest-generation products (their argument has been that they do not expect people to upgrade yearly, so it's showing the difference across their intended upgrade path).

3) They have conflated performance with performance per watt.

However, when it comes to battery life, performance (for a task), or the specification of their components (screens, ability to use external displays up to 6k, port speed, etc.) there are almost no hidden gotchas and they have tended to be trustworthy.

The first wave of M1 announcements were met with similar suspicion as you have shown here; but it was swiftly dispelled once people actually got their hands on them.

*EDIT:* Blaming a guy who's been dead for 13 years for something he said 50 years ago, and primarily, it seems, for internal use, is weird. I had to look up the context, but it seems it was more about internal motivation in the 70s than anything relating to today, especially when referring to concrete claims.


"This thing is incredible," Jobs said. "It's the first supercomputer on a chip.... We think it's going to set the industry on fire."

"The G4 chip is nearly three times faster than the fastest Pentium III"

- Steve Jobs (1999) [1]

[1] https://www.wired.com/1999/08/lavish-debut-for-apples-g4/


That's cool, but literally last millennium.

And again, the guy has been dead for the better part of this millennium.

What have they shown of any product currently on the market, especially when backed with any concrete claim, that has been proven untrue?

EDIT: After reading your article and this one: https://lowendmac.com/2006/twice-as-fast-did-apple-lie-or-ju... it looks like it was true in floating point workloads.


The G4 was a really good chip if you used Photoshop. It took Intel a while to catch up.


If you have to go back 20+ years for an example…


Apple marketed their PPC systems as "a supercomputer on your desk", but it was nowhere near the performance of a supercomputer of that age. Maybe similar performance to a supercomputer from the 1970s, but that was their marketing angle in the 1990s.


From https://512pixels.net/2013/07/power-mac-g4/: the ad was based on the fact that Apple was forbidden to export the G4 to many countries due to its “supercomputer” classification by the US government.


It seems that the US government was buying too much into tech hype at the turn of the millennium. Around the same period PS2 exports were also restricted [1].

[1] https://www.latimes.com/archives/la-xpm-2000-apr-17-fi-20482...


The PS2 was used in supercomputing clusters.


Blaming a company TODAY for marketing from the 1990s is crazy.


Except they still do the same kind of bullshit marketing today.


> Apple marketed their PPC systems as "a supercomputer on your desk"

It's certainly fair to say that twenty years ago Apple was marketing some of its PPC systems as "the first supercomputer on a chip"[^1].

> but it was nowhere near the performance of a supercomputer of that age.

That was not the claim. Apple did not argue that the G4's performance was commensurate with the state of the art in supercomputing. (If you'll forgive me: like, fucking obviously? The entire reason they made the claim is precisely because the latest room-sized supercomputers with leapfrog performance gains were in the news very often.)

The claim was that the G4 was capable of sustained gigaflop performance, and therefore met the narrow technical definition of a supercomputer.

You'll see in the aforelinked marketing page that Apple compared the G4 chip to UC Irvine’s Aeneas Project, which in ~2000 was delivering 1.9 gigaflop performance.

This chart[^2] shows the trailing average of various subsets of supercomputers, for context.

This narrow definition is also why the machine could not be exported to many countries, which Apple leaned into.[^3]

> Maybe similar performance to a supercomputer from the 1970's

What am I missing here? Picking perhaps the most famous supercomputer of the mid-1970s, the Cray-1,[^4] we can see performance of 160 MFLOPS, which is 160 million floating point operations per second (with an 80 MHz processor!).

The G4 was capable of delivering ~1 GFLOP performance, which is a billion floating point operations per second.

Are you perhaps thinking of a different decade?

[^1]: https://web.archive.org/web/20000510163142/http://www.apple....

[^2]: https://en.wikipedia.org/wiki/History_of_supercomputing#/med...

[^3]: https://web.archive.org/web/20020418022430/https://www.cnn.c...

[^4]: https://en.wikipedia.org/wiki/Cray-1#Performance


>That was not the claim. Apple did not argue that the G4's performance was commensurate with the state of the art in supercomputing.

This is marketing we're talking about, people see "supercomputer on a chip" and they get hyped up by it. Apple was 100% using the "supercomputer" claim to make their luddite audience think they had a performance advantage, which they did not.

> The entire reason they made the claim is

The reason they marketed it that way was to get people to part with their money. Full stop.

In the first link you added, there's a photo of a Cray supercomputer, which makes the viewer equate Apple = Supercomputer = I am a computing god if I buy this product. Apple's marketing has always been a bit shady that way.

And soon after that period Apple jumped off the PPC architecture and onto the x86 bandwagon. Gimmicks like "supercomputer on a chip" don't last long when the competition is far ahead.


I can't believe Apple is marketing their products in a way to get people to part with their money.

If I had some pearls I would be clutching them right now.


> This is marketing we're talking about, people see "supercomputer on a chip" and they get hyped up by it.

That is also not in dispute. I am disputing your specific claim that Apple somehow suggested that the G4 was of commensurate performance to a modern supercomputer, which does not seem to be true.

> Apple was 100% using the "supercomputer" claim to make their luddite audience think they had a performance advantage, which they did not.

This is why context is important (and why I'd appreciate clarity on whether you genuinely believe a supercomputer from the 1970s was anywhere near as powerful as a G4).

In the late twentieth and early twenty-first century, megapixels were a proxy for camera quality, and megahertz were a proxy for processor performance. More MHz = more capable processor.

This created a problem for Apple, because the G4's SPECfp_95 (floating point) benchmarks crushed Pentium III at lower clock speeds.

    PPC G4 500 MHz      - 22.6
    PPC G4 450 MHz      - 20.4
    PPC G4 400 MHz      - 18.36
    Pentium III 600 MHz - 15.9

For both floating point and integer benchmarks, the G3 and G4 outgunned comparable Pentium II/III processors.

You can question how this translates to real world use cases – the Photoshop filters on stage were real, but others have pointed out in this thread that it wasn't an apples-to-apples comparison vs. Wintel – but it is inarguable that the G4 had some performance advantages over Pentium at launch, and that it met the (inane) definition of a supercomputer.

> The reason they marketed it that way was to get people to part with their money. Full stop.

Yes, marketing exists to convince people to buy one product over another. That's why companies do marketing. IMO that's a self-evidently inane thing to say in a nested discussion of microprocessor architecture on a technical forum – especially when your interlocutor is establishing the historical context you may be unaware of (judging by your comment about supercomputers from the 1970s, which I am surprised you have not addressed).

I didn't say "The reason Apple markets its computers," I said "The entire reason they made the claim [about supercomputer performance]…"

Both of us appear to know that companies do marketing, but only you appear to be confused about the specific claims Apple made – given that you proactively raised them, and got them wrong – and the historical backdrop against which they were made.

> In the first link you added, there's a photo of a Cray supercomputer

That's right. It looks like a stylized rendering of a Cray-1 to me – what do you think?

> which makes the viewer equate Apple = Supercomputer = I am a computing god if I buy this product

The Cray-1's compute, as measured in GFLOPS, was approximately 6.5x lower than the G4 processor.

I'm therefore not sure what your argument is: you started by claiming that Apple deliberately suggested that the G4 had comparable performance to a modern supercomputer. That isn't the case, and the page you're referring to contains imagery of a much less performant supercomputer, as well as a lot of information relating to the history of supercomputers (and a link to a Forbes article).

> Apple's marketing has always been a bit shady that way.

All companies make tradeoffs they think are right for their shareholders and customers. They accentuate the positives in marketing and gloss over the drawbacks.

Note, too, that Adobe's CEO has been duped on the page you link to. Despite your emphatic claim:

> Apple was 100% using the "supercomputer" claim to make their luddite audience think they had a performance advantage, which they did not.

The CEO of Adobe is quoted as saying:

> “Currently, the G4 is significantly faster than any platform we’ve seen running Photoshop 5.5,” said John E. Warnock, chairman and CEO of Adobe.

How is what you are doing materially different to what you accuse Apple of doing?

> And soon after that period Apple jumped off the PPC architecture and onto the x86 bandwagon.

They did so when Intel's roadmap introduced Core Duo, which was significantly more energy-efficient than Pentium 4. I don't have benchmarks to hand, but I suspect that a PowerBook G5 would have given the Core Duo a run for its money (despite the G5 being significantly older), but only for about fifteen seconds before thermal throttling and draining the battery entirely in minutes.


My iBook G4 was absolutely crushed by my friends' Wintel laptops that they bought for half as much. Granted, it was more portable and had somewhat better battery life (it needed it, given how much longer everything took), but really, performance was not a good reason to go with Apple hardware, and that still holds true as far as I'm concerned.


The G4 was 1999, Core Duo was 2006; 7 years isn't bad.


That is a long time – bet it felt even longer to the poor PowerBook DRI at Apple who had to keep explaining to Steve Jobs why a G5 PowerBook wasn't viable!


Ya, I really wanted a G5 but power and thermals weren’t going to work and IBM/Moto weren’t interested in making a mobile version.


Indeed. Have we already forgotten about the RDF?


No, it was just always a meaningless term...


It was simply a phrase to acknowledge that Jobs was better at giving demos than anyone who ever lived.


Didn’t he have to use two PPC procs to get the equivalent perf you’d get on a P3?

Just add them up, it’s the same number of Hertz!

But Steve that’s two procs vs one!

I think this is when Adobe was optimizing for Windows/Intel and was single-threaded, but Steve put out some graphs showing better perf on the Mac.


> IME Apple has always been the most honest when it makes performance claims.

I guess you weren't around during the PowerPC days... Because that's a laughable statement.


I have no idea who's downvoting you. They were lying through their teeth about CPU performance back then.

A PC half the price was smoking their top of the line stuff.


It's funny you say that, because this is precisely the time I started buying Macs (I was gifted a Pismo PowerBook G3 and then bought an iBook G4). And my experience was that, for sure, if you put as much money into a PC as into a Mac you would get MUCH better performance.

What made it worth it at the time (I felt) was the software. Today I really don't think so; software has improved overall in the industry and there are not a lot of things "Mac specific" enough to make it a clear-cut choice.

As for the performance, I can't believe all the Apple Silicon hype. Sure, it gets good battery life provided you use strictly Apple software (or software heavily optimized for it), but in mixed-workload situations it's not that impressive.

Using the M2 MacBook Pro of a friend, I figured I could get maybe 4-5 hours out of it in a best-case scenario, which is better than the 2-3 hours you would get from a PC laptop but also not that great considering the price difference.

And when it comes to performance it is extremely unequal and very lackluster for many things. Like there is more lag launching Activity Monitor on a $2K+ MacBook Pro than launching Task Manager on a $500 PC. This is a small, somewhat stupid example, but it does tell the overall story.

They talk a big game but in reality, their stuff isn't that performant in the real world.

And they still market gaming when one of their $2K laptops plays Dota 2 (a very old, relatively resource-efficient game) worse than a cheapo PC.


> Using the M2 MacBook Pro of a friend, I figured I could get maybe 4-5 hours out of it in a best-case scenario, which is better than the 2-3 hours you would get from a PC laptop but also not that great considering the price difference.

Any Electron apps on it?


Yes, but I stopped caring about Electron apps some time ago. You can't just drop or ignore useful software to satisfy Apple marketing. Just like you can't just ignore Chrome for Safari to satisfy the battery life claims, because Chrome is much more useful and better at quite a few things.

I went the way of only Apple and Apple-optimized software for quite a while, but I just can't be bothered anymore, considering the price of the hardware and nowadays the price of subscription software.

And this is exactly my argument: if you use the hardware in a very specific way, you get there, but it is very limiting, annoying, and unacceptable considering the pricing.

It's like saying that a small city car gets better gas mileage when what one actually needs is a capable truck. It's not strictly wrong, but also not very helpful.

I think the Apple Silicon laptops are very nice if you can work within the limitations, but the moment you start pushing on those you realize they are not really worth the money. Just like the new iPad Pro they released: completely awesome hardware, but how many people can actually work within the limitations of iPadOS to make the price not look like a complete ripoff? Very few, I would argue.


Or VMs. They should be getting way better than that out of it.


Apple switched to Intel chips 20 years ago. Who fucking cares about PowerPC?

Today, Apple Silicon is smoking all but the top end Intel chips, while using a fraction of the power.


Oh, those megahertz myths! Their marketing department is pretty amazing at spin control. This one was right up there with the "it's not a bug; it's a feature" type of spin.


All I remember is tanks in the commercials.

We need more tanks in commercials.


Before macOS became NeXTSTEP-based, Apple was practically a different company. I've been using Apple hardware for 21 years, since they got a real operating system. Even the G4 did better than the laptop it replaced.


Apple is always honest but they know how to make you believe something that isn’t true.


Yeah, the assumption seems to be that one component using less battery means the power will just magically go unused. As with everything else in life, as soon as something stops using a resource, something else fills the vacuum to take advantage of it.



