Dissecting the Apple M1 GPU, Part II (rosenzweig.io)
574 points by dddddaviddddd on Jan 22, 2021 | 188 comments



Everyone, remember that in 2019 Alyssa was in _High School_. Where do people get this knowledge? It's astounding.

She was literally reverse-engineering Mali for Panfrost starting in her Sophomore year.


Truly impressive work.

> Where do people get this knowledge?

Time, hands-on experimentation, and focus.

Anecdotally, reverse engineering and low-level hacking felt more popular back in the 90s and early 2000s. Back then, there were fewer distractions to soak up the free time of young tech enthusiasts. Old IRC chatrooms feel like a trickle of distraction relative to the firehose of Twitter, Reddit, 24/7 news cycles, and everything else competing for attention today.

A common complaint among the younger engineers I mentor is that they feel like they never have enough time. It's strange to watch a young, single person without kids and with a cushy 9-5 (or often 11am to 5pm at modern flex/remote companies) job complain about lack of free time. Digging deeper, it's usually because they're so tapped into social media, news, video games, and other digital distractions that their free time evaporates before they even think about planning their days out.

It's amazing what one can accomplish by simply sitting down and focusing on a problem. You may not reach the levels of someone like Alyssa, but you'll get much farther than you might expect. And most importantly, you probably won't miss the media firehose.


There is a world of difference in free time between "having a 9-5 job" and "being in high school", though.

In high school, I was lucky enough to not have to do laundry, feed myself, or do a variety of other tasks and chores that come along with adulthood. I also could sleep like crap and make it up during school the next day. Not to mention that the time I spent at school wasn't spent programming, so when I got home it wasn't more of the same, it was exciting. I could sit at the computer programming from 3pm to 2am.


>There is a world of difference in free time between "having a 9-5 job" and "being in high school", though.

Funny how this works. In high school I had a ton of free time. In college, not a lot of free time and very stressed. Entering the workforce, again a lot of free time and very little stress. I expect free time will disappear if children ever come into play.


> I expect free time will disappear if children ever come into play

That would very much depend on how you choose to live.

As for me, living in Thailand and being a father of one daughter, I need to earn about 1,500 USD per month for our monthly needs, which is easily achievable for me with less than a week of work. All extra money I earn above basic needs, I invest.

My current client sometimes has work for me 40 hours a week, sometimes only 40 hours a month. As such, I have plenty of time to dedicate to my own hobbies.

But this is only achievable because I left The Netherlands and decided to work remotely. And I do understand these kinds of choices are not possible for everyone (some people would find it very hard to live in another country, far away from family and friends, for example).

In our field, and by making suitable choices, it should be possible to have plenty of time for your own hobbies and projects.


I wouldn’t have a music career right now if I hadn’t relocated to Mexico in 2017

Wrote about it here https://link.medium.com/BAGPsTlChdb


I’d like to learn more about your journey... do you have a blog or anywhere you’ve written more about your experiences?


When I was in high school in 2004, I would fantasize about getting done with my work early and then catching the teacher on a good day where she let me get on the class computer so I could go to online forums and read more about programming. I sometimes wonder how far I would have been ahead had I had a smartphone in my pocket when I was in school like kids have today.

It doesn't surprise me that some kids capitalize on it instead of just using it to watch Twitch in class.


Yes, as a high school student who just fools around with their computer during online class (Linux, Gentoo, compiling and all that), it cannot be overstated how many of my classmates play video games, watch YouTube, and watch Twitch. Those really are the big 3. I personally had to overcome a YouTube and video game addiction to get to a point where I was spending my free time in a mindful way, so I can definitely relate to everyone else.

YouTube hadn't really taken off until we were in middle school to early high school, so I can't imagine what internet addictions may be like in the future, seeing as quite a few toddlers today grow up watching it. I suppose it's not too different from television, however.


Is doing laundry hard? I put it in the washing machine twice a week, hang the washed clothes on the line, then take them in when dry. I like to cook, and have done since my teens; before I can even remember I was tasked with food prep and washing dishes after meals. I clean the house once per week and find some enjoyment in parts of it. I don't view these as chores, just simple things. For sixth form and university: get to bed by 11, get to school by 8; I was not a super fan of classes but appreciative of them, so I sat in the school/uni library with books.


It isn't that it is hard, it's that it is a thing that has to be done, and so your brain cycles + interruptions are dedicated to these things.

Cooking is generally taken care of by parents, which means ancillary things like grocery shopping are too. You don't even have to dedicate brainwidth to "what groceries do we need to buy?". There are so many tiny interruptions and so much bureaucratic overhead keeping the brain slightly stressed as an adult that simply don't exist as a child; it is marvelous.


I think there's more to it than "if only you'd get off your butt and throw away that phone".

Knowing what to work on, what trends to be aware of, and where to present your stuff is something a modern elite college student is intimately familiar with.

Other teens and young adults, not so much. I was one such myself, as were my friends.


For me, I have more than enough time, especially when working from home. It's just that after work I feel like a brain-dead zombie, and programming is the last thing I want to do.

In the time before I started work and after finishing school I used to do so much open source stuff in the time that I now spend working.


The issue you describe, while intuitive and obvious to those of us who were around for the transition period, is easy to miss otherwise. Keep in mind that youngsters nowadays may not have much, if any, experience with the before-times, or know someone reflective enough and willing to point it out to them. Many just assume we all mastered it somehow.

There is also the possibility that they might be right. Before, slow communication and the long lag inherent in accessing data on remote systems meant change propagated at a trickle. Now, change can happen and propagate across the world at breakneck speed, which means that if one is to have an effect, one must be there/be aware.

This leaves precious little time for following the white rabbit. I think everybody could do with a little somber reflection on just the impact that rapid information propagation has had on the world.


Technology is moving faster nowadays.

I picked up JavaScript (or actually JScript) in 2001, when IE6 was this new hot thing. Back then you could cover most of the material on front-end in a week or two and not have to learn anything new for at least a year.

Fast forward 20 years and browsers have major releases every few weeks, while one of the most stable frameworks available - Angular - has one every six months.


As one of the older folks here on HN, I fondly remember the time when almost everything had to be done by reverse-engineering. Some friends of mine and I were the first people to reverse engineer the Commodore 4040/8050 disk drive "computers". We had to write all of the software to do this, including custom disassemblers that eventually spat out printouts that we annotated first by hand and later transcribed into source code files.

There were no memory maps, and our knowledge of electronics was too rudimentary to figure out much by looking at the traces on the circuit boards. There were no places for folks to discuss these ideas - bulletin boards of the day were one-user-at-a-time things, which meant that topics like this were way too niche to garner any discussion at all.

I was fortunate enough to grow up in Toronto, which was a hot-bed of Commodore hackers, and we had a vibrant community that could gather to meet face-to-face to discuss ideas. But in between meetups we had to sit and think harder until we figured it out.

It was a wonderful time to learn about computing from first principles without the distractions that exist today.


It is commendable work for someone of any age.

But I think we undersell the abilities of college-aged adults in modern times. Watch some top violinists or gymnasts at that age, or just someone who had a couple of kids as a teen and held it together.

The young have unbounded ability; it is only wisdom and experience that they may lack.


At the age of 10 I was coding for the Timex 2068, using books like these ones,

https://www.atariarchives.org/

Yes, they are for the Atari, just trying to make a point here.

So any smart kid in the age of the Internet, instead of having only what is available at the local library, is quite capable of building up the skills to achieve this kind of work while at the high school level.


Strongly disagree. There is a big difference between doing some interesting hacking at a young age and getting your GPU driver included in the mainline Linux kernel. The author appears to be a true CS prodigy.


On 8- and 16-bit home computers you wrote your own graphics driver, in assembly, if you wanted to draw anything beyond the toy BASIC graphics commands.

I guess anyone doing games was a prodigy back then.


Yes, perhaps, and at that time it was also a much simpler proposition to do that.

I would imagine at that time there were also many fewer kids at a high-school level doing that kind of hacking, so it might not be totally wrong to say they were all prodigies.


I guess the C64, Amiga, and Atari didn't have programmable graphics hardware after all.


The Atari did, at least in the technical sense, though not in the sense you mean it.

The Ataris had a display processor called ANTIC that you supplied with a display list, a set of tokens that defined which ANTIC mode was next as the electron beam proceeded down the screen. ANTIC would halt the CPU and take over the bus in order to transfer data from main memory to use for display purposes, be it character or bitmap data.

You could choose from a variety of modes (15 in all, IIRC) and add horizontal or vertical smooth scrolling to each mode-line individually by toggling a bit in the display-list entry.

One further bit allowed you to set a DLI (display-list interrupt), where the CPU would be interrupted and run your code at the end of that specific display line on-screen - you had the flyback interval to effect something (not a lot, but you could change colour registers or character sets, and do a few register operations, basically).

So it was minimally programmable, and certainly not comparable to today's stuff - but it was groundbreaking in its day.
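
For the curious, a display list was just a handful of bytes sitting in RAM. Here's a rough sketch (in C, for readability) of the classic full-screen text-mode list - the address bytes are placeholders rather than real values, and the details are from memory, so treat it as illustrative:

    /* Rough sketch of an ANTIC display list for the standard 40x24 text
     * screen (ANTIC mode 2, BASIC's GRAPHICS 0). The address bytes are
     * placeholders for wherever screen memory and the list itself live. */
    #include <stdint.h>
    #include <stdio.h>

    #define SCREEN_LO 0x00   /* placeholder: screen memory, low byte   */
    #define SCREEN_HI 0x9C   /* placeholder: screen memory, high byte  */
    #define DLIST_LO  0x00   /* placeholder: this list's own address   */
    #define DLIST_HI  0x9E

    static const uint8_t display_list[] = {
        0x70, 0x70, 0x70,           /* 3 x "8 blank scan lines" past overscan */
        0x42, SCREEN_LO, SCREEN_HI, /* mode 2 line + LMS (load memory scan)   */
        0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
        0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
        0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,  /* 23 more mode 2 lines    */
        0x41, DLIST_LO, DLIST_HI    /* JVB: jump to the top, wait for VBLANK  */
    };
    /* Setting bit 7 (0x80) on a mode line requests a display-list interrupt;
     * bits 4/5 (0x10/0x20) enable horizontal/vertical fine scrolling. */

    int main(void) {
        printf("display list is %zu bytes\n", sizeof display_list); /* 32 */
        return 0;
    }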


Yep, now imagine a 12 year old kid going at it in 68000 assembly, and showing their demos to other kids in the gang.


The Amiga did to some extent: you could upload programs to the Copper.


I know; I was being sarcastic, following the tone of the previous answers. :)


High school can be a time when you have lots of spare time for learning what you want. I remember spending one year, and almost all of the summer break, reverse engineering and writing binary patches for the SL45i phone. There was a huge community around it at the time, so you just joined a forum and started asking questions.


Started early. This is a blog post from 2016:

https://rosenzweig.io/blog/solving-bit-magic.html

I'm jealous of people who started programming as a kid.


It's easy to think this sort of thing requires an incredible level of knowledge, but it's often more about dedication and a little lateral thinking. You need a broad understanding of the process of putting something on screen, the ability to construct a program that does something, some way to observe the result, and the willingness to experiment.

Back in the 80s lots of kids that age wrote computer games and often found novel ways to use the hardware because even though video controllers might not be fully documented you still knew where their control registers were in memory and could just try altering the values in them.

Now, modern graphics hardware is more complex and controlled by things like command buffers and shaders, but you can apply similar ideas, and you’ve probably got easier tools for examining the state of your process. Write a program that does a simple thing and dump its state. Do a slightly different thing and compare the states. When you think you understand things either change your program and see if you’re correct or use a debugger to alter the state at runtime and see if you can change the visible result.
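
To make that concrete, here's a rough sketch of the diffing half of that loop - nothing GPU- or M1-specific, just a byte-by-byte comparison of two raw dumps captured from two runs of a test program that differ in exactly one thing (the file names are made up):

    /* Hypothetical helper: diff two raw dumps captured from two nearly
     * identical runs (e.g. only the clear colour changed). Whatever
     * offsets differ are the bits of state your change actually touched. */
    #include <stdio.h>

    int main(int argc, char **argv) {
        if (argc != 3) {
            fprintf(stderr, "usage: %s dump_a.bin dump_b.bin\n", argv[0]);
            return 1;
        }
        FILE *a = fopen(argv[1], "rb"), *b = fopen(argv[2], "rb");
        if (!a || !b) { perror("fopen"); return 1; }

        long off = 0;
        int ca, cb;
        while ((ca = fgetc(a)) != EOF && (cb = fgetc(b)) != EOF) {
            if (ca != cb)
                printf("0x%08lx: %02x -> %02x\n", off, ca, cb);
            off++;
        }
        fclose(a);
        fclose(b);
        return 0;
    }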


The M1 is actually especially forgiving about this, because it's good about isolating errors to your process, i.e. if you do something wrong only your test program will crash. For many GPUs, if you feed them garbage the whole machine panics.


Where did you get your knowledge? With places like HackerNews along with folks like yourself and myself writing posts about this kind of stuff it becomes trivial to walk down a rabbit hole for anyone of any age. Think back to how much you accomplished when you were younger because you were walking in the footsteps of others. :)


A magical thing about high school / college that people fail to appreciate is how few actual responsibilities you have at that age.

It's why kids are able to devote so many hours to video games and other time waster stuff.

Only difference is this person managed due to luck / skill / timing to end up on a more productive path.

Time commitment is the big impediment for most open-source projects. Late adolescence / young adulthood is one of the few periods of life where you both are competent enough to do useful things and have enough free time to actually commit to that work.


> anyone of any age

and intelligence and education and intellectual curiosity and non-working time and energy/stamina ...

That Venn diagram ends up with a pretty small number of humans ...


> With [...] folks like yourself and myself writing posts about this kind of stuff

Please could you link to some of the reverse engineering writing by you guys to which you're referring?


When I was 14 I discovered that my computer's CPU (a 186) had two on-chip programmable timers. One was used for DRAM refresh, so I used the other to get fine-grained timing so I could bit-bang the speaker output to get crude PWM sound.

Of course back then, people used to document the components they put in computers...


When I was 5, my grandmother the edusoft programmer let me learn machine-code programming from one of her books.


While this is a long way from being a proper graphics driver, it is a good indication that we're going to see a functional driver for Linux before too long. I was concerned this bit would take years to work out; this suggests it'll be months to get a working driver.

Chuffed.


I know enough about GPU to know how interesting this is, but not enough to fully get it.

I wish there were some "Dissecting the Apple M1 GPU for Dummies".


I imagine the "for graphics engineers" version has to get figured out before they can do the "for Dummies" version.


The published code on Github looks very nice! It is easy to read and understand.

Kudos to Alyssa!


Amazing!! This is progressing literally 10 times faster than I expected.


Indeed. That single triangle likely is way closer to a million triangles than to zero triangles. https://rampantgames.com/blog/?p=7745:

“Afterwards, we came to refer to certain types of accomplishments as “black triangles.” These are important accomplishments that take a lot of effort to achieve, but upon completion you don’t have much to show for it only that more work can now proceed. It takes someone who really knows the guts of what you are doing to appreciate a black triangle.”


Roughly equidistant milestones for ports: 1. It crashes 2. Black screen 3. Any drawing at all 4. The rest of the friggin' engine


I remember when the DXVK prototype was reported to be able to render a single triangle. Cool, but far away from completion. A few months later it was running The Witcher 3.


This might be a stupid question, but if someone wants to start with some basic GPU to get an understanding of how these things work, which GPU would that be - one that's widespread and not too archaic?


Intel's GPUs are fairly well-documented at https://01.org/linuxgraphics/documentation/hardware-specific...

IMO, writing some Vulkan (it being a thin layer over what a modern GPU is "actually" doing) would be good to get the fundamentals down first, not least because you'll have the validation layers to yell at you for incorrect usage instead of getting mysterious GPU behavior.
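
For a sense of what that looks like, here's a rough sketch in C - it assumes the Vulkan SDK is installed so that VK_LAYER_KHRONOS_validation is actually present at runtime. You request the layer at instance creation, and from then on incorrect usage gets reported instead of silently producing weird GPU behavior:

    /* Minimal sketch: create a Vulkan instance with the standard validation
     * layer enabled. Assumes the Khronos/LunarG SDK is installed so that
     * VK_LAYER_KHRONOS_validation is available. */
    #include <vulkan/vulkan.h>
    #include <stdio.h>

    int main(void) {
        const char *layers[] = { "VK_LAYER_KHRONOS_validation" };

        VkApplicationInfo app = { 0 };
        app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
        app.pApplicationName = "triangle-probe";   /* arbitrary name */
        app.apiVersion = VK_API_VERSION_1_1;

        VkInstanceCreateInfo info = { 0 };
        info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        info.pApplicationInfo = &app;
        info.enabledLayerCount = 1;
        info.ppEnabledLayerNames = layers;

        VkInstance instance;
        if (vkCreateInstance(&info, NULL, &instance) != VK_SUCCESS) {
            fprintf(stderr, "vkCreateInstance failed (is the SDK installed?)\n");
            return 1;
        }
        /* ...create a device, pipeline, etc.; misuse from here on produces
         * validation messages instead of mysterious GPU behaviour... */
        vkDestroyInstance(instance, NULL);
        return 0;
    }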


This blog series is quite good: https://hoj-senna.github.io/ashen-aetna/


Anything that supports shaders is fine, but M1 is a mobile-style GPU so it doesn’t behave exactly like a desktop GPU.


question from ignorance - Why not use Metal? Is that too embedded in the macOS system to be useful for something like Linux? Or is this for the sake of understanding the bare metal?


If you used Metal as the graphics API on Linux, literally no existing Linux software would work with it unless you also used a layer like MoltenGL or MoltenVK (which have been written for a Mac system and would likely need modification). Linux graphics drivers also tend to have extra APIs for buffer management for X11/Wayland, which a Molten compat layer probably doesn't provide, as Molten is meant to run in-process with each app, I believe.

Some of the Metal APIs are also a little intertwined with Swift and Objective-C.


of course. thanks!


There are no Metal implementations to use outside of Apple's proprietary software. But even if you had one, it would be useless on Linux for the existing software stack.


The goal is to write a driver for Linux; this one is from scratch.


The Metal driver would not work on Linux. It was written for macOS. It's also closed source.


If she gets hired by Apple, I hope they don't ruin her too fast.


Read her other blog posts. She's super open source/libre software oriented. I find it a hard sell she would consider working for Apple.

My guess would be she bought the Mac specifically to tackle this project.



Wow, this is impressive. Would like to be updated on this.


OK, I love this, but I am pretty sure my chances of getting a Mac in the future are zero. From which side should I expect the PC response? Intel? AMD? Microsoft? ARM? Samsung? I think Apple, willingly or not, has just obliterated the whole consumer PC market.


The new Ryzen mobile processors should be interesting. Their GPU drivers (while not of the best code quality) are in the mainline Linux kernel. So it all should just “work”


Currently using a Renoir laptop. It's smoking fast but I had to install a bleeding edge kernel to get the display driver to work at all. That should be fixed in the next ubuntu release though.


5.10.x kernels are very stable and feature complete with AMD Ryzen Renoir - I update almost weekly once a new patch version is out on the Ubuntu Kernel PPA. Here is a nice script which makes the update trivial: https://github.com/pimlie/ubuntu-mainline-kernel.sh


Weird, I had no issues with the built in display on Ubuntu 20.04, but I had to update the kernel to 5.8 to get display out over USB-C to work. Now that Ubuntu 20.10 is out and uses 5.8, I'm just using that so I don't have to mess with custom, unsigned kernels.


I just installed 20.04.1 on a 4750U Lenovo T14s and everything just works as far as I can tell.


What I want is to take a mobile CPU and put it in a normal form factor for power saving and excellent integrated graphics. Why is this not possible?


The only company making mobile CPUs that are at all competitive in a laptop is Apple. We’ll have to wait to see what Qualcomm et al. manage to build.


The new mobile chips are basically a mix of new and old stuff, with them rebranding ryzen 2 parts. Kind of disappointing.


But the slightly tweaked "old stuff" is relatively low-end - up to and including the 5700U. You'll find it in thin 'n light, budget and business laptops that will have more than enough power from a Zen 2 core. If you really absolutely need more power, you'll know it and you'll be shopping for a 5800H (or above).

If you don't know enough about CPUs to even read reviews that compare the CPU to other CPUs, then you either don't need the extra IPC of Zen 3 (and you won't notice when you use your laptop day to day) or you just... don't care.

If you care, get a 5600U/5800U or H line and it will never affect you. The laptops these come in should be priced accordingly.


What exactly do you expect? x86 is quite competitive. M1 might be slightly better, but it's not like it's miles ahead.


I want a return to the status quo when, for 80% of the price of an Apple laptop (it could be even less, but let's not dwell on the number), I could get a Windows/Linux machine matching or surpassing its specs (including stuff like battery life, energy consumption, screen DPI, noise, etc). This is not true now and I am not seeing an option in the short term.


That was never really possible, or certainly wasn't ever since the laptops went to HiDPI displays. There were just a lot of teenagers showing off their ability to make gamer PC parts lists and claim that it's better than a Mac, but the value of the display/trackpad/integration have been good for a long time. It's just now the quality of the SoC is that good.


I've had my M1 Air for a few weeks now, and going back to the trackpad on my Dell XPS 15 feels like the stone ages... it's amazing... and Safari on the Mac even makes Jenkins and Jira fast...


Perhaps it is still true, but Apple is somehow becoming the budget option in that equation?


Fiddling around on an M1 MBA it felt faster than my 2020 16" MBP. It's half the weight, seems to get double the battery life, and costs less than 1/3.

I just can't even imagine what the gap is going to look like when Apple really refines this down.


>It's half the weight

It's 70% the weight at 70% the volume of the 16". What is the point of comparing the weight of a 13" and a 16" laptop?


It is faster despite the lower battery capacity and thermals. In fact, the MacBook Air has no fan.


You should compare it to something other than old Apple laptops. The Ryzen models, such as the ThinkPad T14, are very fast, and if you want to go tenfold from there, there is no comparison with modern Ryzen desktop CPUs. Apple always struggled with Intel and its thermals, which is why their machines feel so slow compared to the M1.


A 2020 16" Macbook Pro isn't old by any measure. This argument seems disingenuous.


But they are extremely bad with their thermals and Intel is way slower than whatever AMD is pushing right now. The thing is Apple just made quite mediocre hardware. And now they're back where the competition is.


Intel is not way slower than AMD in the mobile space.


The response from Intel seems to be betting on the Evo platform [1], with third parties announcing laptops like the XPG Xenia XE.

[1] https://www.intel.com/content/www/us/en/products/docs/evo.ht...


You have to read between the lines here. They make no claims about CPU performance - just integrated GPU performance from Xe (which is a big improvement from previous Iris GPUs.) Then they claim battery life (9 hours, FHD, 250-nits, etc.)

What that means is laptop OEMs will have to limit TDP on CPUs - probably 15W or less. Given current Intel chips being very power hungry, these are likely NOT going to be great CPU performers.

The only competition in CPU space to M1 will be Ryzen 5000U chips in the 15-25W thermal envelope. They should be ~19% more powerful/efficient than Ryzen 4000U chips, but I would not expect M1 levels of cool or battery life yet.


> Ryzen 5000U

Am I right to assume we can see benchmarks within the next few weeks?


Some laptops seem to be on sale now, and others were targeting January 26 as the release date. That being said, I do not know if there's an NDA from AMD or if all variations of the chips (i.e. H, U, HX, etc) will be available this month or over time in the next few months.

So I think there's a good chance in the next week or so we'll start seeing benchmarks.


So they're announcing a "platform". Smacks of managerial bottom-covering. Almost like forming a committee to investigate the problem.

- where was this 1/2/10 years ago?

- how would this address the fundamental CPU performance gap?

- Intel has no competitive GPU offering, yet another glaring failure on their part

- why would OEMs go along with this when Ryzen is a better CPU, GPU, aside from getting Intel bribes and the usual marketplace branding momentum?

- will this actually get ports migrated to laptops faster? It was criminal how long it took for HDMI 2.0 to hit laptops.

I get Intel doesn't own the stack/vertical integration, but Intel could have devoted 1% of its revenue to a kickass Linux OS to keep Microsoft honest a long time ago and demonstrate its full hardware.

Even if only coders/techies used it like Macbooks are standard issue in the Bay, it would have been good insurance, leverage, or demoware.



"I get Intel doesn't own the stack/vertical integration, but Intel could have devoted 1% of its revenue to a kickass Linux OS to keep Microsoft honest a long time ago and demonstrate its full hardware." Interesting point.. makes one wonder why didn't they do it while having a mountain of cash.


They were so in bed with Microsoft.

Microsoft being a true monopoly might have struck fear into the timid souls of Intel executives that it would go headlong for AMD.

Or Google had this opportunity for years, and half-assed ChromeOS. Or AMD. Or Dell/HP/IBM who sold enough x86 to have money on the side.

I don't buy that it would have been hard. Look at what Apple did with OSX with such a paltry market share and way before the iPhone money train came. First consumer OSX release was 2001.

Sure, Apple had a massive advantage by buying NeXT's remnants and Jobs's familiarity with it and the people behind it, but remember that Apple's first choice was BeOS.

So anyone looking to push things could have got BeOS, or an army of Sun people as Sun killed off Solaris. The talent was out there.

Instead here we sit with Windows as a perpetual Frankenstein of the two desktops (tiled/old), Linux DE balkanization and perpetual reinvention/rewrites from scratch, and OSX locked to Apple.


They do have something – Clear Linux [0]. Definitely not too much investment, but they do differentiate by compiling packages for much newer instruction sets compared to other distros.

[0]: https://clearlinux.org/


The real differentiator would have been an army of good driver coders and contributors to KDE/GnomeX.


Actually, Intel has a small army of Linux driver developers. Last time I counted in the git logs there were around 100 developers who one way or another contributed to the Linux graphics stack: kernel drivers, Xorg, Mesa, etc. We can't really know how many people are working behind the scenes.

Yeah, of course it was possible for Intel to do more, but they're clearly the largest contributor to the Linux graphics stack anyway.


They had Intel Clear Linux, a server-oriented distro. Quite good at what it targeted.


Intel Clear Linux still exists and is still developed by Intel.


> I get Intel doesn't own the stack/vertical integration, but Intel could have devoted 1% of its revenue to a kickass Linux OS to keep Microsoft honest a long time ago and demonstrate its full hardware.

Moblin, MeeGo.


Yeah, on top of it all, given that all of the shots of the reference models look vaguely like a MacBook, it really feels to me like Intel dug around in their couch cushions to come up with a response.


That looks more like ultrabooks but not watered down again - I'm not impressed.


Hardly. Apple has nothing to offer for high end gaming and I don't think they care about it.

For me it's Linux with AMD both for CPU and GPU.


I'd be excited to see ARM competition in the desktop space, but I don't believe there are any ARM chips that compete in performance with high-end x86. ARM can do lots of cores, which is very good for servers, but single-threaded performance is still a significant necessity on user-end hardware.

The M1 is great because it's low power with optimized performance, but on a desktop you can have well over 500W+ and that's normal.

I don't see anyone else making mobile-only ARM chips for laptops other than to try to be like a Windows Chromebook. The software compatibility will be a nightmare.


M1 cores outperform Comet Lake cores and are basically tied with AMD Vermeer despite using a fraction of the power.


Who cares if you have the darn thing plugged in anyway? Does the M1 outperform the Threadripper 3990x?


I think to understand the M1, you have to be a Mac laptop user. For years, Mac laptop performance lagged years behind high-end desktop performance -- they have been stuck on 14nm+ process chips with a mobile power budget, while desktop users have had 7nm chips that can draw 500W with no trouble. As a result, what M1 users tell you is fast is what PC desktop users have had for ages. A Threadripper and 3090 will blow the M1 out of the water in raw performance (but use a kilowatt while doing it, which a laptop obviously can't do).

At my last job, they issued us 2012-era Macbooks. I eventually got so frustrated with the performance that I went out and bought everyone on my team an 8th generation NUC. It was night and day. I couldn't believe how much faster everything was. The M1 is a similar revelation for people that have stayed inside the Mac ecosystem all these years.


Yeah after years of company-issued Macbook Pros I built myself a Ryzen 3900x dev machine last year and it was like waking up from one of those dreams where you need to do something urgently but your legs aren't cooperating.

Given the benchmarks I've seen I imagine the M1 would be a somewhat comparable experience, but using a desktop machine for software development for the first time since...2003(!) has really turned me off the laptop-as-default model that I'd been used to, and the slow but steady iOSification of MacOS has turned me off Macs generally. Once people are back to working in offices I'd just pair it with an iPad or Surface or something for meetings.


I've been a high-end desktop Linux user all my life, and ThinkPads were my primary choice for laptops. For the last two years I've been using an X1 Carbon Gen 3 released in 2017, so it's not that far from current-gen Intel laptops.

Yeah, the M1 Air cannot beat a triple 1440p 164Hz monitor setup with high-end desktop hardware, but it's still damn impressive. It has a slightly better 16:10 screen, more performance than any of the current X1 ThinkPads, and the Air that I bought is absolutely silent too.

Also, it might be that in the US you have comparable prices on ThinkPads or other Linux-friendly laptops, but where I live I bought the MacBook for 2/3 of a comparable ThinkPad's price. I used to buy used ones, but now they've become much more expensive due to high demand and much less air travel (less smuggling, I guess).


I would never buy a Thinkpad outside of edu-deals. They cost less and get 3y on-site support. It's a good deal, but otherwise they are overpriced for what they delivered recently...

Right now, I am torn between a *FHD*, AMD X13, 16GB RAM/500GB SSD for about *1000€* and the Air 16/500 at 1400€. I hate my life for that decision right now...


I think it's a moot question, since they're not comparable here. Laptops have inherent thermal limitations (not just power) that don't allow something like the Threadripper to be workable.

If you wanted a fair comparison you'd wait to see what processor Apple puts in their Mac Pro in 2-3 years, and compare that to whatever is the Threadripper equivalent then.


Personally I like the noise level in my room being 32 dB and don't like PCs having to run the fan at full speed to show the desktop picture.


Any desktop PC, even with the beefiest Threadripper, won't run the fan at full speed when you're just browsing the web or watching videos.

Heck, if you're not squeezing the absolute maximum performance from a chip by overclocking (I am :P) you can run a 5950X on a regular tower cooler with the fan curve never coming close to 100% and it will still be incredibly fast.


That is fine and I get it. Just realize there are lots of people that need/want more power, variety, compatibility with a lower cost. Water cooling is plug and play these days for desktop rigs. The gaming laptop market has become shockingly good for professional level CAD/Engineering apps with the industry focused on drastically reducing noise levels (albeit still quite loud in comparison to fanless). Trade offs... trade offs as far as you can see...


They are also on a different process shrug


So they are competitive, because they are a node ahead?


When you shrink nodes you can either increase performance, cut power, or go with some mix of the two. TSMC's 5nm is 30% more power efficient OR 15% faster than their 7nm process.

Apple took the power reduction.

https://www.anandtech.com/show/16088/apple-announces-5nm-a14...


5nm, they are two nodes ahead compared to Intel’s 10nm


Process node improvements don't bring that much performance.


Given the massive drop in power consumption and therefore heat, they seem to bring inordinate amounts of performance in a mobile chip.


The Apple A14 brought no improvements over the last generation despite being on a superior process node.


Even evolution within the same process node can bring noticeable performance improvements. Launch day AMD Zen 2 (TSMC 7FF) chips could barely clock to 4.2GHz, ones manufactured months later can often do 4.4.


> I don't see anyone else making only mobile arm chips for laptops other than trying to be like a Windows Chromebook. The software compatability will be a nightmare.

I'd expect Microsoft to make a run at this with their Surface line.

They've been trying to make ARM tablets for years (see Windows RT), and they just recently added the x86-64 compatibility layer so that it could be actually useful instead of "it's an iPad but worse".

https://blogs.windows.com/windows-insider/2020/12/10/introdu...

Will it see any success with 3rd party developers probably not bothering to support ARM? Maybe for some people who spend most of their time in Edge, Mail, or Office. I have a hard time seeing it being as successful as Apple's change, since the messaging from Apple is "This is happening in 2 years, get on board or you'll be left behind" and the messaging from Microsoft is "Look, we made an ARM tablet, and will probably sell at least 10 of them."


I do think that's a relevant point for desktops. The M1 is incredible, but if you don't care about TDP, desktop can catch up fairly quickly for either Intel or AMD.

I don't see an obvious player currently working on a "broad market" high performance ARM chip for the commodity desktop.


> but on a desktop you can have well over 500W+ and that's normal.

You only see that on gaming rigs and high end workstations. The typical office machine or ma & pa PC is either a laptop or a mini-PC with mobile parts.


The rest of the PC market has a chicken and egg problem. Software won’t be developed for a new CPU until it has market share, but a new CPU has little chance of getting market share without software.


> but I am pretty sure my chances of getting a mac in the future are zero

Why?


For me:

1. A huge catalog of apps and games, old and new, that are mostly Windows and Linux, but all exclusively x86.

2. I don't want to get locked into an ecosystem with no variety. For example, if Dell's laptop offering has a keyboard I hate or a poor selection of ports, then I can switch to Lenovo or HP, or go for a gaming laptop with an Nvidia GPU and use it to train NNs, etc.; the variety is amazing. Whereas if Apple's next machine has a flaw I dislike, then too frikin' bad, I'll be stuck with whatever Apple bestows upon me.


I cannot speak for OP, but software is the reason for me. I cannot run Solidworks on macOS without Boot Camp or other tricks, for example. Photogrammetry apps? GIS apps? CNC CAM apps? I mean, compare the Mac-compatible apps [1] to the catalog of apps Autodesk has.

The fact of the matter is GPU support on macOS is just not there in the same way it is on Windows. It is really hard to justify the price of Apple when you compare directly to Windows offerings spec for spec. When you compare the sheer volume of software available for Windows as compared to macOS, especially if you need a discrete GPU, there is really no comparison.

[1]https://www.autodesk.com/solutions/mac-compatible-software


I'm always surprised when I see discussions about somebody dumping one operating system for another. Isn't your operating system choice dictated by the applications you want to run?


Presumably these people use cross-platform software.


Price, for one; also I still consider PCs and Linux (and good heavens, even Windows) more open than Macs. I also work with a lot of Windows-only automation software.


sorry but what's "PC" above, given it's not windows?


A generic term for hardware able to run Linux and/or Windows, or any personal computer not built by Apple. You can have any combination of open/closed hardware and OS.


Apple willingly or not increased their market share? Seriously, the AstroTurf in this thread is off the charts.


They have bitten off an even bigger chunk of the meal they had nearly finished already:

- Developers

- Creatives

- Dutiful citizens of The Ecosystem

What they are no closer to biting off is gamers and hardware enthusiasts. Anyone who actually needs to open their PC for any purpose whatsoever.

I know Apple wants to be the glossy Eve to all the PC market's Wall-E's, but I will continue to shun them forcefully as long as they remain hell-bent on stifling the nerdy half of all of their users.

I assume the developer popularity is a result of the iPhone gold rush, which Apple exploited using exclusivity tactics. Therefore I consider it an abomination to see developers embrace the platform so thoroughly. iPhone development should feel shameful, and when there's no way to avoid it, it should be done on a dusty Mac Mini pulled from a cardboard box that was sitting in the closet ;)


> I assume the developer popularity is a result of the iPhone gold rush, which Apple exploited using exclusivity tactics. Therefore I consider it an abomination to see developers embrace the platform so thoroughly.

That's not why most developers I know use macs. We use them because we got tired of spending a couple days a year (on average) unfucking our Windows machine after something goes wrong. When you're a developer, you're often installing, uninstalling and changing things... much more than the average personal or business user. That means there are a lot of opportunities for things to go wrong, and they do. Even most Google employees were using macs until they themselves started making high end Chromebooks; tens of thousands of google employees still use macs. Some users have moved on to linux, but most stay on mac because they want to spend time in the OS and not on the OS. I can appreciate both perspectives, there's no right answer for everyone.

I share your wish that the hardware was more serviceable but everything is a compromise at the end of the day and that's the compromise I'm willing to take in exchange for the other benefits.

Some complain about the price, but the high end macbook pros aren't even more expensive than windows workstation laptops. Our company actually saved several hundred dollars per machine when we switched from ThinkPads with the same specs. Not to mention, our IT support costs were cut almost in half with macs.

So, aside from gaming or some specific edgecase requirements, it's hard for me to justify owning a PC. That said, I have one of those edgecase requirements with one of my clients so I have a ThinkPad just for them. But, it stays in a drawer when I'm not working on that specific thing.


On the gaming front, I've been trying GeForce Now recently and while it may not be great for competitive FPS games, it has otherwise destroyed any reason why I'd ever purchase another gaming PC unless they start jacking up the monthly price. It works on basically all platforms (including my iOS devices), it doesn't spin up my MBP's fans and doesn't eat through battery life. I don't have to worry about ARM vs x86, I don't even get locked in to the platform like Stadia, it connects to Steam, Epic and GOG.


Wow, obvious bias much? Great way to engage in reasonable, level headed conversation is to lead with telling people they should be ashamed for not holding your values and opinions. I honestly can't think of a better way to demonstrate to most people why they _should_ get a Mac than to just show off comments like yours.


Convince me otherwise! :)


> I think Apple willingly or not has just obliterated the whole consumer pc market.

They definitely won the "general public use laptop" market (if you conveniently ignore the rest of the laptop and OSX, which are utter crap IMO), but it's important to understand that they really didn't invent anything, they just optimized. And optimizing something like this means you make the stuff that is most commonly used better, while reducing the functionality of the rest.

Compare the Asus ROG Zephyrus G14 laptop with the MacBook Pro. The G14 has the older Ryzen 9 4900HS chip, and while the single-core performance of the MBP is better, the multi-core is the same despite the 4900HS being a last-gen chip. The G14 gets about 11 hours of battery time for regular use, the MBP gets about 16, but the G14 also has a discrete GPU that is better for gaming than the integrated GPU of the Mac. Different configurations for different things.

Then, even ignoring the companies' decisions to mix and match components, the reason you can buy any Windows-based laptop, install Linux on it, and have 99% of the functionality working right out of the box is the standardization of the CPU architecture. With the Apple M1, even though Rosetta is very well built, its support is not universal across all applications; some run poorly or not at all.

And while modern compilers are smart, they have a long way to go before true cross-operability with all the performance enhancements. Just look at any graphical Linux distro, and the fact that all Android devices out there run Linux, but there isn't a way to natively run it on them other than the chroot method, which doesn't take advantage of the hardware.

So in the end, if you are someone that just wants a laptop with good performance and great battery life, and don't care about any particular software since most of your time will be spent on the web or in the Apple Ecosystem software, the M1 machines are definitely the right choice. However, if you want something like a dev machine with Linux where you just want to be able to git clone software and it work without any issues, you are most likely going to be better off with the "windows" based laptops with traditional AMD/Intel chips for quite some time.


And Apple will court (allow) developers until competition shows up. Then they will shut it down and obfuscate. The fact that Apple refuses to provide FOSS drivers themselves indicates this.


Apple has never provided drivers for other operating systems, including windows. The drivers used for BootCamp aren’t even developed by them.


Well, they clearly at least commissioned them from someone else, since they are the ones distributing them.


Intel Macs are literally PCs. The Windows drivers that Apple bundled up for Boot Camp were the same drivers that the respective component manufacturers provide for other PC OEMs to redistribute. In the entire history of Intel Macs, there have been very few components used by Apple for which there wasn't already an off-the-shelf Windows driver written by someone other than Apple for the sake of their non-Apple customers.


No, Macs have a lot of custom components, and the ones with Touch Bar or T2 added even more. Those have Apple developed Windows drivers.


They do include these too, but I'm mostly talking about stuff like the trackpads (before SPI, the protocol they ran over USB was also custom, not HID)


> Apple willingly or not has just obliterated the whole consumer pc market.

Apple probably has the best laptop out there as of today, but I don't think Apple's sales performance is actually impacted that much by their hardware performance: around 2012-2015 or so they had several years with a subpar mobile phone, on both the hardware and the software side, and it still sold very well. A few years later, they have the best phone on the market and… it didn't change their dynamic much: it still sells very well, as before. On the laptop market, they have been selling subpar laptops for a few years without much issue, and I guess it won't change much that they now have the best one.

Apple customers will get a much better deal for their bucks, which is good for their long-term business, but I don't think it will drive that many people out of the Windows world[1] just for that reason (especially with the migration/compatibility issues, which are even worse now than they were when running on Intel).

Also, many people outside of HN just don't listen to Apple “revolutionary” announcement, they have used that card too much, for no good reason most of the time, so people just stopped listening (even my friends who are actually Apple customers).

[1]: which is where most people are tbh, and I don't think that many Linux people would switch either.


Agreed - since MacOS is even more of an entire eco-system, moving in and out of that is much more of a long-term commitment for most regular users.

People who are multi-platform in daily life, are much more likely to switch - and that's a rather small percentage of computer users (and of course very much over-represented here at HN).

> they have used that card too much

you can never have enough "magic" :-)


Wow it looks like I really pissed off a bunch of Apple fans by saying that they have at some point sold subpar products…


You might have to bite the bullet and get a mac - I don't see much promise from others in this space.

Intel's CEO change may save them, but they're definitely facing an existential threat they've failed to adapt to for years.

Amazon will move to ARM on servers and probably some decent design there, but that won't really reach the consumer hardware market (probably - though I suppose they could do something interesting in this space if they wanted to).

Windows faces issues with third party integration, OEMs, chip manufacturers and coordinating all of that. Nadella is smart and is mostly moving to Azure services and O365 on strategy - I think windows and the consumer market matter less.

Apple owns their entire stack and is well positioned to continue to expand the delta between their design/integration and everyone else continuing to flounder.

AMD isn't that much better positioned than Intel and doesn't have a solution for the coordination problem either. Nvidia may buy ARM, but that's only one piece of getting things to work well.

I'm long on Apple here, short on Intel and AMD.

We'll see what happens.


I just got my M1 Air. This thing is unbelievably fluid and responsive. It doesn't matter what I do in the background. I can simultaneously run VMs, multiple emulators, compile code, and the UI is always a fluid 60 fps. Apps always open instantly. Webpages always render in a literal blink of an eye. This thing feels like magic. Nothing I do can make this computer skip a beat. Dropped frames are a thing of the past. The user interface of every Intel Mac I've used (yes, even the Mac Pro) feels slow and clunky in comparison.

Oh, and the chassis of this fanless system literally remains cool to the touch while doing all this.

The improvements in raw compute power alone do not account for the incredible fluidity of this thing. macOS on the M1 now feels every bit as snappy and responsive as iPadOS on the iPad. I've never used a PC (or Mac) that has ever felt anywhere near this responsive. I can only chalk that up to software and hardware integration.

Unless Apple's competitors can integrate the software and hardware to the same degree, I don't know how they'll get the same fluidity we see out of the M1. Microsoft really oughta take a look at developing their own PC CPUs, because they're probably the only player in the Windows space suited to integrate software and hardware to such a degree. Indeed, Microsoft is rumoured to be developing their own ARM-based CPUs for the Surface, so it just might happen [0]

[0] https://www.theverge.com/2020/12/18/22189450/microsoft-arm-p...


So much this. M1 mini here. I am absolutely chuffed with it. It’s insanely good.

I’m going to be the first person in the queue to grab their iMac offering.


Are you saying you don't see much promise for AMD, Intel and Nvidia in the GPU space or with computers in general? I had a hard time following your logic.

Apple may own their stack, but there are a TON of use cases where that stack doesn't even form a blip on the radar of the people who purchase computer gear.


My prediction is x86 is dead.

External GPUs will remain and I think Nvidia has an advantage in that niche currently.

The reason stack ownership matters is because it allows tight integration which leads to better chip design (and better performance/efficiency).

Windows has run on ARM for a while for example, but it sucks. The reason it sucks is complicated but largely has to do with bad incentives and coordination problems between multiple groups. Apple doesn't have this problem.

As Apple's RISC design performance improvements (paired with extremely low power requirements) become more and more obvious x86 manufacturers will be left unable to compete. Cloud providers will move to ARM chipsets of their own design (see: https://aws.amazon.com/ec2/graviton/) and AMD/Intel will be on the path to extinction.

I'd argue Apple's M1 machines are already at this level and they're version 0 (if you haven't played with one you should).

This is an e-risk for Intel and AMD, they should have been preparing for this for the last decade, instead Intel doubled down on their old designs to maximize profit in the short term at the cost of extinction in the long term.

It's not an argument about individual consumer choice (though that will shift too), the entire market will move.


> My prediction is x86 is dead.

I don't see that. At least in corporate environment with bazillion legacy apps, x86 will be the king for the foreseeable future.

And frankly I don't really see the pull of ARM/M1 anyway. I mean, I can get a laptop with an extremely competitive Ryzen for way cheaper than a MacBook with the M1. The only big advantage I see is the battery, but that's not very relevant for many use cases - most people buying laptops don't actually spend that much time on the go needing battery power. It's also questionable how transferable this is to the rest of the market without Apple's tight vertical integration.

> I'd argue Apple's M1 machines are already at this level and they're version 0

Where is this myth coming from? Apple's chips are now on version 15 or so.


This is the first release targeting macOS, I'm not pretending their chips for phones don't exist - but the M1 is still version 0 for macs.

> "And frankly I don't really see the pull of ARM/M1 anyway. I mean, I can get a laptop with extremely competitive Ryzen for way cheaper than MacBook with M1..."

Respectfully, I strongly disagree with this - to me it's equivalent to someone defending the keyboards on a palm treo. This is a major shift in capability and we're just seeing the start of that curve where x86 is nearing the end.

“No wireless. Less space than a Nomad. Lame.”


> but the M1 is still version 0 for macs.

Fair enough, it's just important to keep in mind that M1 is a result of decade(s) long progressive enhancement. M2 is going to be another incremental step in the series.

> to me it's equivalent to someone defending the keyboards on a palm treo. This is a major shift in capability ...

That's a completely unjustified comparison. iPhone brought a new way to interact with your phone. M1 brings ... better performance per watt? (something which is happening every year anyway)

What new capabilities does M1 bring? I'm trying to see them, but don't ...


> "That's a completely unjustified comparison. iPhone brought a new way to interact with your phone."

People don't really remember, but a lot of people were really dismissive of the iPhone (and iPod) on launch. For the iPhone, the complaints were about cost, about lack of hardware keyboard, about fingerprints on the screen. People complained that it was less usable than existing phones for email, etc.

The M1 brings much better performance at much less power.

I think that's a big deal and is a massive lift for what applications can do. I also think x86 cannot compete now and things will only get a lot worse as Apple's chips get even better.


> People don't really remember, but a lot of people were really dismissive of the iPhone (and iPod) on launch.

I do remember that. iPhone had its growing pains in the first year, and there was a fair criticism back then. But it was also clear that iPhone brings a completely new vision to the concept of a mobile phone.

M1 brings a nice performance at fairly low power, but that's just a quantitative difference. No new vision. Perf/watt improvements have been happening every single year since the first chips were manufactured.

> I also think x86 cannot compete now and things will only get a lot worse as Apple's chips get even better.

Why? Somehow Apple's chips will get better, but the competition will stand still? AMD is currently making great progress, and it finally looks like Intel is waking up from its lethargy as well.


>M1 brings a nice performance at fairly low power, but that's just a quantitative difference. No new vision. Perf/watt improvements have been happening every single year since the first chips were manufactured.

I'd say the M1's improvements are a lot more than performance per watt. It has enabled a level of UI fluidity and general "snappiness" that I just haven't seen out of any Mac or PC before. The Mac Pro is clearly faster than any M1 Mac, but browsing the UI on the Mac Pro just feels slow and clunky in comparison to the M1.

I can only chalk that up to optimization between the silicon and the software, and I'm not sure that Apple's competitors will be able to replicate that.


> "Why? Somehow Apple's chips will get better, but competition will stand still?"

Arguably this has been the case for the last ten years (comparing chips on iPhones to others).

I think x86 can't compete, CISC can't compete with RISC because of problems inherent to CISC (https://debugger.medium.com/why-is-apples-m1-chip-so-fast-32...)

It won't be for lack of trying - x86 will hold them back.

I suppose in theory they could recognize this e-risk, and throw themselves at coming up with a competitive RISC chip design while also somehow overcoming the integration disadvantages they face.

If they were smart enough to do this, they would have done it already.

I'd bet against them (and I am).


RISC vs CISC is not real. Anyone writing articles about it is uninformed and you should ignore them. (However, it's also not true that all ISAs perform the same. x86-64 actually performs pretty well though and has good memory density - see Linus's old rants about this.)

ARM64 is a good ISA but not because it's RISC, some of the good parts are actually moving away from RISCness like complex address operands.


Very much this. Intel is not that stupid. They went into the lab, built and simulated everything, and found that the penalty of extra decoding has an upper bound of a few percent perf/watt once you are out of the embedded space.

OTOH, Apple is doing some interesting things optimizing their software stack to the store [without release] reordering that ARM does. These sorts of things are where the long-term advantage lies. Nobody is ever ahead in the CPU wars by some insurmountable margin in strict hardware terms.

System performance is what counts. Apple has weakish support for games, for example, so any hardware advantage they have in a vacuum is moot in that domain.

Integrated system performance and total cost of ownership are what matters.


I'm confused - I thought the reason it's hard for Intel to add more decoders is that the x86 ISA doesn't have fixed-length instructions. As a result you can't trivially scale things up.

From that linked article:

--

Why can’t Intel and AMD add more instruction decoders?

This is where we finally see the revenge of RISC, and where the fact that the M1 Firestorm core has an ARM RISC architecture begins to matter.

You see, an x86 instruction can be anywhere from 1–15 bytes long. RISC instructions have fixed length. Every ARM instruction is 4 bytes long. Why is that relevant in this case?

Because splitting up a stream of bytes into instructions to feed into eight different decoders in parallel becomes trivial if every instruction has the same length.

However, on an x86 CPU, the decoders have no clue where the next instruction starts. It has to actually analyze each instruction in order to see how long it is.

The brute force way Intel and AMD deal with this is by simply attempting to decode instructions at every possible starting point. That means x86 chips have to deal with lots of wrong guesses and mistakes which has to be discarded. This creates such a convoluted and complicated decoder stage that it is really hard to add more decoders. But for Apple, it is trivial in comparison to keep adding more.

--

Maybe you and astrange don't consider fixed length instruction guarantees to be necessarily tied to 'RISC' vs. 'CISC', but that's just disputing definitions. It seems to be an important difference that they can't easily address.
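
To make the boundary-finding issue concrete, here's a toy sketch in Python (purely illustrative: the instruction encoding and the sequential loop are made up, and real decoders use predecode bits and speculation, not software loops):

  # Toy model of the decode-width argument above.

  def decode_fixed(stream: bytes, width: int = 4):
      # Fixed-length ISA: every start offset is a multiple of `width`,
      # so all of these slices could be decoded in parallel.
      return [stream[i:i + width] for i in range(0, len(stream), width)]

  def decode_variable(stream: bytes, length_of):
      # Variable-length ISA: each start offset depends on the length of
      # the previous instruction, so boundaries are found sequentially
      # (or guessed at every byte offset, discarding the wrong guesses).
      insns, i = [], 0
      while i < len(stream):
          n = length_of(stream[i])
          insns.append(stream[i:i + n])
          i += n
      return insns

  # Example with a made-up encoding where the first byte is the length.
  print(decode_variable(bytes([1, 3, 0, 0, 2, 0]), length_of=lambda b: b))
  print(decode_fixed(bytes(range(8))))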


People are rehashing the same myths about ISA written 25 years ago.

Variable length instructions are not a significant impediment in high wattage cpus (>5W?). The first byte of an instruction is enough to indicate how long an instruction is and hardware can look at the stream in parallel. Minor penalty with arguably a couple of benefits. The larger issue for CISC is that more instructions access memory in more ways so decoding requires breaking those down into micro-ops that are more RISC like, in order that the dependencies can get worked out.

RISC already won where ISA matters -- like AVR and ARM thumb. You have a handful of them in a typical laptop plus like a hundred throughout your house and car, with some PIC thrown in for good measure. So it won. CISC is inferior. Where ISA matters it loses. Nobody actually advocates for CISC design because you're going to have to decode it into smaller ops anyway.

Also, variable-length instructions are not really a RISC vs CISC thing as much as a pre- vs post-1980 thing. Memory was so scarce in the 70s that wasting a few bits for simplicity's sake was anathema and would not be allowed.

System performance is a lot more than ISA as computers have become very complicated with many many I/Os. Think about why American automakers lost market share at the end of last century. Was it because their engineering was that bad? Maybe a bit. But really it was total system performance and cost of ownership that they got killed on, not any particular commitment to a solely inferior technical framework.
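
To make the micro-op point above concrete, here's a toy sketch (the instruction format and names are invented for the example; nothing here reflects a real decoder): a memory-operand instruction gets cracked into load/ALU/store micro-ops so dependencies can be tracked.

  # Toy sketch of "cracking" a memory-operand instruction into RISC-like micro-ops.

  def crack(insn):
      op, dst, src = insn                     # e.g. ("add", "mem[rdi]", "rax")
      if dst.startswith("mem[") and dst.endswith("]"):
          addr = dst[len("mem["):-1]
          return [
              ("load",  "tmp0", addr),        # bring the memory operand in
              (op,      "tmp0", src),         # do the ALU work on a temporary
              ("store", addr,   "tmp0"),      # write the result back
          ]
      return [insn]                           # register-only ops pass through

  print(crack(("add", "mem[rdi]", "rax")))
  print(crack(("add", "rbx", "rax")))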


I agree that's a real difference and M1 makes good use of it, it's just to me RISC ("everything MIPS did") vs CISC ("everything x86 did") implies a lot of other stuff that's just coincidences. Specifically RISC means all of simple fixed-length instructions, 3-operand instructions (a=b+c not a+=b), and few address modes. Some of these are the wrong tradeoff when you have the transistor budget of a modern CPU.

x86 has complicated variable instructions but the advantage is they're compressed - they fit in less memory. I would've expected this to still be good because cache size is so important, but ARM64 got rid of theirs and they know better than me, so apparently not. (They have other problems, like they're a security risk because attackers can jump into the middle of an instruction and create new programs…)

One thing you can do is have a cache after the decoder so you can issue recent instructions over again and let it do something else. That helps with loops at least.
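
As a rough sketch of that idea (a toy model only, not any particular vendor's design): keep decoded micro-ops keyed by instruction address, so a hot loop only pays the decode cost on its first pass.

  # Toy model of a post-decode cache ("uop cache" / loop buffer). The
  # capacity and eviction policy are arbitrary; the point is only that
  # repeated addresses skip the decoders.
  from collections import OrderedDict

  class UopCache:
      def __init__(self, capacity=1536):
          self.capacity = capacity
          self.entries = OrderedDict()          # pc -> decoded micro-ops

      def fetch(self, pc, raw, decode):
          if pc in self.entries:                # hit: bypass decode entirely
              self.entries.move_to_end(pc)
              return self.entries[pc]
          uops = decode(raw)                    # miss: pay the decode cost once
          self.entries[pc] = uops
          if len(self.entries) > self.capacity:
              self.entries.popitem(last=False)  # evict least recently used
          return uops

  # A loop body hits the same pcs every iteration, so only the first
  # iteration ever calls decode().
  cache = UopCache()
  for _ in range(3):
      cache.fetch(0x1000, b"\x90", decode=lambda raw: [("nop",)])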


Remember, M1 is on the leading edge 5nm fab process. Ryzen APUs are coming and may be competitive in terms of power consumption when they arrive on 5nm.

Apple software is also important here. They do some things very much right. It will be interesting to run real benchmarks with x64 on the same node.

Having said all that, I love fanless quiet computers. In that segment Apple has been winning all along.


> At least in corporate environment with bazillion legacy apps

They could just run them as virtual desktop apps. Citrix, despite its warts, is quite popular for running old incompatible software in corporate environments.


Ok, I still have questions.

To start... How would a city with tens of thousands of computers transition to ARM in the near future?

The apps that run 911 dispatch systems and critical infrastructure all over the world are all on x86 hardware. Millions if not billions of dollars in investment, training, and configuration. These are bespoke systems. The military-industrial complex basically runs on custom chips and x86. The federal government runs on x86. You think they are just going to say, "Whelp, looks like Apple won, let's quadruple the cost to integrate Apple silicon for our water systems and missile systems! They own the stack!"

Professional-grade engineering apps and manufacturing apps are just going to be suddenly rewritten for Apple hardware because the M2 or M3 is sooooo fast? Price matters!!!! Choice matters!!!

This is solely about consumer choice right now. The cost is prohibitive for most consumers as well, as evidenced by the low market penetration of Apple computers to this day.


Notice how the only counterexamples you came up with are legacy applications. This is the first sign of a declining market. No, Intel will not go out of business tomorrow. But they are still dead.

The growth markets will drive the price of ARM parts down and performance up. Meanwhile x86 will stagnate and become more and more expensive due to declining volumes. Eventually, yes, this will apply enough pressure even on niche applications like engineering apps to port to ARM. The military will likely be the last holdout.


You make bets on where the puck is going, not on where it currently is.

"How would a city with tens of thousands of HDDs transition to SSDs in the near future?"

It happens over time as products move to compete.

Client-side machines matter less; the server will transition to ARM because performance and power are better on RISC. The military-industrial complex relies on government cloud contracts with providers that will probably move to ARM on the server side.

It's not necessarily rewriting for Apple hardware, but people that care about future performance will have to move to similar RISC hardware to remain competitive.


Wow I feel like I'm back in 1995 or something. Stupid Intel doubling down on the Pentium! DEC, Sun, and Motorola RISC will extinct them!


I think we'll see a lot of ARM use cases outside of the Apple stack and x86 is dead (but it will of course take its sweet time getting there). For the longest time everyone believed at a subconscious level that x86 was a prerequisite due to compatibility. Apple provided an existence proof that this is false. There is no longer a real need to hold onto the obsolete x86 design.

The only way for Intel and AMD to thrive in this new world is to throw away their key asset: expertise in x86 arcana. They will not do this (see Innovator's Dilemma for reasons why). As a result they will face a slow decline and eventual death.


[flagged]


I've never worked on GPUs, will likely never work on drivers even, yet was able to follow along and find it interesting.

I'm going to upvote pretty much any reverse engineering monologue, especially one as accessible as this.


Are you saying this post is too nerdy for Hacker News? Stories like this come up all the time; if you don't like it, don't read it.

I didn't understand everything the author said, but I was able to get the gist of it. The GPU has a list of valid addresses, and it's stored in a C-style array with a terminator instead of a 1-indexed array with an object count in the Apple API style. The mismatch caused the author some confusion, even after he discovered the existence of the buffer due to his code failing.

It's one of the many details that has to be understood in order to write an open source driver for the hardware. The open source driver will be necessary for people who want to install Linux on this Apple hardware.
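
For illustration only, here's a toy contrast of the two list conventions described above (the terminator value and handle values are invented; the real buffer layout is whatever the article documents):

  # Toy contrast: sentinel-terminated list vs count-prefixed list.
  TERMINATOR = 0

  def parse_terminated(words):
      # C-style: read entries until the sentinel value.
      out = []
      for w in words:
          if w == TERMINATOR:
              break
          out.append(w)
      return out

  def parse_counted(words):
      # API-style: a count up front says how many entries follow.
      count = words[0]
      return words[1:1 + count]

  handles = [0x10001, 0x10002, 0x10003]
  assert parse_terminated(handles + [TERMINATOR]) == handles
  assert parse_counted([len(handles)] + handles) == handles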


> even after he discovered

She, fwiw.


Sorry. I completely forgot to check the byline.


small correction: the author is a woman


Yeah, well, you know, that's just like... your opinion, man.


...well, this is Hacker News, after all.


Then hide it and don't read it


Wow, child prodigy found


So Apple's strategy seems to be to commit as few resources as possible to their products, knowing that "open source" developers will do their work for free, and thus Apple will not have to pay extra salaries and taxes? I don't understand why people bother working for free to e.g. run Linux or even write a GPU driver. I get that this is a nice developer challenge, and being involved with Apple stuff is still seen as "cool", but why don't those developers actually support some truly open source projects instead of helping a filthy rich company for nothing?


> So Apple's strategy seems to be to commit as few resources as possible to their products, knowing that "open source" developers will do their work for free, and thus Apple will not have to pay extra salaries and taxes?

Apple has no Linux strategy. Nobody is working for Apple for free. People are working on their own time (or supporting the project with their own money) because they want to see this happen.

There is no skin in this for Apple either way.

What I don't understand is why people insist on criticizing a project which won't fundamentally affect them at all. Having Linux on Mac isn't going to hurt you and stands to benefit the community as a whole.

While the GPU port is unlikely to benefit others, it's very likely some of the other work will. Any improvements to the Broadcom drivers will be useful for the entire community. Improvements and optimizations to ARM64 support will likewise benefit the whole community.

Really tired of the mis-directed zealots who think they have the right to tell other people where to direct their time and energy.


> Apple has no Linux strategy.

macOS comes with a free, built-in hypervisor for running things in VMs. That's how Linux is supported. (Although I guess not if you want to use the GPU.)


It's a good point, and well supported.

I was thinking about bare metal.


Maybe because ... those developers want to use Linux on hardware they bought and own instead of the built-in operating system? The developers are helping themselves, not Apple.

Apple could save some money by selling bare-metal M1s without an OS installed, but then they might get into the hands of people who want a "cheaper Mac that your hacker friend can get working", and it would damage their brand, so I see why they don't do it.


Well, is this M1 really _that_ good to commit a considerable amount of time to it instead of working on something more meaningful or more beneficial to one's personal life? Are they able to make a commercial product out of it? Unless it gives the person the same kick as fishing or building Lego - but those hobbies don't have the side effect of filling the pockets of a big co.

The idea about a "cheaper Mac" is weak, because you can already buy a cheap Mac - it's not going to be the latest gen, but taking into account that Intel has not made big progress in the last couple of years, the M1 is actually on the affordable side.

Isn't it actually more damaging to their brand that they don't support their products in ways that would benefit professional users, and that they rely on people doing work for free, with Apple thus avoiding paying its fair share?


> Well, is this M1 really _that_ good to commit a considerable amount of time to it instead of working on something more meaningful or more beneficial to one's personal life?

M1 is an ARM chip that's up there with Intel desktop PCs. That's awesome. It's possibly the real beginning of the end of the effective Wintel monopoly on personal computing and if we are going to continue to have Linux on hardware that's not locked-down phones it needs to happen. I'd certainly put my effort there if I had the skill.

> Isn't it actually more damaging to their brand that they don't support their products in ways that would benefit professional users, and that they rely on people doing work for free, with Apple thus avoiding paying its fair share?

Apple has $100 billion in cash. Whatever they are doing now, is working.


This is like building a house on a swamp. Without official Linux support, Apple can pull the plug at any time. What's likely to happen is that eventually a viable open source project emerges that Apple didn't pay anyone to build, and then they will announce how they embrace open source and tell their own developers to contribute a few lines for PR.

> Whatever they are doing now, is working.

If you are using child labour, avoiding taxes, using anti-competitive measures, making stuff deliberately difficult to repair and easy to break, and then have the money to shut down any politicians willing to look into your shady business, then yes, it is definitely working.


It's not about helping Apple. The M1 beats every x86 CPU in absolute single threaded performance, as well as multicore performance per watt. Hopefully AMD will close the gap (I don't have much hope for Intel), but for now it's an extremely attractive target for Linux.


The M1 beats every old x86 CPU in single threaded performance.

The M1 is slower in single threaded performance than any AMD Zen 3 CPU at 4.8 GHz or higher clock frequency and also slower than any Intel Tiger Lake or Rocket Lake CPU at 5.0 GHz or higher clock frequency.

For example the maximum recorded single-thread Geekbench 5 score for M1 is 1753, while Ryzen 9 59xx score above 1800 and up to 1876. The first Rocket Lake record at 5.3 GHz is 1896, but it is expected that it will score better than that, above 1900.

In benchmarks that unlike GB5 focus more on computational performance in optimized programs, Intel/AMD have an even greater advantage in ST performance than above. (For example, in gmpbench even the old Ryzen 7 3700X beats Apple M1, even when using on Apple a libgmp that was tuned specifically for M1.)

What is true is that Apple M1 is much faster (on average) in ST than any non-overclocked CPU launched before mid 2020.


I just want to say, I think we should also stop treating "performance/watt" as a mere "mobility" issue. It's also about the future of this planet and human survival. It's never okay to waste energy when we have alternatives.

I hope the M1 will bring a change to computation in general.



