Hacker News

1. If, and only if, you are doing ML or multimedia, get a 128GB system, and because of the cost of that RAM, it would be foolish not to go with the M3 Max SoC (notwithstanding the 192GB M2 Ultra SoC). Full stop. (Note: this is also a good option for people with more money than brains.)

2. If you are doing traditional heavyweight software development, or are concerned with perception in an interview, a promotional context, or just impressing others at a coffee shop, get a 32GB 16” MBP with as large a built-in SSD as you can afford (it gets cheaper per GB as you buy more) and go for an M2 Pro SoC, which is faster in many respects than an M3 Pro due to core count and memory bandwidth. Full stop. (You could instead go 64GB on an M1 Max if you keep several VMs open, which isn’t really a thing anymore (use a VPS), or if you are keeping a 7-15B-parameter LLM loaded locally for some reason. But if you are doing much with local LLMs, as opposed to being always connectable to the 1.3T+-parameter hosted ChatGPT, then you should have stopped at #1.)

3. If you are nursing mature apps along (maybe adding ML, adjusting UX, creating forks to test new features, etc.), then your concern is with INCREMENTAL COMPILATION, and the much bigger systems like the M3 Max will be slower (because they need time to ramp up multiple cores, and that doesn’t happen with bursty incremental builds), so you might as well go for a 16GB M1 MBA (add stickers or whatever if you’re ashamed of looking like a school kid) and maybe invest the savings in a nice monitor like the 28” LG DualUp (bearing in mind you can only use a single native-speed external monitor at a time on non-Pro/Max SoCs). In many cases you can even use an 8GB M1 MBA for incremental builds, because after loading the project the macOS memory compressor is really good, the SSD is really fast, and you can use a real device instead of a Simulator. But do you want any M2 MBA? No: it has inferior thermals, is heavier and larger, fingerprints easily, lacks respect, and its price/performance doesn’t make sense given the other options. Same goes for the 13” M1/M2 Pro and all M3 Pros.

Also, make sure you keep hourly (or better) backups on all Apple laptops WHILE CODING. There is a common failure scenario where the buck converter that drops voltage for the SSD fails, sending 13VDC into the SSD for long enough to permanently destroy the data on it. https://youtu.be/F6d58HIe01A
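Time Machine already runs hourly on its own; if you want better than hourly while coding, one option is a launchd agent that kicks off an extra backup in between. This is only a sketch: the label and filename (`local.tmutil-extra`) are made up, and `tmutil startbackup` is the real command (`--auto` makes it behave like a scheduled run; drop that flag if your tmutil version doesn’t accept it).

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Hypothetical label; save as
         ~/Library/LaunchAgents/local.tmutil-extra.plist
         then: launchctl load ~/Library/LaunchAgents/local.tmutil-extra.plist -->
    <key>Label</key>
    <string>local.tmutil-extra</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/tmutil</string>
        <string>startbackup</string>
        <string>--auto</string>
    </array>
    <!-- Every 30 minutes, on top of Time Machine's own hourly runs -->
    <key>StartInterval</key>
    <integer>1800</integer>
</dict>
</plist>
```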




> You can even get by with the 8GB M1 MBA because the MacOS memory compressor is really good and the SSD is really fast.

I thought the general consensus was that 8GB Macs were hammering the life of their SSDs? Yeah, they're fast, but people were talking about dozens of GB a day of swapping. And these aren't enterprise-class SSDs, despite what Apple charges for them.
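You can sanity-check this on your own machine: `vm_stat` reports cumulative pageouts since boot, and a page on Apple Silicon is 16 KiB (4 KiB on Intel Macs). A small parser (the sample output below is made up; on a real Mac, feed it the output of `subprocess.run(["vm_stat"], ...)`):

```python
import re

# Apple Silicon uses 16 KiB pages; Intel Macs use 4 KiB.
PAGE_SIZE = 16 * 1024

def pageouts_bytes(vm_stat_output: str) -> int:
    """Return bytes swapped out to disk since boot, per vm_stat's Pageouts line."""
    m = re.search(r"Pageouts:\s+(\d+)", vm_stat_output)
    return int(m.group(1)) * PAGE_SIZE if m else 0

# Made-up sample output for illustration.
sample = """\
Mach Virtual Memory Statistics: (page size of 16384 bytes)
Pages free:                              12345.
Pageouts:                              2000000.
"""

gb = pageouts_bytes(sample) / 1e9
print(f"{gb:.1f} GB swapped out since boot")  # 2e6 pages * 16384 B ~= 32.8 GB
```

If that number is growing by tens of GB per day, the wear concern is real for your workload; if it stays near zero, the compressor is absorbing everything.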


This just in for the 8GB (4GB?) crowd... https://github.com/lyogavin/Anima/tree/main/air_llm

..seems to be enabled by Apple's new MLX framework (which could become even faster: https://github.com/ml-explore/mlx/issues/18; ANE support would be particularly important on a lowly M1, which is the main system that would have been configured with only 8GB)
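The trick, as I understand air_llm, is layer-sharded inference: keep only one transformer layer resident in RAM at a time and stream the rest from the fast SSD. Here is a stdlib-only toy of that memory pattern (none of this is the real airllm API; the "layers" are just constants pickled to disk where a real layer would be a big weight matrix):

```python
import os
import pickle
import tempfile

def make_layer_files(dirpath, consts):
    """Write each toy 'layer' to its own file, like sharded model weights."""
    paths = []
    for i, c in enumerate(consts):
        p = os.path.join(dirpath, f"layer{i}.pkl")
        with open(p, "wb") as f:
            pickle.dump(c, f)
        paths.append(p)
    return paths

def run_sharded(x, layer_paths):
    """Apply layers one at a time; peak RAM is ~one layer, not the whole model."""
    for p in layer_paths:
        with open(p, "rb") as f:
            layer = pickle.load(f)   # load just this layer from disk
        x = x + layer                # "forward pass" through it
        del layer                    # free it before loading the next
    return x

with tempfile.TemporaryDirectory() as d:
    paths = make_layer_files(d, [1, 2, 3, 4])
    print(run_sharded(10, paths))  # 10 + 1 + 2 + 3 + 4 = 20
```

The tradeoff is obvious: every token now costs a full read of the model from SSD, so it is slow, but it fits, which is the whole point for the 8GB crowd.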


Opening Xcode and a small-to-medium project targeting a physical device with its own RAM will be fine; no SSD killing. If you are not doing INCREMENTAL builds, or you start flipping between apps, web tabs, streaming video, and messaging while also running Xcode, the amazing thing is that it will still work, but as you say, it will likely hammer the SSD. I wouldn’t really recommend a dev buy 8GB if they can afford 16GB, but I wouldn’t let only having 8GB be an excuse for not being able to make small-to-medium apps either; they just have to be more intelligent about managing the machine. (Xcode is I/O bound anyway.)


I'm curious if an 8GB MacBook can run a macOS VM, and if so how much memory can be allocated to it.


Sure; it operates on an overcommit pattern. If you try to use most of the RAM, the system will begin compressing blocks of memory and will eventually begin swapping them to the very fast NVMe storage. This may be fine for desktop productivity apps and web browsing, but it will make the system feel sluggish when flipping between unrelated contexts.
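The compressor wins because typical app memory is highly redundant (zeroed pages, repeated structures). macOS actually uses the WKdm family of algorithms, not zlib, but running zlib over a synthetic mostly-zero "page" gives a feel for the ratios involved (purely illustrative):

```python
import zlib

PAGE = 16 * 1024  # Apple Silicon page size in bytes

# A mostly-zero page with a little real data, like a typical heap page.
page = b"\x00" * (PAGE - 256) + bytes(range(256))

compressed = zlib.compress(page)
ratio = len(page) / len(compressed)
print(f"{len(page)} -> {len(compressed)} bytes ({ratio:.0f}x)")
```

Compressing a page like this is orders of magnitude faster than swapping it to disk, which is why the system always compresses first and only swaps under sustained pressure.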


> and the much bigger systems like M3 Max will be slower (bc they need time to ramp up multiple cores and that’s not happening with bursty incremental builds)

Is there some microbenchmark that illustrates the problem, or is this just wild extrapolation from some deep misunderstanding (maybe a bad car analogy about turbo lag?)


Hmm, at least you reasoned that it is a "problem" (and I would say it's more analogous to VLOM switchover than turbo lag, ROFL).

For now, if you infrequently do full builds (doing incremental builds instead) and are mainly dragging things around, adding small features, etc., you're better off with fewer cores, even if those cores run slower.

I don't really want Apple to "care deeply" about us [the way they slowed our phones], so I'm not going to help illustrate further, because a proper solution could involve a massive engineering task (maybe Nuvia knows), and even with a $3T market cap they're pretty much cheap bastards unless it comes to food courts and office chairs.


Ok, so I'm pretty much convinced now that you're full of shit. I asked for information to understand what the hell you're talking about, and you just went further off the rails. Please don't behave like that here.


Good to know I have commercial options for overcoming my laptop shame at interviews. /s


A freelancer might be interviewed by a client in entertainment, hospitality, construction, real estate, or academia, and if they show up with a pimped MBA, all their prospect is going to see is that this person asking for a chunk of cash is using the same ‘puter as their kid, so they lose pricing power or maybe the whole gig. Likewise, a dev might go to an interview with a bunch of Pentium heads who think you need a Razer laptop with RGB keys to be legit, and they're going to want to see at least a 16” form factor as table stakes. There is some variation between those extremes for devs trapped in a corporate setting, but none of it is based in reality; the perception is the reality.

Bottom line: M1-MBA16 (or M1-MBA8) for incremental builds, 16” M2P-MBP32 (or M1M-MBP64) for full development, or 16” M3M-MBP128 (or M2U-MS192) for AI/media dev. The other models aren't really for devs.



