The first line of the smartctl output everyone's bandying about says that a total of 1% of the drive's rated write endurance (not its capacity) has been used, which means that in about 16.5 years (2 months * 99 / 12) their drive will go read-only.
What, exactly, is the problem with that?
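For what it's worth, that back-of-the-envelope extrapolation is easy to sanity-check. A minimal sketch, assuming wear is perfectly linear and the counter is trustworthy, which is exactly what's in question:

```python
# Naive lifetime projection from smartctl's "Percentage Used" field.
# Big assumption: wear continues at the same linear rate forever.
def years_remaining(months_elapsed: float, percent_used: float) -> float:
    remaining_pct = 100 - percent_used
    return months_elapsed * remaining_pct / percent_used / 12

print(years_remaining(2, 1))  # the headline case: 2 months, 1% used -> 16.5
print(years_remaining(2, 3))  # the 3%-in-2-months 2TB reports -> ~5.4
```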
It seems like there are a lot of assumptions in play in this "headline news":
(1) assuming that the reported numbers are valid for this measurement at all
(2) that they are not a bug in the IOKit implementation used by smartctl
(3) that they are not a bug in smartctl itself
(4) that they are directly comparable to non-M1 hardware without further processing
(5) that they do not increment when Time Machine or other filesystem snapshots occur
(6) that they do not increment when APFS copy-on-writes occur
(7) that they do not include the total byte size of sparsely-modified files
I don't see anyone checking these assumptions yet, but if y'all do, supporting links on those points would improve this HN post considerably. There are other assumptions that could be tested too! Outrage is cool, but science is productive.
The first two in here[1] are reporting 3% of a 2TB drive in a couple months. And speculating what that would look like for a 256GB drive. So, not definitive, but if it does mean 24% in two months for a smaller drive, that's not great.
Edit: I suppose it makes sense that the people seeing lots of wear are the 2TB drive buyers. People who are willing to pay for that much NVMe probably use it a lot.
I've had my 256GB SSD + 16GB RAM M1 MBA for a little over 2 months now. There was one time that I noticed a strangely large amount of swap usage for no discernible reason, and rebooted immediately. Otherwise, swap doesn't seem to be used that often.
I really don't understand why Apple is apparently incapable of making these laptops use standard, replaceable m.2 NVMe SSDs.
Seeing this thread made me concerned, but it looks like my SSD isn't something to be worried about yet:
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 34 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 6,963,442 [3.56 TB]
Data Units Written: 3,626,988 [1.85 TB]
Host Read Commands: 110,283,456
Host Write Commands: 59,878,323
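For reference, the bracketed TB figures are derived from the raw counters: an NVMe "data unit" is 1,000 512-byte sectors, i.e. 512,000 bytes. A quick check against the numbers above (smartctl appears to truncate rather than round, hence the slightly lower figures it prints):

```python
# NVMe data units are 1000 * 512 bytes each, per the NVMe spec;
# smartctl derives the decimal-TB figure from this raw counter.
DATA_UNIT_BYTES = 512 * 1000

def units_to_tb(units: int) -> float:
    return units * DATA_UNIT_BYTES / 1e12

print(units_to_tb(6_963_442))  # Data Units Read    -> ~3.57 TB
print(units_to_tb(3_626_988))  # Data Units Written -> ~1.86 TB
```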
> I really don't understand why Apple is apparently incapable of making these laptops use standard, replaceable m.2 NVMe SSDs.
Because there are no standard M.2 form-factor NVMe SSDs without a controller chip, and even if you could find one, there's no standard protocol to talk to it.
Recall that 3rd-party drive manufacturers are really bad at properly making Self-Encrypting Drives (lots of HN stories previously; just search for SED). That's very likely a factor in Apple's decision to implement its own SSD controller, inside the T2 chip on recent Intel Macs and directly on die on Apple Silicon, so that Apple fully controls the data written to raw flash and can be confident in its own implementation of encryption.
Charging a premium for more capacity is standard Apple practice anyway; the fact that such a security-focused approach makes it unavoidable is a side effect.
Aside from encryption, commodity NVMe SSDs are also really bad at correctly implementing idle power management. The Linux kernel is constantly adding to its list of drives that cannot safely use the deepest idle state because on many systems the drive won't wake back up after being put to sleep. Apple might be able to have a bit of an easier time since they control the host system so tightly, but they would still end up having to accommodate plenty of SSD bugs/quirks.
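For anyone curious, the usual workaround on Linux looks something like this (device name illustrative; requires nvme-cli). In-kernel, the same fix is applied automatically to known-bad drives via the NVME_QUIRK_NO_DEEPEST_PS quirk table:

```shell
# List the power states the drive advertises (the deepest are the risky ones)
nvme id-ctrl /dev/nvme0
# If you hit wake-up hangs, cap the allowed APST latency on the kernel
# command line so the deepest idle states are never entered:
#   nvme_core.default_ps_max_latency_us=5500   (0 disables APST entirely)
```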
Apple could have made their own replaceable SSD boards.
It looks like they want their devices to become obsolete sooner, which seems like a good business idea, but there is a risk of upsetting many customers. Time will tell.
Ouch. That's my only 2TB drive, in a laptop running a rolling-release GNU/Linux distro with heavy swap usage, encryption, and plenty of overnight compilations, after a bit more than a year:
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 36 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 1%
Data Units Read: 56,263,143 [28.8 TB]
Data Units Written: 36,077,380 [18.4 TB]
Host Read Commands: 1,252,403,456
Host Write Commands: 1,018,672,820
Controller Busy Time: 15,360
Power Cycles: 234
Power On Hours: 10 255
Unsafe Shutdowns: 47
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
If we are more reasonable and say it's running for 12 hours out of the day, that works out to a continuous 50 MB/s of writes.
For comparison, my daily Linux laptop (XPS 13, 512GB Samsung NVMe SSD) has a total of 3.7 TB of writes over 3 years. This is my work dev and home laptop, and it's in constant use (although no video editing):
3.7 TB / (1095 days * 24 * 60 * 60) / 2**20 ≈ 0.041 MB/s (taking 1 TB = 2**40 bytes, 1 MB = 2**20 bytes)
That's three orders of magnitude of difference. I can only think of three explanations: 1. the SMART reporting is wrong, 2. macOS or the M1 SSD controller has serious write-amplification issues, or 3. you are actually doing something that needs serious write throughput, like lots of video editing (your stats aren't impossible, after all).
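Putting the two rates side by side (the 50 MB/s figure comes from the comment above, and both numbers rest on the SMART counters being accurate):

```python
# Compare the implied sustained write rates, using binary units
# (1 TB = 2**40 bytes, 1 MB = 2**20 bytes).
xps_rate = 3.7 * 2**40 / (1095 * 86400) / 2**20  # MB/s over 3 years
m1_rate = 50.0                                   # MB/s, from the thread

print(round(xps_rate, 3))         # ~0.041 MB/s
print(round(m1_rate / xps_rate))  # ~1200x, i.e. three orders of magnitude
```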
There's an option 4:
you are running out of RAM and macOS is doing a lot of swapping.
These SSDs can probably write on the order of 3000 MB/s = 0.003 TB/s, so you could end up with 155 TB of total writes after just 155 / 0.003 / 3600 ≈ 14 hours of RAM-heavy workload.
It would have to be a legitimately RAM-heavy workload, though. Running out of RAM due to leaving applications (or browser tabs) open might result in swapping those out to disk until they are used again, but the level of swapping required to generate this much write usage would essentially be using the disk as RAM: the running program doesn't have enough RAM, either because it needs more than is physically available or because of buggy memory management.
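As a sanity check on that 14-hour figure (3000 MB/s is an assumed sequential peak; real swap traffic would run far below it, so the real time would be much longer):

```python
# Hours of flat-out sequential writing needed to accumulate 155 TB.
peak_tb_per_s = 3000 / 1e6   # 3000 MB/s expressed as TB/s
total_tb = 155
hours = total_tb / peak_tb_per_s / 3600
print(round(hours, 1))       # roughly 14 hours at peak speed
```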
Right, I was thinking of things like large simulations where you need to keep a large dataset in RAM and update every datapoint at every iteration.
Or maybe multiple VMs running simultaneously doing CI jobs could also do it.
I don't think simple memory errors like leaks would cause it, since that would just end up filling the disk once, but wouldn't go through writes quite as fast.
That reminds me: when SSDs first came out we'd have to set noatime on *nix systems, including Macs, to prevent the OS from writing a file-access timestamp every time a file was read; otherwise it caused significant write amplification. Modern systems make this mostly unnecessary these days (Linux defaults to relatime), but it could be something similar, either in the SSD controllers themselves or a filesystem behavior spaced just right in time to turn 1 byte into 4096 bytes of effective NAND writes. That's only 24 GB of requested writes turning into 100 TB in the worst case.
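For reference, the old mitigation on Linux looked like this (device and mount point are illustrative; modern kernels default to relatime, which already batches most atime writes):

```shell
# See which atime option the root filesystem currently uses
mount | grep ' on / '
# fstab entry disabling atime updates entirely, so reads stop causing writes:
#   /dev/nvme0n1p2  /  ext4  defaults,noatime  0  1
# Or apply it live without a reboot:
sudo mount -o remount,noatime /
```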
Yeah, that's a lot. I've got a 5-year-old 512 GB Samsung PM951 that's been a system drive in 3 Windows systems with 16-64 GB of RAM, and over the course of 6,200 power cycles and 16,500 hours it has only 35 TB written and 24 TB read.
How do you even get an unsafe shutdown in a notebook with a battery, unless you yank the battery out, or the OS is buggy and won't shut down properly/preemptively on low power?
I've got a number of unsafe shutdowns listed on my M1 MBP, too. Most likely due to the kernel panics I get when rebooting and connected to my dock in clamshell mode.
It's not that they are incapable. They are unwilling.
Nobody should be surprised that one of the most successful proprietary technology companies on the planet sells highly proprietary technology. The original 2008 MacBook Air had a 1.8" PATA ZIF hard drive.
A quick search shows PATA ZIF drives were also used in ThinkPads, Dell/HP/Acer/Asus laptops, and other portables like the iRiver. Replacement drives for that model of MacBook Air are available, so it doesn't seem like proprietary tech at all.
Apple still charges that kind of premium on RAM even on the Intel Mac mini where you can replace the RAM.
People still pay it, because most people probably don't know the RAM is replaceable, don't want to be bothered with doing it themselves, don't want to accidentally damage their new computer, don't realize the upgrade prices are high, or other reasons.
Apple has always charged high premiums for upgrades... premium upgrades aren't a strategy they suddenly invented when they started soldering components down.
Apple has reasons for soldering down components, reasons which I don't really agree with, but I don't think they're all that worried about a small percentage of people avoiding their upgrade prices.
> premium upgrades aren't a strategy they suddenly invented when they started soldering components down.
What you wrote is only part of the story. Another part happens after using the computer for a couple of years: RAM and SSD prices decline over time, while software developers find exciting new ways to use more of both.
It's more profitable for Apple when users replace entire computers rather than just the RAM + SSD. And unlike with the initial premium upgrades, users have more motivation: the warranty's expired, and the price difference is way larger.
I just upgraded my 2015 Macbook Pro with a 2TB SSD and a new battery. I'm planning to keep using it for a few years.
The 2015 MBPs were the last ones that had replaceable drives (it's a proprietary slot, but adaptors to M2 are available).
I still think that the 2015 models are the last laptops Apple actually targeted at pro users. They even added support for PCIe NVMe drives in a recent macOS version, so these MBPs got MORE EXPANDABLE after release. It's crazy! (They do have soldered-on RAM, though.)
The only comparable recent Mac was the 2018 Mac mini, with user-upgradable RAM! It does have soldered-on flash, unfortunately. But it has so many ports, and with external Thunderbolt enclosures you can add GPUs, SSD RAIDs, ... It's pretty amazing for a small desktop and surprisingly usable as a developer workstation.
(I'm not counting the current Mac Pro because it is so outrageously expensive that I can't imagine it makes sense for anyone except the highest-paid professionals.)
My 2012 MacBook Pro is still going strong! I use it daily. Upgraded it to a 1TB SSD and 16GB RAM, and I sincerely hope it'll last a few more years. Still on Mojave, though; I boot into Crapolina from a USB SSD to do App Store work, but will probably upgrade to Big Sour at some point, horrendous menu bar spacing and all.
I type this on my work MacBook, where I've had to press backspace so many times due to the shift key not working and the "i" key repeating itself, with the Touch Bar flashing constantly next to the power button... The battery has a worse rating too, even though it's a 2016 versus my 2012. I hate this keyboard.
> It's more profitable for Apple when users replace entire computers rather than just the RAM + SSD.
Is there any cold, hard data on how often a Mac user gets a new laptop vs. a PC user? It would also be interesting to see how common it is for PC users to add RAM/disk during their PC's lifetime.
I've only just got a new Mac after having my last one for nearly a decade, and I've had to replace two Windows machines in that same time period. Not exactly cold, hard data, but a sample size of one.
There are also Windows machines that make it hard to replace things, mind you.
Same with me and a lot of people I know. Macs are so reliable (in general; I don't want to get into a debate, since a lot of malware doesn't target them simply because Windows has the larger market share) that upgrading is not required for a long time.
That may change in the future if Apple introduces planned obsolescence, but so far no issues.
Having done over a decade of desktop support for several companies, both Apple and Windows shops, I can say this notion that Macs are more reliable doesn't hold water for me.
If we compare devices in the same price bracket (premium category) as MacBooks, then among thousands of devices we actually found them to last the same length of time without any upgrades (6-10 years on average).
What a lot of people neglect to consider is that non-Apple brands have an entire "budget" category that Apple does not participate in. It's wrong to compare this category with devices of another category, no matter the make and model.
Counterpoint: it's a lot more wasteful, especially for desktops, to own Apple.
Look at all the iMacs with recent, high quality 5K monitors that'll be scrapped, as they can't be used with another video source.
Unless you need a monster dedicated GPU, you could get a Mac Mini. Or if you need GPU power and you have money to burn, you could get a Mac Pro. No screens attached.
2015 and earlier MacBook Pros had replaceable SSDs. Remove 10 pentalobe screws, take off the bottom panel, remove 1 Torx screw, pull out the SSD, put the new SSD in, reassemble, done.
The soldered on stuff is a recent change, and it's stupid for things like RAM, but even more stupid for things that wear like SSDs.
They have the "pro" line for a reason. It's meant for professionals. It's okay to solder RAM and SSD in lower-end models I suppose, but not in those people actually use to get their job done.
I must agree with this. About a year ago, our company's iOS development team was replaced. The managers bought the new developers company MBPs. However, after about six months they started complaining that they could not make progress on one feature because Xcode kept crashing when they opened an existing Storyboard file. Turns out the managers had of course cheaped out and bought the developers Macs with the lowest 8GB memory option. Meanwhile, the consultants who originally created those files had had much more sensible 32GB.
Now the MBPs for the whole team need to be replaced at great cost and double the environmental resource use. Without Apple soldering the memory, our IT department would certainly have just upgraded it. In fact, when I ran into a similar issue, needing 32GB on my 16GB Dell laptop, that's exactly what we did.
Theoretically yes, if they were for example managed by a leasing company.
In this case, the company actually only officially supports (leases) Windows laptops. Macs should not officially exist to start with, but are nevertheless required for iOS app development. So MBPs are handled as "extra" IT equipment. If there is no use for such a piece of IT equipment (e.g. it is underpowered or otherwise not fit for purpose) AND it contains sensitive business data (like a developer computer almost certainly does), it is actually a security issue for the company.
So for security reasons the company would actually prefer that such a computer be DESTROYED when there is no use for it anymore. Sadly, you can't even remove the drive from an MBP and sell/give it to an employee for personal use, so Apple soldering the components on the mainboard is a double whammy.
Don't recent Apple computers have hardware encryption by default through the T2 chip? Just throw away the key and all data on disk should be irretrievable. Replacing the disk in this case would add no security, so it would be the wasteful course of action.
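A toy sketch of the crypto-erase idea (NOT real cryptography: the T2 uses hardware AES with keys held in the Secure Enclave, while this uses a hash-derived keystream purely to illustrate the principle that destroying the key destroys the data):

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data against a SHA-256-derived keystream (illustrative only)."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

key = secrets.token_bytes(32)                  # held only by the controller
on_flash = keystream_xor(key, b"sensitive business data")
# "Throwing away the key" IS the erase: the ciphertext bytes are still
# physically on the chips, but without the key they are just noise.
key = None
print(on_flash != b"sensitive business data")  # True
```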
I'm not the kind of person who throws electronics away, but, you know, there's a proven and reliable way to decrease the environmental cost: don't make your damn devices disposable. Design them to be taken apart and make consumable components easily replaceable by the end user. But I guess non-disposable devices don't make charts go up as much as disposable ones do.
Case in point: the M1 Mac Mini internals are half electronics, half air. They could've easily fit all kinds of slots and modular components in there, yet they deliberately decided not to.
Not everyone does. There is even a thriving market for such user-replaceable upgrades, proving that more and more people were opting to get their upgrades from reliable third parties rather than from Apple.
> Apple has reasons for soldering down components, reasons which I don't really agree with, but I don't think they're all that worried about a small percentage of people avoiding their upgrade prices.
While I don't think many of us are used to thinking at scale, any percentage of Apple's customers making literally any choice about something they purchase from Apple is a f-ton of money.
I'm guessing supply chain, repair, troubleshooting, and a lot more. The MacBook Air now comes with a single board plus a daughterboard for the headphone jack; when it breaks, service just replaces the board, reducing any time (money) wasted on troubleshooting things like SSDs. It also lets Apple streamline manufacturing: the more complicated the assembly (and yes, even a socketed SSD means additional assembly and tests, and then you have to support and troubleshoot third-party SSDs), the more chances of failure, not fewer.
What that really means is less opportunity to fix individual components, more opportunity for $1499 upsells when the warranty runs out. Cutting off third party repair and secondary used markets. Win, win, win in Apple's eyes.
>What that really means is less opportunity to fix individual components
That has been the trend in the PC and electronics industry for decades, though. Notice how people don't fix their TVs/radios/etc. anymore? Heck, even in the car industry.
>Cutting off third party repair and secondary used markets.
Well, somehow these machines also get high "consumer satisfaction" ratings, have long service lives, and retain a lot of resale value.
I think this sort of criticism of Apple usually lacks much substance. Everybody knows what Apple products are; if you want something that's highly customizable and easy to upgrade, then why not just buy one of the countless different products that offer those features? Apple's products tend to work very well, are typically well made, and reliably last a rather long time. I personally enjoy working on them more than any Linux system, so I buy them for that purpose. If I wanted a computer to tinker with, I wouldn't buy a Mac.
Yet these highly integrated Macs have very long average service lives, high customer satisfaction, and high second-hand resale values. So clearly that isn't true.
Clearly these machines are a few months old at most; we have a lot to learn about how they work and what's likely to break. If they're anything like the last time Apple made a big new hardware change, you'll have to watch out for dust if you want a functional computer.
That also means repair and troubleshooting for custom configurations really sucks. Either they have to keep a bunch of configurations on hand or the user is SOL waiting for basically a new computer.
My 2016 MBP is also at around those R/W numbers, but a bit higher on Percentage Used. I am, however, not sure what those Power On Hours are referring to, since I have been using this machine daily for many hours.
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 37 Celsius
Available Spare: 87%
Available Spare Threshold: 2%
Percentage Used: 5%
Data Units Read: 98,665,764 [50.5 TB]
Data Units Written: 84,525,474 [43.2 TB]
Host Read Commands: 980,835,796
Host Write Commands: 811,309,011
Controller Busy Time: 0
Power Cycles: 14,955
Power On Hours: 359
Unsafe Shutdowns: 21
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
Might want to consider browsing on something else (a large tablet, e.g.?) until this issue is resolved. My machine is almost identical to yours, but my reported writes are about 1.31 TB, less than 1/30th your number. I do some native compiling and some browsing. Chrome is a well-known memory hog.
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 25 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 5,849,123 [2.99 TB]
Data Units Written: 2,568,550 [1.31 TB]
Host Read Commands: 75,126,642
Host Write Commands: 32,909,838
Controller Busy Time: 0
Power Cycles: 160
Power On Hours: 42
Unsafe Shutdowns: 3
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
I had a ridiculously high failure rate with the Vertex 2. I also still have an OCZ drive of that era somewhere, which would work... but if you didn't use it for a while, it would be completely blank the next time you did.
Using socketed M.2 SSDs introduces a class of problems ("the SSD isn't seated correctly") and also makes the device thicker. There are probably other reasons but I'll bet those are the two big ones.
Plain and simple, they don't want you to take their devices apart. They use hardware and techniques that are extremely anti-consumer. From non-standard form factors, to non-serviceable components, to literally fighting against Right-To-Repair in 20 states [1]. And then when you hold out on them they push you updates to sandbag your performance.
Why so many tech professionals and hackers still swear by this company and feign surprise every time they get shafted by proprietary design choices is baffling and frustrating.
If they wanted you to play inside, they wouldn't use Torx screws. If they wanted you to replace that, they wouldn't have glued it together. If you were supposed to take the keyboard off, there wouldn't be 60+ screws (literally) holding it on.
You're supposed to be a good consumer and consume this one so you can buy a new one.
While I agree with your argument, I want to call out a specific point:
> If they wanted you to play inside, they wouldn't use Torx screws
Torx screws are superior to Phillips heads because they are far more resistant to cam-out, which is a bigger problem than needing to buy a screwdriver kit for $10 off eBay once in your life.
> Torx screws are superior to Phillips heads because they are far more resistant to cam-out[.]
I work at a repair shop currently, and can confirm this. Give me a Torx or a hex screw any day over a Phillips head.
You do have to be careful with them though, because once they do cam out, you're pretty much dead in the water, while with Phillips you can often hack your way around it.
Edit: I also agree with the rest of the argument. If you've got the budget (and I assume that if Apple is an option, you do), please consider buying something with Linux officially supported. Between Dell, Lenovo, and smaller scale setups like System76, there are plenty of options.
> please consider buying something with Linux officially supported. Between Dell, Lenovo, and smaller scale setups like System76, there are plenty of options.
My biggest complaint about them is heat and performance: my MBP almost always sounds ready for takeoff. Apple really did hit the sweet spot with the M1, and I don't see x86 laptops catching up anytime soon. If they release 32+ GB machines, I'll be seriously tempted to get my Macs refreshed, even though I know they are essentially disposable machines now.
Also, Apple is the only manufacturer I can count on offering US keyboards without any hassle. Lenovo does that with Thinkpads and with Dell you need to work to find the models that are available with that option.
Those are all valid points. None of them are deal-breakers for me personally, but I understand how they can be.
I wish I could like the M1. I'm honestly really looking forward to ARM and RISC machines becoming more common, as long as there are manufacturers who make them at least somewhat reparable.
The groundwork was done for embedded systems, but they've since added desktop features and a few laptops are even made with them now. Those are generally not very well regarded, though, since the x86 emulation is slow and there is basically no native software.
The parent's argument stands, even if the details are slightly off.
Apple switched from Torx to pentalobe screws because Torx wasn't proprietary enough.
Also, the keyboard wasn't held down with 60+ screws; it was held down with rivets, which is arguably worse. They could painstakingly be drilled out, tapped, and replaced with screws if you were really masochistic.
What does this even mean, when you can easily buy both types of screwdrivers at any hardware store?
Reportedly, Torx and later pentalobe screws are used because of their tiny size and lower profile; a Phillips head at that scale (0.8mm) is instantly stripped.
I agree that Torx is useful. It's also an international standard.
Pentalobe is not an improvement over Torx. Torx was arguably working fine for Apple, but they decided to invent and start using a proprietary, non-standard screw anyway. The consensus at the time was that Apple did this deliberately to make it harder to get into their products.
The only reason you can find them now, 12 years later, is that third parties started making the tooling, because it turns out getting into Apple hardware is something a lot of people eventually want or need to do.
Apple's attempt to lock you out of your own hardware didn't work, because iFixit and others spent tens of thousands of dollars on custom tooling. It was a big deal at the time; the tooling wasn't available anywhere, let alone at a hardware store.
It was an angry article by iFixit that started the whole controversy. They could very well have done it for reasons related to licensing, or machining, or space savings, or who knows what else. The fact that the 17" MBP never used them, while the smaller models did, hints toward that.
You may have taken the stories at the time at face value, but within weeks you could buy pentalobe screwdrivers, and they surely did not cost 'tens of thousands of dollars' to develop; screws are cold-pressed and very simple to produce.
> the keyboard wasn't held down with 60+ screws; it was held down with rivets, which is arguably worse. They could painstakingly be drilled out, tapped, and replaced with screws if you were really masochistic.
I just replaced a keyboard in a MacBook Air. It's screws around the outside, then dozens of rivets on the inside. If you're okay with the old keyboard being destroyed, you just rip it out: the rivets pop out, and the replacement keyboard goes back in with screws. So it was a pain, but the drilling/tapping isn't required.
Total cost $35, plus some quality time with my kid.
> Apple switched from Torx to pentalobe screws because Torx wasn't proprietary enough.
That was a simple and cheap change with clear benefits for Apple in terms of discouraging modifications to devices they support, so the cost-versus-benefit actually makes sense for Apple there.
Whereas to suggest they designed a whole proprietary storage-controller interface just for the purpose of discouraging user servicing doesn't make sense at all. If that were the goal, why not just use off-the-shelf NVMe modified to have some kind of secret power-on key or something?
There are lots of ways they could spin that as a benefit for the user (i.e. security).
It would be a poor explanation, but even when there's a good explanation, as you can see it is still dismissed as just "good optics". So how do you tell the difference between "good optics" and features that provide real benefit?
Some of my common CNC manufacturing runs use jigs with screws to hold odd-shaped products cut from large plate stock. I've tried many different styles of screws. Phillips is great if you match the driver to the screw; the wrong driver will only sort of work. If you buy a pack of DeWalt driver bits, you will note that each one is labeled. Match that label to your screws, and you should have a good fastener experience.
When I need to use a screw thousands of times, in manufacturing processes, I buy 18-8 Phillips.
I use Torx only for one specific application: for user-serviceable finished products that have previously had support problems with idiots who can’t be bothered to select the right screwdriver. Some folks simply can’t do it. So, the most problematic products end up with Torx fasteners, which problematic customers mess up by using the wrong size Torx or even hex wrenches, and my support guys have a script for these problems that goes through “you didn’t use a T-10 wrench; there is nothing more to be said and nothing to be done for it. It’s black and white. Sorry.”
I understand proprietary screws being anti-consumer, but Torx screws?
Every building site I have been on almost exclusively used Torx screws, and every hardware store sells Torx screwdrivers in every size. I wish more manufacturers would use Torx, since the heads don't show the same kind of wear.
And yes, manufacturers aren't building their devices with repairability as the primary design goal. This is understandable, and there will always be trade-offs. Even devices that are designed to be relatively easy to repair, like Lenovo ThinkPads or HP Pro/EliteBooks, have these trade-offs depending on the form factor.
The socket doesn't have to be on top of a PCB, and given Apple's history of "tweaking" standards, it's hard to believe the thickness argument. And given how reliable M.2 is (I've personally never seen a problem with one that looked seating-related), I'm betting socketing the SSD actually makes the device more reliable long-term, since you're looking at just replacing a disk in a couple of years if it fails, versus basically replacing the entire motherboard, which has got to be the most expensive part.
So this is likely just penny-pinching in manufacturing, assuming the customer will bear the repair fee if it breaks.
You're thinking about reliability in a different way from Apple. By the time the SSD fails, it's not Apple's problem anymore; it's your problem. (Or, more likely, Louis Rossmann's problem.) Conversely, an SSD that gets jostled out of a socket if the product gets bumped the wrong way (maybe because the courier kicked the package around) means time and money spent on in-warranty repairs.
Also, Apple never actually used M.2. All of their socketed SSDs are proprietary form factors, though there are cheap adapters you can buy that will let you use M.2 drives.
<i>an SSD that gets jostled out of a socket if the product gets bumped the wrong way</i>
You know the other end of an M.2 drive is screwed down, right? Anything that manages to "jostle" an M.2 drive out of its socket has likely also destroyed the machine.
> Using socketed M.2 SSDs introduces a class of problems ("the SSD isn't seated correctly")
I can't get over how people say such stuff with a straight face. Apple is a decades-old trillion-dollar hardware company whose engineers can design high-tech processors, but they apparently can't ensure that a RAM module or SSD stays fixed in a slot? Do you really believe that?
> There are probably other reasons
Sure there are, and I'll list the major ones: planned obsolescence (soldered parts are harder to repair) and increased profits through price gouging on upgrades.
> planned obsolescence (soldered parts are harder to repair) and increased profits through price gouging on upgrades.
Neither of these really makes sense.
Planned obsolescence: Apple laptops retain their resale value for a very long time. Even if 8GB of RAM is no longer enough for you, there are plenty of people who'll still be able to use the device for many more years. The days are long over where computers became obsolete after a few years without upgrades.
Price gouging on upgrades: Almost everyone buys the base configuration, and almost no-one upgraded RAM or HD even when the hardware used to allow it. Whatever increased profit Apple is making from upgrades is going to be absolutely tiny.
If Apple really wanted to price-gouge on upgrades, they'd at least stock the 16GB models in stores. My guess is that it's barely worth Apple's while to make these models at all, in a purely economic sense. What you're paying for isn't the cost of the RAM/storage itself, but the money necessary to make it worth the bother of making additional models which almost no-one buys.
> Planned obsolescence: Apple laptops retain their resale value for a very long time.
Planned obsolescence here means limiting upgrades and repairs, and making repairs or replacements so costly that you prefer to buy a new device when the current model develops performance issues or hardware failures. With the M1 this is even worse, as you can't even run other base OSes on it reliably, making its users susceptible to systemic obsolescence as well (not to mention hardware or software compatibility when the OS is upgraded).
It has nothing to do with the resale value of a product. (That is a humbug argument, as nobody can predict what your device will be worth tomorrow if some newer technology comes along. Moreover, soldered parts and a non-user-replaceable battery actually make such laptops even less desirable on the second-hand market, because they cannot be upgraded, unlike the older Apple laptops.)
> Almost everyone buys the base configuration, and almost no-one upgraded RAM or HD
I'd really like to know your source for this.
> What you're paying for isn't the cost of the RAM/storage itself, but the money necessary to make it worth the bother of making additional models which almost no-one buys.
Again, share your source for this really ridiculous argument - they wouldn't need to bother with all this if they didn't solder the parts in the first place.
Planned obsolescence does have a bit to do with resale value, as resale value determines how much it costs to sell your existing device and replace it. This in turn has implications for whether upgrades are worth the money. Few upgrade ever for any reason; even fewer upgrade if it's economically feasible to sell your old model and buy a newer one. Apple doesn't need to do anything nefarious to stop people upgrading their MacBooks.
In looking at repairs you have to factor in reliability as well as replaceability. We're seeing a trend towards laptops becoming harder to repair but also intrinsically more reliable. No-one complains that laptop CPUs can't be replaced, for example, because CPUs are reliable enough that it's not an issue. We're well on the way to the same being true of internal flash storage. On top of that, Apple is reaping significant benefits in energy efficiency, reliability and performance from closely integrating components. That is something that benefits everyone who buys a MacBook. Hardly anyone benefits from removable RAM or SSDs.
I think some people mistakenly think that Apple could just stop 'soldering down' the RAM and SSD, but they must not realize how closely integrated everything is in the M1 MacBooks. The idea that Apple did all of this extraordinarily expensive R&D just so that they could sell a few more RAM or SSD upgrades is bordering on a conspiracy theory. As I said, there is so little demand for 16GB MacBook models that Apple doesn't even stock them in stores.
Overall, I just don't see any evidence that Apple has bad motivations here. It seems to me that you are just speculating uncharitably.
> We're seeing a trend towards laptops becoming harder to repair but also intrinsically more reliable.
The trends are separate and not related. I have a 10+ year old laptop with replaceable parts and it still runs great without any issues. With the European Union introducing the Right to Repair bill, I expect to see a reverse of this trend soon, and more repairable electronics in the future. If Apple and the others stop their selfish and unethical lobbying against the Right to Repair movement in the US, then the American consumers will also enjoy the same benefits and not be taken for a ride.
> The idea that Apple did all of this extraordinarily expensive R&D just so that they could sell a few more RAM or SSD upgrades is bordering on a conspiracy theory.
Perhaps it does to the ignorant. But it is already recognized that firms like Apple that indulge in this know that the profit from such unrepairable devices offsets the additional R&D expense required to create them. Even the Wikipedia page on planned obsolescence specifically points this out:
Producers that pursue this strategy believe that the additional sales revenue it creates more than offsets the additional costs of research and development, and offsets the opportunity costs of repurposing an existing product line.
And, as mentioned already, everyone knows how Apple is actively lobbying in the US against Right to Repair bills, thus clearly proving that what you call a "conspiracy theory" is indeed a deliberate and entrenched business practice at Apple.
Sure, but as failure of any given component becomes less likely, the advantages of making it replaceable cease to outweigh the disadvantages. An M1 MacBook Air with replaceable RAM and SSD would not have the same performance, battery life or form factor. 99% of Apple's customers care way more about those things than they care about upgradeability.
>But it is already recognized that firms like Apple that indulge in this already know that the profit from such unrepairable devices offset the additional expense on the R&D required to create it.
I'm baffled by this claim. If all Apple wanted was to make their laptops unupgradeable then they could just solder on generic CPU, RAM and SSD components – no R&D needed.
I try not to use the term 'conspiracy theory' lightly, but the claim that Apple's transition to the M1 architecture is motivated primarily by 'planned obsolescence' really is a conspiracy theory.
Discrete replaceable components use up more space, which leaves less space for battery. On top of that, Apple are most likely getting power consumption and performance benefits from integrating the SSD controller and reducing the length of the traces to the RAM chips by an order of magnitude (https://news.ycombinator.com/item?id=25258797). Take a look at the logic board for an M1 MacBook Air: https://photos5.appleinsider.com/gallery/38927-74332-MBA-Tea... There's just no room for RAM sticks and an M.2 slot. You could make a different laptop with those features, but it's a laptop that from the point of view of most consumers would be worse.
>but it's a laptop that from the point of view of most consumers would be worse
"Worse", but the insignificantly "better" one is landfill in a few years time when the battery/storage/keyboard/anything has a fault, or even just when it needs more RAM or storage.
Do you have any stats on this? I'd be surprised if Apple laptops ended up in landfill quicker than their competitors' on average, given the huge second hand market. You also have to bear in mind that the vast majority of 'broken' laptops just get thrown away or put in a cupboard, regardless of whether it would be theoretically possible to repair them.
It's the usual Apple stuff, "we've put so much amazing into this laptop that it doesn't matter that it's starved of RAM!". New phones have the same amount of RAM as this laptop.
You will never convince me that an SSD sprinkled with Apple magic is better than having enough RAM. NEVER.
Also you will never convince me that a non-replaceable storage is somehow necessary, or better in some way than replaceable storage, even if it shaves tens of nanometers extra thickness from the laptop. NEVER.
Apple has been doing this ridiculous stuff for decades now and its victims just keep on falling for it.
Punched in the face over and over again - from dongles, to batteries, to proprietary connectors that are abandoned the next year, to lack of headphone sockets, to overheating GPUs because of inadequate heatsinks, to unibody that isn't actually unibody and bends when you tilt the screen, to needing to replace your motherboard because your keyboard got a speck of dust in it, to screens that crack if you look at them wrong, to phones that don't work if you hold them wrong, ad nauseam. Please sir, can I have some more?
They've been punched in the face for so long that now they get a headache when they're not getting punched. It's some kind of bizarre form of masochistic Stockholm Syndrome.
THE WORST PART OF IT ALL is that the market success of this consumer-hostile garbage influences the rest of the industry and ruins other products like a cancer, so now it's super hard to find a phone with a headphone socket or replaceable battery.
Fuck this shit, FUCK APPLE, and fuck their customers for not using their wallets to demand better, and thereby encouraging and normalizing the terrible behavior of this horrible company.
8GB isn't enough for everyone, but it is enough for a lot of common consumer tasks. If you check out comparisons between the 8GB and 16GB M1 models, it's surprising how hard you have to push them before the difference in RAM becomes apparent.
I personally would have liked to see Apple bump up the base spec to 16GB. But hey, the 16GB models are available if you need them.
It is bizarre to me how many problems I've literally never seen in my life that some people on this site can come up with to defend Apple's decisions, especially on a website which purports to be Hacker News.
How could an M.2 SSD be seated incorrectly? It's either in and screwed down or it isn't. As for making the machine thicker, this obsession with shaving every last micron off has become almost cartoonish. I'll take serviceability and good keyboard travel over fashion statements any day, even if my laptop has to be a millimeter or two thicker because of it.
Once that soldered-in SSD goes, that expensive fashion statement is just more e-waste.
Come on man, nobody has problems with their SSD not being seated correctly with any other brand. There are screws to hold them in, they're not going anywhere.
The thickness thing - how thick is an M.2 SSD? 3mm? Get real.
Apple is consumer-hostile and their products are disposable, that's all there is to it.
> I really don't understand why Apple is apparently incapable of making these laptops use standard, replaceable m.2 NVMe SSDs.
Because those SSDs don't have Apple magic. Apple's SSDs are designed to be directly attached to the M1 CPU to provide low latency, high throughput storage -- what amounts to nonvolatile RAM, allowing an M1 Mac with 16 GiB max to vastly outperform an x86 PC with much more RAM at the same tasks.
No, really, though, it's so they can charge an arm and a leg at the Genius Bar for SSD replacements.
For perspective: 2 year old MacBook Pro 2018, 256GB. (Mostly used for development purposes.)
Critical Warning: 0x00
Temperature: 37 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 42%
Data Units Read: 892,035,478 [456 TB]
Data Units Written: 786,976,871 [402 TB]
Host Read Commands: 4,989,739,415
Host Write Commands: 2,554,081,641
Controller Busy Time: 0
Power Cycles: 137
Power On Hours: 2,132
Unsafe Shutdowns: 70
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
And a 6 month old Intel i3 MacBook Air, 256GB:
Critical Warning: 0x00
Temperature: 48 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 1%
Data Units Read: 59,113,734 [30.2 TB]
Data Units Written: 47,319,687 [24.2 TB]
Host Read Commands: 596,571,150
Host Write Commands: 318,913,173
Controller Busy Time: 0
Power Cycles: 93
Power On Hours: 384
Unsafe Shutdowns: 21
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
PM951 NVMe SAMSUNG 512GB
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 52 Celsius
Available Spare: 100%
Available Spare Threshold: 50%
Percentage Used: 4%
Data Units Read: 30,314,686 [15.5 TB]
Data Units Written: 35,402,028 [18.1 TB]
Host Read Commands: 601,326,085
Host Write Commands: 793,617,758
Controller Busy Time: 17,248
Power Cycles: 3,897
Power On Hours: 9,506
Unsafe Shutdowns: 343
Media and Data Integrity Errors: 0
Error Information Log Entries: 5,351
This is from my daily driver 2016 Dell XPS13 9350.
Used for years with rolling-release Linux + Windows for gaming; multiple reinstalls of every OS, and for the past year, a Hackintosh.
If that was the concern they could just solder a plain NVMe drive. Blocking user repairs doesn't require the creation of a whole new purpose-built storage controller interface, so that can't explain their rationale here.
>making these laptops use standard, replaceable m.2 NVMe SSDs
Because not using them gives Apple control of the exact specifications and capabilities of their M1 SSDs.
Else they're bound to be compatible with the lowest common denominator standard connectors and drive implementations.
Aside from profit ("paying more for custom parts"), this is also how they control "the whole widget", and how they can innovate (like with the M1 CPUs/SoC components) when others drag themselves with me-too standard parts.
Was the allure of a Mac ever that "you can build your own from standard parts"? If anything, it was always the opposite: it's not a PC.
If there’s no data available about 256GB models, then there’s no way to assess whether their speculations are valid. I look forward to more data.
My 1TB NVMe drive in my Intel mac desktop is at 1% after 18 months, and shows some 46TB of writes by their data collection method. At least 600GB of the data is music albums that haven’t changed since I first copied it over, and I’m mildly curious whether it’s truly writing 32GB/day for 18 months (since I never run out of RAM and I don’t do large I/O activities).
But they’re all focused on M1 so I’m staying out of it and letting them do so.
For contrast, my PC “system” PCIe drive bought in 2018 shows 6.7TB, and the “data” SATA SSD, which is used for games etc., has 1.5TB of wear. The drive at work had around 8-10TB since 2016, last time I checked.
I was never concerned with wear, have moderate demands/usage and used these drives as much as needed.
> People that are willing to pay for that much NVME probably use it a lot.
I guess it depends on the use case. I have a 2TB drive for my home macbook pro because that's where all my photos/videos get synced to. There shouldn't be a lot of reads/writes to a significant part of my drive even though it's large.
It also seems to me there are a couple jumps in the initial tweets here (particularly the assumption of the same write rate on 256 GiB models as on 2 TiB models). But regarding several of the assumptions you mentioned here:
> (4) that they are directly comparable to non-M1 hardware without further processing
> (5) that they do not increment when Time Machine or other filesystem snapshots occur
> (6) that they do not increment when APFS copy-on-writes occur
> (7) that they do not include the total byte size of sparsely-modified files
smartctl is a tool that inspects the drive's built-in statistics about physical writes. If it shows significant jumps from snapshots, copy-on-write, and sparse files, it's because those features aren't working as designed in minimizing physical writes.
I also find it unlikely that a bug in smartctl or IOKit would show incorrect but plausible-looking values. A bug in the SSD's firmware is more likely. But I'm not getting alarmed just yet. (I also don't own an M1 Mac, so that's easy for me to say.)
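For anyone who wants to check assumption (1) themselves rather than trust screenshots, the relevant fields are easy to pull out of smartctl's text output. A minimal Python sketch - the field names are taken from the output pasted elsewhere in this thread, and the 512,000-byte data-unit size is from the NVMe spec:

```python
import re

# Abbreviated sample of `smartctl -a` output, as pasted in this thread.
SAMPLE = """\
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Percentage Used: 1%
Data Units Written: 47,469,528 [24.3 TB]
"""

def parse_nvme_health(text):
    """Extract a few NVMe health fields from smartctl output."""
    fields = {}
    m = re.search(r"Percentage Used:\s+(\d+)%", text)
    if m:
        fields["percentage_used"] = int(m.group(1))
    m = re.search(r"Data Units Written:\s+([\d,]+)", text)
    if m:
        units = int(m.group(1).replace(",", ""))
        # Per the NVMe spec, one data unit is 1000 512-byte blocks.
        fields["tb_written"] = units * 512_000 / 1e12
    return fields

info = parse_nvme_health(SAMPLE)
print(info)  # {'percentage_used': 1, 'tb_written': 24.304...}
```

The bracketed "[24.3 TB]" figure smartctl prints is just this same unit conversion done for you.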
As always, this thread is full of FUD instead of facts.
You are spot on with your 1% statement (which could be as little as half that, due to rounding). The percentage used is based on figures set by the manufacturer; it is the one used in drive warranties. Also worth pointing out that it doesn't mean the drive stops taking writes at 100%. This counter actually goes up to 255 (per spec), where 255 means 255+. It's literally just for life "expectancy".
Some people are saying to ignore that and look at TBW instead. They simply don't know what they are talking about! There's been no discussion of the NAND type on these drives. Is it SLC, MLC, TLC, QLC? Or is it one of the hybrid modes, such as pSLC or iTLC, which act like the higher categories? This can mean the difference between <1k and >100k P/E cycles.
We simply don't know what the OS is doing. I know that I can have WAY more applications open (and remaining responsive) on my 8GB M1 than I could on my 16GB i7. It doesn't feel like traditional swap - so maybe it's not. Maybe it's way more aggressive, suspending entire applications to disk in the background - taking advantage of the massive bandwidth to make the UX better at the cost of drive writes. We simply don't know.
From following the original thread, the vast majority of people are reporting 1% usage since launch. There are a handful with higher usage than that. We don't know what these people are doing on those machines, but it's evident from the figures that the writes generally scale with the amount of RAM in the machine: 8GB machines generally show a lot less usage than 16GB ones. Maybe the people with HUGE amounts of writes are running a memory hog like Chrome, which consumes all available RAM on the machine and which pages when they background it. Some people report 8hr daily usage since launch with 200 drive power-on hours (drive hours aren't the same as uptime, as drives can be suspended for power saving), others report 800 for the same "uptime". Clearly it is due to the workload.
EITHER WAY, the "percentage used" is definitely the figure to go on, because only the manufacturer knows exactly what NAND types they are using (unless someone wants to enlighten us?) as well as the wear-levelling algorithms on the controller.
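For reference, the linear extrapolation behind the "16.5 years" figure everyone is arguing about - and its sensitivity to the counter's rounding - is simple enough to write down. This is a naive sketch that assumes wear continues at the observed rate, which nothing here guarantees:

```python
def projected_years_to_100pct(pct_used, months_elapsed):
    """Naively extrapolate 'Percentage Used' to the 100% mark.

    The drive rounds pct_used to a whole number, so a reported 1%
    could really be anything from 0.5% to 1.5% -- treat the result
    as an order of magnitude, not a prediction.
    """
    if pct_used <= 0:
        return float("inf")
    return months_elapsed * (100 - pct_used) / pct_used / 12

print(projected_years_to_100pct(1, 2))            # 16.5  (the headline case)
print(round(projected_years_to_100pct(3, 2), 1))  # 5.4   (the 2TB-drive reports)
```

Even the scarier 3%-in-two-months reports extrapolate to over five years before the counter pegs, well past the warranty period.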
> Maybe they are suspending entire applications to disk in the background
This tactic of suspending unused tabs has MASSIVELY reduced my browser memory usage - originally with "The Great Suspender", then built into FF natively when I switched to that. This should be used everywhere.
It's seriously a game changer for someone like me who keeps 100 tabs open.
What really matters is the number of write cycles drive performed, not some imaginary dumbed down single number "Percentage Used" indicator.
> their drive will go readonly.
Can you point me to a drive manufacturer/model which reliably goes into READ-ONLY mode after encountering a defect or exceeding its wear limit? Hint: even the most expensive Intel server drives will silently die on you despite claiming a read-only fallback.
Just in case: I'm all for science, and I'm now going to try to debug which processes cause heavy writes to disk. It's easier for me because I don't really use much except a browser and some games.
> The first line of the smartctl output everyone's bandying about says that a total of 1% of the drive's capacity has been used, which means that in about 16.5 years (2 months * 99 / 12) their drive will go readonly.
Problem is that my SSD ended up 3% dead two months after I bought the M1, after a month of daily browsing / Netflix. So this news got me really worried about what's going to happen if I actually move my files to that laptop, connect it to Dropbox, compile C++ on it, and maybe do some Java web development. What if I end up with 5% capacity loss a month?
On top of that, you don't get answers by just posting numbers and blaming Apple.
Numbers do not mean anything without context.
It would be really interesting to understand what applications folks with both higher and lower numbers use daily. And what their developer habits are. Do they use a specific browser or a specific toolchain?
But ultimately, someone should probably write a little app or daemon that can just keep an eye on what specific processes are doing disk i/o. I think if people ran that for a while and then posted _that_ output, there would probably be some better answers.
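Short of a dedicated daemon, a poor man's version of that is to snapshot the smartctl counters before and after a suspect workload and take the difference. A sketch of the arithmetic (the 512,000-byte unit size is from the NVMe spec; attributing the delta to a specific process still needs something like macOS's `fs_usage`):

```python
def gb_written_between(units_before, units_after):
    """Physical GB written between two 'Data Units Written' readings.

    NVMe data units are 512,000 bytes each (1000 blocks of 512 bytes).
    """
    return (units_after - units_before) * 512_000 / 1e9

def gb_per_hour(units_before, units_after, hours):
    """Average physical write rate over the measurement window."""
    return gb_written_between(units_before, units_after) / hours

# e.g. a counter that climbs by 600,000 units over one hour of browsing
# is about 307 GB of physical writes in that hour:
print(round(gb_per_hour(0, 600_000, 1)))  # 307
```

Run the suspect app in isolation between two readings and the numbers become much harder to argue about.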
Agreed. I've seen thousands of mac minis in CI farms writing 1+ TB / day (all day, every day) and they're slowly starting to die 4 years in. It's hard to write flash to death with that kind of write load. I wouldn't worry about that.
Well, maybe - using a 16-year-old machine today is one thing, but what about using a current machine in 16 years? A lot of stuff has stabilized and has been "good enough" for many use cases for quite a while.
Well, on Mac and other proprietary hardware & software combos.
Enterprise Linux distros such as RHEL regularly provide 10-12 years of paid support per major version, and presumably you can upgrade to the next major version to reset the clock.
Same thing for community Linux distros - Fedora, Debian or openSUSE still run on 16+ year old hardware today, and one can expect that to continue in the future.
I still use my Apple 16GB iPod touch to play music; it sits in a speaker that has a dock for it. The hardware still works, but almost none of the software does except the music player. The browser no longer works, otherwise I would play music off Spotify or YouTube. That thing is probably 13 years old now.
Performance improvements have slowed a great deal since then. Assuming you aren't doing anything particularly heavy, you can easily get away with using a Sandy Bridge CPU (10 years old) for most tasks. You'll likely still be able to use it in 5 years just fine. That applies even moreso for something as performant and efficient as the M1.
Case in point, my ThinkPad T420 has been in use since 2011. Even though it's no longer my main driver, the performance is more than good enough for when I need a backup device.
For comparison, 16 years ago, we're talking about single-core 32-bit CPUs, the very first version of their Centrino brand, DDR2 RAM and IDE/PATA HDDs. There's just not much you can do with those limitations.
The performance improvements due to parallel computing hardware (think Metal GPUs and their successors) will be so insane in 16 years that your current laptop won't stand a chance at doing anything what will be considered standard then.
Amdahl's law is only relevant if you can't change your algorithms. If you can, then most of the time it's a non-issue.
The human brain runs off umptillion neurons each running at 100-1000 Hz (depending on measure). Not GHz, MHz or even kHz. Hz. Obviously it's an extreme case, but if general intelligence can be achieved without fast serial computing then there should be lots of things we can do with a few dozen cores all running at GHz rates.
That's bullshit. All of the programs I have written recently had an incredible speedup after porting relevant parts of the program from single-threaded to GPU.
Right now we are hitting the threshold of incompetent programmers and tools long before any other.
Only occasionally - I have an old Windows laptop with a CD-ROM drive, and some old backup photos on CDs. They're from the 2000s, so I'm not sure exactly how old it all is.
Still use my 2010 MBP 15”. I changed out the battery twice and swapped in an SSD. It's my only DVD drive. I also have a 2011 Air 11” that still works.
My 2011 MBA from grad school is pretty much done; it's so slow. I replaced the battery and the charging port and it's doing better, but it's near the end. A good 10 years, though.
My 2017 MBP is showing 97% capacity. It's my daily driver. So at this rate I can expect the SSD to last 291 years? Really?
Either these numbers are bogus, or the capacity drop over time is far from linear.
> (2) that they are not a bug in the IOKit implementation used by smartctl
"While we're looking into the reports, know that the SMART data being reported to the third-party utility is incorrect, as it pertains to wear on our SSDs" said an AppleInsider source within Apple corporate not authorized to speak on behalf of the company. The source refused to elaborate any further on the matter when pressed for specifics.
In addition to what's been said, Flash cells lose data over time and need to be rewritten.
The controllers take care of this. But an SSD that has gone read-only, assuming it can in fact go read-only, is an SSD that will quickly lose all its data.
Literally the entire thread appears to be "people who can't do maths correctly, misinterpreting a third-party tool which may or may not be accurately reporting the information in the first place, running around like their hair is on fire."
Apple's own Activity Monitor shows hundreds of GBs of disk writes per hour on my M1 Macbook Pro. Smartctl output showing 10+ terabytes of writes per month looks about right. Apple doesn't provide lifetime TBW numbers but e.g. Samsung seems to only cover about 150-250 terabytes written total for various 250GB SSD models.
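Taking those numbers at face value, the division is straightforward. Hedged heavily: Apple doesn't publish a TBW rating for these drives, so the Samsung figure is only a stand-in for what a consumer 250GB-class drive is typically warrantied for:

```python
def months_to_tbw_limit(tbw_rating_tb, tb_written_per_month):
    """How long until a hypothetical TBW endurance rating is exhausted."""
    return tbw_rating_tb / tb_written_per_month

# 10 TB/month of writes against a 150 TBW rating (the low end of the
# Samsung 250GB-class figures quoted above) exhausts it in 15 months.
print(months_to_tbw_limit(150, 10))  # 15.0
```

Of course the actual NAND may survive far past its rating, and Apple's parts may be rated differently, but that's the scale of the concern.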
Or the trillion dollar proprietor of the software can disprove these claims and escape defamation. Why don't you take the first step in the assumptions you pointed out because nobody seems to be interested in damage control.
I read this article and immediately tried it on my own 2014 MacBook out of curiosity. Turns out it's a third-party command-line tool that doesn't come with macOS, reporting statistics on a proprietary system-on-chip SSD that has only been available for 2-3 months. This thing is probably not using an off-the-shelf SSD controller, and chances are the information is just wrong. Some of the stats reported down the thread don't stack up - impossible amounts of data written in the time the drive is reported as being awake.
If it is true and M1 computers start bricking in two years' time, which I find unlikely, then these people can take their computers in to be fixed. If that doesn't happen - well, a company can't sell something that doesn't work, so I would take it to small claims and exercise my contractual right for the goods supplied to be fit for purpose.
I think it's Apple's turn to respond. I don't see this as anyone else's responsibility. Until then, all available data clearly points to a premature death of many M1 machines.
Percentage Used: 3%
Data Units Read: 47,004,809 [24.0 TB]
Data Units Written: 47,469,528 [24.3 TB]
Host Read Commands: 153,293,725
Host Write Commands: 218,787,006
It's been in use for two months only and I haven't even compiled anything on it; all my usage was light gaming in a few 2D games, Firefox and some films.
You're where I am and I've had this Intel MBP for nearly 4 years and haven't been nice to it (compiling, etc).
Percentage Used: 3%
Data Units Read: 57,377,073 [29.3 TB]
Data Units Written: 77,652,525 [39.7 TB]
Host Read Commands: 1,297,472,434
Host Write Commands: 1,855,797,459
I have 10x your host r/w commands. I wonder if the size of the SSD's blocks on these new MBP are very large so you get large physical writes for small OS writes? Or maybe the controller is bad at coalescing writes?
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 42 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 23%
Data Units Read: 439,149,863 [224 TB]
Data Units Written: 407,143,345 [208 TB]
Host Read Commands: 2,930,228,690
Host Write Commands: 1,777,317,283
Controller Busy Time: 0
Power Cycles: 103
Power On Hours: 1,691
Unsafe Shutdowns: 49
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
Mine is about 20 days old. MacBook Air M1 16GB, 1TB drive:
Percentage Used: 0%
Data Units Read: 2,219,368 [1.13 TB]
Data Units Written: 2,141,099 [1.09 TB]
Host Read Commands: 28,719,296
Host Write Commands: 26,061,193
Controller Busy Time: 0
Power Cycles: 179
Power On Hours: 14
Not sure what to make of the power cycles stat either. It's mostly plugged in at my desk.
Option A: the tool is accurate, and you've been writing 1 TB in 14 hours (roughly 78 GB/hr for the entire time the drive has been on).
Option B: The tool is interpreting the SMART data incorrectly, or the drive isn't reporting it correctly.
I mean I don't know which is correct, but it seems odd that in 20 days of ownership, the drive has been awake for only 14 hours, but been writing solidly at the rate of a gig a minute for the whole time.
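The arithmetic behind Option A checks out, which is exactly why it looks implausible. Working from the counters above (data units are 512,000 bytes each per the NVMe spec):

```python
data_units_written = 2_141_099  # from the smartctl output above
power_on_hours = 14

# Convert NVMe data units (512,000 bytes each) to terabytes.
tb_written = data_units_written * 512_000 / 1e12
gb_per_hour = tb_written * 1000 / power_on_hours

print(round(tb_written, 2))  # 1.1  -- TB written in total
print(round(gb_per_hour))    # 78   -- GB/hr sustained whenever awake
```

That's a sustained write rate you'd expect from a benchmark loop, not 20 days of desktop use, which is what makes Option B (misreported or misinterpreted counters) tempting.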
I've been looking at the Activity Monitor and disk writes for the kernel task are crazy high (hundreds of GBs per hour), especially when Rosetta is involved. The Rosetta daemon seems to use a lot of memory, likely causing excessive swapping. Another suspicious process is... Safari bookmark sync? Can't even imagine why it needs to write 10GB per hour.
> "The Rosetta daemon seems to use a lot of memory, likely causing excessive swapping."
I haven't seen this. The rosetta daemon, oahd, is using only 1.3MB on my system (8GB M1 MacBook Air) and I have several large translated apps running. If it's stuck using lots of memory, I guess that's probably a bug.
I double-checked the numbers reported by the driver against doing actual writes to the file system and the numbers reported by the OS, and they match exactly when there is no other activity: writing a 1GB file increases the number by 1GB and a few kilobytes of metadata.
Once the memory is full, it starts swapping a lot and then things go bad.
For the record, here are the numbers from this box: 900GB written in 20 power-on hours, on a 256GB drive.
Critical Warning: 0x00
Temperature: 26 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 15,019,377 [7.68 TB]
Data Units Written: 1,759,297 [900 GB]
Host Read Commands: 101,021,092
Host Write Commands: 14,010,727
Controller Busy Time: 0
Power Cycles: 75
Power On Hours: 20
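For anyone who wants to repeat the write-a-file-and-diff-the-counter check described above, here's a sketch. It assumes smartmontools is installed and that the internal drive shows up as /dev/disk0 (both assumptions may need adjusting); the experiment itself is left commented out since it needs root and a real drive:

```python
import re
import subprocess

def data_units_written(dev="/dev/disk0"):
    """Read the NVMe 'Data Units Written' counter via smartctl."""
    out = subprocess.run(["smartctl", "-a", dev],
                         capture_output=True, text=True).stdout
    return int(re.search(r"Data Units Written:\s+([\d,]+)", out)
               .group(1).replace(",", ""))

def units_to_gb(delta_units):
    # One NVMe data unit = 1000 blocks of 512 bytes = 512,000 bytes.
    return delta_units * 512_000 / 1e9

# The experiment (needs sudo; run with no other disk activity):
#   before = data_units_written()
#   ... write a 1 GB file and flush it to disk ...
#   print(units_to_gb(data_units_written() - before), "GB physically written")

print(units_to_gb(1_954))  # a delta of ~1954 units is ~1 GB
```

If the delta matches the bytes you wrote (plus a few KB of metadata), as the parent reports, that's decent evidence the counter reflects real physical writes.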
My Data Units Written has gone up by 0.3TB in the last hour doing nothing but surfing the web on Safari. I would like to believe that this points to Option B. This is on a 512GB SSD/8GB RAM MBA
Are you talking about the M1 MacBook Air? Apple sells the M1 MacBook Air with a 30W adapter, and I guess it can draw more than 30W, but personally I haven't seen that while playing either GPU- or CPU-intensive games.
Adding my data points: Macbook Air M1 16GB RAM, 512GB SSD. I have had the laptop for 4 days, and so far only used it for OS updates, a little Discord usage, web browsing (Firefox) and a small amount of programming (no Spotify).
Percentage Used: 0%
Data Units Read: 596,588 [305 GB]
Data Units Written: 404,196 [206 GB]
Host Read Commands: 8,532,827
Host Write Commands: 3,851,891
What could possibly make Firefox use disk more heavily on an M1 Mac than it does on Linux? I've been using Firefox for a decade and it never caused 0.5TBW/day on my Samsung SSDs.
I've had mine between two and three months. My writes are significantly lower than yours.
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 27 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 8,424,532 [4.31 TB]
Data Units Written: 5,496,149 [2.81 TB]
Host Read Commands: 122,463,637
Host Write Commands: 59,624,967
Controller Busy Time: 0
Power Cycles: 200
Power On Hours: 60
Unsafe Shutdowns: 6
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
I mostly stick to Apple applications because they seem to get much better performance, e.g. Safari instead of Firefox. Other applications that I use are Emacs, Discord, Mail and Calendar.
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 26 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 21,290,808 [10.9 TB]
Data Units Written: 17,018,375 [8.71 TB]
Host Read Commands: 128,112,411
Host Write Commands: 89,074,700
Controller Busy Time: 0
Power Cycles: 174
Power On Hours: 80
Unsafe Shutdowns: 6
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
8GB/256GB three months old. Mostly used to write code using VS Code and browsing with Chrome. I've also compiled a lot of Haskell code.
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 24 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 3,019,145 [1.54 TB]
Data Units Written: 3,392,635 [1.73 TB]
Host Read Commands: 55,076,557
Host Write Commands: 29,421,973
Controller Busy Time: 0
Power Cycles: 100
Power On Hours: 28
Unsafe Shutdowns: 16
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 34 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 2%
Data Units Read: 117,513,026 [60.1 TB]
Data Units Written: 110,686,292 [56.6 TB]
Host Read Commands: 506,322,545
Host Write Commands: 351,505,939
Controller Busy Time: 0
Power Cycles: 389
Power On Hours: 388
Unsafe Shutdowns: 40
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 32 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 16,603,470 [8.50 TB]
Data Units Written: 15,747,066 [8.06 TB]
Host Read Commands: 95,905,593
Host Write Commands: 62,087,286
Controller Busy Time: 0
Power Cycles: 93
Power On Hours: 52
Unsafe Shutdowns: 7
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
FWIW I've installed each incremental MacOS update the day it came out.
That's double the reads/writes of my 16GB/1TB 2020 Intel MBP in less than 1/3 the Power On Hours. The ratio between total data and read/write command counts is quite different, though.
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 32 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 8,413,842 [4.30 TB]
Data Units Written: 8,138,306 [4.16 TB]
Host Read Commands: 187,292,832
Host Write Commands: 154,212,201
Controller Busy Time: 0
Power Cycles: 105
Power On Hours: 171
M1 Mac mini, 256 GiB drive, 16 GiB memory. Usage is as a thin client to a beefier (louder and hotter) machine in another room; YouTube, Slack, Zoom, Discord. Used since 22 Jan 2021.
data point
Percentage Used: 0%
Data Units Read: 1,815,191 [929 GB]
Data Units Written: 1,854,253 [949 GB]
Host Read Commands: 31,584,364
Host Write Commands: 25,360,962
data point verbose
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 31 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 1,815,191 [929 GB]
Data Units Written: 1,854,253 [949 GB]
Host Read Commands: 31,584,364
Host Write Commands: 25,360,962
Controller Busy Time: 0
Power Cycles: 102
Power On Hours: 33
Unsafe Shutdowns: 7
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
Percentage Used: 0%
Data Units Read: 7,920,747 [4.05 TB]
Data Units Written: 1,490,848 [763 GB]
Host Read Commands: 90,988,727
Host Write Commands: 34,766,350
Usage: ~3 months. Wrote a TS web app in development over about 6-8 weeks, then 4 weeks of regular gaming (Steam: EU4). Development was all in the terminal (vim + npm).
On my older (3+ year now) MPB 16 with a 512GB drive I get:
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 41 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 22%
Data Units Read: 621,054,721 [317 TB]
Data Units Written: 483,003,547 [247 TB]
Host Read Commands: 9,028,855,382
Host Write Commands: 3,924,827,022
Controller Busy Time: 16,789
Power Cycles: 8,373
Power On Hours: 6,828
Unsafe Shutdowns: 375
Media and Data Integrity Errors: 0
Error Information Log Entries: 271
My 2016 MBP shows very similar numbers, though my % is even higher (24%):
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 30 Celsius
Available Spare: 88%
Available Spare Threshold: 2%
Percentage Used: 24%
Data Units Read: 356,923,637 [182 TB]
Data Units Written: 354,603,847 [181 TB]
Host Read Commands: 2,817,724,668
Host Write Commands: 2,270,199,931
Controller Busy Time: 0
Power Cycles: 21,443
Power On Hours: 996
Unsafe Shutdowns: 21
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
Wow. How long ago did you get the MBP 16"? I've been running this system as my daily driver for >12 months and see very different numbers from you:
smartctl 7.2 2020-12-30 r5155 [Darwin 20.4.0 x86_64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Number: APPLE SSD AP8192N
Serial Number: XXXXXXXXXXXXX
Firmware Version: 1161.100
PCI Vendor/Subsystem ID: 0x106b
IEEE OUI Identifier: 0x000000
Controller ID: 0
NVMe Version: <1.2
Number of Namespaces: 1
Local Time is: Tue Feb 23 17:01:28 2021 PST
Firmware Updates (0x02): 1 Slot
Optional Admin Commands (0x0004): Frmw_DL
Optional NVM Commands (0x0004): DS_Mngmt
Maximum Data Transfer Size: 256 Pages
Supported Power States
St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat
0 + 0.00W - - 0 0 0 0 0 0
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 37 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 53,827,033 [27.5 TB]
Data Units Written: 37,342,497 [19.1 TB]
Host Read Commands: 704,072,000
Host Write Commands: 669,901,451
Controller Busy Time: 0
Power Cycles: 234
Power On Hours: 528
Unsafe Shutdowns: 74
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
Pardon me, I have the 256GB model. While I've used it daily for almost 2 months now, it still sounds crazy to have 25TBW for my usage. I didn't use it for any programming or compilation, and no brew package builds either. All I did was use Firefox, download like 30GB of torrents, and watch some Netflix. Also, I haven't even moved any of my files to this laptop: so no photo archive, no email sync or Dropbox, literally no usage.
All I can hope is that it's some kind of bug in SMART, or that maybe Apple is going to fix it soon. I use my Kubuntu desktop with a Samsung 850 PRO 256GB way more heavily and I barely got to 30TBW in like 4 years.
Torrents suck for SSD lifetime, the chunk size is typically 16kb but chunks aren’t downloaded whole and in order, so you might end up with a lot of wear and tear unless the file system and the torrent client/library are both working together to eliminate the partial writes.
However, it doesn’t seem that torrents are a common factor in user accounts, and 30GiB in torrents would not account for that much write amplification, so carry on!
Yeah, I'd personally be pretty annoyed if my MacBook were just writing that much data in the background, slowly wearing out my SSD; 25TB over 2 months is just silly.
I checked my 2015 15" that I've been using non-stop for over 3 years and it's at 96TB, which is about 2.6TB a month.
You can also keep an eye on the data counter in Activity Monitor while it's on. At those kinds of numbers you're going to catch it even if it's periodic: it's got a running total since the Mac was switched on, too.
Apple better come to my house and fix it here or just bring me a new laptop. I'll take a normal intel at this rate. I don't have the time for them to monkey around with it.
So around 8.2 TB written in 2.5 years of daily use. And I have had Syncthing running on it for the past few months to sync some git repos as an experiment. (And I run a rolling-release distro, so lots of updates.) In comparison, the M1 data posted here looks out of whack.
I generally use bpftrace to find if anything keeps writing to my SSD - for the most part I don't find any misbehaving programs on modern Linux distros. Assuming dtrace still works on M1 Macs you might be able to find what is writing to the disk.
Another Linux user data point: XPS13(9310)/512GB nvme/16GB RAM/2GB swapfile/kubuntu 20.10/kernel 5.11 (in use since ~Dec 5, 2020):
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 33 Celsius
Available Spare: 100%
Available Spare Threshold: 50%
Percentage Used: 0%
Data Units Read: 631,319 [323 GB]
Data Units Written: 1,157,883 [592 GB]
Host Read Commands: 4,650,820
Host Write Commands: 7,329,774
Controller Busy Time: 16
Power Cycles: 37
Power On Hours: 45
Unsafe Shutdowns: 15
Media and Data Integrity Errors: 0
Error Information Log Entries: 1
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
I do Linux kernel dev and have done quite a few kernel compiles at this point (using Ubuntu's full config) as well as general browsing and some Docker workloads and Spotify... Judging by 'Power On Hours' combined with 'Data Units Written', looks like there's a bug to me.
As an aside, I don't know why 'Available Spare Threshold' is 50%, and the 1 entry under 'Error Information Log Entries' appears to be the successful result of some test.
On linux there are a bunch of "laptop mode" configurations that can minimize the time the disk will be woken up to actually write out data. The price is that you'll lose the last N minutes of work on a hard crash. And it only works when you have enough RAM to avoid swapping and keep dirty pages in memory. And your workloads don't explicitly call fsync. But if setup correctly your drive will spend most of its time sleeping.
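As a sketch of the knobs involved on Linux (the values here are illustrative assumptions, not recommendations — tune them to your own tolerance for data loss on a crash):

```shell
# Batch writeback so the disk is woken at most every ~10 minutes.
sysctl -w vm.laptop_mode=5                   # enable laptop-mode writeback batching
sysctl -w vm.dirty_writeback_centisecs=60000 # flusher thread wakes every 600 s
sysctl -w vm.dirty_expire_centisecs=60000    # dirty pages may sit in RAM for 600 s
# Let dirty data accumulate in RAM before writeout is forced.
sysctl -w vm.dirty_background_ratio=30
sysctl -w vm.dirty_ratio=50
```

These settings are not persistent; put them in /etc/sysctl.d/ to survive a reboot.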
If you run ZFS, you can set sync=disabled for your filesystems. This will disable fsync.
Unlike most (all?) other filesystems, that's actually safe. ZFS doesn't reorder writes between transaction groups, so after a crash you'll get a consistent state from however many minutes ago.
(However, txgs have a time limit of 5 seconds by default. You also need to increase that.)
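A minimal sketch of the two settings mentioned, assuming OpenZFS on Linux and a hypothetical dataset name `tank/scratch`:

```shell
# Disable synchronous write semantics for one dataset. Writes in the last
# txg(s) can be lost on a crash, but the on-disk state stays consistent.
zfs set sync=disabled tank/scratch

# Raise the transaction group timeout from the 5-second default so writes
# are batched into fewer, larger txg commits.
echo 60 > /sys/module/zfs/parameters/zfs_txg_timeout
```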
As it relates to the Macs, I'm starting to wonder if their SMART reporting is just faulty. Power On Hours seems to be definitively wrong, so maybe all of the SMART data is bogus and there is no issue.
Another Linux datapoint, X1 Carbon 6th Generation, daily use.
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 35 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 25%
Data Units Read: 22,242,132 [11.3 TB]
Data Units Written: 74,693,212 [38.2 TB]
Host Read Commands: 540,582,518
Host Write Commands: 1,857,635,922
Controller Busy Time: 5,131
Power Cycles: 884
Power On Hours: 3,497
Unsafe Shutdowns: 261
Media and Data Integrity Errors: 0
Error Information Log Entries: 882
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 1: 35 Celsius
Temperature Sensor 2: 37 Celsius
How do you only have 558 PoH with ~900 days of daily use? Do you only use it for 30m per day?
Here's a 512GB WD Black from my home lab server. It runs ~10 super light usage VMs with Docker stacks that include 2 GitLab instances, 2 GitLab runners, 3 Nextcloud instances, 3 Redmine instances, 1 Gitea instance, 2 Drone runners, 1 Minio instance, 1 Nexus instance, 1 Emby instance (w/transcoding), and various reverse proxies, etc..
The write endurance is supposed to be 300TBW, so it should really be over 20% used but says 0%.
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 38 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 0%
Data Units Read: 95,066,245 [48.6 TB]
Data Units Written: 135,910,315 [69.5 TB]
Host Read Commands: 972,466,473
Host Write Commands: 2,504,003,547
Power On Hours: 17,383
Error Information (NVMe Log 0x01, max 256 entries)
No Errors Logged
Compare it to a 500GB Crucial MX500 under the same load, which is supposed to have 180TBW:
That's 68% used after ~30TBW (59,166,051,308 sectors*512 = 30,293,018,269,696 bytes).
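That conversion is easy to sanity-check (LBA count × 512-byte sectors):

```shell
# 59,166,051,308 LBAs written, 512 bytes per LBA:
echo $((59166051308 * 512))                  # 30293018269696 bytes
echo $((59166051308 * 512 / 1000000000000))  # ~30 TB (decimal, truncated)
```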
What I've learned from trying to diagnose those MX500s is that TBW doesn't really matter all that much. It's the P/E cycles that really count. For example, the MX500s are rated for 1500 erase cycles (#173 above) IIRC.
I've also become skeptical of many SMART implementations. I know the PoH on the Crucials is incorrect because when they were new they were reporting 45 days of PoH on a system with 76 days of uptime. So if the manufacturers can't get something as simple as PoH right, how can anything be trusted?
I have a 1TB MX500 and a much older 240GB M500, and the PoH stats are similarly unrealistic for both. The MX500 has run close to 24/7 for two months now and reports 328h, which is like 14 days. The M500 reports 12k hours (~1.35 years) but it's 5+ years old and I'm sure it should be higher.
OTOH, the number of power cycles they report seems too high, at 45 and 2332 respectively. Maybe that's related?
Just yesterday people were telling us that Apple revolutionized memory and are only installing such low amounts of RAM because hey, you got that super fast SSD right there it's basically RAM by a different name.
Well, whoops. I guess there are some side effects.
Rules of software engineering, and engineering in general: you can't break Einstein. RAM is generally slow to begin with, but disk is slower. There should be plenty of RAM to use as disk cache, not vice versa. (edit: omitted the last sentence)
Presumably people getting 16GB of RAM are also running more memory intensive workloads than those not shelling out for the upgrade, so it could be a non-factor.
Yeah, although in this case it takes me back to ~10 years ago. I didn't think 8GB was a lot back then either, and we typically tried for at least 16.
It's pretty disappointing how little advancement has been made in RAM in recent years. It seems like everything should come with at least 16GB now, with 32GB being readily available, but I guess no one is working on the cost of desktop/laptop memory.
Wow those W520 were heavy. I got a few dumped on me at an old job. They were going to throw them away. I put a few together that I sold for 400 each in 2018, great returns. I didn’t even think they would bring 200
I remember having 8 memory slots for my Cyrix 40MHz 486 clone in 1994. I had 4MB of memory in four slots. I was thrilled when a friend gave me four 256KB memory SIMMs for an extra 1MB of RAM, which significantly sped up my computer :)
I had a 75 MB (yes, MB) drive in the first computer I bought. It was top of the line at the time. I remember telling my wife that I couldn’t imagine ever filling it up. Ha! Now I feel like a pauper if my phone has a 64GB drive ;-)
At the time it would have served pretty much all normal users enough, so I don’t see what’s wrong with the statement. It’s like today telling a home user 32GB should be enough for just about anything a normal user would do with their home PC.
I wouldn’t advise anyone to get 128GB (or more) “just in case”. If they have a special need case, then, sure , otherwise no.
I think I forget sometimes because I'm working with this stuff daily now, but yeah the evolution of computing technology is just astounding in scale and pace.
I've noticed kernel_task's disk usage is heavily skewed towards writes (e.g. 7GB read for 170GB written over a couple of hours). Assuming it is indeed swapping pages to disk and reads/writes are counted properly, either the dynamic pager or some application/system process are making extremely unfortunate life decisions. Lots of dirty pages are never read back, like there is a huge cache of... something.
In general you will write pages out to swap before evicting them from memory. The goal is to never have to wait for a page to finish being flushed to disk in order to allocate a new one. This means you will sometimes access a page that has been written out to disk but is still resident in memory; if you modify it, it may be written out again before ever being read back. So it isn't unreasonable for swap writes to be higher than swap reads.
However, those numbers seem extreme; probably some bad tuning, or just neglecting to account for the full cost of writing to disk.
One thing: on any machine, client or server, that I've used in the past 20 years, swap is the first thing to disable.
If there is a need for swap, install more RAM.
> Terabytes of usage must be a bug.
Garbage-collected setups with relatively large heaps are extra horrid when it comes to memory usage patterns while running a full GC... and, by extension, swap.
Swap has value. E.g. it lets you have a tmpfs that's larger than your physical RAM. Or you can process memory dumps of a machine larger than yours. Or you can just freeze a process group and have it dumped to disk so other processes can use the RAM in the meantime. It can definitely help avoiding the OOM killer.
Swap occupancy and swap activity are not the same. The former is fine, the latter should be kept small.
> Or you can process memory dumps of a machine larger than yours.
You should be able to do that in software anyway: instead of loading the dump into memory entirely, all the analyzer has to do is memory-map the file.
About the need to freeze a process (group). I don't quite see how that's useful on a server. On a desktop machine I have never run into such a case where closing the application would not suffice. Is there an example?
Lastly: using swap pretty much means no disk cache.
> You should be able to do that in software anyway: instead of loading the dump into memory entirely, all the analyzer has to do is memory-map the file.
Should, perhaps. But in practice I have had analyzers gobble up more memory than available.
> About the need to freeze a process (group). I don't quite see how that's useful on a server. On a desktop machine I have never run into such a case where closing the application would not suffice. Is there an example?
Long-running renderjob, preempted by a higher-priority one. Technically they're resumable so we could stop them and resume the other one later from a snapshot file but that isn't implemented in the current workers. So as long as the machine has enough ram+swap it's easier to freeze the current job and start the higher-priority one.
> Last - using the swap pretty much means no disk cache.
I don't know all the tunables of the swap subsystem well enough but I have seen swap being used before running out of physical ram. I assume some IO displaced inactive applications.
Of course, that's the point. If you cannot have enough memory, you don't buy them. Even the old haswell (2013) Acer laptop has 16GB; the skylake lenovo laptop I use for work has 32GB.
You're not wrong, but I have to wonder how much phone RAM is just the new megapixels, i.e. bigger number = more impressive spec sheet.
iOS is obviously a lot stricter on background activity than Android but manages to work great on 3-6GB.
I can comfortably do development work on my laptop with 16GB, and until last year was managing mostly okay on a 5 year-old machine with 8GB.
When you factor in the sort of stuff people actually do on a phone, surely 12-16GB is a massive waste? You can make the argument that it will become more useful as the phone ages, but by that point it will have probably stopped receiving software updates.
Arm and x86 have a similar number of instructions. The tradeoff is that with x86 you end up with more compact code because the instructions are variable length; but on arm the actual decoding is easier so you have more space for i$ (or anything else you want).
No, the RAM is separate chips / dies, albeit incorporated into the same module as the CPU. On pictures of the M1[1], the RAM is the plastic covered chips next to the main heat spreader for the SoC.
It's completely baffling how a little meme like this, which is in no way accurate, survives and thrives not just at large but specifically on HN. Can we track the meme's point of origin?
It's not that far off-base, given that most people don't understand the difference between on-package, where a component is bundled with the CPU on the same chip, and on-die, where a feature is actually part of the same litho process as the CPU.
There's really no meme to trace here; just a popular misunderstanding of semiconductor terminology.
The RAM is on-chip but not on-die... but most people (even developers) don't know the difference between a chip, a die, and a core to begin with. It's a pretty minor error, although such errors are a good sign that someone either doesn't know what they're talking about or is careless with terminology.
Apple themselves are responsible for it; all their promotional material for the chip has shown unified memory right alongside the on-die modules.
Speaking for myself, this is the first time I've heard of multiple modules in the same chip. Hearing about it now and knowing generally how dies[1] are manufactured and packaged into ICs it's an obvious thing to do, but yeah never crossed my mind.
It's cool, hey? How AMD (and others) are now doing their CPUs is a good example: there's often several CPU "chiplets" and then one IO chiplet on the same package. [0] is the first good search result.
Given what they've done with the M1, I highly suspect Apple will do something like that for their higher end machines.
What makes you think HN is that different from anywhere else?
And besides, memes persist much more readily anywhere where tribalism comes into play e.g. Politics, Apple vs. [Insert your OS here], Any Tesla thread etc.
Back in 2016, a Spotify bug resulted in writes of approximately 700GB/day [1]. No SSDs died, judging from the lack of a class-action lawsuit. And at least for my MBP SSD, I can tell you it is still doing very much alright, despite being thrashed for anywhere from a few weeks to a few months.
Having said that, do apply any temporary fix. Even if the effect will probably be negligible, it's still unnecessary wear.
Can't this easily be a userspace app overwriting a logfile or something thousands of times over? Spotify is rebuilding a cache in the background every minute, or some audio editing app misuses a sqlite database and it's writing the same stuff every second etc?
Any app/program can write to the disk. Why is this being framed as a hardware/OS error?
The problem is that the ssd is not replaceable and people are saying the main cause of this heavy writing is the swapping to disk. So a hardware/OS problem.
I think it wouldn't get a lot of attention if the ssd could be replaced.
That isn't a correct statement; I think what you mean is that it isn't user-replaceable. Take one to Apple, who can and does replace these due to failure through their service program. Outside warranty you will of course be paying for that.
More likely that apple will just price gouge on a motherboard replacement and just trash the old one. And of course, they will do this only after attempting to upsell the user on an entirely new system.
It reeks of shade to intentionally grenade the hardware just to get more door traffic at retail locations.
> It reeks of shade to intentionally grenade the hardware just to get more door traffic at retail locations
Apple already forces owners of "vintage" MBPs to come in to a store for repair (or repair-related warranty) work, even if the failing component is identical to that of a non-vintage model. You'd think they would prefer to send those devices in to a repair depot with lots of inventory for older parts, but now that you mention it I guess they figured out that foot traffic converts into sales at a non-zero rate.
Every part of this is a made-up, unsupported, bad-faith, outright lie. No part of this is “more likely” based on my 30 years of experience of actually dealing with Apple. What’s actually more likely, in my experience, is Apple going above and beyond to cover repairs, even out of warranty, for issues that are even partially their fault.
“Attempting to upsell” doesn’t pass the laugh test. And it’s incredibly crass and irresponsible of you to toss around words like “intentionally grenade the hardware” without the slightest hint of evidence.
> Every part of this is a made-up, unsupported, bad-faith, outright lie. No part of this is “more likely” based on my 30 years of experience of actually dealing with Apple. What’s actually more likely, in my experience, is Apple going above and beyond to cover repairs, even out of warranty, for issues that are even partially their fault.
You are engaging in projection.
> “Attempting to upsell” doesn’t pass the laugh test.
Then there should be no reason to force the user to come in to a retail shop to get approved repairs on their machines; Apple could save lots of money by going to mail-in repairs.
> And it’s incredibly crass and irresponsible of you to toss around words like “intentionally grenade the hardware” without the slightest hint of evidence.
There's hundreds of examples in this thread alone.
No, I'm not projecting. No, Apple doesn't "force" anyone to come into retail shops; that is simply false. You can mail hardware in. Some people aren't near an Apple Store.
And no, there are zero examples in this thread of any evidence that Apple has intentionally harmed its own hardware.
They're not meaningfully replaceable without paying apple a lot of money for an ssd, assuming you're out of warranty, hastening planned obsolescence. How's that?
People understand apple can replace them. Nobody wants to pay apple to replace an ssd.
Apple can't though. It's soldered to the motherboard, so replacement also involves replacing the CPU and ram. Suddenly a $50 part has turned into about $500 (or $1000 with apple price gouging on ram and storage prices).
Since it's not happening to everyone (check the original twitter thread), I'm 100% sure it's third-party apps writing small 4KB chunks to files opened with O_DIRECT, creating a write-amplification effect where a 4KB write becomes an SSD-block-sized write (for example 4MB). One 4KB write per second, instead of being 0.3GB/day, then becomes 345.6GB/day.
Can you explain this more? I always thought writes were by the page and erases were by the block, so the scenario you're describing would only be likely with TRIM disabled and an awful garbage collection algorithm.
I could be wrong though because I don't understand the implications of O_DIRECT.
That's an interesting thought. Given the prevalence, I almost wonder if it's a bug in an upstream framework, or maybe a crazy default in some language only on M1 Macs.
Chrome would also be unsurprising. They're known for their memory churn already.
My first thought when reading about this is that logging gone awry is a very likely cause. Could be the OS itself or apps doing it, but in either case a fix should be pretty reasonable.
> The engineer explains problem relates to a "regression in the com.apple.security.sandbox kext (or one of its related components)" in macOS 10.15.6. As part of the investigation, it was discovered com.apple.security.sandbox was allocating millions of blocks of memory containing just the text "/dev" and no other data.
> Why is this being framed as a hardware/OS error?
Because this error closely matches the recent Tesla hardware replacements (whether the MCU's eMMC is a "wear item" or not depends on who you ask), and like that chip, the M1 also has non-replaceable parts which seem to be wearing faster than normal.
Maybe they transitioned from an earlier MacBook and feel like they are running the same apps? Though, of course, new architecture means that's a bad assumption.
I suppose it could be Mail, Spotify, iCloud, or anything really. But if its happening on all the new M1 macs, what do you think the common denominator is?
If you follow the original twitter thread - https://twitter.com/marcan42/status/1364409829788250113 - you'll find that it's not affecting all M1 macs, and affects many intel macs, therefore it's almost guaranteed to be third party app's behaviour (writing 4kb chunks with O_DIRECT will lead to SSD write amplification)
I think Spotify insists on using 10% of my drive. I can't configure it, as far as I know (any more). The only solution that I know of is `rm -rf <cache>`.
Hmm... My M1 Air 16GB from December is an order of magnitude higher than that, but I do usually have 10 VS Code windows open, lots of Node instances, databases, Slack, and LOTS of browser tabs. Should I be worried?
Percentage Used: 1%
Data Units Read: 74,899,871 [38.3 TB]
Data Units Written: 71,233,417 [36.4 TB]
If you can, run iosnoop in the background and note every program that writes lots of 4KB chunks.
These are wearing your SSD down.
And no, those writes will not be combined, because most of these shitty apps either open files with O_DIRECT or call fsync() after each write(), obligating the OS to hit the SSD with a 4KB write.
SSDs don't operate at that granularity, though: a 4KB write is guaranteed to become a bigger write, since an SSD erase block is usually 512KB, 1MB, 2MB, or hell, even 4MB. So one 4KB write per second becomes 4MB per second, and that's 345.6GB/day.
Pretty scary how one shitty app can ruin your SSD so fast, huh? I saw the Google Drive app do 50 small writes per second. That's ~2TB/day.
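Taking the comment's own model at face value (every small synchronous write burns one full erase block — an assumed worst case, since real FTLs coalesce writes and wear-level), the daily totals work out as claimed:

```shell
# One 4 KB write per second, amplified to a 4 MB erase block:
echo $((4 * 1000000 * 86400))    # 345600000000 bytes/day = 345.6 GB/day
# 50 small writes per second, amplified to a 512 KB erase block:
echo $((50 * 512000 * 86400))    # 2211840000000 bytes/day = ~2.2 TB/day
```

So the ~2TB/day figure is consistent with a 512KB erase block; with a 4MB block it would be far higher.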
That's really interesting. Doing some Googling, there seems to be almost no discussion of this issue. I would happily disable fsync() at the OS level for all apps at the expense of possible data corruption, as this is only a dev machine and everything important is backed up. I wouldn't know how to do this, though.
I've investigated this in some depth for my personal boxes with slow HDDs; in lieu of a more formal writeup, here is the best solution I've found. First:
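The command itself appears to have been lost from this comment; judging from the description that follows (it "lies to the OS", is per-drive, and is not persistent), it was presumably something along these lines (a reconstruction, not the author's verbatim command):

```shell
# Claim the drive's cache is write-through, i.e. that completed writes are
# already durable; the kernel then stops issuing cache-flush commands for
# fsync(). Not persistent across reboots.
echo "write through" > /sys/block/sda/queue/write_cache
```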
This lies to the OS and tells it that drive writes don't need to be fsync'd. Replace "sda" with whatever your drive in question is, naturally. Note that this is not persistent, you'll need to configure your init system to do this on boot. You can verify it's working by looking at /sys/block/sda/stat (see https://www.kernel.org/doc/html/latest/block/stat.html).
Next, in /etc/fstab configure your filesystem to be mounted with "barrier=0" (note that only some filesystems support this), which will often prevent data from getting written out to the disk at all, instead getting kept in cache.
You still need the first part because the filesystem layer won't cover all possible cases--for instance, LVM thin provisioning will issue a manual flush below the filesystem layer once per second, and there's no way to remove that.
One problem I haven't managed to solve is detection--in the unlikely event things don't shut down properly (e.g. a kernel panic), how do I find this out so that I can restore from a backup (rather than having something subtely corrupted somewhere)? This is conceptually easy with some global bit in a special sector used as a mutex, but I don't know of any existing off-the-shelf solution implementing this.
The equation is (months the machine was used) / (percentage used) × 100. Assuming you've used the machine for 2 months, that comes out to 200 months, or about 16 years left. I suspect you won't care by then.
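Spelled out as arithmetic:

```shell
# 1% of endurance consumed in 2 months => lifetime at the current rate:
echo $((2 * 100 / 1))        # 200 months
echo $((2 * 100 / 1 / 12))   # 16 years (rounded down)
```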
25% of the available space gone after four years could very well be a problem.
Also, if writes continue at the same rate, wear will accelerate, since the same volume of writes gets distributed over an increasingly smaller amount of healthy flash.
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 1%
Data Units Read: 53,339,111 [27.3 TB]
Data Units Written: 49,807,244 [25.5 TB]
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 3%
Data Units Read: 231,234,195 [118 TB]
Data Units Written: 182,951,675 [93.6 TB]
btw, this is my MacBook Pro 16GB 2019 (Intel) with ~800 hours:
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 91,665,449 [46,9 TB]
Data Units Written: 88,819,714 [45,4 TB]
Heavy usage, with VMs; I've filled the 1TB NVMe disk multiple times and needed to clean stuff up to get free space (multiple times).
16gig model checking in with fairly heavy usage since late November
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 1%
Data Units Read: 11,700,881 [5.99 TB]
Data Units Written: 38,044,983 [19.4 TB]
Adding to the data point, this is my 16GB Mac Mini 1TB I've been using as a primary computer since December (so ~3 months). I have been doing some development on this machine as well as porting packages for ARM for MacPorts. I also have Syncthing and Time Machine running.
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 9,664,566 [4.94 TB]
Data Units Written: 5,755,588 [2.94 TB]
This sounds to me like a write amplification issue caused by some software as mentioned elsewhere in the thread (first party or third party).
The post alleges that this is due to "aggressive swap". I suspect that the macOS developers at Apple know their target hardware, so I'm wondering (if this is the situation) whether the developers realized it but the product was shipped anyway.
FWIW, even as a developer who mostly stays out of the kernel, swap has long been on my mind for Linux laptops (e.g., keeping pre-SSD spinning-rust hard drives sleeping), and I pretty much always disable swap, even on desktops. Part of the rationale is, if I can't fit all the processes in (now) several gigabytes of RAM, something probably needs an OOM-euthanizing.
Swap is most useful when you (almost) never look at the swapped-out data again.
That might sound stupid, but it happens often enough in practice: e.g., code or data structures that were only used when starting up the program in question.
An optimally coded program wouldn't benefit from this. But real world programs often do.
I wouldn't link the usefulness of swap to the quality of programs. It has a lot to do with user behavior. If a user opens 100 tabs in a web browser but never looks at most of them, swapping unused ones to disk if they don't fit in RAM anymore is a very useful behavior (considering the alternative is killing the entire program).
You could argue an optimally coded program should do this themselves, however, that is not so easy when multiple programs are running concurrently and fighting for RAM. On an 8GB machine, how much RAM should your web browser use? The answer is "it depends". If there are no other programs running, it can use all 8GB. If you are running an IDE, a chat client and a compiler in the background, it should probably be using less.
The individual programs don't have enough information to make this judgement call - and making very conservative estimates will lead to much more swapping than actually required. The operating system is in charge of allocating memory for all those programs and can effectively make the judgement of what should be swapped out or not.
I always disabled swap on Linux (back when I was using HDDs) because it was completely useless anyway. As soon as you ran out of RAM and it started using swap the entire system would grind to a halt and you'd basically have no choice but to restart it, killing all processes rather than just random ones.
There doesn't seem to have been any effort by Linux developers to fix that (e.g. provide a kernel level GUI to let you pick which processes to kill when out of RAM).
> There doesn't seem to have been any effort by Linux developers to fix that (e.g. provide a kernel level GUI to let you pick which processes to kill when out of RAM).
There is no and will be no GUI for that, especially considering that many linux devices do not have any.
However, that does not mean there is no effort to improve the situation. The main player here is, surprisingly, Facebook. Their effort is going to be integrated into systemd in the form of systemd-oomd and Fedora 34 is going to ship it enabled out of the box (see https://fedoraproject.org/wiki/Changes/EnableSystemdOomd).
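For reference, systemd-oomd's behavior is tuned through `/etc/systemd/oomd.conf`. A minimal sketch, with key names per systemd-oomd.conf(5); the values shown are illustrative (they roughly match the shipped defaults), not recommendations:

```
[OOM]
# Kill the cgroup generating the most swap pressure once swap is 90% full.
SwapUsedLimit=90%
# Act on memory pressure sustained above 60% for the configured duration.
DefaultMemoryPressureLimit=60%
DefaultMemoryPressureDurationSec=30s
```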
On the subject of swap and SSD writes: I have reported many issues to Apple over the years, the earliest going back to 2016. And of course I heard no feedback, as usual.
You may want to check Activity Monitor > Disk > kernel_task for Disk Write. This shows how much swap has written, along with the other apps you have been using since your last reboot. You can check your last reboot time in System Information > Software.
Currently launchd has written 50GB, and I have no idea why. Corespotlightd wrote 70GB, SafariBookMarkagent 4GB, and kernel_task 4.5TB.
These numbers are over the course of 30 days.
Generally speaking, macOS has been very swap-heavy for a long time, and Big Sur seems to be pushing this further; there is evidence that this isn't specific to the M1.
In Safari, if you have lots of tabs, clicking Tab Overview will force all tabs to reload, which can write hundreds of GB of data, especially if they were originally sitting idle (i.e., you close Safari and reopen it, and tabs that aren't focused stay unloaded).
If you have iCloud Drive, there are cases where you end up downloading a few hundred GB of iCloud data a day, if you have certain apps that constantly update their files. WhatsApp backups used to cause this problem (and may still) if you have a large WhatsApp history: WhatsApp downloads a 50GB copy, only to realise there is a newer copy on iCloud and re-downloads it, many times per day.
With some tweaked options, iCloud could also be downloading all of your iPhone/iPad backups to your Mac as cache. Again, if you have 100GB of backups and they are downloaded once or twice per day, it adds up.
Disabling iCloud Drive tends to help.
Last time it was Electron apps, or specifically Spotify, writing hundreds of GB per day due to a bug.
But as usual, Apple apologists around the world are quick to point out these are non-issues. Luckily HN still has some sanity. Hopefully this is enough to finally push Apple to fix it. Windows and Linux don't show the same write counts even when the usage patterns are roughly the same.
Best part is that the SSD is soldered onto the board, so it can't be replaced.
Even worse: on the M1 Macs, when the SSD dies the Mac becomes a brick, and if you have no backups the data on the dead SSD is lost.
A typical web dev may rely on running a couple of Electron apps, Docker, IntelliJ, and lots of Chrome tabs to get their work done. Alongside everyday use, it looks to me like the life of this SSD is going to be shortened very quickly.
I already said this before [0] and the response was that 'it is not an issue.' Well, here we are, and this thread is full of other concerned users.
My mid-2014 MBP has the kernel_task heavy-write issue. It has written over 15TB in the last 61 days. The new NVMe disk I installed two months ago is already at 2% wear:
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 2%
Data Units Read: 45,859,839 [23.4 TB]
Data Units Written: 44,541,716 [22.8 TB]
Host Read Commands: 296,573,912
Host Write Commands: 218,840,299
Controller Busy Time: 6,954
Power Cycles: 138
Power On Hours: 1,662
A quick Google directed me at:
- Spotlight indexing packages folders (npm, pip etc). Add those to the "privacy" tab in the spotlight settings to skip indexing.
- The battery and/or logic board may be failing.
I've definitely seen behavior like this with Mojave on my old 2012 MBA: TBs of data written to the (128GB!) drive over the course of the machine's uptime.
There was no iCloud Drive on it, Safari not generally in use. Memory pressure wasn't extraordinarily high, but it wasn't a well specc'd machine for the current day.
I had this exact issue and had to disable SIP in order to disable swap, because I was swapping over 2TB a day. Everyone I chatted to online about it fell over themselves to tell me I was an idiot and that I somehow needed swap even though I have 16GB of RAM.
Needless to say, I'm having a little chuckle. It's a lesson for us all about blindly believing things.
Not a thing, though I didn't benchmark! Only stupid thing is you can't run iOS apps while SIP is disabled, and re-enabling SIP turns swap back on. iOS apps aren't something I care about so it doesn't really affect me.
Intel iMac I bought in November 2020 (~3 months of use) with 128GB of RAM, so it never swaps:
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 42 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 95,824,868 [49.0 TB]
Data Units Written: 89,046,642 [45.5 TB]
Host Read Commands: 1,365,215,816
Host Write Commands: 837,815,676
Controller Busy Time: 0
Power Cycles: 145
Power On Hours: 604
Looking at iosnoop I see Dropbox, Google Drive, Backblaze, Arq, and other programs heavily abusing my SSD with tons of small (4KB) writes.
With age of 105 days, that's 433 GB/day.
45.5TB over 89,046,642 reported units is ~512KB per unit (the NVMe "data unit" is defined as 1000 × 512-byte sectors, so this is the reporting granularity, not the SSD's block size).
433 GB/day divided by that unit size is ~846,000 units per day, or ~9.8 per second, meaning apps have written ~10 times per second on average.
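The arithmetic above, in a form anyone can rerun against their own smartctl output (the 512,000-byte constant is the NVMe-specified size of one "data unit": 1000 sectors of 512 bytes):

```python
DATA_UNIT_BYTES = 512_000  # NVMe data unit = 1000 x 512-byte sectors

def gb_per_day(data_units_written: int, days: float) -> float:
    """Average write rate implied by smartctl's 'Data Units Written'."""
    return data_units_written * DATA_UNIT_BYTES / days / 1e9

# The figures from the comment above: 89,046,642 units over 105 days.
# The comment's 433 GB/day comes from using the rounded 45.5 TB figure.
print(round(gb_per_day(89_046_642, 105)))  # -> 434
```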
This corresponds with what I'm seeing in iosnoop. Tons of third-party apps open files with the O_DIRECT flag or call fsync() right after each write. SQLite databases are the most common culprit, since by default SQLite flushes to disk on every small insert/replace, and every SQLite database I've seen shipped by a third-party app just uses the defaults (none of them use WAL, for example).
The OS has to flush those writes to disk when O_DIRECT or fsync() is used, which leads to write amplification.
https://www.sqlite.org/atomiccommit.html -- each insert goes into the journal first (4KB), then the index gets updated (another 4KB), and then it gets committed to the main database (another 4KB). There might be more, but what I'm observing in iosnoop comes in multiples of 3.
And since my drive reports in ~512KB units, that's 1.5MB per single insert/update.
I forced every SQLite database on my disk into WAL mode and changed synchronous from the default (FULL) to NORMAL, and it reduced I/O activity greatly with no ill side effects in months. Most likely the developers of these apps aren't even aware of these tunables:
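The tweak described above, for a single database (these are standard SQLite pragmas; the path is illustrative, and whether a given app tolerates having its database switched to WAL is an assumption you'd want to verify per app):

```python
import sqlite3

conn = sqlite3.connect("some_app.db")  # hypothetical path

# WAL appends each commit to one log file instead of the rollback-journal
# dance (journal write + db page writes + journal delete) per transaction.
mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]

# NORMAL defers fsync to WAL checkpoints; the default FULL syncs on
# every transaction commit.
conn.execute("PRAGMA synchronous=NORMAL;")

print(mode)  # -> wal (the mode persists in the database file)
conn.close()
```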
4K (page-size) writes with fsync are the standard way to commit database pages reliably. My understanding is that modern SSDs also have the 4K page as the minimum unit of input. Can you refer me to documentation or an article that shows a page write causes a whole block to be written?
Got my base M1 mini with 8GB of RAM and 256GB SSD on February 13th. With just simple content consumption (browser, Spotify, messaging) as of now it has written over 20TB of data! That is insane.
I had to switch off Time Machine for MacbookHD (my main disk) by excluding it in the Time Machine preferences, and reinstall the OS to erase the previous copies it had made, all ~400GB of them (on my 499GB disk). As the user, I was never asked for permission to let Time Machine use up to 80% of the available disk capacity. It kept backing up copies of the data until I noticed I had only 8GB of free space left, and I only noticed because QuickTime screen recording started crashing due to insufficient disk space. Basically, Apple's decision is to use up to 80% of the available disk space to keep copies of the data on that disk without user permission, and because it only checks and reconciles the amount it's using every so often, you can easily end up with 0% free capacity if you, for example, record a lot of video in between.
This is extremely good to know, thank you - I have turned off my M1 Mini and probably won’t be booting it until Apple announces a fix and explanation about the hard drive issue.
I'm not quite sure this Time Machine business is directly related (though it might be, via poorly optimized rewrites, particularly if there are outstanding hardware shenanigans with encryption on M1s). It is baffling that the disk usage appears to be a deliberate decision.
Only after publication on news sites did they fix it.
I won't be surprised if the bug regresses and happens again.
I don't use Spotify (not available in my country), so I wouldn't know, but I bet half of the reports are because of some badly written third-party app constantly writing in 4KB chunks with O_DIRECT or fsync() all the time.
I don’t switch it off, so that I can have remote access to it, but 20TB in 10 days sounds like a lot.
Also worth noting that I see my swap fluctuating between 6 and 8GB almost all the time (granted, I have multiple browser windows open, with maybe 10-30 tabs?)
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 9,670,166 [4.95 TB]
Data Units Written: 6,898,714 [3.53 TB]
Host Read Commands: 75,916,955
Host Write Commands: 41,914,954
Power Cycles: 80
Fwiw, my swap tends to hover around 2.5-3gb lately
Percentage Used: 2%
Data Units Read: 84,461,830 [43.2 TB]
Data Units Written: 69,369,816 [35.5 TB]
Host Read Commands: 2,125,724,765
Host Write Commands: 1,440,781,009
Controller Busy Time: 0
Power Cycles: 117
Power On Hours: 1,253
YouTube is/was a disk killer for some reason. I used to listen to a lot of music through it, but the constant heavy disk usage put me off. 16GB on Linux, zero swapping, Chrome browser. I switched to old-school MP3 playback via Audacity and never looked back.
You can see what is wearing your disk in the process/activity manager.
Could the M1 behaviour be related to Chrome + YouTube/Netflix/Spotify usage?
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 1%
Data Units Read: 109,446,813 [56.0 TB]
Data Units Written: 68,569,966 [35.1 TB]
Host Read Commands: 2,922,206,492
Host Write Commands: 2,180,242,949
Controller Busy Time: 4,463
Power Cycles: 13,345
This is after 40 months, during which it has been my main computer for both home and work. So about 10.5 TB of writes per year.
Prior to the iMac I had a 2008 Mac Pro for work and a 2009 Mac Pro for home. Those were doing about 7.8 TB per year of writes combined, but that includes an SSD on the home one that was used for Time Machine. Subtracting that out, it was about 5.2 TB per year combined, so about half of what I do on the iMac.
Thanks! I have run this command now (not on an M1 but on a 2018 MacBook Pro), and I'm reporting that 31% of my file system accesses were coming from ESET Security (namely esets_daemon, esets_proxy, ERAAgent, esets_gui), and 19% from Chrome. I'm actually using Chrome, but if ESET degrades the SSD lifespan by 31%, I'm gonna be annoyed.
Years ago I had a 2015 MacBook Pro that wrote daily swap in the terabyte range for about two weeks. Not a single incident, but consistently 1-3 TB/day. Performance wasn't impacted at all, so I only noticed by chance when I looked at the activity screen for other reasons.
It persisted through restarts and was only fixed when I reinstalled the system (and all my application software; the setup remained the same, so I have no idea what it could've been).
With a few assumptions about the built-in SSD, I concluded that I'd exceed the most likely applicable TBW rating within 200 days!
Just to add some more data here, M1 MBA 16GB Ram / 256GB SSD which I've been using for almost exactly two months:
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 34 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 5,315,871 [2.72 TB]
Data Units Written: 2,249,290 [1.15 TB]
macOS: How to Disable Homebrew Analytics
In case you are not aware, or do not want to send Homebrew analytics, here is how to turn it off:
brew analytics off
"According to their GitHub page, Homebrew’s anonymous user and event data have a 14 month retention period, which is the lowest possible for Google Analytics."
These have to be wrong readings; is swapping on the M1 really so different? My last-generation Intel Mac shows 0% used and it's nearly one year old (yes, I use it nearly daily).
From the sounds of it, the same reading they are using to suggest the M1 will only last 1-2 years has been observed on older Macs. So this whole thing seems overblown.
This is completely anecdotal with nothing to back it up, but it seemed like I swap far more aggressively when I have x86 processes running... not sure if this is related to Rosetta 2.
My gut feeling is that this is an error of the reporting tool - or perhaps more specifically, an interpretation of the results.
I think it is highly likely that these accesses are hitting some sort of cache (possibly page files, or SRAM, etc), not actually hitting the actual TLC/QLC part of the storage.
Obviously I could be wrong, but I’d be surprised if Apple made this big of a mistake and didn’t catch it internally before now. If they did, then there are some serious quality issues in their process.
Many iOS apps use Realm DB, which is memory-mapped and calls fsync on every write. This wasn't obvious to iPhone/iPad users, but it will wear out SSDs very quickly.
How does AppleCare deal with SSD issues like this? I have heard that for batteries, they'll replace if the maximum usable capacity is below 80% of what it should be. I assume they've had a similar policy for SSD? Arguably the SSD is even more critical than the battery, because you can at least hypothetically use a laptop plugged in.
Yes, I imagine it has been very rare in the past. I'm wondering if anyone has come across the issue before (or has a friend who works at Apple and knows the rules). I would hope that if they give you a new logic board that they'd at least be able to transfer your data. I know that they always make you sign a waiver saying that you've backed everything up and if you lose the data that's fine. Hopefully they would at least try to transfer the data for you, if this turns out to be a bug like it sounds like.
I'm going to check the stat from my M1 Macs, but here is from the last MBP before 16" - 2019 15" MBP
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 32 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 2%
Data Units Read: 79,321,757 [40.6 TB]
Data Units Written: 63,313,296 [32.4 TB]
Host Read Commands: 2,312,199,690
Host Write Commands: 1,098,534,950
Controller Busy Time: 0
Power Cycles: 130
Power On Hours: 1,213
Unsafe Shutdowns: 52
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
Stats for my heavily used MacBook Pro (15-inch, 2018)
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 37 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 2%
Data Units Read: 45,932,352 [23.5 TB]
Data Units Written: 48,394,214 [24.7 TB]
Host Read Commands: 1,069,241,760
Host Write Commands: 698,888,162
Controller Busy Time: 0
Power Cycles: 137
Power On Hours: 661
Unsafe Shutdowns: 48
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 25 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 933,459 [477 GB]
Data Units Written: 670,152 [343 GB]
Host Read Commands: 15,779,474
Host Write Commands: 7,614,634
Controller Busy Time: 0
Power Cycles: 78
Power On Hours: 6
Unsafe Shutdowns: 3
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
```
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 26 Celsius
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 4,167,081 [2.13 TB]
Data Units Written: 2,807,840 [1.43 TB]
Host Read Commands: 57,601,178
Host Write Commands: 45,778,010
Controller Busy Time: 0
Power Cycles: 168
Power On Hours: 43
Unsafe Shutdowns: 8
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
```
Percentage Used: 0%
Data Units Read: 5,170,416 [2.64 TB]
Data Units Written: 2,977,938 [1.52 TB]
Host Read Commands: 89,184,979
Host Write Commands: 34,892,977
Mostly being used for building web apps, somewhat heavy video editing loads, and messing around in python. Using a lot of different software, much of it (entire Adobe Suite) via rosetta. I'm not sure what to make of these stats yet.
> Maybe we should consider listing which OS version everyone is using?
I never updated from 11.0.1 and these stats seem comparatively low. Latest Big Sur is 11.2.1.
M1 MBA 8GB/256GB, mostly safari with ~20 tabs avg and youtube, no spotify
Available Spare: 100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 25,160,936 [12.8 TB]
Data Units Written: 23,041,292 [11.7 TB]
Host Read Commands: 207,408,853
Host Write Commands: 98,285,538
Controller Busy Time: 0
Power Cycles: 173
Power On Hours: 101
I guess Apple will disable these statistics for users similar to what they did with the "battery remaining time" back in the day (not that long ago :P).
Available Spare:                    100%
Available Spare Threshold: 99%
Percentage Used: 0%
Data Units Read: 4,645,245 [2.37 TB]
Data Units Written: 4,045,006 [2.07 TB]
Host Read Commands: 30,123,041
Host Write Commands: 17,403,647
Controller Busy Time: 0
Power Cycles: 85
Power On Hours: 18
A dumb question regarding flash programming: assume the erase block is 16MB, and there is a fresh block to write to. If I slowly append-write (not modifying existing data) to the block, would subsequent writes require the whole block to be erased/reprogrammed?
The use case is usually append-only logs: macOS constantly generates tiny log lines, and those must go somewhere. Does write amplification make logging a bad use case for SSDs?
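To the append question above: NAND is programmed at page granularity (typically 4-16 KB) but erased at block granularity, so appending into still-blank pages of an open block needs no erase; only rewriting an already-programmed page does. A toy model of that constraint (deliberately not how a real FTL behaves; real controllers remap the logical page to a fresh physical one and erase blocks lazily during garbage collection):

```python
PAGES_PER_BLOCK = 4096  # e.g. a 16 MB erase block of 4 KB pages

class Block:
    """Toy flash block: program per page, erase only whole-block."""
    def __init__(self):
        self.programmed = [False] * PAGES_PER_BLOCK
        self.erases = 0

    def write_page(self, i: int) -> None:
        if self.programmed[i]:
            # NAND cannot overwrite a programmed page in place: the whole
            # block must be erased (after relocating any live data).
            self.erases += 1
            self.programmed = [False] * PAGES_PER_BLOCK
        self.programmed[i] = True

blk = Block()
for i in range(PAGES_PER_BLOCK):  # pure append: one program per fresh page
    blk.write_page(i)
print(blk.erases)  # -> 0: append-only logging is flash-friendly
blk.write_page(0)  # an in-place rewrite is what costs an erase cycle
print(blk.erases)  # -> 1
```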
I have the cheapest one and I have much better stats, and I use it for development, pretty much compiling everything all the time: Rust, rm -rf node_modules, that kind of stuff, every day, 12-16 hours, for two months. Looks good.
Percentage Used: 0%
Data Units Read: 10,224,513 [5.23 TB]
Data Units Written: 9,526,626 [4.87 TB]
Is there a way to get the drive SMART status without having to install 3rd party tools that run as root (or another safe way, e.g. through a VM or rolling back an APFS snapshot afterwards, etc.)?
I wouldn't mind writing a bit of code either to get these stats. I have a 2020 Intel Mac Mini and an old MacBook Pro with a Samsung Evo 840.
Apple devices have always been short on RAM, especially iPhones and iPads. MacBooks used to be upgradeable, but now, with everything soldered to the PCB, that is no longer the case... so keep that in mind and buy new Macs with the highest RAM option; otherwise your device may become obsolete sooner.
Haven't seen this mentioned: I wake up to a double digit GB Finder process every other day. I leave it plugged in during the night. Wear is 3% or 48TB of writes.
Crazy. Day-one M1 with 8GB RAM and 512GB SSD here.
This laptop has never seen Spotify, and Safari is my main browser; no compilation or other heavy RAM use either.
The real problem here is that the M1 MacBooks do not have a replaceable SSD. Even without this issue, SSDs do die sometimes, and on every non-Apple laptop you just buy a new M.2 SSD and everything is all good.
I have had macbook SSDs die in the past and thanks to it being a swappable module back then, it was no big deal.
> The real problem here is that the M1 MacBooks do not have a replaceable SSD. Even without this issue, SSDs do die sometimes, and on every non-Apple laptop you just buy a new M.2 SSD and everything is all good.
In the past, when the SSD died on these older MacBooks, you just replaced it with a new one. Very important for long-term use once the machine eventually goes out of warranty, since there's no need to go to the Apple Store to get it replaced for a fee.
On the M1 MacBooks, that's it: the machine is bricked, and your data is lost with no way to recover it (if you didn't back up in advance). You might as well buy another one.
In my case, I will continue to use my old Macbook and wait until Apple fixes the excessive writes or swapping issues in a newer version of macOS. By then I would be getting a Macbook with a newer M1X or M2 processor that doesn't have the excessive swapping issues and the software ecosystem for Apple Silicon would already have caught up with it and will be optimised for the processor.
I said it before with Apple products [0] [1] [2] [3], I always ignore the first generation unless you want to be a bleeding edge beta tester trying to get work done or you're after collector editions.
This looks overtly malicious. There's really no excuse for people to be seeing over 30TB of writes to their drives if they only use their systems for browsing the web or dev work.
My main NVMe drive's stats, after almost a year of ownership, after 3 OS installs (windows and linux, though I moved windows to a smaller less used sata ssd a few months ago) and frequently moving big files on/off network storage, installing games, and dev work for small personal projects etc:
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 0%
Data Units Read: 200,214,119 [102 TB]
Data Units Written: 27,304,663 [13.9 TB]
Host Read Commands: 616,298,176
Host Write Commands: 387,970,043
I really don't understand how people tolerate these massive stacks of closed software that tend to have extremely opinionated and forced automatic updates, much less pay a premium for it.
If I started a company today everyone would probably get something like the system76 laptops.
And this is Exhibit A on why I don't want any computer I spend a lot of money on to have a non-replaceable SSD. Inevitably, it will turn into a brick, or at least a poor excuse for a laptop that only runs off external drives.
This might be the case, as my 8GB M1 MBP has seen a lot of wear on the SSD. With that being said, the machine has been super fast, and I'm running Docker all day, Emacs, Spotify, Node, Firefox with 5-10 tabs. I've almost never run into issues; I just close apps when I'm not using them.
With 24GB, I rarely needed it all, but a good portion was cache/buffer, I.e. 10GB. Of course you can just swap to SSD and get pretty much the same performance, until you don't..
So I have a MacBook with 16GB RAM for work, and an M1 MacBook Air with 8GB.
Normal usage on the Intel MacBook is ~10GB; on the Air, with a similar workload, ~5GB.
Not apples to apples for sure, but there is some magic going on with the M1. Perhaps the different instruction set?
On the other hand, I bought into the hype that the 8GB was fine for pretty much everyone.
I had a M1 Air with 8GB of RAM and ended up having to buy a M1 Air with 16GB of RAM since the swapping during my normal workflow made the machine unusably slow. Dozen Chrome tabs, Angular 2+ application being served, Webstorm, GUI Git client, Messages, and a few other little things.
I wouldn't bet on it being just the instruction set itself, but it is true that, given the chance, x86 compilers will often go mental with the SIMD extensions, which can lead to lots of fairly long instructions rather than a lower-throughput but much shorter block.
OK, just my perspective, which you don't need to share. I am typing on a Lenovo P51; not really a new thing, but I have two TB SSDs, 2 slots for ordinary RAM (32GB currently), an i7, no overheating, a great keyboard and battery, and the simple option of removing the bottom panel (or, if needed, dismantling the whole laptop) to blow or vacuum all the dust out. Running Linux, where all the hardware (except the fingerprint reader, which I would not use anyway) is supported.
Once something dies, I can replace it at any time on my own, and the prices for replacement parts are actually going down, not up. It is "a tad" heavier, but on the other hand I'd carry the backpack with me regardless of weight (and now I can skip a fitness session :D).
Why on earth are you even buying Apple laptops? OK, I understand it for non-technical people, but we are on HN. I just don't understand the reasoning for not getting yourself a proper laptop instead, with all the convenience that comes with it. The "it's beautiful" reason?
I would imagine you actually do work on your laptop and the look is not your primary criteria when buying a work rig (although I have always loved Thinkpad boxy designs).
My perspective that you don't need to agree with, is that the P51 is a hideous monster that weighs exactly twice as much as my MacBook, the asymmetric keyboard + touchpad would drive me crazy, it doesn't run macOS and has worse battery life. Also my fingerprint reader works, just like every other part of the machine, along with all the software I use. There's no "except".