Hacker News
M1 Mac owners are experiencing high SSD writes over short periods of time (linustechtips.com)
691 points by voisin on Feb 23, 2021 | 531 comments



The first line of the smartctl output everyone's bandying about says that 1% of the drive's rated endurance ("Percentage Used") has been consumed, which means that in about 16.5 years (2 months * 99 / 12) their drive will go read-only.
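
A quick sanity check of that extrapolation (a sketch, assuming wear stays linear):

```python
pct_used = 1        # "Percentage Used" reported by smartctl
months_elapsed = 2

# Linear extrapolation: how long until the remaining 99% is consumed?
months_left = (100 - pct_used) / pct_used * months_elapsed
print(months_left / 12)  # 16.5 years
```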

What, exactly, is the problem with that?

It seems like there are a lot of assumptions in play in this "headline news":

(1) assuming that the reported numbers are valid for this measurement at all

(2) that they are not a bug in the IOKit implementation used by smartctl

(3) that they are not a bug in smartctl itself

(4) that they are directly comparable to non-M1 hardware without further processing

(5) that they do not increment when Time Machine or other filesystem snapshots occur

(6) that they do not increment when APFS copy-on-writes occur

(7) that they do not include the total byte size of sparsely-modified files

I don't see anyone checking these assumptions yet, but if y'all do, supporting links on those points would improve this HN post considerably. There are other assumptions that could be tested too! Outrage is cool, but science is productive.


The first two posters in there[1] are reporting 3% used on a 2TB drive after a couple of months, and speculating about what that would look like for a 256GB drive. So, not definitive, but if it does mean 24% in two months for a smaller drive, that's not great.

[1] https://linustechtips.com/topic/1306757-m1-mac-owners-are-ex...

Edit: I suppose it makes sense that the people seeing lots of wear are the 2TB drive buyers. People who are willing to pay for that much NVMe probably use it a lot.
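
Back-of-the-envelope for the 256GB speculation above (a sketch; it assumes rated endurance is roughly proportional to capacity, so the same absolute writes consume 8x the endurance on the smaller drive):

```python
pct_used_2tb = 3             # reported after ~2 months on a 2TB drive
capacity_ratio = 2048 / 256  # 2TB vs 256GB, endurance assumed proportional
print(pct_used_2tb * capacity_ratio)  # 24.0 -> ~24% in the same two months
```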


I've had my 256GB SSD + 16GB RAM M1 MBA for a little over 2 months now. There was one time that I noticed a strangely large amount of swap usage for no discernible reason, and rebooted immediately. Otherwise, swap doesn't seem to be used that often.

I really don't understand why Apple is apparently incapable of making these laptops use standard, replaceable m.2 NVMe SSDs.

Seeing this thread made me concerned, but it looks like my SSD isn't something to be worried about yet:

    SMART/Health Information (NVMe Log 0x02)
    Critical Warning:           0x00
    Temperature:                34 Celsius
    Available Spare:            100%
    Available Spare Threshold:  99%
    Percentage Used:            0%
    Data Units Read:            6,963,442 [3.56 TB]
    Data Units Written:         3,626,988 [1.85 TB]
    Host Read Commands:         110,283,456
    Host Write Commands:        59,878,323
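
For anyone wanting to turn those counters into bytes themselves: per the NVMe spec, one "data unit" is 1000 512-byte blocks, i.e. 512,000 bytes. A minimal sketch that pulls the write total out of smartctl's text output (the regex assumes smartctl's usual field formatting):

```python
import re

smart_output = """
Percentage Used:            0%
Data Units Written:         3,626,988 [1.85 TB]
"""

# One NVMe "data unit" = 1000 * 512 bytes = 512,000 bytes.
units = int(re.search(r"Data Units Written:\s+([\d,]+)", smart_output)
            .group(1).replace(",", ""))
tb_written = units * 512_000 / 1e12
print(f"{tb_written:.2f} TB written")  # -> 1.86 TB written
```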


> I really don't understand why Apple is apparently incapable of making these laptops use standard, replaceable m.2 NVMe SSDs.

Because there are no standard M.2 form-factor NVMe SSDs without a controller chip, and even if you could find one, there's no standard protocol to talk to it.

Recall that 3rd-party drive manufacturers are really bad at properly making Self-Encrypting Drives (lots of HN stories previously; just search for SED). That was very likely a factor in Apple's decision to implement its own SSD controller, inside the T2 chip on recent Intel Macs and directly on die on Apple Silicon, so that Apple can fully control the data written to raw flash and be confident in its own implementation of encryption.

Charging a premium for more capacity is a standard Apple practice anyway, and the fact that such a security-focused approach makes it unavoidable is a side effect.


Aside from encryption, commodity NVMe SSDs are also really bad at correctly implementing idle power management. The Linux kernel is constantly adding to its list of drives that cannot safely use the deepest idle state because on many systems the drive won't wake back up after being put to sleep. Apple might be able to have a bit of an easier time since they control the host system so tightly, but they would still end up having to accommodate plenty of SSD bugs/quirks.


Apple could have made their own replaceable SSD boards.

It looks like they want their devices to become obsolete sooner which seems to be a good business idea but there is a risk of upsetting many customers. Time will tell.


It seems I have it worse than almost anyone.

2.5 month old, 1 TB, M1 MBA:

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        46 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    5%
  Data Units Read:                    315,158,527 [161 TB]
  Data Units Written:                 303,071,863 [155 TB]
  Host Read Commands:                 1,331,719,025
  Host Write Commands:                1,036,195,203
  Controller Busy Time:               0
  Power Cycles:                       167
  Power On Hours:                     762
  Unsafe Shutdowns:                   7


I have almost 3 months old 1TB KINGSTON SA2000M81000G as linux root and data written includes transfering 400GB of old stuff to this drive.

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        44 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          10%
  Percentage Used:                    0%
  Data Units Read:                    627,718 [321 GB]
  Data Units Written:                 1,845,489 [944 GB]
  Host Read Commands:                 5,302,076
  Host Write Commands:                204,065,596
  Controller Busy Time:               107
  Power Cycles:                       117
  Power On Hours:                     765
  Unsafe Shutdowns:                   0


That's an impressive amount of rip and tear for a drive in a laptop.


I put my stats here for the sake of reporting.

3.5 years old Samsung 970 evo 500GB

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        28 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          10%
  Percentage Used:                    0%
  Data Units Read:                    9,208,241 [4.71 TB]
  Data Units Written:                 21,345,634 [10.9 TB]
  Host Read Commands:                 132,245,782
  Host Write Commands:                346,141,507
  Controller Busy Time:               924
  Power Cycles:                       312
  Power On Hours:                     2,453
  Unsafe Shutdowns:                   114


Ouch, that's my only 2TB drive in a laptop running a rolling-release GNU/Linux distro with heavy swap usage, encryption and plenty of over-night compilations for a bit more than a year:

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        36 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          10%
  Percentage Used:                    1%
  Data Units Read:                    56 263 143 [28,8 TB]
  Data Units Written:                 36 077 380 [18,4 TB]
  Host Read Commands:                 1 252 403 456
  Host Write Commands:                1 018 672 820
  Controller Busy Time:               15 360
  Power Cycles:                       234
  Power On Hours:                     10 255
  Unsafe Shutdowns:                   47
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      0
  Warning  Comp. Temperature Time:    0
  Critical Comp. Temperature Time:    0
Your values are indeed insane.


That's... close to 2TB / day?


> Data Units Written: 303,071,863 [155 TB]

That's insane; I don't trust it, though:

  155 TB / (75 days *24*60*60) * 2**20 = ~25 MB / s
If we are more reasonable and say it's running for 12 hours a day, that works out to a continuous 50MB/s of writes.

For comparison, My daily Linux laptop (XPS 13, 512GB Samsung NVMe SSD) has a total of 3.7 TB of writes over 3 years, this is my work dev and home laptop, it's in constant use (although no video editing):

  3.7 TB / (1095 days *24*60*60) * 2**20 = 0.041 MB / s
There are nearly three orders of magnitude difference there. I can only think of three explanations: 1. the SMART reporting is wrong, 2. macOS or the M1 SSD controller has serious write amplification issues, or 3. you are actually doing something that does need serious write throughput, like lots of video editing (your stats aren't impossible, after all).
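
The two rates above, computed the same way (using this comment's convention of 1 TB = 2**20 MB):

```python
def avg_write_rate_mb_per_s(tb_written, days):
    # 1 TB taken as 2**20 MB, matching the arithmetic above
    return tb_written * 2**20 / (days * 24 * 60 * 60)

print(avg_write_rate_mb_per_s(155, 75))    # ~25 MB/s  (the M1 MBA)
print(avg_write_rate_mb_per_s(3.7, 1095))  # ~0.041 MB/s  (the XPS 13)
```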


There's an option 4: you are running out of RAM and macOS is doing a lot of swapping.

These SSDs can probably write on the order of 3000MB/s = 0.003TB/s, so you could end up with 155 TB total writes after just 155/0.003/3600 = 14 hours of RAM heavy workload.
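
That estimate as arithmetic (assuming a ~3000 MB/s write ceiling, which is optimistic for sustained workloads):

```python
total_tb = 155
rate_tb_per_s = 0.003  # ~3000 MB/s, an optimistic sustained-write ceiling
hours = total_tb / rate_tb_per_s / 3600
print(hours)  # ~14 hours of flat-out writing
```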


It would have to be a legitimately RAM-heavy workload though... running out of RAM due to leaving applications (or browser tabs) open might result in those being swapped out to disk until they are used again. But the level of swapping required to generate this much write usage would essentially be using the disk as RAM, meaning the running program doesn't have enough, either because it needs more than is physically available or because of buggy memory management.


Right, I was thinking of things like large simulations where you need to keep a large dataset in RAM and update every datapoint at every iteration.

Or maybe multiple VMs running simultaneously doing CI jobs could also do it.

I don't think simple memory errors like leaks would cause it, since that would just end up filling the disk once, but wouldn't go through writes quite as fast.


The symmetry between reads and writes precludes a lot of options (like logging gone mad). I'm not even sure most swap ends up being read back in...


That reminds me: when SSDs first came out, we'd have to set noatime on *nix systems, including Macs, to prevent the OS from writing file access times every time a file was read; otherwise it caused significant write amplification. Modern SSD controllers are clever enough to make this unnecessary these days, but it could be something similar, either in the SSD controllers themselves or a filesystem behavior spaced just right in time to turn 1 byte into 4096 bytes of effective NAND writes. That's only 24GB of requested writes turning into ~100TB in the worst case.
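
The worst case sketched above, in numbers (assuming every 1-byte logical write dirties a full 4096-byte NAND page):

```python
page_bytes = 4096   # smallest programmable NAND unit assumed here
requested_gb = 24   # logical writes the OS actually asked for
nand_tb = requested_gb * page_bytes / 1000  # each byte costs a whole page
print(nand_tb)  # ~98 TB of effective NAND writes
```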


Yeah, that's a lot. I've got a 5-year-old 512 GB Samsung PM951 that's been a system drive in 3 Windows systems with 16-64 gigs of RAM, and over the course of 6,200 power cycles and 16,500 hours it has only 35 TB written and 24 TB read.


How do you even get an unsafe shutdown in a notebook with a battery, unless you yank it out, or the OS is buggy and won't shut down properly/preemptively on low power?


I've got a number of unsafe shutdowns listed on my M1 MBP, too. Most likely due to the kernel panics I get when rebooting and connected to my dock in clamshell mode.


Not a joke - are you running Chrome?


I am.


Just run Activity Monitor.app, click on the Disk tab, and look for disk-hungry processes.


"kernel_task" ... wrote 142GB in 80 minutes


Looks like swap... can you show:

  # vm_stat
  # top -o faults
Just 1.5TB for my 2-month-old M1 Mini... Xcode/Firefox.


Not OP, but I'll post what I've found:

  > vm_stat
  Mach Virtual Memory Statistics: (page size of 16384 bytes) 
  Pages free:                               28209.
  Pages active:                            114788.
  Pages inactive:                          111624.
  Pages speculative:                         1301.
  Pages throttled:                              0.
  Pages wired down:                         99067.
  Pages purgeable:                          11340.
  "Translation faults":                 340573628.
  Pages copy-on-write:                    8586888.
  Pages zero filled:                    184635739.
  Pages reactivated:                     62859723.
  Pages purged:                          12373577.
  File-backed pages:                        88759.
  Anonymous pages:                         138954.
  Pages stored in compressor:              645107.
  Pages occupied by compressor:            127459.
  Decompressions:                        62433647.
  Compressions:                          82841473.
  Pageins:                               10295559.
  Pageouts:                                177162.
  Swapins:                               16692043.
  Swapouts:                              17670838.

  > top -o faults
  PID    COMMAND      %CPU TIME     #TH    #WQ  #PORT MEM    PURG   CMPRS  PGRP  PPID  STATE    BOOSTS           %CPU_ME %CPU_OTHRS UID  FAULTS    COW     MSGSENT    MSGRECV    SYSBSD     SYSMACH
  533    WindowServer 10.5 06:21:59 21     5    2792- 876M-  209M+  298M   533   1     sleeping *0[1]            0.17876 1.03550    88   24771383+ 131690  354813610+ 136121938+ 237020876+ 554000668+
  2862   Safari       0.0  62:48.95 10     3    7051  480M   6400K  316M   2862  1     sleeping *0[37332]        0.00000 0.00000    501  13919243  76306   67003396   20901240   54405367+  167230473
  2883   com.apple.We 0.0  16:36.04 92     3    926   393M   384K   301M   2883  1     sleeping *19137[3734]     0.00000 0.00000    501  4032470   132     7204377    3215767    45641766+  32182159
  491    mds          0.0  11:00.86 5      2    422   66M    0B     52M    491   1     sleeping *0[1]            0.00000 0.00000    0    3296694   154     6396987    1450677    27494382   5594171
  792    mds_stores   0.0  14:00.57 4      2    93-   72M-   16K    61M    792   1     sleeping *0[1]            0.00000 0.00000    0    2786911   1956    4619344+   1304164+   15524959+  4296749+
  2694   Terminal     7.6  03:17.73 8      2    311   303M-  37M+   104M-  2694  1     sleeping *0[6352]         0.85412 0.15218    501  2034254+  338     1151091+   265791+    1517195+   269400
I'm at 1.32TB on my almost 1 month old MBA (8GB RAM/512GB DISK).
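
One way to sanity-check the swap theory against that vm_stat output: multiply the Swapouts counter by the reported page size. A sketch (note the memory compressor means the bytes actually hitting disk are likely smaller):

```python
page_size = 16384      # "page size of 16384 bytes" from the vm_stat header
swapouts = 17_670_838  # "Swapouts" counter in the output above
gb_swapped = swapouts * page_size / 1e9
print(f"~{gb_swapped:.0f} GB of pages pushed to swap since boot")  # ~290 GB
```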


my info:

  Mach Virtual Memory Statistics: (page size of 16384 bytes)
  Pages free:                               14959.
  Pages active:                            405692.
  Pages inactive:                          374907.
  Pages speculative:                        28768.
  Pages throttled:                              0.
  Pages wired down:                         91521.
  Pages purgeable:                           3805.
  "Translation faults":                  24292564.
  Pages copy-on-write:                     729689.
  Pages zero filled:                     14619003.
  Pages reactivated:                       712889.
  Pages purged:                            265424.
  File-backed pages:                       361509.
  Anonymous pages:                         447858.
  Pages stored in compressor:              270669.
  Pages occupied by compressor:             93710.
  Decompressions:                          405581.
  Compressions:                            790223.
  Pageins:                                1093581.
  Pageouts:                                  3406.
  Swapins:                                      0.
  Swapouts:                                     0.
> top -o faults

  PID   COMMAND      %CPU TIME     #TH   #WQ  #PORT MEM    PURG   CMPRS PGRP PPID STATE    BOOSTS          %CPU_ME %CPU_OTHRS UID  FAULTS   COW   MSGSENT   MSGRECV   SYSBSD    SYSMACH    CSW        PAGEIN
  596   firefox      1.8  25:24.36 101   3    4607  1050M- 24M    197M  596  1    sleeping *0[2679]        0.61835 0.00000    501  1801083+ 15982 30520748+ 10145476+ 37431119+ 74055473+  25506295+  10499
  136   WindowServer 3.8  34:09.24 17    5    2570- 1069M- 3008K+ 84M   136  1    sleeping *0[1]           0.04109 0.69237    88   1554555+ 29134 88175354+ 30598736+ 69071968+ 130099954+ 19109478+  1791
  734   Textual      0.0  01:16.23 9     1    858   144M   16K    25M   734  1    sleeping  0[2051]        0.00000 0.00000    501  833293   1717  745088    111931    478143    1563730    470159     2023
  2195  Xcode        0.0  01:43.32 14    1    856   278M   144K   114M  2195 1    sleeping  0[1168]        0.00000 0.00000    501  701255   3278  1626845   546420    1139890+  1959450    732276+    148341
  2650  Simulator    0.0  01:05.27 4     2    267   27M    0B     8976K 2650 1    sleeping *0[1402]        0.00000 0.00769    501  695662+  211   755586+   680393+   1078007+  2178634+   1065718+   134
  539   Terminal     0.6  01:01.28 8     2    304   107M   25M    16M   539  1    sleeping *0[1375+]       0.06887 0.02264    501  582200+  463   520635+   71597+    365523+   1208051+   360370+    1522
  2209  SourceKitSer 0.0  00:25.22 2     1    21    714M   0B     468M  2209 1    sleeping  0[672]         0.00000 0.00000    501  473966   51040 3676      1103      1047453   5748       17114      12740
  653   Microsoft Re 1.6  13:51.17 33    7    462   536M   12M    148M  653  1    sleeping *0[845]         0.00000 0.00000    501  454751   21517 9976456+  1770226+  11942684+ 47364219+  15135688+  82
  602   plugin-conta 0.0  01:01.70 39    1    278   362M   0B     87M   596  596  sleeping *1[3]           0.00000 0.00000    501  407089   2481  32035     12156     2744155   110109     996847     155
  3576  GarageBand   2.7  09:19.30 23    2    822   1088M  16K    664M  3576 1    sleeping *0[288]         0.01295 0.00000    501  266176   2365  7887976+  76435     9436303+  19319436+  18860820+  930
  292   mds_stores   0.2  01:18.32 5     3    95    24M+   16K    9056K 292  1    sleeping *0[1]           0.00000 0.17381    0    252741+  93    158368+   58900+    1418145+  178062+    301682+    42331
  2305  lldb-rpc-ser 0.0  00:18.56 4     1    62    1049M  0B     201M  2305 2195 sleeping \*0[3]           0.00000 0.00000    501  213604   1380  575628    287827    710177    307114     244010     32467
No swap at all... but I don't use sleep mode on my system (uptime ~10h), and no Rosetta apps either. 16GB/500GB M1 Mac Mini.

Looks like Safari and WindowServer use swap all the time. Try quitting them at least once per day; maybe it will help.


I'm curious is there any correlation with having chosen the 8GB ram version?

There must be a lot of swapping happening to make that work.


It's not that they are incapable. They are unwilling.

Nobody should be surprised that one of the most successful proprietary technology companies on the planet sells highly proprietary technology. The 2011 Macbook Air had a 1.8" PATA ZIF hard drive.


> The 2011 Macbook Air had a 1.8" PATA ZIF hard drive.

I.e. a standard, high-volume drive used in iPods and other small devices


It doesn't help your argument that the only device you mention by name is also built by Apple.


A quick search shows PATA ZIF drives were also used in Thinkpads, Dell / HP / Acer / Asus laptops, and other portables like the iRiver. Replacement drives for that model of Macbook Air are available, doesn't seem like it is proprietary tech at all.


> I really don't understand why Apple is apparently incapable of making these laptops use standard, replaceable m.2 NVMe SSDs.

Are you even serious? Because then they wouldn't be able to charge a $250 premium for 250GB of SSD.


Apple still charges that kind of premium on RAM even on the Intel Mac mini where you can replace the RAM.

People still pay it, because most people probably don't know the RAM is replaceable, don't want to be bothered with doing it themselves, don't want to accidentally damage their new computer, don't realize the upgrade prices are high, or other reasons.

Apple has always charged high premiums for upgrades... premium upgrades aren't a strategy they suddenly invented when they started soldering components down.

Apple has reasons for soldering down components, reasons which I don't really agree with, but I don't think they're all that worried about a small percentage of people avoiding their upgrade prices.


> premium upgrades aren't a strategy they suddenly invented when they started soldering components down.

What you wrote is only part of the story. The other part happens after using the computer for a couple of years: RAM and SSD prices decline over time, while software developers find exciting new ways to use more of both.

It’s more profitable for Apple when users replace entire computers rather than just the RAM + SSD. And unlike with the initial premium upgrades, users have more motivation: the warranty’s expired, and the price difference is way larger.


I just upgraded my 2015 Macbook Pro with a 2TB SSD and a new battery. I'm planning to keep using it for a few years.

The 2015 MBPs were the last ones that had replaceable drives (it's a proprietary slot, but adaptors to M2 are available).

I still think that the 2015 models are the last laptops Apple actually targeted at pro users. They even added support for PCIe NVMe drives in a recent macOS version, so these MBPs got MORE EXPANDABLE after release. It's crazy! (They do have soldered-on RAM though)

The only comparable recent Mac was the 2018 Mac mini -- with user upgradable RAM! It does have soldered on flash, unfortunately. But it has so many ports, and with external thunderbolt enclosures you can add GPUs, SSD Raids, ... It's pretty amazing for a small desktop and surprisingly usable as a developer workstation.

(I'm not counting the current Mac Pro because it is so outrageously expensive that I can't imagine it makes sense for anyone except the highest-paid professionals)


My 2012 MacBook Pro is still going strong! Use it daily. Upgraded to a 1TB SSD, 16GB RAM and I sincerely hope it'll last a few more years - still on Mojave though. I boot into Crapolina to do App Store work from a USB SSD, but will probably upgrade to Big Sour at some point with the horrendous menubar spacing.

I type this on my work MacBook, where I've had to press backspace so many times due to the shift key not working and the i key repeating itself, with the Touch Bar flashing constantly next to the power button... the battery has a worse rating too, even though it's a 2016 versus my 2012. I hate this keyboard.


> It’s more profitable for Apple when users replacing entire computers than just RAM + SSD.

Is there any cold, hard data on how often a Mac user gets a new laptop vs a PC user? It would also be interesting to see how common it is for PC users to add RAM/disk during their PC's lifetime.


I've only just got a new Mac after having my last one for nearly a decade. I've had to replace two windows machines in that same time period. Not exactly cold hard data, but a sample size of 1.

There are also windows machines that make it hard to replace things mind you.


Same with me and a lot of people I know. Macs are so reliable (in general; I don't want to get into a debate, since a lot of malware simply doesn't target them because Windows is the larger market) that upgrading is not required for a long time.

It may change in the future if Apple introduces planned obsolescence, but so far no issues


Having done over a decade of desktop support for several companies, both Apple and Windows shops, this notion that Macs are more reliable doesn't hold water for me. If we compare devices in the same price bracket (premium category) as MacBooks, among thousands of devices, we actually found them to last about the same length of time without any upgrades (6-10 years on average).

What a lot of people neglect to consider is that non-Apple brands have an entire "budget" category that Apple does not participate in. It's wrong to compare this category with devices of another category, no matter the make and model.


Fair point


Counterpoint: it's a lot more wasteful, especially for desktops, to own Apple hardware. Look at all the iMacs with recent, high-quality 5K panels that'll be scrapped because they can't be used with another video source.

Such a waste.


Unless you need a monster dedicated GPU, you could get a Mac Mini. Or if you need GPU power and you have money to burn, you could get a Mac Pro. No screens attached.


Yet they stopped packaging accessories with the iPhone to help save the environment lol.

I love my Macs but Apple definitely are not environment focused, for the reasons you state


I agree with you but I also think that part of it is:

- If you make the SSD replaceable, many more people will replace theirs.

- Apple doesn't want to put an SSD access port into their laptops for design/strength/reliability reasons.

- Replacing the SSD would therefore entail people disassembling the computer.

- Apple doesn't want people disassembling the computer

- Solution: solder it onto the motherboard.


2015 and earlier MacBook Pros had replaceable SSD drives. Remove 10 pentalobe screws, take off bottom panel, remove 1 torx screw, pull out SSD, put new SSD in, reassemble, done.

The soldered on stuff is a recent change, and it's stupid for things like RAM, but even more stupid for things that wear like SSDs.


> If you make the SSD replaceable, many more people will replace theirs.


Very few non-technical consumers, who are the vast majority, ever upgrade their computer.


Which means designing in an affordance for it adds a cost for a feature used by a few.


They have the "pro" line for a reason. It's meant for professionals. It's okay to solder RAM and SSD in lower-end models I suppose, but not in those people actually use to get their job done.


I must agree with this. About a year ago, our company's iOS development team was replaced. The managers bought the new developers company MBPs. However, after about six months they started complaining that they could not make progress on one feature because Xcode kept crashing when they opened an existing Storyboard file. Turns out the managers had of course cheaped out and bought the developers Macs with the lowest 8GB memory option. Meanwhile, the consultants who originally created those files had had a much more sensible 32GB.

Now the MBPs for the whole team need to be replaced, at great cost and double the environmental resource use. Without Apple soldering the memory, our IT department would certainly have just upgraded it. In fact, when I ran into a similar issue needing 32GB in my 16GB Dell laptop, that's exactly what we did.


Or they could have just bought the max version the first go around. If it's for production the +$1k/dev is less than a few days of pay.


> double the environmental resource use

I mean, they're not defective, they can be refurbished.


Theoretically yes, if they were for example managed by a leasing company.

In this case, the company actually only officially supports (leases) Windows laptops. Macs should not officially exist to start with, but are nevertheless required for iOS app development. So MBPs are handled as "extra" IT equipment. If there is no use for such a piece of IT equipment (e.g. it is underpowered or otherwise not fit for purpose) AND it contains sensitive business data (like a developer computer almost certainly does), it is actually a security issue for the company.

So for security reasons the company would actually prefer that such a computer be DESTROYED when there is no use for it anymore. Sadly, you can't even remove the hard drive from an MBP and sell/give it to an employee for personal use, so Apple soldering the components onto the mainboard is a double whammy.


Don't recent Apple computers have hardware encryption by default through the T2 chip? Just throw away the key and all data on disk should be irretrievable. Replacing the disk in this case would add no security, so it would be the wasteful course of action.


Company policies don't care about the crypto used, I guess.


I guess more like they don't have a contract with Apple that says Apple guarantees the data security is protected by their tech.


Ah, fair enough.


Apple can decrease the environmental cost of prematurely replaced MBPs by not providing chargers, like they did with iPhones.


I'm not the kind of person who throws electronics away, but, you know, there's a proven and reliable way to decrease the environmental cost: don't make your damn devices disposable. Design them to be taken apart and make consumable components easily replaceable by the end user. But I guess non-disposable devices don't make charts go up as much as disposable ones do.

Case in point: the M1 Mac Mini internals are half electronics, half air. They could've easily fit all kinds of slots and modular components in there, yet they deliberately decided not to.


Oh, great, yes, that is certainly going to have a huge impact.

Because the chargers are known to have a huge impact because of all the rare earth in them.


> People still pay it

Not everyone does. There is even a thriving market for these kinds of user-replaceable upgrades, proving that more and more people were opting to buy upgrades from reliable third parties rather than from Apple.


As I said:

> Apple has reasons for soldering down components, reasons which I don't really agree with, but I don't think they're all that worried about a small percentage of people avoiding their upgrade prices.


While I don’t think many of us are used to thinking at scale, any percentage of Apple’s customers making literally any choice about something they purchase from Apple is a f-ton of money.


I'm guessing supply chain, repair, troubleshooting, and a lot more. The MacBook Air now comes with a single main board and a daughterboard for the headphone jack; when it breaks, service just replaces the board, reducing any time (money) wasted on troubleshooting things like SSDs. It also lets Apple streamline manufacturing: the more complicated the assembly (and yes, even a socketed SSD means additional assembly and tests, and then you have to support and troubleshoot third-party SSDs too), the more chances of failure, not fewer.


What that really means is less opportunity to fix individual components, more opportunity for $1499 upsells when the warranty runs out, and cutting off third-party repair and the secondary used market. Win-win-win in Apple's eyes.


>What that really means is less opportunity to fix individual components

That has been the trend in the PC and electronics industry for decades, though. Notice how people don't fix their TVs/radios/etc. anymore? Heck, even in the car industry.

>Cutting off third party repair and secondary used markets.

Well, and somehow these machines get high "consumer satisfaction" ratings, have long service lives, and retain a lot of resale value.

Something seems contradictory here...


I think this sort of criticism of Apple usually lacks much substance. Everybody knows what Apple products are; if you want something that’s highly customizable and easy to upgrade, then why not just buy one of the countless other products that offer those features? Apple’s products tend to work very well, are typically well made, and reliably last a rather long time. I personally enjoy working on them more than any Linux system, so I buy them for that purpose. If I wanted a computer to tinker with, I wouldn’t buy a Mac.


Yet these highly integrated Macs have very long average service lives, high customer satisfaction and maintain high second hand resale values. So clearly that isn’t true.


Clearly these machines are a few months old at most; we have a lot to learn about how they work and what’s likely to break. If they’re anything like the last big hardware change Apple made, you’ll have to watch out for dust if you want a functional computer.


That also means repair and troubleshooting for custom configurations really sucks. Either they have to keep a bunch of configurations on hand or the user is SOL waiting for basically a new computer.


Is there something different about the drives on non-M1 Macs? My MBP 16 is showing way more read/write than that and still zero percent used.

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        36 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    0%
  Data Units Read:                    108,058,416 [55.3 TB]
  Data Units Written:                 95,524,343 [48.9 TB]
  Host Read Commands:                 1,004,502,098
  Host Write Commands:                558,965,971
  Power Cycles:                       155
  Power On Hours:                     644
  Unsafe Shutdowns:                   36
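For anyone wondering how smartctl gets the bracketed TB figures: the NVMe spec defines one "Data Unit" as 512,000 bytes (1,000 logical blocks of 512 bytes), regardless of the drive's actual sector size. A quick sketch of the conversion, using the numbers from the output above:

```python
# NVMe SMART "Data Units" are units of 512,000 bytes per the NVMe spec
# (1000 logical blocks of 512 bytes), independent of the drive's LBA size.
def data_units_to_tb(units: int) -> float:
    """Convert NVMe SMART Data Units to decimal terabytes."""
    return units * 512_000 / 1e12

# Figures from the smartctl output above:
print(round(data_units_to_tb(108_058_416), 1))  # 55.3 (TB read)
print(round(data_units_to_tb(95_524_343), 1))   # 48.9 (TB written)
```

Both values match the `[55.3 TB]` / `[48.9 TB]` annotations smartctl prints, so at least the unit conversion isn't where any bug would be hiding.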


My 2016 MBP is also at around those R/W numbers but a bit higher in the used area. I am however not sure what those Power On Hours are referring to, since I have been using this machine daily for many hours.

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        37 Celsius
  Available Spare:                    87%
  Available Spare Threshold:          2%
  Percentage Used:                    5%
  Data Units Read:                    98,665,764 [50.5 TB]
  Data Units Written:                 84,525,474 [43.2 TB]
  Host Read Commands:                 980,835,796
  Host Write Commands:                811,309,011
  Controller Busy Time:               0
  Power Cycles:                       14,955
  Power On Hours:                     359
  Unsafe Shutdowns:                   21
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      0


It's been ~1700 days since mid 2016, I would guess the power cycles value there is swapped with the power on hours.

Or you power cycle 10 times almost every day.


The SSD will experience a power cycle every time you close the lid and the laptop goes to sleep.


Mine, also 256, 8GB RAM, bought on 12/26/2020. Primarily a browsing machine (Chrome).

    SMART/Health Information (NVMe Log 0x02)
    Critical Warning:                   0x00
    Temperature:                        35 Celsius
    Available Spare:                    100%
    Available Spare Threshold:          99%
    Percentage Used:                    3%
    Data Units Read:                    117,454,110 [60.1 TB]
    Data Units Written:                 108,475,036 [55.5 TB]
    Host Read Commands:                 471,949,425
    Host Write Commands:                309,499,261


Might want to consider browsing using something else (a large tablet e.g.?) until this issue is resolved. My machine is almost identical to yours. My reported writes are about 1.31TB or less than 1/40th your number. I do some native compiling and some browsing. Chrome is a well known memory hog.

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        25 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    0%
  Data Units Read:                    5,849,123 [2.99 TB]
  Data Units Written:                 2,568,550 [1.31 TB]
  Host Read Commands:                 75,126,642
  Host Write Commands:                32,909,838
  Controller Busy Time:               0
  Power Cycles:                       160
  Power On Hours:                     42
  Unsafe Shutdowns:                   3
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      0


I haven't even installed Chrome on this laptop. I switch between Safari and Firefox.

I wonder if Chrome's memory usage pattern is a contributing factor.


wtf, 55TB+ read/write in 2 months? what the heck are you doing with your laptop?

This is my 11-year-old SSD (OCZ Vertex2 64GB) on my laptop:

241 Lifetime_Writes_GiB 0x0032 000 000 000 Old_age Always - 3904

242 Lifetime_Reads_GiB 0x0032 000 000 000 Old_age Always - 5120


> what the heck are you doing with your laptop?

Did you even read the headline? I mean, it's kinda what this is all about.


I had a ridiculously high failure rate with the Vertex 2. I also still have an OCZ drive of that era somewhere, which would work... But if you didn't use it for a while, it would be completely blank when you next used it.


Using socketed M.2 SSDs introduces a class of problems ("the SSD isn't seated correctly") and also makes the device thicker. There are probably other reasons but I'll bet those are the two big ones.


Plain and simple, they don't want you to take their devices apart. They use hardware and techniques that are extremely anti-consumer. From non-standard form factors, to non-serviceable components, to literally fighting against Right-To-Repair in 20 states [1]. And then when you hold out on them they push you updates to sandbag your performance.

Why so many tech professionals and hackers still swear by this company and feign surprise every time they get shafted by proprietary design choices is baffling and frustrating.

If they wanted you to play inside, they wouldn't use Torx screws. If they wanted you to replace that, they wouldn't have glued it together. If you were supposed to take the keyboard off there wouldn't be 60+ screws (literally) holding it on.

You're supposed to be a good consumer and consume this one so you can buy a new one.

[1] https://www.theverge.com/2020/7/30/21348240/apple-right-to-r...


While I agree with your argument, I want to call out a specific point:

> If they wanted you to play inside, they wouldn't use Torx screws

Torx screws are superior to Phillips heads because they are far more resistant to cam-out, which is a bigger problem than needing to buy a screwdriver kit for $10 off eBay once in your life.


> Torx screws are superior to Phillips heads because they are far more resistant to cam-out[.]

I work at a repair shop currently, and can confirm this. Give me a Torx or a hex screw any day over a Phillips head.

You do have to be careful with them though, because once they do cam out, you're pretty much dead in the water, while with Phillips you can often hack your way around it.

Edit: I also agree with the rest of the argument. If you've got the budget (and I assume that if Apple is an option, you do), please consider buying something with Linux officially supported. Between Dell, Lenovo, and smaller scale setups like System76, there are plenty of options.


> please consider buying something with Linux officially supported. Between Dell, Lenovo, and smaller scale setups like System76, there are plenty of options.

My biggest complaint about them is heat and performance: my MBP seems to be almost always ready to take off. Apple really did hit the sweet spot with the M1 and I don't see x86 laptops catching up anytime soon. If they release 32+GB machines, I'll be seriously tempted to get my Macs refreshed even though I know they are essentially discardable machines now.

Also, Apple is the only manufacturer I can count on offering US keyboards without any hassle. Lenovo does that with Thinkpads and with Dell you need to work to find the models that are available with that option.


Those are all valid points. None of them are deal-breakers for me personally, but I understand how they can be.

I wish I could like the M1. I'm honestly really looking forward to ARM and RISC machines becoming more common, as long as there are manufacturers who make them at least somewhat reparable.


>ARM and RISC machines

What OS would they run? Linux?

I wonder if Windows will ever run on ARM/RISC?


Windows already runs on ARM.


Oh wow, I didn't know that.


The groundwork was done for embedded systems, but they added the desktop features and even make a few laptops with it now. But those are generally not very well-regarded, since the x86 emulation is slow and there is basically no native software.


Windows has run on non-x86 processors from at least the 90s.

https://en.wikipedia.org/wiki/Windows_NT#Supported_platforms


Oh, well that's not so good if it's just emulated x86.


The parent's argument stands, even if the details are slightly off.

Apple switched from torx to pentalobe screws because torx wasn't proprietary enough.

Also, the keyboard wasn't held down with 60+ screws, it was held down with rivets, which is arguably worse. They could painstakingly be drilled, tapped, and replaced with screws if you were really masochistic.


> because torx wasn't proprietary enough

What does this even mean, when you can easily buy both types of screwdrivers at any hardware store?

Reportedly Torx, and later pentalobe, screws are used because of their tiny size and lower profile; a Phillips head at that scale (0.8mm) is instantly stripped.


I agree that torx is useful. It's also an international standard.

Pentalobe is not an improvement over torx - torx was arguably working fine for Apple, but they decided to invent and start using a proprietary, non-standard screw anyway. The consensus at the time was that Apple did this to deliberately make it harder to get into their products.

The only reason you can find them now, 12 years later, is because third parties started to make the hardware, because it turns out getting into Apple hardware is something a lot of people eventually want or need to do.

Apple's attempt to lock you out of your own hardware didn't work, because ifixit and others spent tens of thousands of dollars on custom tooling. It was a big deal at the time, the hardware wasn't available anywhere, let alone at a hardware store.


> The consensus at the time

It was an angry article by iFixit that started the whole controversy. They could very well have done it for reasons related to licensing, or machining, or space savings, or who knows what else. The fact the MBP 17" never used them, while the smaller models did, hints towards that.

You may have taken the stories at the time at face value, but within weeks you could buy pentalobe screwdrivers and they surely did not cost 'tens of thousands of dollars' to develop - screws are cold pressed and very simple to produce.


> the keyboard wasn't held down with 60+ screws, it was held down with rivets which is arguably worse. They could painstakingly be drilled, tapped, and replaced with screws if you were really masochistic.

I just replaced a keyboard in a Macbook Air. It is screws around the outside, then the dozens of rivets on the inside. If you're okay with the old keyboard being destroyed, you just rip it out: the rivets just pop out, and the replacement keyboard gets put back in with screws. So it was a pain, but the drilling/tapping isn't required.

Total cost $35, plus some quality time with my kid.


> Apple switched from torx to pentalobe screws because torx wasn't proprietary enough.

That was a simple and cheap change with clear benefits for Apple in terms of discouraging modifications to devices they support so the cost versus benefits actually makes sense for Apple there.

Whereas to suggest they designed a whole proprietary storage controller interface just for the purpose of discouraging user servicing doesn't make sense at all. If that was the goal, why not just use off-the-shelf NVMe modified to have some kind of secret power-on key or something?


Because there’s no way to spin that as a benefit for the user.

Optics is the most important factor and it’s something Apple’s marketing department does better than most companies.


There are lots of ways they could spin that as a benefit for the user (i.e. security).

It would be a poor explanation, but even when there's a good explanation, as you can see it is still dismissed as just "good optics". So how do you tell the difference between "good optics" and features that provide real benefit?


Some of my common CNC manufacturing runs use jigs with screws to hold odd-shaped products cut from large plate stock. I’ve tried many different styles of screws. Phillips is great if you match the driver to the screw. The wrong driver will sort of work ok. If you buy a pack of Dewalt driver bits, you will note that each one is labeled. Match that label to your screws, and you should have a good fastener experience.

When I need to use a screw thousands of times, in manufacturing processes, I buy 18-8 Phillips.

I use Torx only for one specific application: for user-serviceable finished products that have previously had support problems with idiots who can’t be bothered to select the right screwdriver. Some folks simply can’t do it. So, the most problematic products end up with Torx fasteners, which problematic customers mess up by using the wrong size Torx or even hex wrenches, and my support guys have a script for these problems that goes through “you didn’t use a T-10 wrench; there is nothing more to be said and nothing to be done for it. It’s black and white. Sorry.”


I understand proprietary screws being anti-consumer, but Torx screws?

Every building site I have been on almost exclusively used Torx screws, and every hardware store sells Torx screwdrivers in every size. I wish more manufacturers would use Torx screws since the heads don't show the same kind of wear.

And yes manufacturers aren't building their devices with them being repairable as primary design goal. This is understandable and there always will be a trade off. Even devices that are designed to be relatively easy to repair like Lenovo ThinkPads or HP Pro/EliteBooks have these trade-offs depending on the form factor.


The socket doesn't have to be on top of a PCB, and given Apple's history of "tweaking" standards, it's hard to believe the thickness argument, given how reliable M.2 is (I've personally never seen a problem with one that seems to be seating related). I'm betting socketing the SSD actually makes the device more reliable long term, since you're looking at just replacing a disk in a couple years if it fails vs basically replacing the entire motherboard, which has got to be the most expensive part.

So, this is likely just penny-pinching by the manufacturer, and assuming the customer will bear the repair fee if it breaks.


You're thinking about reliability in a different way from Apple. By the time the SSD fails, it's not Apple's problem anymore, it's your problem. (Or, more likely, Louis Rossman's problem.) Conversely, an SSD that gets jostled out of a socket if the product gets bumped the wrong way (maybe because the courier kicked the package around) means time and money spent on in-warranty repairs.

Also, Apple never actually used M.2. All of their socketed SSDs are proprietary form factors, though there are cheap adapters you can buy that will let you use M.2 drives.


<i>an SSD that gets jostled out of a socket if the product gets bumped the wrong way</i>

You know the other side of m.2 is screwed in, right? Anything that manages to "jostle" a m.2 drive out of the socket has likely also destroyed the machine.


> Using socketed M.2 SSDs introduces a class of problems ("the SSD isn't seated correctly")

I can't get around to how people say such stuff with a straight face - Apple is a decades-old trillion dollar hardware company whose engineers can design high-tech processors, but apparently can't ensure that a RAM stick or SSD remains fixed in a slot? Do you really believe that?

> There are probably other reasons

Sure there are, and I'll list the major ones - planned obsolescence (soldered parts are harder to repair) and increased profits through price gouging on upgrades.


> planned obsolescence (soldered parts are harder to repair) and increased profits through price gouging on upgrades.

Neither of these really makes sense.

Planned obsolescence: Apple laptops retain their resale value for a very long time. Even if 8GB of RAM is no longer enough for you, there are plenty of people who'll still be able to use the device for many more years. The days are long over where computers became obsolete after a few years without upgrades.

Price gouging on upgrades: Almost everyone buys the base configuration, and almost no-one upgraded RAM or HD even when the hardware used to allow it. Whatever increased profit Apple are making from upgrades is going to be absolutely tiny.

If Apple really wanted to price gouge on upgrades, they'd at least stock the 16GB models in stores. My guess is that it's barely worth Apple's while to make these models at all, in a purely economic sense. What you're paying for isn't the cost of the RAM/storage itself, but the money necessary to make it worth the bother of making additional models which almost no-one buys.


> Planned obsolescence: Apple laptops retain their resale value for a very long time.

Planned obsolescence here means limiting upgrades and repairs, and making repairs or replacement so costly that you prefer to buy a new device if the current model develops performance issues or has hardware failures. With the M1 this is even worse now as you can't even run other base OSes on it reliably, thus making its users susceptible to systemic obsolescence (not including hardware or software compatibility when the OS is upgraded).

It has nothing to do with the resale value of a product. (It is a humbug argument, as nobody can predict what the value of your device will be tomorrow if some newer technology comes along. Moreover, soldered parts and a non-user-replaceable battery actually make such laptops even less desirable in the second-hand market because they cannot be upgraded, unlike the older Apple laptops.)

> Almost everyone buys the base configuration, and almost no-one upgraded RAM or HD

I'd really like to know your source for this.

> What you're paying for isn't the cost of the RAM/storage itself, but the money necessary to make it worth the bother of making additional models which almost no-one buys.

Again, share your source for this really ridiculous argument - they wouldn't need to bother with all this if they didn't solder the parts in the first place.


Planned obsolescence does have a bit to do with resale value, as it determines how much it costs to sell your existing device and replace it. This in turn has implications for whether upgrades are worth the money. Few upgrade ever for any reason; even fewer upgrade if it's economically feasible to sell your old model and buy a newer one. Apple doesn't need to do anything nefarious to stop people upgrading their MacBooks.

In looking at repairs you have to factor in reliability as well as replaceability. We're seeing a trend towards laptops becoming harder to repair but also intrinsically more reliable. No-one complains that laptop CPUs can't be replaced, for example, because CPUs are reliable enough that it's not an issue. We're well on the way to the same being true of internal flash storage. On top of that, Apple is reaping significant benefits in energy efficiency, reliability and performance from closely integrating components. That is something that benefits everyone who buys a MacBook. Hardly anyone benefits from removable RAM or SSDs.

I think some people mistakenly think that Apple could just stop 'soldering down' the RAM and SSD, but they must not realize how closely integrated everything is in the M1 MacBooks. The idea that Apple did all of this extraordinarily expensive R&D just so that they could sell a few more RAM or SSD upgrades is bordering on a conspiracy theory. As I said, there is so little demand for 16GB MacBook models that Apple doesn't even stock them in stores.

Overall, I just don't see any evidence that Apple has bad motivations here. It seems to me that you are just speculating uncharitably.


> We're seeing a trend towards laptops becoming harder to repair but also intrinsically more reliable.

The trends are separate and not related. I have a 10+ year old laptop with replaceable parts and it still runs great without any issues. With the European Union introducing the Right to Repair bill, I expect to see a reverse of this trend soon, and more repairable electronics in the future. If Apple and the others stop their selfish and unethical lobbying against the Right to Repair movement in the US, then the American consumers will also enjoy the same benefits and not be taken for a ride.

> The idea that Apple did all of this extraordinarily expensive R&D just so that they could sell a few more RAM or SSD upgrades is bordering on a conspiracy theory.

Perhaps it does for the ignorant. But it is already recognized that firms like Apple that indulge in this already know that the profit from such unrepairable devices offset the additional expense on the R&D required to create it. Even the wikipedia page on planned obsolescence specifically points this out:

   Producers that pursue this strategy believe that the additional sales revenue it creates more than offsets the additional costs of research and development, and offsets the opportunity costs of repurposing an existing product line.
And, as mentioned already, everyone already knows how Apple is actively lobbying in the US against the Right to Repair bill thus clearly proving that what you call a "conspiracy theory" is indeed a deliberate and entrenched business practice in Apple.


>The trends are separate and not related.

Sure, but as failure of any given component becomes less likely, the advantages of making it replaceable cease to outweigh the disadvantages. An M1 MacBook Air with replaceable RAM and SSD would not have the same performance, battery life or form factor. 99% of Apple's customers care way more about those things than they care about upgradeability.

>But it is already recognized that firms like Apple that indulge in this already know that the profit from such unrepairable devices offset the additional expense on the R&D required to create it.

I'm baffled by this claim. If all Apple wanted was to make their laptops unupgradeable then they could just solder on generic CPU, RAM and SSD components – no R&D needed.

I try not to use the term 'conspiracy theory' lightly, but the claim that Apple's transition to the M1 architecture is motivated primarily by 'planned obsolescence' really is a conspiracy theory.


>An M1 MacBook Air with replaceable RAM and SSD would not have the same performance, battery life or form factor.

citation please.


Discrete replaceable components use up more space, which leaves less space for battery. On top of that, Apple are most likely getting power consumption and performance benefits from integrating the SSD controller and reducing the length of the traces to the RAM chips by an order of magnitude (https://news.ycombinator.com/item?id=25258797). Take a look at the logic board for an M1 MacBook Air: https://photos5.appleinsider.com/gallery/38927-74332-MBA-Tea... There's just no room for RAM sticks and an M.2 slot. You could make a different laptop with those features, but it's a laptop that from the point of view of most consumers would be worse.


>but it's a laptop that from the point of view of most consumers would be worse

"Worse", but the insignificantly "better" one is landfill in a few years time when the battery/storage/keyboard/anything has a fault, or even just when it needs more RAM or storage.

Most consumers are just ignorant.


Do you have any stats on this? I'd be surprised if Apple laptops ended up in landfill quicker than their competitors' on average, given the huge second hand market. You also have to bear in mind that the vast majority of 'broken' laptops just get thrown away or put in a cupboard, regardless of whether it would be theoretically possible to repair them.


This sounds like hogwash.

It's the usual Apple stuff, "we've put so much amazing into this laptop that it doesn't matter that it's starved of RAM!". New phones have the same amount of RAM as this laptop.

You will never convince me that an SSD sprinkled with Apple magic is better than having enough RAM. NEVER.

Also you will never convince me that a non-replaceable storage is somehow necessary, or better in some way than replaceable storage, even if it shaves tens of nanometers extra thickness from the laptop. NEVER.

Apple has been doing this ridiculous stuff for decades now and its victims just keep on falling for it.

Punched in the face over and over again - from dongles, to batteries, to proprietary connectors that are abandoned the next year, to lack of headphone sockets, to overheating GPUs because of inadequate heatsinks, to unibody that isn't actually unibody and bends when you tilt the screen, to needing to replace your motherboard because your keyboard got a speck of dust in it, to screens that crack if you look at them wrong, to phones that don't work if you hold them wrong, ad nauseum. Please sir, can I have some more?

They've been punched in the face for so long that now they get a headache when they're not getting punched. It's some kind of bizarre form of masochistic Stockholm Syndrome.

THE WORST PART OF IT ALL is that the market success of this consumer-hostile garbage influences the rest of the industry and ruins other products like a cancer, so now it's super hard to find a phone with a headphone socket or replaceable battery.

Fuck this shit, FUCK APPLE, and fuck their customers for not using their wallets to demand better, and thereby encouraging and normalizing the terrible behavior of this horrible company.


8GB isn't enough for everyone, but it is enough for a lot of common consumer tasks. If you check out comparisons between the 8GB and 16GB M1 models, it's surprising how hard you have to push them before the difference in RAM becomes apparent.

I personally would have liked to see Apple bump up the base spec to 16GB. But hey, the 16GB models are available if you need them.


8GB is the least-bad part of my little rant.


It is bizarre to me how many problems I've literally never seen before in my life some people on this site can come up with to defend Apple's decisions, especially things like this on a website which purports to be Hacker News.


How could an M.2 SSD be seated incorrectly? It's either in and screwed down or it isn't. As for making the machine thicker, this obsession with shaving every last micron off has become almost cartoonish. I'll take serviceability and good keyboard travel over fashion statements any day, even if my laptop has to be a millimeter or two thicker because of it.

Once that soldered-in SSD goes, that expensive fashion statement is just more e-waste.


Wish this was true. The M.2 design is terrible from an engineering perspective, both mechanically and thermally. At best the mounting is a hack job.

I agree with the problem though. M.2 is just a crappy answer to it.


Come on man, nobody has problems with their SSD not being seated correctly with any other brand. There are screws to hold them in, they're not going anywhere.

The thickness thing - how thick is an M.2 SSD? 3mm? Get real.

Apple is consumer-hostile and their products are disposable, that's all there is to it.


> I really don't understand why Apple is apparently incapable of making these laptops use standard, replaceable m.2 NVMe SSDs.

Because those SSDs don't have Apple magic. Apple's SSDs are designed to be directly attached to the M1 CPU to provide low latency, high throughput storage -- what amounts to nonvolatile RAM, allowing an M1 Mac with 16 GiB max to vastly outperform an x86 PC with much more RAM at the same tasks.

No, really, though, it's so they can charge an arm and a leg at the Genius Bar for SSD replacements.


How is the Genius Bar going to replace chips soldered to the board? They’re not that genius.


Replace the whole board, charge an arm and a leg for it.


For perspective: 2 year old MacBook Pro 2018 256Gb. (Mostly used for development purposes)

  Critical Warning:                   0x00
  Temperature:                        37 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    42%
  Data Units Read:                    892,035,478 [456 TB]
  Data Units Written:                 786,976,871 [402 TB]
  Host Read Commands:                 4,989,739,415
  Host Write Commands:                2,554,081,641
  Controller Busy Time:               0
  Power Cycles:                       137
  Power On Hours:                     2,132
  Unsafe Shutdowns:                   70
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      0
And a 6 month old Intel i3 MacBook Air 256Gb

  Critical Warning:                   0x00 
  Temperature:                        48 Celsius
  Available Spare:                    100% 
  Available Spare Threshold:          99% 
  Percentage Used:                    1% 
  Data Units Read:                    59,113,734 [30.2 TB]
  Data Units Written:                 47,319,687 [24.2 TB]
  Host Read Commands:                 596,571,150
  Host Write Commands:                318,913,173
  Controller Busy Time:               0
  Power Cycles:                       93
  Power On Hours:                     384
  Unsafe Shutdowns:                   21
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      0


FWIW my 256GB SSD + 116GB RAM M1 MBP13 had 0.9 TB writes in 6 weeks, so very close.

A similar configuration Intel MB12 I use in 3.5 years also had writes at the same rate.


That's an impressive amount of RAM!


3 month old, 8GB/512GB, M1 Air

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        56 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    2%
  Data Units Read:                    153,706,243 [78.6 TB]
  Data Units Written:                 133,607,337 [68.4 TB]
  Host Read Commands:                 2,004,724,611
  Host Write Commands:                380,685,816
  Controller Busy Time:               0
  Power Cycles:                       178
  Power On Hours:                     358


That's a lot of writes. That's about 134 complete writes of your disk (68.4 TB on a 512 GB drive). You're writing out roughly three-quarters of a terabyte per day.

Your reads are also low in ratio to your write.

Are you swapping? This almost looks like pages swapping in and out of memory.
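A rough back-of-envelope check on the figures quoted above (assuming the 512 GB M1 Air's 68.4 TB written over roughly 3 months of ownership):

```python
# Sanity check on the reported wear, using decimal (SI) units throughout.
# These inputs are taken from the comment above; "days" is an estimate.
capacity_tb = 0.512   # 512 GB drive expressed in TB
written_tb = 68.4     # Data Units Written, as reported by smartctl
days = 90             # ~3 months of ownership

full_drive_writes = written_tb / capacity_tb
tb_per_day = written_tb / days

print(round(full_drive_writes))  # 134 complete overwrites of the drive
print(round(tb_per_day, 2))      # 0.76 TB written per day
```

Consumer TLC flash is typically rated for a few hundred full drive writes, so if the counters are accurate, that rate would chew through the endurance budget in a few years rather than a decade-plus.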


to give a comparison:

        PM951 NVMe SAMSUNG 512GB
        SMART/Health Information (NVMe Log 0x02)
        Critical Warning:                   0x00
        Temperature:                        52 Celsius
        Available Spare:                    100%
        Available Spare Threshold:          50%
        Percentage Used:                    4%
        Data Units Read:                    30,314,686 [15.5 TB]
        Data Units Written:                 35,402,028 [18.1 TB]
        Host Read Commands:                 601,326,085
        Host Write Commands:                793,617,758
        Controller Busy Time:               17,248
        Power Cycles:                       3,897
        Power On Hours:                     9,506
        Unsafe Shutdowns:                   343
        Media and Data Integrity Errors:    0
        Error Information Log Entries:      5,351
This is from my daily driver 2016 Dell XPS13 9350.

Used for years with rolling-release Linux + Windows for gaming, multiple reinstalls of every OS, and, for the past year, a hackintosh.


> I really don't understand why Apple is apparently incapable of making these laptops use standard, replaceable m.2 NVMe SSDs.

Because if they did that, users would be able to replace them.


If that was the concern they could just solder a plain NVMe drive. Blocking user repairs doesn't require the creation of a whole new purpose-built storage controller interface, so that can't explain their rationale here.


>I really don't understand why Apple is apparently incapable of making these laptops use standard, replaceable m.2 NVMe SSDs.

Profit.


>making these laptops use standard, replaceable m.2 NVMe SSDs

Because not using them gives Apple control of the exact specifications and capabilities of their M1 SSDs.

Else they're bound to be compatible with the lowest common denominator standard connectors and drive implementations.

Aside from profit ("paying more for custom parts"), this is also how they control "the whole widget", and how they can innovate (like with the M1 CPUs/SoC components) when others drag themselves with me-too standard parts.

Was the allure of a Mac ever that "you can build your own from standard parts"? If anything, it was always the opposite: it's not a PC.


> Else they're bound to be compatible with the lowest common denominator standard connectors and drive implementations.

The fairly common Samsung 9x0 Pro NVMe SSDs are way faster in almost all benchmarks, so I don't think that argument holds.

This really is about charging a premium for additional storage - and not due to technical reasons.


If there’s no data available about 256GB models, then there’s no way to assess whether their speculations are valid. I look forward to more data.

My 1TB NVMe drive in my Intel mac desktop is at 1% after 18 months, and shows some 46TB of writes by their data collection method. At least 600GB of the data is music albums that haven’t changed since I first copied it over, and I’m mildly curious whether it’s truly writing 32GB/day for 18 months (since I never run out of RAM and I don’t do large I/O activities).

But they’re all focused on M1 so I’m staying out of it and letting them do so.


I have a 256GB, bought it exactly two months ago. 55TB written, 3%. Not doing anything crazy with it, it's mostly a Chrome machine.


55TB written in 2 months for a Chrome machine is insane. My 2016 MBP has 55 TB written after almost 5 years.


My three-month-old 256 GB shows 9 TB written and 0% used. I mainly write in VS Code on this machine.


For contrast, my PC "system" PCIe drive bought in 2018 shows 6.7 TB, and the "data" SATA SSD has 1.5 TB of wear; that one is used for games, etc. The drive at work had around 8-10 TB since 2016 last time I checked.

I was never concerned with wear, have moderate demands/usage and used these drives as much as needed.


> People that are willing to pay for that much NVME probably use it a lot.

I guess it depends on the use case. I have a 2TB drive for my home macbook pro because that's where all my photos/videos get synced to. There shouldn't be a lot of reads/writes to a significant part of my drive even though it's large.


You say "the first 2", yet those were hand-picked by the post author from among the many, many posts on Twitter reporting 1%.


>And speculating what that would look like for a 256GB drive.

Why "speculating"? Couldn't they just find someone with a 256GB drive and ask them?


It also seems to me there are a couple jumps in the initial tweets here (particularly the assumption of the same write rate on 256 GiB models as on 2 TiB models). But regarding several of the assumptions you mentioned here:

> (4) that they are directly comparable to non-M1 hardware without further processing

> (5) that they do not increment when Time Machine or other filesystem snapshots occur

> (6) that they do not increment when APFS copy-on-writes occur

> (7) that they do not include the total byte size of sparsely-modified files

smartctl is a tool that inspects the drive's built-in statistics about physical writes. If it shows significant jumps from snapshots, copy-on-write, and sparse files, it's because those features aren't working as designed in minimizing physical writes.

I also find it unlikely that a bug in smartctl or IOKit would show incorrect but plausible-looking values. More likely in the SSD's firmware. But I'm not getting alarmed just yet. (I also don't own an M1 Mac, so that's easy for me to say.)


As always, this thread is full of FUD instead of facts.

You are spot on with your 1% statement (which could be as little as half that, due to rounding). The percentage used is based on the figures set by the manufacturer. It is the one used in drive warranties. Also worth pointing out that it doesn't mean the drives stop taking writes at 100%. This counter actually goes up to 255 (per spec), where 255 means 255+. It's literally just for life "expectancy".

Some people are saying to ignore that and to look at TBW. They simply don't know what they are talking about! There has been no discussion of the NAND type on these drives. Is it SLC, MLC, TLC, QLC? Or is it one of the hybrid modes, such as pSLC or iTLC, which act like the higher categories? This can mean the difference between <1k and >100k+ P/E cycles.

We simply don't know what the OS is doing. I know that I can have WAY more applications open (and remaining responsive) on my 8GB M1 than I could on my 16GB i7. It doesn't feel like traditional swap, so maybe it's not. Maybe it's way more aggressive, suspending entire applications to disk in the background, taking advantage of the massive bandwidth to make the UX better at the cost of drive writes. We simply don't know.

From following the original thread, the vast majority of people are reporting 1% usage since launch. There are a handful with higher usage than that. We don't know what these people are doing on those machines, but it's evident from the figures that the writes generally scale with the amount of RAM in the machine: 8GB machines generally seem to show a lot less usage than 16GB ones. Maybe the people with HUGE amounts of writes are running a memory hog like Chrome, which consumes all available RAM on the machine and which pages when they background it. Some people report 8hr daily usage since launch with 200 drive power-on hours (drive hours aren't the same as uptime, as drives can be suspended for power saving); others report 800 for the same "uptime". Clearly it is due to the workload.

EITHER WAY, the "percentage used" is definitely the figure to go on, because only the manufacturer knows exactly what NAND types they are using (unless someone wants to enlighten us?) as well as the wear-levelling algorithms on the controller.
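Back-of-the-envelope: a drive's endurance in host writes is roughly capacity × P/E cycles ÷ write amplification, which is why the NAND type matters so much. A minimal sketch — the cycle counts and write-amplification factor here are illustrative assumptions, not Apple's actual figures:

```python
def estimated_tbw_gb(capacity_gb, pe_cycles, write_amplification=1.0):
    """Rough endurance estimate, in GB of host writes.

    capacity_gb: usable drive capacity in GB
    pe_cycles: program/erase cycles the NAND is rated for
    write_amplification: physical writes per host write (>= 1.0)
    """
    return capacity_gb * pe_cycles / write_amplification

# Hypothetical 256 GB drive: ~1,000-cycle NAND (TLC/QLC territory)
# vs. ~100,000-cycle NAND (pSLC-like), both with 1.5x amplification.
print(estimated_tbw_gb(256, 1_000, 1.5))    # ~170,667 GB, i.e. ~171 TBW
print(estimated_tbw_gb(256, 100_000, 1.5))  # ~17 million GB
```

The two-orders-of-magnitude spread between those outputs is the point: without knowing the NAND, raw TBW numbers don't tell you much.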


> Maybe they are suspending entire applications to disk in the background

This tactic of suspending unused tabs has MASSIVELY reduced my browser memory usage. Originally with "The Great Suspender", then built into FF natively when I switched to that. This should be used everywhere.

It's a serious game changer for someone like me who keeps 100 tabs open.


Yeah, I use this in Safari. It's fantastic.


What is the name of the extension/setting?



Like Chrome, Safari also suspends tabs on startup. That is to say, tabs are only loaded when first used.

It depends on your workflow, but I don't use an extension for this. I occasionally just quit and restart it; takes less than five seconds.


That's not what the extension does, it 'expires' tabs you haven't looked at in a few minutes. Clearing up the memory.

Clicking on the tab will load them.


> says that a total of 1% of the drive's capacity has been used, which means that in about 16.5 years (2 months * 99 / 12)

And cellphone reception bars accurately tell you signal strength on Apple products: https://mspoweruser.com/apple-admits-to-the-iphone-lying-for...

What really matters is the number of write cycles the drive has performed, not some imaginary dumbed-down single-number "Percentage Used" indicator.

> their drive will go readonly.

Can you point me to a drive manufacturer/model which reliably goes into READ ONLY mode after encountering a defect/exceeding the wear limit? Hint: even the most expensive Intel server drives will silently die on you despite claiming read-only fallback.


Just in case: I'm all for science, so I'm now going to try to debug what processes cause heavy writes to disk. That's easier for me because I don't really use much except a browser and some games.

> The first line of the smartctl output everyone's bandying about says that a total of 1% of the drive's capacity has been used, which means that in about 16.5 years (2 months * 99 / 12) their drive will go readonly.

The problem is that my SSD ended up 3% worn two months after I bought the M1, after a month of daily browsing / Netflix. So this news got me really worried about what's going to happen if I actually move my files to that laptop, connect it to Dropbox, compile C++ on it, and maybe do some Java web development. What if I end up with 5% capacity loss a month?
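For what it's worth, the projection everyone is arguing over is just linear extrapolation of the Percentage Used counter. A minimal sketch of that math (assuming wear continues at the same rate, which is exactly the assumption being questioned):

```python
def projected_lifetime_years(pct_used, months_elapsed):
    """Linearly extrapolate months until Percentage Used hits 100,
    then convert to years."""
    months_remaining = months_elapsed * (100 - pct_used) / pct_used
    return months_remaining / 12

print(projected_lifetime_years(1, 2))            # 16.5 -- the GP's figure
print(round(projected_lifetime_years(3, 2), 1))  # 5.4 for 3% in two months
```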


On top of that, you don't get answers by just posting numbers and blaming Apple.

Numbers do not mean anything without context.

It would be really interesting to understand what applications folks with both higher and lower numbers use daily. And what their developer habits are. Do they use a specific browser or a specific toolchain?

But ultimately, someone should probably write a little app or daemon that can just keep an eye on what specific processes are doing disk i/o. I think if people ran that for a while and then posted _that_ output, there would probably be some better answers.


Agreed. I've seen thousands of mac minis in CI farms writing 1+ TB / day (all day, every day) and they're slowly starting to die 4 years in. It's hard to write flash to death with that kind of write load. I wouldn't worry about that.


Are there any numbers with sub-1% precision? If not, "1%" could mean anything from 0.5% to 1.5%, which muddles any takeaway.


> 1% of the drive's capacity has been used, which means that in about 16.5 years (2 months * 99 / 12) their drive will go readonly.

Makes me wonder if the engineers at Apple decided it was an acceptable trade off.


Yes, no one uses 16-year-old laptops


Well, maybe using a 16-year-old machine today is rare, but what about using a current machine in 16 years? A lot of stuff has stabilized and has been "good enough" for many use cases for quite a while.


Software support is less than 10 years so in 16 years you'd be very far behind on security patches.


Well, on Mac and other proprietary hardware & software combos.

Enterprise Linux distros such as RHEL regularly provide 10-12 years of support you can pay for per major version and presumably you will be able to upgrade the major version to reset the time.

Same thing for community Linux distros: Fedora, Debian or openSUSE already run on 16+ year old hardware today, and one can expect that to continue in the future.


My 2006 white plastic MacBook is still in use.

It's just a friend's kid's video player right now, because with OSX it's become too slow for most uses, but if someone put Linux or BSD on it....


Especially 1st gen of new platform laptops, with all the potential quirks. I think that's also why Apple priced them very affordably.


I still use my Apple 16GB iPod touch to play music; it sits in a speaker that has a dock for it. The hardware works, but almost none of the software does, except the music player. The browser no longer works; otherwise I would play music off Spotify or YouTube. That thing is probably 13 years old now.


Good for you. Do you use a 16-year-old laptop though? Because that is the topic here.


Performance improvements have slowed a great deal since then. Assuming you aren't doing anything particularly heavy, you can easily get away with using a Sandy Bridge CPU (10 years old) for most tasks. You'll likely still be able to use it in 5 years just fine. That applies even more so for something as performant and efficient as the M1.

Case in point, my ThinkPad T420 has been in use since 2011. Even though it's no longer my main driver, the performance is more than good enough for when I need a backup device.

For comparison, 16 years ago, we're talking about single-core 32-bit CPUs, the very first version of their Centrino brand, DDR2 RAM and IDE/PATA HDDs. There's just not much you can do with those limitations.


The performance improvements due to parallel computing hardware (think Metal GPUs and their successors) will be so insane in 16 years that your current laptop won't stand a chance at doing anything that will be considered standard then.


Increasing parallelism is irrelevant for most workloads due to Amdahl's law.


Amdahl's law is only relevant if you can't change your algorithms. If you can, then most of the time it's a non-issue.

The human brain runs off umptillion neurons each running at 100-1000 Hz (depending on measure). Not GHz, MHz or even kHz. Hz. Obviously it's an extreme case, but if general intelligence can be achieved without fast serial computing then there should be lots of things we can do with a few dozen cores all running at GHz rates.


That's bullshit. All of the programs I have written recently had an incredible speedup after porting relevant parts of the program from single-threaded to GPU.

Right now we are hitting the threshold of incompetent programmers and tools long before any other.


Only occasionally. I have an old Windows laptop with a CD-ROM drive, and some old backup photos on CDs. It's from the 2000s, so I'm not sure how old it is exactly.


Still use my 2010 MBP 15”. I changed out the battery twice and swapped in a SSD drive. It’s my only dvd drive. I also have a 2011 Air 11” that still works.


That’s right. I use an 18-year-old laptop.


Nearly no one, I would venture to guess.


This couch side laptop I use to read and search things is a 2011 MBA. Not 16, but maybe it'll make it...


My 2011 MBA from grad school is pretty much done; it's so slow. I replaced the battery and the charging port and it's doing better, but it's still near the end. A good 10 years, though.


Yeah, after posting I was thinking about that.

I have a very lean setup on it with Linux. I don't think it'll last too much longer to be honest.

I mainly use it if I'm on my phone, but find something that a bigger screen/keyboard would help.

But I can't do much on it. Even a RPi has more RAM on it.


I have an Iphone 4 that still serves as a remote control for my music setup. The battery was replaced once, 2014 or so.


My 2017 MBP is showing 97% capacity. It's my daily driver. So at this rate I can expect the SSD to last 291 years? Really? Either these numbers are bogus, or the capacity drop over time is far from linear.


That's just the wear capacity; an SSD can still suffer random failures at any time. The controller and DRAM aren't designed to last for decades.


I have gone through 2 SSDs in my rMBP mid 2015 (16GB RAM). This was the last model with replaceable SSDs, I believe. I am on my third now:

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        38 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          10%
  Percentage Used:                    7%
  Data Units Read:                    82,062,737 [42.0 TB]
  Data Units Written:                 81,415,281 [41.6 TB]
  Host Read Commands:                 539,567,311
  Host Write Commands:                561,934,808
  Controller Busy Time:               3,863
  Power Cycles:                       3,059
  Power On Hours:                     2,705


> (2) that they are not a bug in the IOKit implementation used by smartctl

"While we're looking into the reports, know that the SMART data being reported to the third-party utility is incorrect, as it pertains to wear on our SSDs" said an AppleInsider source within Apple corporate not authorized to speak on behalf of the company. The source refused to elaborate any further on the matter when pressed for specifics.

https://appleinsider.com/articles/21/02/23/questions-raised-...


I am sitting at 10% after 60 days... so my drive will go readonly in 1.6 years.

  === START OF SMART DATA SECTION ===
  SMART overall-health self-assessment test result: PASSED

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        33 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    10%
  Data Units Read:                    546,457,001 [279 TB]
  Data Units Written:                 510,545,911 [261 TB]
  Host Read Commands:                 3,130,642,888
  Host Write Commands:                1,491,509,201
  Controller Busy Time:               0
  Power Cycles:                       118
  Power On Hours:                     930
  Unsafe Shutdowns:                   18
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      0


Sorry that I don't know much about SSDs but what do you mean by "their drive will go readonly"? Can you elaborate? Thanks.


SSDs have a limited number of total write cycles they can undergo before they wear out. See e.g.:

https://www.dell.com/support/kbdoc/en-us/000137999/hard-driv...



In addition to what's been said, Flash cells lose data over time and need to be rewritten.

The controllers take care of this. But an SSD that has gone read-only, assuming it can in fact go read-only, is an SSD that will quickly lose all its data.


You are also assuming it's linear, no?


We are also assuming it'll grow at all.

The premise of the comment is that we don't have enough data.


datapoint:

  CPU: M1
  Model: MacBook Air
  Age: 1 month
  RAM: 16GB
  SSD: 512GB

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        27 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    0%
  Data Units Read:                    3,463,221 [1.77 TB]
  Data Units Written:                 1,571,934 [804 GB]
  Host Read Commands:                 51,945,085
  Host Write Commands:                23,518,851
  Controller Busy Time:               0
  Power Cycles:                       83
  Power On Hours:                     18
  Unsafe Shutdowns:                   3
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      0


> What, exactly, is the problem with that?

Literally the entire thread appears to be "people who can't do maths correctly, misinterpreting a third-party tool which may or may not be accurately reporting the information in the first place, running around like their hair is on fire."


Apple's own Activity Monitor shows hundreds of GBs of disk writes per hour on my M1 Macbook Pro. Smartctl output showing 10+ terabytes of writes per month looks about right. Apple doesn't provide lifetime TBW numbers but e.g. Samsung seems to only cover about 150-250 terabytes written total for various 250GB SSD models.


Go have a look at the linked thread, where people are basing their hysteria on errors of three orders of magnitude.


Or the trillion-dollar proprietor of the software can disprove these claims and put the "defamation" to rest. Why don't you take the first step on the assumptions you pointed out? Nobody seems to be interested in damage control.


I read this article and immediately tried it on my own 2014 MacBook out of curiosity. It turns out it's a third-party command-line tool that doesn't come with macOS, reporting statistics on a proprietary system-on-chip SSD that has only been available for 2-3 months. This thing is probably not using an off-the-shelf SSD controller, and the chances are the information is just wrong. Some of the stats reported down the thread look like they don't stack up, as in impossible amounts of data written in the time the drive is reported as being awake.

If it is true and M1 computers start bricking in two years' time, which I find unlikely, then these people can take their computers to be fixed. If that doesn't happen, well, a company can't sell something that doesn't work, so I would take it to small claims and exercise my contractual right for the goods supplied to be fit for purpose.


I think it's Apple's turn to respond. I don't see this as anyone else's responsibility. Until then, all available data clearly points to a premature death for many M1 machines.


So it looks bad on my MacBook Air M1 16GB:

  Percentage Used:                    3%
  Data Units Read:                    47,004,809 [24.0 TB]
  Data Units Written:                 47,469,528 [24.3 TB]
  Host Read Commands:                 153,293,725
  Host Write Commands:                218,787,006
It's been in use for two months only and I haven't even compiled anything on it; all my usage was light gaming in a few 2D games, Firefox and some films.

UPD: it's 256GB SSD model.


You're where I am and I've had this Intel MBP for nearly 4 years and haven't been nice to it (compiling, etc).

    Percentage Used:                    3%
    Data Units Read:                    57,377,073 [29.3 TB]
    Data Units Written:                 77,652,525 [39.7 TB]
    Host Read Commands:                 1,297,472,434
    Host Write Commands:                1,855,797,459
I have 10x your host r/w commands. I wonder if the SSD's blocks on these new MBPs are very large, so you get large physical writes for small OS writes? Or maybe the controller is bad at coalescing writes?


Being ~10x off could also just be bad math in the SMART data path somewhere, either on the drive or in the OS.


Intel MBP at two years

  === START OF SMART DATA SECTION ===
  SMART overall-health self-assessment test result: PASSED

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        42 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    23%
  Data Units Read:                    439,149,863 [224 TB]
  Data Units Written:                 407,143,345 [208 TB]
  Host Read Commands:                 2,930,228,690
  Host Write Commands:                1,777,317,283
  Controller Busy Time:               0
  Power Cycles:                       103
  Power On Hours:                     1,691
  Unsafe Shutdowns:                   49
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      0

Ouch.


My M1 Mac Mini is also at 3% after 2 months, 1 TB disk:

  Percentage Used:                    3%
  Data Units Read:                    168,154,606 [86.0 TB]
  Data Units Written:                 169,999,872 [87.0 TB]
  Host Read Commands:                 1,077,848,202
  Host Write Commands:                743,848,335
this is a secondary dev machine, so while it's been on and idle for almost 500 hours, it's only had a week or two of "active work".


All of these values are higher than my workhorse 2017 MBP that I’ve used nearly everyday since I got it.

Could something be wrong with smartctl?


Mine is about 20 days old. MacBook Air M1 16Gb 1TB Drive:

    Percentage Used:                    0%
    Data Units Read:                    2,219,368 [1.13 TB]
    Data Units Written:                 2,141,099 [1.09 TB]
    Host Read Commands:                 28,719,296
    Host Write Commands:                26,061,193
    Controller Busy Time:               0
    Power Cycles:                       179
    Power On Hours:                     14
Not sure what to make of the power cycles stat either. It's mostly plugged in at my desk.


Option A: the tool is accurate, and you've written 1 TB in 14 hours (73 GB/hr for the entire time the drive has been on).

Option B: The tool is interpreting the SMART data incorrectly, or the drive isn't reporting it correctly.

I mean I don't know which is correct, but it seems odd that in 20 days of ownership, the drive has been awake for only 14 hours, but been writing solidly at the rate of a gig a minute for the whole time.
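The Option A arithmetic, for anyone who wants to check their own numbers: per the NVMe spec, one "data unit" is 1,000 blocks of 512 bytes (512,000 bytes), so the average write rate over power-on time can be sketched like this (in decimal GB, hence slightly higher than the 73 GB/hr binary-GB figure above):

```python
def write_rate_gb_per_hour(data_units_written, power_on_hours):
    """Average write rate while powered, in decimal GB/hr.

    One NVMe 'data unit' = 1,000 x 512 bytes = 512,000 bytes.
    """
    bytes_written = data_units_written * 512_000
    return bytes_written / 1e9 / power_on_hours

# The drive above: Data Units Written 2,141,099 [1.09 TB], 14 power-on hours.
print(round(write_rate_gb_per_hour(2_141_099, 14), 1))  # 78.3
```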


Option B sounds more likely in my case. I haven't used it much. Mostly just checking some websites. It's not a daily driver.


I've been looking at the Activity Monitor and disk writes for the kernel task are crazy high (hundreds of GBs per hour), especially when Rosetta is involved. The Rosetta daemon seems to use a lot of memory, likely causing excessive swapping. Another suspicious process is... Safari bookmark sync? Can't even image why it needs to write 10GB per hour.


Sounds like 8 GB of RAM is not enough after all...


> "The Rosetta daemon seems to use a lot of memory, likely causing excessive swapping."

I haven't seen this. The rosetta daemon, oahd, is using only 1.3MB on my system (8GB M1 MacBook Air) and I have several large translated apps running. If it's stuck using lots of memory, I guess that's probably a bug.


It's weird that people still have computers that use swap when 16 gigs of RAM costs like 80 euros.


Yeah, I hope for the sake of M1 owners that it is Option B, because it means this is likely a flap about nowt.

That said, it does highlight how bad it is that computers now come with the bit that is most likely to wear out being a non-replaceable part.


I double-checked the numbers reported by the drive against doing actual writes to the file system and the numbers reported by the OS, and they match exactly when there is no other activity: writing a 1GB file increases the number by 1GB and a few kilobytes of metadata.

Once the memory is full, it starts swapping a lot and then things go bad.

For the record, here are the numbers from this box: 900GB written in 20 power-on hours, on a 256GB drive.

  Critical Warning:                   0x00
  Temperature:                        26 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    0%
  Data Units Read:                    15,019,377 [7.68 TB]
  Data Units Written:                 1,759,297 [900 GB]
  Host Read Commands:                 101,021,092
  Host Write Commands:                14,010,727
  Controller Busy Time:               0
  Power Cycles:                       75
  Power On Hours:                     20


My Data Units Written has gone up by 0.3TB in the last hour doing nothing but surfing the web on Safari. I would like to believe that this points to Option B. This is on a 512GB SSD/8GB RAM MBA


That sounds like power management at work. When the drive is not doing anything of course it should turn off.


Option C: OS installation and write amplification


What's the sector size? Maybe it's counting number of sectors × sector size.
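It isn't the formatted sector size: per the NVMe spec, the Data Units counters are in units of 1,000 512-byte blocks (512,000 bytes) regardless of sector size, and that's how smartctl derives the bracketed TB figure. A quick sketch to check:

```python
def data_units_to_tb(data_units):
    """Convert an NVMe SMART 'Data Units' counter to decimal terabytes.

    One data unit = 1,000 logical blocks of 512 bytes = 512,000 bytes,
    per the NVMe SMART/Health log definition.
    """
    return data_units * 512_000 / 1e12

# Matches smartctl's bracketed figure, e.g.
# "Data Units Read: 30,314,686 [15.5 TB]":
print(round(data_units_to_tb(30_314_686), 1))  # 15.5
```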


> Not sure what to make of the power cycles stat either. It's mostly plugged in at my desk.

I wonder if that's a battery charge cycle metric. Anyone know?


It's how many times the drive is powered on/off.


It's not. My MBP (non-M1) has 28,891. The battery would be dead as a doornail. I imagine macOS just has the ability to put the SSD to sleep when idle.


It does, and it is also configurable in Energy Saver in System Preferences.


Mac can draw more power than a power adapter can supply. In short bursts this is fine.

Some apps (Civ6) will drain the battery completely when plugged in.

No idea if this is related


Are you talking about the M1 MacBook Air? Apple sells the M1 MacBook Air with a 30W adapter, and I guess it can draw more than 30W, but personally I haven't seen it do so while playing GPU / CPU intensive games.


Adding my data points: Macbook Air M1 16GB RAM, 512GB SSD. I have had the laptop for 4 days, and so far only used it for OS updates, a little Discord usage, web browsing (Firefox) and a small amount of programming (no Spotify).

   Percentage Used:                    0%
   Data Units Read:                    596,588 [305 GB]
   Data Units Written:                 404,196 [206 GB]
   Host Read Commands:                 8,532,827
   Host Write Commands:                3,851,891


Which Terminal command can I use to get this data? (I don't have an M1 MacBook, I'm just curious.)

[edit] Here's my data:

  Percentage Used: 2%
  Data Units Read: 51,571,905 [26.4 TB]
  Data Units Written: 32,365,209 [16.5 TB]
  Host Read Commands: 1,291,777,925
  Host Write Commands: 720,050,493
I got my MacBook (15", 16 GB RAM, 512 GB SSD) in July 2018.


    brew install smartmontools && sudo smartctl --all /dev/disk0


That only works if you already have Homebrew installed :) you can get it at https://brew.sh


or with MacPorts:

> sudo port install smartmontools && sudo smartctl --all /dev/disk0


Thanks!
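If you want to track the counter over time instead of eyeballing it, the relevant line is easy to pull out of the smartctl output; a minimal sketch (assuming the NVMe log format shown throughout this thread):

```python
import re

def data_units_written(smartctl_output):
    """Extract the raw 'Data Units Written' counter from smartctl output."""
    match = re.search(r"Data Units Written:\s+([\d,]+)", smartctl_output)
    if match is None:
        raise ValueError("no 'Data Units Written' line found")
    return int(match.group(1).replace(",", ""))

# Example, using a line quoted elsewhere in this thread:
sample = "Data Units Written:                 35,402,028 [18.1 TB]"
print(data_units_written(sample))  # 35402028
```

Feed it the output of `sudo smartctl --all /dev/disk0` on a schedule (cron/launchd) and diff successive values to get an actual writes-per-day rate.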


Web browsers typically cache tons of data on disk. I wonder if the people with higher usage are all running something other than Safari?


What could possibly make Firefox use the disk more heavily on an M1 Mac than it does on Linux? I've been using Firefox for a decade and it never caused 0.5 TBW / day on my Samsung SSDs.


Could be different OS behavior in terms of syncing to disk?


I've had mine between two and three months. My writes are significantly lower than yours.

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        27 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    0%
  Data Units Read:                    8,424,532 [4.31 TB]
  Data Units Written:                 5,496,149 [2.81 TB]
  Host Read Commands:                 122,463,637
  Host Write Commands:                59,624,967
  Controller Busy Time:               0
  Power Cycles:                       200
  Power On Hours:                     60
  Unsafe Shutdowns:                   6
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      0

I mostly stick to Apple applications because they seem to get much better performance, e.g. Safari instead of Firefox. Other applications that I use are Emacs, Discord, Mail, and Calendar.


Data point:

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        26 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    0%
  Data Units Read:                    21,290,808 [10.9 TB]
  Data Units Written:                 17,018,375 [8.71 TB]
  Host Read Commands:                 128,112,411
  Host Write Commands:                89,074,700
  Controller Busy Time:               0
  Power Cycles:                       174
  Power On Hours:                     80
  Unsafe Shutdowns:                   6
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      0
8GB/256GB three months old. Mostly used to write code using VS Code and browsing with Chrome. I've also compiled a lot of Haskell code.


Strange coincidence... I have identical power cycles to you.

Also

Read 1 entries from Error Information Log failed: GetLogPage failed: system=0x38, sub=0x0, code=745


16GB/1TB model, two months old:

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        24 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    0%
  Data Units Read:                    3,019,145 [1.54 TB]
  Data Units Written:                 3,392,635 [1.73 TB]
  Host Read Commands:                 55,076,557
  Host Write Commands:                29,421,973
  Controller Busy Time:               0
  Power Cycles:                       100
  Power On Hours:                     28
  Unsafe Shutdowns:                   16
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      0
Doesn't seem too bad.


16GB/1TB, also two months old:

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        34 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    2%
  Data Units Read:                    117,513,026 [60.1 TB]
  Data Units Written:                 110,686,292 [56.6 TB]
  Host Read Commands:                 506,322,545
  Host Write Commands:                351,505,939
  Controller Busy Time:               0
  Power Cycles:                       389
  Power On Hours:                     388
  Unsafe Shutdowns:                   40
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      0


16GB/1TB about a month old:

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        32 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    0%
  Data Units Read:                    16,603,470 [8.50 TB]
  Data Units Written:                 15,747,066 [8.06 TB]
  Host Read Commands:                 95,905,593
  Host Write Commands:                62,087,286
  Controller Busy Time:               0
  Power Cycles:                       93
  Power On Hours:                     52
  Unsafe Shutdowns:                   7
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      0
FWIW I've installed each incremental MacOS update the day it came out.


That's double the reads/writes of my 16 GB/1 TB 2020 Intel MBP in less than 1/3 the Power On Hours. The ratio between total data and read/write commands is quite different, though.

    SMART/Health Information (NVMe Log 0x02)  
    Critical Warning:                   0x00  
    Temperature:                        32 Celsius  
    Available Spare:                    100%  
    Available Spare Threshold:          99%  
    Percentage Used:                    0% 
    Data Units Read:                    8,413,842 [4.30 TB]
    Data Units Written:                 8,138,306 [4.16 TB]
    Host Read Commands:                 187,292,832
    Host Write Commands:                154,212,201
    Controller Busy Time:               0
    Power Cycles:                       105
    Power On Hours:                     171


M1 8gb Macbook Air here:

Percentage Used: 0%

Data Units Read: 28,383,716 [14.5 TB]

Data Units Written: 26,487,969 [13.5 TB]

Host Read Commands: 137,200,722

Host Write Commands: 155,386,158

VSCode, compiling, general programming workload with heavy multitasking. I would say it's ~1 month old.


My 2016 MBP, 500GB SSD with a ton of nodejs development (so lots of disk i/o):

    Percentage Used:                    5%
    Data Units Read:                    198,797,763 [101 TB]
    Data Units Written:                 151,801,865 [77.7 TB]
    Host Read Commands:                 4,494,028,451
    Host Write Commands:                1,965,182,650
A new system should not have done that much work already in that short a time.


M1 Mac mini, 256GiB drive, 16GiB memory. Used as a thin client to a beefier (louder and hotter) machine in another room; YouTube, Slack, Zoom, Discord. In use since 22 Jan 2021.

data point

  Percentage Used:                    0%
  Data Units Read:                    1,815,191 [929 GB]
  Data Units Written:                 1,854,253 [949 GB]
  Host Read Commands:                 31,584,364
  Host Write Commands:                25,360,962
data point verbose

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        31 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    0%
  Data Units Read:                    1,815,191 [929 GB]
  Data Units Written:                 1,854,253 [949 GB]
  Host Read Commands:                 31,584,364
  Host Write Commands:                25,360,962
  Controller Busy Time:               0
  Power Cycles:                       102
  Power On Hours:                     33
  Unsafe Shutdowns:                   7
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      0


Wow, I just checked my Windows laptop with heavy use: 19TB of lifetime writes in a year.


data point:

    Percentage Used:                    0%
    Data Units Read:                    7,920,747 [4.05 TB]
    Data Units Written:                 1,490,848 [763 GB]
    Host Read Commands:                 90,988,727
    Host Write Commands:                34,766,350
Usage: ~3 months. I wrote a TS web app over about 6-8 weeks of development, then 4 weeks of regular gaming (Steam: EU4). Development was all in the terminal (vim + npm).

Model: Air 7C16GB/256GB


I have an Intel MBP 16 with twice as many read/write bytes but still shows 0% used. Not sure how they are calculating the percent.


How big of a drive is it? So far, most of the 3%+ used ones have 2TB drives, which is the opposite of what you would naively expect.


On my older (3+ year now) MPB 16 with a 512GB drive I get:

    SMART/Health Information (NVMe Log 0x02)
    Critical Warning:                   0x00
    Temperature:                        41 Celsius
    Available Spare:                    100%
    Available Spare Threshold:          10%
    Percentage Used:                    22%
    Data Units Read:                    621,054,721 [317 TB]
    Data Units Written:                 483,003,547 [247 TB]
    Host Read Commands:                 9,028,855,382
    Host Write Commands:                3,924,827,022
    Controller Busy Time:               16,789
    Power Cycles:                       8,373
    Power On Hours:                     6,828
    Unsafe Shutdowns:                   375
    Media and Data Integrity Errors:    0
    Error Information Log Entries:      271


My 2016 MBP shows very similar numbers, though my % is even higher (24%):

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        30 Celsius
  Available Spare:                    88%
  Available Spare Threshold:          2%
  Percentage Used:                    24%
  Data Units Read:                    356,923,637 [182 TB]
  Data Units Written:                 354,603,847 [181 TB]
  Host Read Commands:                 2,817,724,668
  Host Write Commands:                2,270,199,931
  Controller Busy Time:               0
  Power Cycles:                       21,443
  Power On Hours:                     996
  Unsafe Shutdowns:                   21
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      0


Wow. How long ago did you get the MBP 16"? I've been running this system as my daily driver for >12 months and see very different numbers from yours:

  smartctl 7.2 2020-12-30 r5155 [Darwin 20.4.0 x86_64] (local build)
  Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

  === START OF INFORMATION SECTION ===
  Model Number:                       APPLE SSD AP8192N
  Serial Number:                      XXXXXXXXXXXXX
  Firmware Version:                   1161.100
  PCI Vendor/Subsystem ID:            0x106b
  IEEE OUI Identifier:                0x000000
  Controller ID:                      0
  NVMe Version:                       <1.2
  Number of Namespaces:               1
  Local Time is:                      Tue Feb 23 17:01:28 2021 PST
  Firmware Updates (0x02):            1 Slot
  Optional Admin Commands (0x0004):   Frmw_DL
  Optional NVM Commands (0x0004):     DS_Mngmt
  Maximum Data Transfer Size:         256 Pages

  Supported Power States
  St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
   0 +     0.00W       -        -    0  0  0  0        0       0

  === START OF SMART DATA SECTION ===
  SMART overall-health self-assessment test result: PASSED

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        37 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    0%
  Data Units Read:                    53,827,033 [27.5 TB]
  Data Units Written:                 37,342,497 [19.1 TB]
  Host Read Commands:                 704,072,000
  Host Write Commands:                669,901,451
  Controller Busy Time:               0
  Power Cycles:                       234
  Power On Hours:                     528
  Unsafe Shutdowns:                   74
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      0


I think he means 16GB, not the 16", since he says it's 3 years old and the 16" model didn't exist 3 years ago.


Yeah, should have been more specific - it's a MacBook Pro (15-inch, 2017)


"Older" doesn't mean anything relevant to anyone but you. Did it have its bar mitzvah, or is it 3 years old? Nobody knows but you.


Pardon me. I have the 256GB model. While I've used it daily for almost 2 months now, it still sounds crazy to have 25TBW for my usage. I didn't use it for any programming or compilation, and no brew package builds either. All I did was use Firefox, download like 30GB of torrents, and then watch some Netflix. I also haven't moved any of my files to this laptop: no photo archive, no email sync or Dropbox, literally no usage.

All I can hope is that it's some kind of bug in SMART, or that Apple will fix it soon. I use my Kubuntu desktop with a Samsung 850 PRO 256GB way more heavily, and I've barely reached 30TBW in about 4 years.


Torrents suck for SSD lifetime: the chunk size is typically 16KB, but chunks aren't downloaded whole or in order, so you might end up with a lot of wear and tear unless the file system and the torrent client/library are both working together to eliminate the partial writes.

However, torrents don't seem to be a common factor in the user reports, and 30GiB of torrents would not account for that much write amplification, so carry on!


Yeah, I'd personally be pretty annoyed if my MacBook were quietly writing that much data in the background, slowly wearing out my SSD; 25TB over 2 months is just silly.

I checked my 2015 15" that I've been using nonstop for over 3 years, and it's at 96TB, which is about 2.6TB a month.


You can also keep an eye on the data counter in Activity Monitor while it's on. At those kinds of numbers you're going to catch it even if it's periodic; it also has a running total since the Mac was switched on.


How about Time Machine backups? I suspect this is part of it. I believe that Time Machine does keep local snapshots.


Apple had better come to my house and fix it, or just bring me a new laptop. I'll take a normal Intel at this rate; I don't have the time for them to monkey around with it.

SMART/Health Information (NVMe Log 0x02)

Critical Warning: 0x00

Temperature: 29 Celsius

Available Spare: 100%

Available Spare Threshold: 99%

Percentage Used: 2%

Data Units Read: 18,188,656 [9.31 TB]

Data Units Written: 23,776,092 [12.1 TB]


For comparison here's the nvme smart-log for my 2.5 year old X1 Carbon running Linux - used for VSCode/compiles and browsing/ssh etc.

  carbonx  ~  sudo nvme smart-log /dev/nvme0n1
  Smart Log for NVME device:nvme0n1 namespace-id:ffffffff
  critical_warning   : 0
  temperature    : 29 C
  available_spare    : 100%
  available_spare_threshold  : 10%
  percentage_used    : 0%
  endurance group critical warning summary: 0
  data_units_read    : 12,862,645
  data_units_written   : 16,193,903
  host_read_commands   : 124,284,317
  host_write_commands   : 218,295,105
  controller_busy_time   : 567
  power_cycles    : 3,078
  power_on_hours    : 558
  unsafe_shutdowns   : 209
  media_errors    : 0
So around 8.2 TB written in 2.5 years of daily use. And I have had syncthing running on it past few months to sync some git repos as an experiment. (And I run a rolling release distro so lots of updates.) In comparison the M1 data posted here looks out of whack.
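For reference, an NVMe "data unit" is 1,000 × 512 bytes = 512,000 bytes, which is how smartctl derives the bracketed TB figures. A quick sketch reproducing the numbers quoted in this thread:

```python
def data_units_to_tb(units: int) -> float:
    """Convert NVMe SMART 'Data Units' (512,000 bytes each) to decimal TB."""
    return units * 512_000 / 1e12

# X1 Carbon figure from this comment (data_units_written):
print(round(data_units_to_tb(16_193_903), 2))   # ~8.29 TB
# One of the M1 figures from the thread (Data Units Written);
# smartctl truncates this to [56.6 TB]:
print(round(data_units_to_tb(110_686_292), 1))  # ~56.7 TB
```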

I generally use bpftrace to find if anything keeps writing to my SSD - for the most part I don't find any misbehaving programs on modern Linux distros. Assuming dtrace still works on M1 Macs you might be able to find what is writing to the disk.
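For anyone wanting to try the same on Linux, a minimal sketch of that approach (requires root; it sums block-layer request bytes per process name and prints the map on Ctrl-C):

```shell
# Sum block-layer I/O bytes per process; the map is printed on exit.
sudo bpftrace -e 'tracepoint:block:block_rq_issue { @bytes[comm] = sum(args->bytes); }'
```

Note this counts both reads and writes at the block layer; it's a starting point for spotting a misbehaving process, not an exact write accounting.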


Another Linux user data point: XPS13(9310)/512GB nvme/16GB RAM/2GB swapfile/kubuntu 20.10/kernel 5.11 (in use since ~Dec 5, 2020):

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        33 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          50%
  Percentage Used:                    0%
  Data Units Read:                    631,319 [323 GB]
  Data Units Written:                 1,157,883 [592 GB]
  Host Read Commands:                 4,650,820
  Host Write Commands:                7,329,774
  Controller Busy Time:               16
  Power Cycles:                       37
  Power On Hours:                     45
  Unsafe Shutdowns:                   15
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      1
  Warning  Comp. Temperature Time:    0
  Critical Comp. Temperature Time:    0
I do Linux kernel dev and have done quite a few kernel compiles at this point (using Ubuntu's full config), as well as general browsing, some Docker workloads, and Spotify... Judging by 'Power On Hours' combined with 'Data Units Written', it looks like there's a bug to me.

As an aside, I don't know why 'Available Spare Threshold' is 50%, and the 1 'Error Information Log Entries' entry appears to be the successful result of some test.


I have the same question here. How do you only have 45 power on hours in 80 days?

Does anyone know why power on hours seem to be under reported so often?


On Linux there are a bunch of "laptop mode" configurations that can minimize how often the disk is woken up to actually write out data. The price is that you'll lose the last N minutes of work on a hard crash. And it only works when you have enough RAM to avoid swapping and keep dirty pages in memory. And your workloads mustn't explicitly call fsync. But if set up correctly, your drive will spend most of its time sleeping.
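A sketch of the knobs involved, with illustrative (not recommended) values; all of these are standard Linux `vm.*` sysctls:

```shell
# Batch dirty-page writeback so the disk can stay idle longer.
# Trade-off: a hard crash loses up to ~60 s of unsynced writes.
sysctl vm.laptop_mode=5                   # enable laptop mode (writeback delay after a read, in s)
sysctl vm.dirty_writeback_centisecs=6000  # flusher wakes every 60 s instead of 5 s
sysctl vm.dirty_expire_centisecs=6000     # dirty pages may age up to 60 s before being flushed
sysctl vm.dirty_ratio=40                  # allow more dirty memory before forcing synchronous writeback
```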


> And your workloads don't explicitly call fsync.

If you run ZFS, you can set sync=disabled for your filesystems. This will disable fsync.

Unlike most (all?) other filesystems, that's actually safe. ZFS doesn't reorder writes between transaction groups, so after a crash you'll get a consistent state from however many minutes ago.

(However, txgs have a time limit of 5 seconds by default. You also need to increase that.)
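A sketch of both settings (the pool/dataset name is hypothetical; `zfs_txg_timeout` is the OpenZFS-on-Linux module parameter):

```shell
# Disable fsync/sync-write semantics for one dataset (data still lands
# in the next transaction group, so on-disk state stays consistent).
zfs set sync=disabled tank/scratch

# Raise the txg timeout from the 5 s default so groups are written less often.
echo 60 > /sys/module/zfs/parameters/zfs_txg_timeout
```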


As it relates to the Macs, I'm starting to wonder if their SMART reporting is just faulty. Power On Hours seems definitively wrong, so maybe all of the SMART data is bogus and there is no issue.


Just a guess, but maybe it doesn't count time in low power modes?


Another Linux datapoint, X1 Carbon 6th Generation, daily use.

  SMART/Health Information (NVMe Log 0x02)                                                                                                                                                            
  Critical Warning:                   0x00   
  Temperature:                        35 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          10%                                                           
  Percentage Used:                    25%
  Data Units Read:                    22,242,132 [11.3 TB]
  Data Units Written:                 74,693,212 [38.2 TB]
  Host Read Commands:                 540,582,518                                                   
  Host Write Commands:                1,857,635,922                            
  Controller Busy Time:               5,131                                                         
  Power Cycles:                       884                                                           
  Power On Hours:                     3,497                                                                                                                                                           
  Unsafe Shutdowns:                   261                                                           
  Media and Data Integrity Errors:    0      
  Error Information Log Entries:      882       
  Warning  Comp. Temperature Time:    0         
  Critical Comp. Temperature Time:    0        
  Temperature Sensor 1:               35 Celsius
  Temperature Sensor 2:               37 Celsius


How do you only have 558 PoH with ~900 days of daily use? Do you only use it for 30m per day?

Here's a 512GB WD Black from my home lab server. It runs ~10 super light usage VMs with Docker stacks that include 2 GitLab instances, 2 GitLab runners, 3 Nextcloud instances, 3 Redmine instances, 1 Gitea instance, 2 Drone runners, 1 Minio instance, 1 Nexus instance, 1 Emby instance (w/transcoding), and various reverse proxies, etc..

The write endurance is supposed to be 300TBW, so it should really be over 20% used but says 0%.

    SMART/Health Information (NVMe Log 0x02)
    Critical Warning:                   0x00
    Temperature:                        38 Celsius
    Available Spare:                    100%
    Available Spare Threshold:          10%
    Percentage Used:                    0%
    Data Units Read:                    95,066,245 [48.6 TB]
    Data Units Written:                 135,910,315 [69.5 TB]
    Host Read Commands:                 972,466,473
    Host Write Commands:                2,504,003,547
    Power On Hours:                     17,383

    Error Information (NVMe Log 0x01, max 256 entries)
    No Errors Logged
Compare it to a 500GB Crucial MX500 under the same load, which is supposed to have 180TBW:

    Model Family:     Crucial/Micron BX/MX1/2/3/500, M5/600, 1100 SSDs
    Device Model:     CT500MX500SSD1
    Serial Number:    
    Sector Sizes:     512 bytes logical, 4096 bytes physical
    Rotation Rate:    Solid State Device
    Form Factor:      2.5 inches

    ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
    9   Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       11225
    173 Ave_Block-Erase_Count   0x0032   032   032   000    Old_age   Always       -       1032
    194 Temperature_Celsius     0x0022   067   044   000    Old_age   Always       -       33 (Min/Max 0/56)
    202 Percent_Lifetime_Remain 0x0030   032   032   001    Old_age   Offline      -       68
    246 Total_Host_Sector_Write 0x0032   100   100   000    Old_age   Always       -       59166051308
That's 68% used after ~30TBW (59,166,051,308 sectors*512 = 30,293,018,269,696 bytes).

What I've learned from trying to diagnose those MX500s is that TBW doesn't really matter all that much. It's the P/E cycles that really count. For example, the MX500s are rated for 1500 erase cycles (#173 above) IIRC.

I've also become skeptical of many SMART implementations. I know the PoH on the Crucials is incorrect because when they were new they were reporting 45 days of PoH on a system with 76 days of uptime. So if the manufacturers can't get something as simple as PoH right, how can anything be trusted?
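The arithmetic above can be sketched like this (sector size and rated P/E count are the ones quoted in this comment):

```python
SECTOR_BYTES = 512  # logical sector size reported by the drive

def host_sectors_to_tb(sectors: int) -> float:
    """Total_Host_Sector_Write raw value -> decimal TB written."""
    return sectors * SECTOR_BYTES / 1e12

def lifetime_used_pct(avg_erase_count: int, rated_pe_cycles: int) -> float:
    """Wear estimate from average block-erase count vs. rated P/E cycles."""
    return 100 * avg_erase_count / rated_pe_cycles

print(round(host_sectors_to_tb(59_166_051_308), 1))  # ~30.3 TB, matching the ~30TBW above
print(round(lifetime_used_pct(1032, 1500)))          # ~69%, close to the drive's reported 68%
```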


I have a 1TB MX500 and a much older 240GB M500, and the PoH stats are similarly unrealistic for both. The MX500 has run close to 24/7 for two months now and reports 328h, which is like 14 days. The M500 reports 12k hours (~1.35 years), but it's 5+ years old and I'm sure it should be higher.

OTOH the number of power cycles they report seem to be too high, at 45 and 2332 respectively. Maybe that's related?


I have a different work laptop, so I mostly use this one on weekends and weeknights, and I believe NVMe APST mode may be affecting it.

Then there's always the possibility of a wrong interpretation of the numbers, as every OEM potentially has a different implementation.

In Apple's case it's more likely they have the right implementation, as they make both the software and the hardware. There could still be bugs, though.


Such high swap usage sounds like a bug.

But at the same time, 8GB of ram isn't a lot.

Especially when multitasking.

I know there was a lot of talk about Apple's directly soldered RAM being super efficient.

But if you've got 30 tabs open, playing music, and using Photoshop, no amount of optimisation is going to prevent swap usage.

Still, though, terabytes of usage must be a bug.


Just yesterday people were telling us that Apple revolutionized memory and is only installing such low amounts of RAM because hey, you've got that super-fast SSD right there; it's basically RAM by a different name.

Well, whoops. I guess there are some side effects.


>Apple revolutionized memory

Rules of software engineering and engineering in general: you can't break Einstein. RAM is generally slow to begin with, but the disk is slower. RAM should be plenty to be used as disk cache, not vice versa. (edit: omitted the last sentence)


It's especially funny because the M1 Mac SSDs seem to perform at around the same level as current competition from what I've seen.


Apparently the issue affects the 16 GB models as well. In that case, the SSD wear likely would not be related to the limited RAM in the 8 GB model.


Presumably people getting 16GB of RAM are also running more memory intensive workloads than those not shelling out for the upgrade, so it could be a non-factor.


16GB RAM M1 user here; I haven't seen a KB of swap since I don't do memory-intensive stuff, just wanted to future-proof.


"8GB of ram isn't a lot."

You are not wrong in 2021, but this comment took me back to the late 1900s-early 2000s, when a few hundred MB of RAM was premium :). How far have we come!!


Yeah, although in this case it takes me back to ~10 years ago. I didn't think 8GB was a lot back then either, and we typically tried for at least 16.

It's pretty disappointing how little advancement has been made in RAM in recent years. It seems like everything should come with at least 16GB now, with 32GB readily available, but I guess no one is working on the cost of desktop/laptop memory.


In 2011 I bought a laptop with 16GB ram for $2000 (Lenovo W520).

2015, MBP13, 8GB ram I bought for $1900.

2018, MBP13, 16GB, $2200.

2020, MBP13, 32GB, $2500.

Pretty sad improvements for 9 years. But the form factor did improve...


Wow, those W520s were heavy. I got a few dumped on me at an old job when they were going to throw them away. I put a few together that I sold for $400 each in 2018: great returns. I didn't even think they would bring $200.


I mean, you're correct, in an off-color way. I'm typing this from a Thinkpad with 128GB of RAM that I bought for <$2k (total machine + aftermarket RAM).

So it sounds like Apple has made very little progress in this arena.


I'm typing this comment on a 2011 MBA with 4GB of RAM.

Running Linux on it. Native apps are okay. Firefox is okay unless I over do it with tabs. Forget about electron apps.


RAM prices have come down significantly. I just helped a friend get 16GB of memory for ~$50. It wasn't super nice fancy RGB LED RAM, but it works.

You can still get by on 8GB if you're a lighter user. It's unpleasant for me, but my grandma literally only uses a browser so it works for her.


An Acer laptop from 2010 has 8GB (2 cores/4 threads, 3.2GHz), still in use, but it's just office applications. That's all (and no swap, of course).


For laptops, it's power, isn't it? Not to mention that initializing so much RAM takes a long time.


Maybe, but now we've got phones with 12 GB. It doesn't seem like the difference can be particularly significant.


I remember having 8 memory slots in my Cyrix 40MHz 486 clone in 1994. I had 4MB of memory in four slots. I was thrilled when a friend gave me four 256KB memory SIMMs for an extra 1MB of RAM, which significantly sped up my computer :)


"late 1900"... you mean like 1994 where 8MB RAM was the dream?


oops haha yea I meant 1990s


Why would you need more than 128MB? What are you gonna do with it?

Hahahahahahahahahaha....


I had a 75 MB (yes, MB) drive in the first computer I bought. It was top of the line at the time. I remember telling my wife that I couldn’t imagine ever filling it up. Ha! Now I feel like a pauper if my phone has a 64GB drive ;-)


"640kB should be enough for anyone" - nobody ever, but misattributed to Bill Gates.

https://www.computerworld.com/article/2534312/the--640k--quo...


At the time it would have served pretty much all normal users well enough, so I don't see what's wrong with the statement. It's like telling a home user today that 32GB should be enough for just about anything a normal user would do with their home PC.

I wouldn't advise anyone to get 128GB (or more) "just in case". If they have a special use case then sure, otherwise no.


Late 1900s, a few hundred MB of HDD would have been premium. I remember my father paying a lot to upgrade from 1MB to 4MB.


I think I forget sometimes because I'm working with this stuff daily now, but yeah the evolution of computing technology is just astounding in scale and pace.


> How far have we come!!

Patting yourself on the back for gratuitously wasting resources really, really grates on me.

Sure, I can buy a car that gets 10 miles per gallon for my weekly groceries shopping. I can even afford it, it won't break my bank.

But actually being proud of it? Like it's some sort of sign of social and technological progress?

Whew.


I've noticed kernel_task's disk usage is heavily skewed towards writes (e.g. 7GB read for 170GB written over a couple of hours). Assuming it is indeed swapping pages to disk and reads/writes are counted properly, either the dynamic pager or some application/system process is making extremely unfortunate life decisions. Lots of dirty pages are never read back, as if there were a huge cache of... something.


In general you will write out pages to swap before evicting them from memory. The goal is to never have to wait for a page to finish being flushed to disk in order to allocate a new one. This means you will sometimes access a page that had been written out to disk but is still available in memory. If you modify it, it may be written out again before ever being read back. So it isn't unreasonable for swap writes to be higher than swap reads.

However, those numbers seem extreme; probably some bad tuning, or just neglecting to account for the full cost of writing to disk.


One thing: on any machine, client or server, that I've used in the past 20 years, swap is the first thing to disable.

If there is a need for swap, install more RAM.

> Terrabytes of usage must be a bug.

Garbage-collected setups with relatively large heaps are extra horrid in their memory usage patterns while running a full GC... and, correspondingly, in swap.


Swap has value. E.g. it lets you have a tmpfs that's larger than your physical RAM. Or you can process memory dumps of a machine larger than yours. Or you can just freeze a process group and have it dumped to disk so other processes can use the RAM in the meantime. It can definitely help avoiding the OOM killer.

Swap occupancy and swap activity are not the same. The former is fine, the latter should be kept small.


> Or you can process memory dumps of a machine larger than yours.

You should be able to do that in software anyway: instead of loading the dump into memory entirely, all the tool has to do is memory-map the file.

About the need to freeze a process (group): I don't quite see how that's useful on a server, and on a desktop machine I have never run into a case where closing the application would not suffice. Is there an example?

Last: using swap pretty much means no disk cache.
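A minimal sketch of the memory-mapping approach (the file name and contents here are made up for illustration):

```python
import mmap
import os
import tempfile

# Stand-in for a dump file that could be larger than RAM.
fd, path = tempfile.mkstemp()
os.write(fd, b"HEADER" + b"\x00" * 4096 + b"TRAILER")
os.close(fd)

with open(path, "rb") as f:
    # Pages are faulted in on demand and can be evicted under memory
    # pressure, so the analyzer never needs the whole file resident.
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        header = mm[:6]    # random access without a full read()
        trailer = mm[-7:]

os.remove(path)
print(header, trailer)  # b'HEADER' b'TRAILER'
```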


> You should be able to that in software anyways - instead of loading to memory entirely, all it has to do is memory map the file dump.

Should, perhaps. But in practice I have had analyzers gobble up more memory than available.

> About the need to freeze a process (group). I don't quite see how that's useful on a server. On a desktop machine I have never run into such a case where closing the application would not suffice. Is there an example?

Long-running renderjob, preempted by a higher-priority one. Technically they're resumable so we could stop them and resume the other one later from a snapshot file but that isn't implemented in the current workers. So as long as the machine has enough ram+swap it's easier to freeze the current job and start the higher-priority one.

> Last - using the swap pretty much means no disk cache.

I don't know all the tunables of the swap subsystem well enough but I have seen swap being used before running out of physical ram. I assume some IO displaced inactive applications.


>analyzers gobble up more memory than available.

I can run 128GB, if need be. But yeah, if the software is pitiful and poorly implemented, running on swap is a one-time option, I guess.

>so we could stop them and resume the other one later from a snapshot file but that isn't implemented in the current workers.

Indeed, this seems like a poor implementation, lacking a 'save' function. I have not run into similar cases.


The most RAM you can get on these machines is 16GB.


Of course, that's the point. If you cannot get enough memory, you don't buy them. Even my old Haswell (2013) Acer laptop has 16GB; the Skylake Lenovo laptop I use for work has 32GB.


> 8GB of ram isn't a lot.

Actually yes it is.


> 8GB of ram isn't a lot.

True, my 3-year-old phone has 8GB of RAM. On a laptop, 8GB is just anemic.


You're not wrong, but I have to wonder how much phone RAM is just the new megapixels, i.e. bigger number = more impressive spec sheet.

iOS is obviously a lot stricter on background activity than Android but manages to work great on 3-6GB.

I can comfortably do development work on my laptop with 16GB, and until last year was managing mostly okay on a 5 year-old machine with 8GB.

When you factor in the sort of stuff people actually do on a phone, surely 12-16GB is a massive waste? You can make the argument that it will become more useful as the phone ages, but by that point it will have probably stopped receiving software updates.


The value is in having multiple apps stored in memory. I think the average person multitasks more on their phone compared to a PC.

Things like having a lot of tabs open, messaging applications, music streaming, etc.

It's not required. But instant switching between apps is a nice user experience.

In the same way a 120Hz monitor is a nicer experience but 60Hz is perfectly reasonable.

I don't think it's fair to compare iOS to Android. Android is quite a bit heavier.

I remember reading a quote from someone at Nvidia saying that hardware is much easier to change than software.

And at the end of the day, as consumers we should demand more for our money. Phones aren't getting cheaper.


Remember that it is RISC too, so memory usage would be much higher than on x86.


x86 instructions are a bit of a mess, yes, but on ARM you can end up with more of them.

The tricky bit here is actually the SIMD instructions, since they can be very long and the compiler will often go absolutely bonkers on fairly short code.


ARM and x86 have a similar number of instructions. The tradeoff is that x86 ends up with more compact code because its instructions are variable-length, but ARM decoding is easier, so you have more die area for I$ (or anything else you want).


That's what I meant


Why?


At a very basic level: RISC instructions take up more space because each one does less.

That said, very little of most RAM usage is the actual instructions; it's mostly data.


We're no longer talking about "directly soldered RAM"; in the M1 Macs, the RAM is now part of the M1 chip itself. It's on the CPU die.


No, the RAM is separate chips / dies, albeit incorporated into the same module as the CPU. On pictures of the M1[1], the RAM is the plastic covered chips next to the main heat spreader for the SoC.

[1] Such as the image on the Wikipedia article https://en.wikipedia.org/wiki/Apple_M1


It's completely baffling how a little meme like this, which is in no way accurate, survives and thrives not just at large but specifically on HN. Can we track the meme's point of origin?


It's not that far off base, given that most people don't understand the difference between on-package, where a component is bundled with the CPU in the same package, and on-die, where a feature is actually fabricated on the same silicon as the CPU.

There's really no meme to trace here; just a popular misunderstanding of semiconductor terminology.


The RAM is on-chip but not on-die... but most people (even developers) don't know the difference between a chip, a die, and a core to begin with. It's a pretty minor error, although such errors are a good sign that someone either doesn't know what they're talking about or is careless with terminology.


I'm not sure the RAM can be defined as on-chip, but it's in the same package.

A chip can contain multiple dies.

Usually a chip was just a discrete package: basically a die packaged for integration onto a PCB.

A die is just the bare silicon that has an IC etched into it.

With interposers and modern multi-chip packages, things are a bit more complicated.

Since the RAM dies are packaged, they themselves can be defined as chips too, while the CPU die needs to be integrated onto the substrate first.

At this point the question is whether the CPU+RAM combo can be defined as a chip on its own or as a hybrid/compound package; I would go with the latter.

If the memory and CPU dies/chips were stacked, like the Raspberry Pi's, I would call it RAM on chip, though.


Why be like that?

Maybe it's a good sign that someone is half interested and open to learning more.


If it's fair to refer to M1 as an SoC, then it's defensible to say the RAM is on the chip.


> Can we track the meme's point of origin?

Apple themselves are responsible for it; all their promotional material for the chip has shown unified memory right alongside the on-die modules:

https://www.apple.com/v/mac/m1/a/images/overview/chip_memory...


Speaking for myself, this is the first time I've heard of multiple modules in the same chip. Hearing about it now and knowing generally how dies[1] are manufactured and packaged into ICs it's an obvious thing to do, but yeah never crossed my mind.

[1] dice?


It's cool, hey? How AMD (and others) are now doing their CPUs is a good example: there's often several CPU "chiplets" and then one IO chiplet on the same package. [0] is the first good search result.

Given what they've done with the M1, I highly suspect Apple will do something like that for their higher end machines.

[0] https://www.anandtech.com/show/16148/amd-ryzen-5000-and-zen-...


Congratulations, you're one of today's lucky 10,000 :)

https://xkcd.com/1053/

I'm mildly annoyed by the tendency of some people to be critical while they're correcting (teaching!) someone.


What makes you think HN is that different from anywhere else?

And besides, memes persist much more readily anywhere where tribalism comes into play e.g. Politics, Apple vs. [Insert your OS here], Any Tesla thread etc.


No, the RAM is co-located in the M1's flip-chip package, but it's not physically on the same die.

In the photo below, you can see the inside of the BGA flip-chip showing the M1 SoC itself alongside two Hynix LPDDR4 devices:

https://www.eetimes.com/wp-content/uploads/Apple-M1-1.jpg


You've been down voted for technical inaccuracy.

I'll up vote you until someone opens the chip package and replaces or upgrades the ram.


Now you're just daring someone like Louis Rossmann to give it a go. :)


Ha! Good one ;)

I'd watch that.


Yeah, it's a bug. But don't go crazy over it.

Back in 2016, a Spotify bug resulted in writes of approximately 700GB/day [1]. No SSD died, judging from the lack of a class action lawsuit. And at least for my MBP SSD, I can tell you it is still doing very much alright, despite being thrashed for anywhere from a few weeks to a few months.

Having said that, do apply any temporary fix. Even if the effect will probably be negligible, it's still unnecessary wear.

[1] https://www.extremetech.com/computing/239268-spotify-may-kil...


Can't this easily be a userspace app overwriting a logfile or something thousands of times over? Spotify rebuilding a cache in the background every minute, or some audio editing app misusing a SQLite database and writing the same data every second, etc.?

Any app/program can write to the disk. Why this is being framed as a hardware/OS error?


The problem is that the SSD is not replaceable, and people are saying the main cause of this heavy writing is swapping to disk. So: a hardware/OS problem.

I think it wouldn't get a lot of attention if the ssd could be replaced.


This isn't a correct statement; I think what you mean is that it isn't user-replaceable. Take one to Apple, who can and does replace these due to failure through their service program. Outside warranty, you will of course be paying for that.


More likely that apple will just price gouge on a motherboard replacement and just trash the old one. And of course, they will do this only after attempting to upsell the user on an entirely new system.

It reeks of shade to intentionally grenade the hardware just to get more door traffic at retail locations.


> It reeks of shade to intentionally grenade the hardware just to get more door traffic at retail locations

Apple already forces owners of "vintage" MBPs to come in to a store for repair (or repair-related warranty) work, even if the failing component is identical to that of a non-vintage model. You'd think they would prefer to send those devices in to a repair depot with lots of inventory for older parts, but now that you mention it I guess they figured out that foot traffic converts into sales at a non-zero rate.


Every part of this is a made-up, unsupported, bad-faith, outright lie. No part of this is “more likely” based on my 30 years of experience of actually dealing with Apple. What’s actually more likely, in my experience, is Apple going above and beyond to cover repairs, even out of warranty, for issues that are even partially their fault.

“Attempting to upsell” doesn’t pass the laugh test. And it’s incredibly crass and irresponsible of you to toss around words like “intentionally grenade the hardware” without the slightest hint of evidence.


> Every part of this is a made-up, unsupported, bad-faith, outright lie. No part of this is “more likely” based on my 30 years of experience of actually dealing with Apple. What’s actually more likely, in my experience, is Apple going above and beyond to cover repairs, even out of warranty, for issues that are even partially their fault.

You are engaging in projection.

> “Attempting to upsell” doesn’t pass the laugh test.

Then there should be no reason to force the user to come in to a retail shop to get approved repairs on their machines, and Apple could save lots of money by going to mail-in repairs.

> And it’s incredibly crass and irresponsible of you to toss around words like “intentionally grenade the hardware” without the slightest hint of evidence.

There's hundreds of examples in this thread alone.


No, I'm not projecting. No, Apple doesn't "force" anyone to come into retail shops; that is simply false. You can mail hardware in. Some people aren't near an Apple Store.

And no, there are zero examples in this thread of any evidence that Apple has intentionally harmed its own hardware.

So you are batting 0 for 3.


They're not meaningfully replaceable without paying apple a lot of money for an ssd, assuming you're out of warranty, hastening planned obsolescence. How's that?

People understand apple can replace them. Nobody wants to pay apple to replace an ssd.


Apple can't though. It's soldered to the motherboard, so replacement also involves replacing the CPU and ram. Suddenly a $50 part has turned into about $500 (or $1000 with apple price gouging on ram and storage prices).


To support your thought -- https://arstechnica.com/information-technology/2016/11/for-f...

Since it's not happening to everyone (check the original twitter thread), I'm 100% sure it's third-party apps writing in small 4kb chunks with the file opened O_DIRECT, creating a write-amplification effect where each 4kb write becomes an SSD-block-sized write (for example 4mb). One 4kb write per second, instead of being 0.3GB/day, then becomes 345.6GB/day.
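As a sanity check, the parent's arithmetic can be spelled out directly (the 4 MB erase block is the worst case assumed above; actual block sizes vary by drive):

```python
# Worst-case write amplification assumed in the comment above:
# one 4 kB write per second, each amplified to a full 4 MB erase block.
logical_write = 4_000          # bytes per application write (4 kB)
erase_block = 4_000_000        # bytes rewritten on the SSD per write (4 MB, worst case)
writes_per_day = 86_400        # one write per second, all day

logical_per_day = logical_write * writes_per_day    # 345,600,000 bytes ~= 0.35 GB/day
amplified_per_day = erase_block * writes_per_day    # 345,600,000,000 bytes = 345.6 GB/day
print(logical_per_day / 1e9, amplified_per_day / 1e9)
```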


Can you explain this more? I always thought writes were by the page and erases were by the block, so the scenario you're describing would only be likely with TRIM disabled and an awful garbage collection algorithm.

I could be wrong though because I don't understand the implications of O_DIRECT.


That's an interesting thought. Given the prevalence, I almost wonder if it's a bug in an upstream framework, or maybe a crazy default only on M1 Macs in some language.

Chrome would also be unsurprising. They're known for their memory churn already.


My first thought when reading about this is that logging gone awry is a very likely cause. Could be the OS itself or apps doing it, but in either case a fix should be pretty reasonable.


Reminds me of this: https://appleinsider.com/articles/20/07/27/vmware-virtualbox...

> The engineer explains problem relates to a "regression in the com.apple.security.sandbox kext (or one of its related components)" in macOS 10.15.6. As part of the investigation, it was discovered com.apple.security.sandbox was allocating millions of blocks of memory containing just the text "/dev" and no other data.


I don't think logging can be a problem for a recent NVMe SSD (though it is a problem for the small eMMC chips in cars).


> Why this is being framed as a hardware/OS error?

Because this closely matches the Tesla MCU hardware replacement issue (whether the eMMC is a "wear item" depends on who you ask), and like that chip, the M1 also has non-replaceable parts which seem to be wearing faster than normal.


Maybe they transitioned from an earlier MacBook and feel like they are running the same apps? Though, of course, new architecture means that's a bad assumption.


I suppose it could be Mail, Spotify, iCloud, or anything really. But if it's happening on all the new M1 Macs, what do you think the common denominator is?


If you follow the original twitter thread - https://twitter.com/marcan42/status/1364409829788250113 - you'll find that it's not affecting all M1 macs, and affects many intel macs, therefore it's almost guaranteed to be third party app's behaviour (writing 4kb chunks with O_DIRECT will lead to SSD write amplification)


I think Spotify insists on using 10% of my drive. I can’t configure it as far as I know (any more). The only solution that I know of is `rm -rf <cache>`.


How to find this on your own macOS drives: https://apple.stackexchange.com/questions/135565/how-do-i-ge...


It's gotta be a bug. Hopefully this will get on Apple's radar.

Edit: for what it's worth, here's my stats from my 8GB M1 MacBook Air after ~2 months of continuous heavy usage as my main work machine:

  Percentage Used:                    0%
  Data Units Read:                    4,622,487 [2.36 TB]
  Data Units Written:                 2,301,067 [1.17 TB]
Which feels reasonable. So doesn't seem like it affects everyone equally.


> on Apple's radar.

(For those not aware, Radar is the name of Apple’s internal bug filing system ;)


I believe access is given to outsiders too


Only if you’re a partner of some sort, and even then your access is going to be fairly limited.


Hmm... My M1 Air 16GB from December is an order of magnitude higher than that, but I do usually have 10 VS Code windows open, lots of Node instances, databases, Slack, and LOTS of browser tabs. Should I be worried?

  Percentage Used:                    1%
  Data Units Read:                    74,899,871 [38.3 TB]
  Data Units Written:                 71,233,417 [36.4 TB]


If you can, run iosnoop in background and note every program that writes 4kb chunks a lot.

These are wearing your SSD down.

And no, they will not be combined, because most of these shitty apps have the O_DIRECT flag set or call fsync() after each write(), obligating the OS to hit the SSD with a 4kb write.

SSDs don't operate that way, though: a 4kb write is guaranteed to become a bigger write, since an SSD cell/block is usually 512kb, 1mb, 2mb, or hell, even 4mb. So one 4kb write per second becomes 4mb per second, and that's 345.6GB/day.

Pretty scary how one shitty app can ruin your SSD so fast, huh? I saw the Google Drive app do 50 small writes per second. That's ~2TB/day.
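A minimal sketch of the I/O pattern being described, in Python (path and sizes are illustrative): each tiny write is immediately fsync()'d, so the page cache never gets a chance to coalesce records before they hit the SSD.

```python
import os
import tempfile

def fsync_heavy_log(path, records=100, record_size=4096):
    """Append `records` small chunks, forcing each one to stable storage separately."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        for _ in range(records):
            os.write(fd, b"x" * record_size)  # one small record...
            os.fsync(fd)                      # ...pushed to disk on its own
    finally:
        os.close(fd)

# The SSD-friendly alternative is to write the same data and fsync once at
# the end, letting the OS batch the records into far fewer physical writes.
path = os.path.join(tempfile.mkdtemp(), "log.bin")
fsync_heavy_log(path, records=10)
print(os.path.getsize(path))  # 10 * 4096 = 40960 bytes of logical writes
```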



That's really interesting. Doing some Googling, there seems to be almost no discussion of this issue. I would happily disable fsync() at the OS level for all apps at the expense of possible data corruption, as this is only a dev machine and everything important is backed up. I wouldn't know how to do this, though.


I've investigated this in some depth for my personal boxes with slow HDDs; in lieu of a more formal writeup, here is the best solution I've found. First:

  echo "write through" > /sys/block/sda/queue/write_cache
This lies to the OS and tells it that drive writes don't need to be fsync'd. Replace "sda" with whatever your drive in question is, naturally. Note that this is not persistent, you'll need to configure your init system to do this on boot. You can verify it's working by looking at /sys/block/sda/stat (see https://www.kernel.org/doc/html/latest/block/stat.html).

Next, in /etc/fstab configure your filesystem to be mounted with "barrier=0" (note that only some filesystems support this), which will often prevent data from getting written out to the disk at all, instead getting kept in cache.

You still need the first part because the filesystem layer won't cover all possible cases--for instance, LVM thin provisioning will issue a manual flush below the filesystem layer once per second, and there's no way to remove that.

One problem I haven't managed to solve is detection--in the unlikely event things don't shut down properly (e.g. a kernel panic), how do I find this out so that I can restore from a backup (rather than having something subtly corrupted somewhere)? This is conceptually easy with some global bit in a special sector used as a mutex, but I don't know of any existing off-the-shelf solution implementing this.
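One hedged sketch of that "global dirty bit" idea: a marker file created early at boot and removed only on clean shutdown. The path and hook points are hypothetical, and the marker would need to live somewhere that still has write barriers enabled, or it defeats the purpose.

```python
import os

def on_boot(marker):
    """Run early at boot. Returns True if the previous session ended uncleanly."""
    was_unclean = os.path.exists(marker)  # marker left behind => no clean shutdown
    with open(marker, "w") as f:
        f.write("booted\n")
        f.flush()
        os.fsync(f.fileno())  # the marker itself must genuinely reach disk
    return was_unclean

def on_clean_shutdown(marker):
    """Run as the last step before unmounting filesystems."""
    os.remove(marker)
```

If on_boot() returns True, the previous run may have left subtle corruption behind and a restore from backup can be scheduled.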


The equation is (months the machine was used for)/(percentage used / 100). Assuming you've used the machine for 2 months, that comes out to 200 months, or about 16 years left. I suspect you won't care by then.
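The arithmetic, spelled out (assuming wear continues at the same linear rate):

```python
# Back-of-the-envelope lifetime from the SMART "Percentage Used" counter.
months_used = 2
percent_used = 1
est_total_months = months_used * 100 / percent_used  # 200 months at 1% per 2 months
print(est_total_months / 12)  # roughly 16.7 years
```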


...assuming linear progression.


25% of the available space gone after four years could very well be a problem.

Also, if writes continue at the same rate then wear will increase exponentially, as the same number of writes are distributed over an increasingly smaller amount of space.


It’s percentage of available wear, not space.


My 3 month old 8gb MBP

  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    1%
  Data Units Read:                    53,339,111 [27.3 TB]
  Data Units Written:                 49,807,244 [25.5 TB]


Here's mine, 8GB M1 Air with 314 power on hours

  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    3%
  Data Units Read:                    231,234,195 [118 TB]
  Data Units Written:                 182,951,675 [93.6 TB]


btw. this is my macbook pro 16gb 2019 (intel) with ~800 hours:

    Available Spare:                    100%
    Available Spare Threshold:          99%
    Percentage Used:                    0%
    Data Units Read:                    91,665,449 [46,9 TB]
    Data Units Written:                 88,819,714 [45,4 TB]
heavy usage, with vm and i've written to the 1tb nvme disk multiple times and needed to clean stuff to get free space (multiple times)


> heavy usage, with vm and i've written to the 1tb nvme disk multiple times and needed to clean stuff to get free space (multiple times)

This is me all the time. Thank you for validating it's not just me.


check what's in iosnoop. I'm 100% sure it's tons of small (~4kb) writes all the time by some program.


My 3.5 year old 8GB MBP:

  Available Spare:                    79%
  Available Spare Threshold:          2%
  Percentage Used:                    15%
  Data Units Read:                    251,550,023 [128 TB]
  Data Units Written:                 228,761,896 [117 TB]


16gig model checking in with fairly heavy usage since late November

  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    1%
  Data Units Read:                    11,700,881 [5.99 TB]
  Data Units Written:                 38,044,983 [19.4 TB]


My 50 days old 16GB MBP:

    Percentage Used:                    0%
    Data Units Read:                    11 039 001 [5,65 TB]
    Data Units Written:                 8 341 136 [4,27 TB]
That would mean ~87 GB per day, every day. I have about 200 GB data on the drive; that would mean rewriting almost half of it every day.

I don't use it that heavily.

For comparison, my 2015 MBP has 37 TB written. Since 2015 and about the same usage pattern.


Adding to the data point, this is my 16GB Mac Mini 1TB I've been using as a primary computer since December (so ~3 months). I have been doing some development on this machine as well as porting packages for ARM for MacPorts. I also have Syncthing and Time Machine running.

    Available Spare:                    100%
    Available Spare Threshold:          99%
    Percentage Used:                    0%
    Data Units Read:                    9,664,566 [4.94 TB]
    Data Units Written:                 5,755,588 [2.94 TB]
This sounds to me like a write amplification issue caused by some software as mentioned elsewhere in the thread (first party or third party).


The post alleges that this is due to "aggressive swap". I suspect that the macOS developers at Apple know their target hardware, so I'm wondering (if this is the situation) whether the developers realized it, but the product was shipped anyway.

FWIW, even as a developer who mostly stays out of the kernel, swap has long been on my mind for Linux laptops (e.g., keeping pre-SSD spinning-rust hard drives sleeping), and I pretty much always disable swap, even on desktops. Part of the rationale is, if I can't fit all the processes in (now) several gigabytes of RAM, something probably needs an OOM-euthanizing.


Swap is most useful when you (almost) never look at the swapped-out data again.

That might sound stupid, but it occurs often enough in practice: e.g. code or data structures that were only used when starting up the program in question.

An optimally coded program wouldn't benefit from this. But real world programs often do.


I wouldn't link the usefulness of swap to the quality of programs. It has a lot to do with user behavior. If a user opens 100 tabs in a web browser but never looks at most of them, swapping unused ones to disk if they don't fit in RAM anymore is a very useful behavior (considering the alternative is killing the entire program).

You could argue an optimally coded program should do this themselves, however, that is not so easy when multiple programs are running concurrently and fighting for RAM. On an 8GB machine, how much RAM should your web browser use? The answer is "it depends". If there are no other programs running, it can use all 8GB. If you are running an IDE, a chat client and a compiler in the background, it should probably be using less.

The individual programs don't have enough information to make this judgement call - and making very conservative estimates will lead to much more swapping than actually required. The operating system is in charge of allocating memory for all those programs and can effectively make the judgement of what should be swapped out or not.


If you hit the swap, there is no disk cache, which is problematic in its own right.


Huh?

Swapping out unused stuff leaves more space for the disk cache.


Disk cache is just "free memory"; when swap is in use, there is no free memory.

Pretty much any OS (Linux, Windows, macOS) uses unallocated memory for disk cache. Swap also consumes RAM for page descriptors.


> [...] when swap is used there is no free memory.

Well, that's not true. In Linux you can have pages swapped out to disk and physical memory used for disk cache at the same time.


I always disabled swap on Linux (back when I was using HDDs) because it was completely useless anyway. As soon as you ran out of RAM and it started using swap the entire system would grind to a halt and you'd basically have no choice but to restart it, killing all processes rather than just random ones.

There doesn't seem to have been any effort by Linux developers to fix that (e.g. provide a kernel level GUI to let you pick which processes to kill when out of RAM).


> There doesn't seem to have been any effort by Linux developers to fix that (e.g. provide a kernel level GUI to let you pick which processes to kill when out of RAM).

There is no and will be no GUI for that, especially considering that many linux devices do not have any.

However, that does not mean there is no effort to improve the situation. The main player here is, surprisingly, Facebook. Their effort is going to be integrated into systemd in the form of systemd-oomd and Fedora 34 is going to ship it enabled out of the box (see https://fedoraproject.org/wiki/Changes/EnableSystemdOomd).


> especially considering that many linux devices do not have any

Why not? Many Linux devices don't have keyboards but it supports those perfectly well.

I hadn't heard of systemd-oomd, sounds interesting!


Unfortunately disabling swap doesn't really fix the problem you're talking about -- Linux will just grind to a halt thrashing the page cache instead.


You may want to reconsider whether you want swap disabled:

http://jdebp.uk./FGA/dont-throw-those-paging-files-away.html


On the subject of swap and SSD writes: I have previously reported many issues to Apple over the years, the earliest going back to 2016. And of course I heard no feedback, as usual.

You may want to check Activity Monitor > Disk > kernel_task for Disk Write. This shows how much swap has written, along with writes from other apps, since your last reboot. You can check your last reboot time in System Information > Software.

Currently launchd has written 50GB, which I have no explanation for. corespotlightd has written 70GB, SafariBookMarkAgent 4GB, and kernel_task 4.5TB.

These numbers are over the course of 30 days.

Generally speaking, the Mac has been very swap-heavy for a long time. And Big Sur seems to be pushing this further; there is evidence that this isn't specific to the M1.

In Safari, if you have lots of tabs, clicking Tab Overview will force all tabs to reload, which can write hundreds of GB of data, especially if the tabs were originally sitting idle (i.e. you close Safari and reopen it, where tabs not focused are not loaded).

If you have iCloud Drive, there may be cases where you are downloading a few hundred GB of iCloud data a day, if you have certain apps that constantly update their files. WhatsApp Backup used to (and may still) cause this problem if you have a large WhatsApp history: WhatsApp downloads a 50GB copy, only to realise there is a new copy on iCloud, and re-downloads it, many times per day.

Assuming some tweaked options, iCloud could be downloading all of your iPhone/iPad backups to your Mac as cache. Again, if you have 100GB of backups and that is downloaded once or twice per day, it adds up.

Disabling iCloud Drive tends to help.

Last time it was Electron apps, or specifically Spotify, writing hundreds of GB per day due to a bug.

But as usual, Apple apologists around the world are quick to point out these are non-issues. Luckily HN still has some sanity. Hopefully this is enough to finally push Apple to fix it. Windows and Linux don't show the same write counts even when the usage patterns are roughly the same.


Best part is that the SSD is soldered into the board and it can't be replaced.

Even worse: for the M1 Macs, when the SSD dies the Mac becomes bricked, and if you have no backups the data on the dead SSD is lost.

A typical web dev may rely on running a couple of Electron apps, Docker, IntelliJ, and lots of Chrome tabs to get their work done. Alongside everyday use, it looks to me like the life of this SSD is going to be shortened very quickly.

I already said this before [0] and to them it seemed 'not an issue.' Well, here we are, and this thread is full of other concerned users.

[0] https://news.ycombinator.com/item?id=26150405


Adding to this,

My mid-2014 MBP has the kernel_task heavy-write issue. It has written over 15TB in the last 61 days. The new NVMe disk I installed two months ago is already at 2% wear:

  Available Spare:                    100%
  Available Spare Threshold:          10%
  Percentage Used:                    2%
  Data Units Read:                    45,859,839 [23.4 TB]
  Data Units Written:                 44,541,716 [22.8 TB]
  Host Read Commands:                 296,573,912
  Host Write Commands:                218,840,299
  Controller Busy Time:               6,954
  Power Cycles:                       138
  Power On Hours:                     1,662
A quick Google directed me at:

  - Spotlight indexing packages folders (npm, pip etc). Add those to the "privacy" tab in the spotlight settings to skip indexing.
  - The battery and or logic board may be failing.


I've definitely seen behavior like this on Mojave on my old 2012 MBA: terabytes of data written to the (128GB!) drive over the course of the machine's uptime.

There was no iCloud Drive on it, Safari not generally in use. Memory pressure wasn't extraordinarily high, but it wasn't a well specc'd machine for the current day.



And in turn that's just a link to the LTT forums[0], so what's your point?

[0] https://linustechtips.com/topic/1306757-m1-mac-owners-are-ex...


I had this exact issue and had to disable SIP to disable swap, because I was swapping over 2TB a day. Everyone I chatted to online about it fell over themselves to tell me I was an idiot, and that I somehow needed swap even though I have 16GB of RAM.

Needless to say I'm having a little chuckle. It's a lesson for us all about blindly believing things.


Did you notice any performance hit after disabling swap? Even 16 GB of RAM doesn't seem enough ...


Not a thing, though I didn't benchmark! Only stupid thing is you can't run iOS apps while SIP is disabled, and re-enabling SIP turns swap back on. iOS apps aren't something I care about so it doesn't really affect me.


Intel iMac I bought in november 2020 (~3 months of use) with 128GB of RAM, so it never swaps:

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        42 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    0%
  Data Units Read:                    95,824,868 [49.0 TB]
  Data Units Written:                 89,046,642 [45.5 TB]
  Host Read Commands:                 1,365,215,816
  Host Write Commands:                837,815,676
  Controller Busy Time:               0
  Power Cycles:                       145
  Power On Hours:                     604
Looking at iosnoop, I see Dropbox, Google Drive, Backblaze, Arq, and other programs heavily abusing my SSD with tons of small (4kb) writes.

With age of 105 days, that's 433 GB/day.

My SSD's block size is 45.5TB/89046642 which is ~512kb.

433 GB/day divided by blocksize is 9.8 hertz, meaning I've had apps write ~10 times per second on average.

This corresponds with what I'm seeing in iosnoop. Tons of third-party apps open files with the O_DIRECT flag or call fsync() right after each write. SQLite databases are the most common culprit, since by default SQLite flushes to disk on every small insert/replace. And every SQLite database I've seen from a third-party app just uses the defaults (none of them use WAL, for example).

The OS has to flush that to disk if O_DIRECT or fsync() is used, leading to write amplification.

https://www.sqlite.org/atomiccommit.html -- Each insert goes into the journal first (4kb), then the index gets updated (another 4kb), and then it gets committed to the main database (another 4kb). There might be more, but what I'm observing in iosnoop comes in multiples of 3.

And since I'm on 512kb SSD blocks, that's 1.5MB per single insert/update.

I forced every SQLite database on my disk to WAL mode and changed synchronous from the default (FULL) to NORMAL, and it reduced I/O activity greatly, without any ill side effects in months. Most likely the developers of these apps aren't even aware of these tunables:

https://www.sqlite.org/pragma.html#pragma_journal_mode https://www.sqlite.org/pragma.html#pragma_synchronous
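The same tweak can be reproduced per-database from Python's built-in sqlite3 module. Note the asymmetry: journal_mode=WAL is recorded in the database file and persists, but synchronous is per-connection, so an app using defaults will still run at FULL unless it sets that pragma itself.

```python
import sqlite3

def tune_db(path):
    """Switch a SQLite database to WAL journaling and relaxed sync."""
    con = sqlite3.connect(path)
    try:
        # journal_mode=WAL is persistent: it is stored in the file header.
        mode = con.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
        # synchronous=NORMAL applies only to this connection.
        con.execute("PRAGMA synchronous=NORMAL;")
        return mode  # "wal" if the switch succeeded
    finally:
        con.close()
```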


4k (Page size) writes with fsync is the standard way to commit database pages reliably. My understanding is that modern SSDs also have the 4K page as the minimum unit of input. Can you refer me to the documentation or article that shows a page write causes a whole block to be written?


How did you change every sqlite db running on your system?


  #!/usr/bin/env bash
  # Switch every SQLite database given as an argument to WAL journaling.
  
  WANTMODE=wal
  
  for file in "$@"; do
      # Skip databases already in the desired mode.
      MODE=$(sqlite3 -init <(echo .timeout 10000) -batch "$file" 'PRAGMA journal_mode;')
      if [[ "$MODE" == "$WANTMODE" ]]; then continue; fi
      OLDSIZE=$(du -kc "$file" "$file-wal" "$file-shm" 2>/dev/null | tail -n1 | cut -f1)
      echo -n "$file: "
      echo "PRAGMA journal_mode=$WANTMODE;" | sqlite3 -init <(echo .timeout 10000) -batch "$file"
      if [[ $? != 0 ]]; then continue; fi
      NEWSIZE=$(du -kc "$file" "$file-wal" "$file-shm" 2>/dev/null | tail -n1 | cut -f1)
      echo "${OLDSIZE}kb -> ${NEWSIZE}kb"
  done


Got my base M1 mini with 8GB of RAM and 256GB SSD on February 13th. With just simple content consumption (browser, Spotify, messaging) as of now it has written over 20TB of data! That is insane.


This may be related:

I had to switch off Time Machine on MacbookHD (main disk) by excluding it in the Time Machine preferences, and reinstall the OS to erase the previous copies it had made, all ~400GB of them (on my 499GB disk). As the user, I was never asked for permission to allow Time Machine to use up to 80% of available disk capacity.

It kept backing up copies of the data until I noticed I had only 8GB of free space left, and I only noticed because QuickTime screen recording started crashing due to insufficient disk space. Basically, Apple's decision is to use up to 80% of the available disk space to save copies of the data on that disk without user permission, and because it checks and reconciles the amount it's using only every so often, you can easily end up with 0% free capacity if you e.g. record a lot of videos in between.


This is extremely good to know, thank you - I have turned off my M1 Mini and probably won’t be booting it until Apple announces a fix and explanation about the hard drive issue.

I am not quite sure if this Time Machine business is directly related (though it might be, due to poorly optimized rewrites, particularly if there are outstanding hardware shenanigans with encryption on M1s). It is baffling that the disk usage appears to be a deliberate decision.


Time Machine should clear out old snapshots when pressed for disk space; that's very strange.


It didn't do that for some reason. Also, I could not remove the snapshots from the command line or force TM to thin them. I had to reinstall the OS.


I'd be willing to bet that in looking deeper you'd find it's Spotify doing the lion's share of that.


Can we ask them not to do it somehow?


Spotify was caught before (in 2016) doing exactly that -- https://arstechnica.com/information-technology/2016/11/for-f...

Only after publication on news sites they fixed it.

Won't be surprised if the bug regressed and happened again.

I don't use Spotify (not available in my country), so I wouldn't know, but I bet half of the reports are because of some badly written third party app constantly writing in 4kb chunks with O_DIRECT or fsync() all the time.


You'd have to verify I'm right, but this page describes how to change the cache location: https://support.spotify.com/us/article/storage-and-data-info...


The entire drive 8 times over per day? That can't be right. With 8 hours of usage per day, that's 70 MB/s constant writing. Really?


I don’t switch it off, so that I can have remote access to it, but 20TB in 10 days sounds like a lot.

Also worth noting that I see my swap fluctuating from 6-8GB almost all the time (granted I have multiple browser tabs and windows, maybe like 10-30 ish ?)


update

From Activity Monitor, uptime 1 day & 6 hours (GB written - GB read):

  kernel_task:           418 - 84
  Microsoft Edge Helper: 23 - 4
  launchd:               19 - 1
  mds_stores:            9 - 9
  photolibraryd:         4 - 8

In the last 24 hours I have not touched the mini, I just switched it on yesterday to check something online and switched off the monitor.


I've created a Google form to collect some data - https://forms.gle/ksV68PicxdPvgjzt8 Feel free to help crowdsource some data on the issue.


Baseline M1 MBA, about 1 month of regular usage

  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    0%
  Data Units Read:                    9,670,166 [4.95 TB]
  Data Units Written:                 6,898,714 [3.53 TB]
  Host Read Commands:                 75,916,955
  Host Write Commands:                41,914,954
  Power Cycles:                       80
Fwiw, my swap tends to hover around 2.5-3gb lately


150 TB of data in 2 months is hopefully not correct. Something is wrong.


Yeah. My 2017 MacBook Pro (256 GB Apple SSD) reports:

  Available Spare:                    79%
  Available Spare Threshold:          2%
  Percentage Used:                    15%
  Data Units Read:                    251,550,023 [128 TB]
  Data Units Written:                 228,761,896 [117 TB]
But this is after 3.5 years of use.


Yea, my MacBook Pro, 2018

  Percentage Used:                    2%
  Data Units Read:                    84,461,830 [43.2 TB]
  Data Units Written:                 69,369,816 [35.5 TB]
  Host Read Commands:                 2,125,724,765
  Host Write Commands:                1,440,781,009
  Controller Busy Time:               0
  Power Cycles:                       117
  Power On Hours:                     1,253


YouTube is/was a disk killer for some reason. I used to listen to a lot of music on it, but the constant heavy disk usage put me off. 16GB on Linux, zero swapping, Chrome browser. Switched to old-school MP3 playing via Audacity and never looked back.

You can see what is wearing your disk in the process/activity manager.

Could the M1 behaviour be related to Chrome + YouTube/Netflix/Spotify usage?

2c


For comparison, my 2017 iMac with 1 TB SSD gives:

  Available Spare:                    100%
  Available Spare Threshold:          10%
  Percentage Used:                    1%
  Data Units Read:                    109,446,813 [56.0 TB]
  Data Units Written:                 68,569,966 [35.1 TB]
  Host Read Commands:                 2,922,206,492
  Host Write Commands:                2,180,242,949
  Controller Busy Time:               4,463
  Power Cycles:                       13,345
This is after 40 months, during which it has been my main computer for both home and work. So about 10.5 TB of writes per year.

Prior to the iMac I had a 2008 Mac Pro for work and a 2009 Mac Pro for home. Those were doing about 7.8 TB per year of writes combined, but that includes an SSD on the home one that was used for Time Machine. Subtracting out that, it was about 5.2 TB per year combined writes, so about half of what I do on the iMac.
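The per-year figure above is easy to reproduce from the raw counter (NVMe data units are 512,000 bytes each, i.e. 1000 x 512-byte sectors, per the NVMe spec):

```python
# Yearly write rate from smartctl's "Data Units Written".
# One NVMe data unit = 512,000 bytes (1000 * 512-byte sectors).
units_written = 68_569_966   # from the smartctl output above
months_in_service = 40

tb_written = units_written * 512_000 / 1e12
tb_per_year = tb_written * 12 / months_in_service
print(f"{tb_written:.1f} TB total, {tb_per_year:.1f} TB/year")
```

This matches both the [35.1 TB] smartctl reports and the ~10.5 TB/year quoted above.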


Perhaps setting the noatime flag might help?

Reducing Disk IO By Mounting Partitions With noatime https://www.howtoforge.com/reducing-disk-io-by-mounting-part...

OS X - Setting No-Access-Time on OS X SSD Volumes https://dpron.com/os-x-noatime-multiple-ssds/

Optimizing MacOS X Lion for SSD http://blog.alutam.com/2012/04/01/optimizing-macos-x-lion-fo...

Mac OS X SSD tweaks https://www.icyte.com/saved/blogs.nullvision.com/441781


It'll reduce writes, but it doesn't explain the huge numbers, considering atime updates are only ever one block.


Page size is now 16KB on ARM64 macOS, not 4K as on Intel.

Maybe the SMART tools are miscalculating?


Page size is variable on aarch64. You can even have 1GiB page if you like.


That's interesting, it seems to be 16KB on macOS implementations though.
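This is easy to check locally; a minimal sketch (on Apple-silicon macOS this should print 16384, on Intel Macs and most x86 Linux boxes 4096):

```python
import os

# Ask the OS for the userspace VM page size.
page = os.sysconf("SC_PAGESIZE")
print(page)

# Note: smartctl's "Data Units" are fixed 512,000-byte units per the
# NVMe spec, independent of the CPU page size, so a 16K page by itself
# shouldn't inflate the counters -- though it could change how much
# the kernel writes per swap-out.
```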


I purchased my M1 on Feb 16... haven't used her all that much since I have a 16" Pro for work. Does this look bad?

  Percentage Used:                    0%
  Data Units Read:                    5,406,703 [2.76 TB]
  Data Units Written:                 5,381,633 [2.75 TB]
  Host Read Commands:                 27,309,330
  Host Write Commands:                25,822,612


From reports this isn't specific to M1 Mac users.

If you're on a Mac you can run "sudo fs_usage" to see what is writing to/from your disk.


Thanks! I have run this command now (not on an M1 but on a 2018 MacBook Pro), and I'm reporting that 31% of my file system accesses were coming from ESET Security (namely esets_daemon, esets_proxy, ERAAgent, esets_gui), and 19% from Chrome. I'm actually using Chrome, but if ESET degrades the SSD lifespan by 31%, I'm gonna be annoyed.


M1 can't run pre-11.0 (Big Sur), so this might be a Big Sur issue.


Years ago I had a 2015 MacBook Pro that wrote daily swap in the terabyte range for about two weeks. Not a single incident, but consistently 1-3 TB/day. Performance wasn’t impacted at all, so I only noticed by chance when I looked at the activity screen for other reasons.

It persisted through restarts and was only fixed when I reinstalled the system (and all my application software; my setup remained the same, so no idea what it could have been).

With a few assumptions about the built-in SSD I concluded that I’d be above the most likely applicable TBW within 200 days!


Just to add some more data here, M1 MBA 16GB Ram / 256GB SSD which I've been using for almost exactly two months:

    SMART/Health Information (NVMe Log 0x02)
    Critical Warning:                   0x00
    Temperature:                        34 Celsius
    Available Spare:                    100%
    Available Spare Threshold:          99%
    Percentage Used:                    0%
    Data Units Read:                    5,315,871 [2.72 TB]
    Data Units Written:                 2,249,290 [1.15 TB]


What is your most used software? Are you using Firefox or Chrome?


This is my personal machine, so I'm primarily using it nights/weekends, lately for tinkering with embedded development projects and web browsing.

I was using Firefox, but have recently switched Safari full time. It just seems to run better and isn't as brutal on the battery.


Thanks. So far it actually looks like Firefox is somewhat heavy on writes.


macOS: how to disable Homebrew analytics. In case you are not aware, or do not want to send Homebrew analytics, here is how to turn it off:

  brew analytics off

"According to their GitHub page, Homebrew’s anonymous user and event data have a 14 month retention period, which is the lowest possible for Google Analytics."

https://www.macobserver.com/tips/quick-tip/disable-homebrew-...


These readings have to be wrong; is swapping on the M1 really that different? My last-generation Intel Mac shows 0% used and it's nearly one year old (yes, I use it nearly daily).


It's probably macOS doing something differently when run on an M1.


From the sounds of it, the same reading they are using to suggest the M1 will only last 1-2 years has been observed on older Macs. So this whole thing seems overblown.


This is completely anecdotal with nothing to back it up, but it seems like I swap far more aggressively when I have x86 processes running... not sure if this is related to Rosetta 2.


My gut feeling is that this is an error in the reporting tool, or perhaps more specifically in the interpretation of the results.

I think it is highly likely that these accesses are hitting some sort of cache (possibly page files, SRAM, etc.), not actually hitting the TLC/QLC part of the storage.

Obviously I could be wrong, but I’d be surprised if Apple made this big of a mistake and didn’t catch it internally before now. If they did, then there are some serious quality issues in their process.


14TB written on my 4-week-old M1 MBP with 256GB. Ridiculous.


Many iOS apps use Realm DB which is memory mapped and calls fsync on every write. This wasn’t obvious to iPhone/iPad users but will wear out SSDs very quickly.


2.5 year old 32GB RAM Macbook Pro on Intel for contrast (developer workload over that timeframe):

    Percentage Used:                    4%
    Data Units Read:                    151,520,338 [77.5 TB]
    Data Units Written:                 102,518,209 [52.4 TB]
    Host Read Commands:                 3,014,199,390
    Host Write Commands:                2,371,675,100


3.2 year old 16GB MBP w/ 1TB on Intel (heavy developer workload):

  Percentage Used:                    5%
  Data Units Read:                    498,780,988 [255 TB]
  Data Units Written:                 265,225,887 [135 TB]
  Host Read Commands:                 6,633,264,962
  Host Write Commands:                2,911,463,422


How does AppleCare deal with SSD issues like this? I have heard that for batteries, they'll replace if the maximum usable capacity is below 80% of what it should be. I assume they've had a similar policy for SSD? Arguably the SSD is even more critical than the battery, because you can at least hypothetically use a laptop plugged in.


AppleCare only lasts 3 years, so I imagine it's a pretty rare issue. You'd get a new logic board.


Yes, I imagine it has been very rare in the past. I'm wondering if anyone has come across the issue before (or has a friend who works at Apple and knows the rules). I would hope that if they give you a new logic board that they'd at least be able to transfer your data. I know that they always make you sign a waiver saying that you've backed everything up and if you lose the data that's fine. Hopefully they would at least try to transfer the data for you, if this turns out to be a bug like it sounds like.


I'm going to check the stats from my M1 Macs, but here are the numbers from the last MBP before the 16-inch: a 2019 15" MBP.

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        32 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    2%
  Data Units Read:                    79,321,757 [40.6 TB]
  Data Units Written:                 63,313,296 [32.4 TB]
  Host Read Commands:                 2,312,199,690
  Host Write Commands:                1,098,534,950
  Controller Busy Time:               0
  Power Cycles:                       130
  Power On Hours:                     1,213
  Unsafe Shutdowns:                   52
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      0

How does this fare in comparison?


If linked to swap issues, this article details how to disable swap (at the risk of having processes killed if you run out of RAM):

https://windsketch.cc/macbook-disable-swap/

I hope this HN thread prompts Apple teams to investigate and solve the issue.


Stats for my heavily used MacBook Pro (15-inch, 2018)

  SMART/Health Information (NVMe Log 0x02)  
  Critical Warning:                   0x00  
  Temperature:                        37 Celsius  
  Available Spare:                    100%  
  Available Spare Threshold:          99%  
  Percentage Used:                    2%  
  Data Units Read:                    45,932,352 [23.5 TB]  
  Data Units Written:                 48,394,214 [24.7 TB]  
  Host Read Commands:                 1,069,241,760  
  Host Write Commands:                698,888,162  
  Controller Busy Time:               0  
  Power Cycles:                       137  
  Power On Hours:                     661  
  Unsafe Shutdowns:                   48  
  Media and Data Integrity Errors:    0  
  Error Information Log Entries:      0

Now my battery health is a different story...


1 week old M1:

    SMART/Health Information (NVMe Log 0x02)
    Critical Warning:                   0x00
    Temperature:                        25 Celsius
    Available Spare:                    100%
    Available Spare Threshold:          99%
    Percentage Used:                    0%
    Data Units Read:                    933,459 [477 GB]
    Data Units Written:                 670,152 [343 GB]
    Host Read Commands:                 15,779,474
    Host Write Commands:                7,614,634
    Controller Busy Time:               0
    Power Cycles:                       78
    Power On Hours:                     6
    Unsafe Shutdowns:                   3
    Media and Data Integrity Errors:    0
    Error Information Log Entries:      0


In (light) use since mid-Dec 2020:

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        26 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    0%
  Data Units Read:                    4,167,081 [2.13 TB]
  Data Units Written:                 2,807,840 [1.43 TB]
  Host Read Commands:                 57,601,178
  Host Write Commands:                45,778,010
  Controller Busy Time:               0
  Power Cycles:                       168
  Power On Hours:                     43
  Unsafe Shutdowns:                   8
  Media and Data Integrity Errors:    0
  Error Information Log Entries:      0


My 256GB SSD/8GB MBA, approaching three months:

  Percentage Used:                    0%
  Data Units Read:                    5,170,416 [2.64 TB]
  Data Units Written:                 2,977,938 [1.52 TB]
  Host Read Commands:                 89,184,979
  Host Write Commands:                34,892,977

Mostly being used for building web apps, somewhat heavy video editing loads, and messing around in python. Using a lot of different software, much of it (entire Adobe Suite) via rosetta. I'm not sure what to make of these stats yet.

Maybe we should consider listing which OS version everyone is using? I never updated from 11.0.1 (latest Big Sur is 11.2.1), and these stats seem comparatively low.


M1 air used mostly for VSCode and web browsing. I regularly see swap in the 1-3 GB range. I think I have < 100 hours of usage.

  SMART/Health Information (NVMe Log 0x02)
  Critical Warning:                   0x00
  Temperature:                        31 Celsius
  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    0%
  Data Units Read:                    8,236,219 [4.21 TB]
  Data Units Written:                 6,393,656 [3.27 TB]
  Host Read Commands:                 86,536,748
  Host Write Commands:                37,609,250


MacBook Pro (13-inch, 2020, Four Thunderbolt 3 ports) (1 year old)

  Percentage Used:                    0%
  Data Units Read:                    23,095,006 [11.8 TB]
  Data Units Written:                 22,728,206 [11.6 TB]
  Host Read Commands:                 436,520,152
  Host Write Commands:                414,121,166


M1 MBA 8GB/256GB, mostly Safari with ~20 tabs on average, plus YouTube; no Spotify.

  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    0%
  Data Units Read:                    25,160,936 [12.8 TB]
  Data Units Written:                 23,041,292 [11.7 TB]
  Host Read Commands:                 207,408,853
  Host Write Commands:                98,285,538
  Controller Busy Time:               0
  Power Cycles:                       173
  Power On Hours:                     101


I guess Apple will disable these statistics for users, similar to what they did with the "battery time remaining" estimate back in the day (not that long ago :P).

Written from a Mac mini M1, 8GB/256GB.


My Macbook M1 Air (6 days old)

  Available Spare:                    100%. 
  Available Spare Threshold:          99%
  Percentage Used:                    0%
  Data Units Read:                    4,645,245 [2.37 TB]
  Data Units Written:                 4,045,006 [2.07 TB]
  Host Read Commands:                 30,123,041
  Host Write Commands:                17,403,647
  Controller Busy Time:               0
  Power Cycles:                       85
  Power On Hours:                     18


A dumb question regarding flash programming: assume the erase block is 16MB and there is a fresh block to write to. If I slowly append (not modifying existing data) to the block, do subsequent writes require the whole block to be erased and reprogrammed?

The use case is append-only logs: macOS constantly generates tiny log lines and those must go somewhere. Does write amplification make logging a bad use case for SSDs?
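To the first question: NAND pages within an erase block can generally be programmed in sequence without erasing, so pure appends to fresh pages don't force block erases. The cost shows up when each tiny synced append forces a whole page program. A rough sketch, assuming a hypothetical 16 KB flash page and 200-byte log lines:

```python
# Worst case: every log line is fsync'd individually, so each ~200-byte
# append costs a full page program (no erase is needed while the block
# still has fresh pages, but the bytes written are amplified anyway).
page_size = 16 * 1024
log_line_bytes = 200

worst_case = page_size / log_line_bytes
print(f"worst-case amplification: {worst_case:.0f}x")  # 82x

# If the OS batches lines and flushes once per full page instead,
# amplification drops to ~1x -- which is why buffered logging matters.
```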


I have the cheapest one and I have much better stats, and I use it for development, pretty much compiling everything all the time: Rust, rm -rf node_modules, that kind of stuff, every day, 12-16 hours, for two months. Looks good.

  Percentage Used:                    0%
  Data Units Read:                    10,224,513 [5.23 TB]
  Data Units Written:                 9,526,626 [4.87 TB]


Is there a way to get the drive SMART status without having to install 3rd party tools that run as root (or another safe way, e.g. through a VM or rolling back an APFS snapshot afterwards, etc.)?

I wouldn't mind writing a bit of code either to get these stats. I have a 2020 Intel Mac Mini and an old MacBook Pro with a Samsung Evo 840.


14 month old Dell Inspiron with 512GB nvme:

    SMART/Health Information (NVMe Log 0x02)
    Critical Warning:                   0x00
    Temperature:                        30 Celsius
    Available Spare:                    100%
    Available Spare Threshold:          50%
    Percentage Used:                    4%
    Data Units Read:                    7,993,097 [4.09 TB]
    Data Units Written:                 10,644,620 [5.45 TB]
    Host Read Commands:                 93,990,757
    Host Write Commands:                215,449,508
    Controller Busy Time:               1,499
    Power Cycles:                       107
    Power On Hours:                     8,203
    Unsafe Shutdowns:                   21
    Media and Data Integrity Errors:    0
    Error Information Log Entries:      0
    Warning  Comp. Temperature Time:    0
    Critical Comp. Temperature Time:    0
    Temperature Sensor 1:               30 Celsius
    Thermal Temp. 1 Transition Count:   85
    Thermal Temp. 2 Transition Count:   74
    Thermal Temp. 1 Total Time:         340
    Thermal Temp. 2 Total Time:         122


Hackintosh?


Apple devices have always lacked RAM, especially iPhones and iPads. MacBooks used to be upgradeable, but now, with everything soldered to the PCB, that is no longer the case... so keep that in mind and buy new Macs with the highest RAM option, otherwise your device may become obsolete sooner.


M1 MBA 8/256 after ~2 months:

  Percentage Used:                    6%
  Data Units Read:                    163,709,197 [83.8 TB]
  Data Units Written:                 180,048,189 [92.1 TB]
  Host Read Commands:                 2,225,703,049
  Host Write Commands:                1,729,134,696


Haven't seen this mentioned: I wake up to a double-digit-GB Finder process every other day. I leave it plugged in during the night. Wear is 3%, or 48TB of writes.


What Finder extensions do you have enabled? Disable them all and see if the behaviour persists.


I don't have any as far as I can tell, but some 3rd party tool that interacts with Finder in some way could be the culprit..


Never buy Apple's first generation... of anything. This goes for computers, watches, phones... cars.

It has been a maxim for decades, and it holds true even today.


iPad Pro V1 was ok, except for the smart keyboards dying. Anyone know where they can still be bought?


Over the last few years the WiFi daemon has insisted on writing content to /var/log/wifi.log and friends, and there’s no way to turn it off.


M1 hardware can obviously swap to SSD without involving the OS much, and not at all on the prefetch/swapin path. That’s impressive.


Crazy: day-one M1, 8GB RAM with 512GB SSD here. This laptop has never seen Spotify and Safari is my main browser; no compilation or other heavy RAM use either.

  Available Spare:                    100%
  Available Spare Threshold:          99%
  Percentage Used:                    0%
  Data Units Read:                    44,414,705 [22.7 TB]
  Data Units Written:                 38,739,583 [19.8 TB]
  Host Read Commands:                 769,910,444
  Host Write Commands:                421,693,825


Day one as in bought on release day or that you’ve had it for one day?


Why is "Available Spare Threshold" showing as 99% on these M1 SSDs? I believe normal SSDs are more on the order of 10%.


How do I find this info? smartctl was not found.


On macOS: brew install smartmontools

On Arch Linux: sudo pacman -Syu smartmontools


Without installing brew?


You need to install homebrew


These SSDs are going to be dropping like flies unless Apple produces a firmware update soon... has Apple responded?


The real problem here is that the M1 MacBooks do not have a replaceable SSD. Even without this issue, SSDs do die sometimes, and on every non-Apple laptop you just buy a new M.2 SSD and everything is all good.

I have had macbook SSDs die in the past and thanks to it being a swappable module back then, it was no big deal.


> The real problem here is that the M1 MacBooks do not have a replaceable SSD. Even without this issue, SSDs do die sometimes, and on every non-Apple laptop you just buy a new M.2 SSD and everything is all good.

In the past, when the SSD died on these older MacBooks, you just replaced it with a new one. Very important for long-term use once the machine eventually goes out of warranty, since there would be no need to go to the Apple Store to get it replaced for a fee.

On the M1 MacBooks, that's it: it's bricked, your data is lost, and there's no way to recover it (if you didn't back up in advance). You might as well buy another one.

In my case, I will continue to use my old MacBook and wait until Apple fixes the excessive writes or swapping issues in a newer version of macOS. By then I would be getting a MacBook with a newer M1X or M2 processor that doesn't have the excessive swapping issues, and the software ecosystem for Apple Silicon would already have caught up and been optimised for the processor.

I said it before with Apple products [0] [1] [2] [3], I always ignore the first generation unless you want to be a bleeding edge beta tester trying to get work done or you're after collector editions.

[0] https://news.ycombinator.com/item?id=25066248

[1] https://news.ycombinator.com/item?id=23638202

[2] https://news.ycombinator.com/item?id=22852633

[3] https://news.ycombinator.com/item?id=22300515


What tools are you using to measure this?


For such small drives, why can't they just splurge a little and put some high endurance SSDs in?


M1 MacBook, 512GB SSD here. Anyone mind sharing their total "real" space? Mine is 494.38GB.

Thanks!


Wow, the SSDs are soldered. That's a big yikes from me. Soldered RAM is bad enough.


I just realized M1 SSDs are irreplaceable... Really, this is wrong on so many levels.


Why hasn't anyone looked at the source code for smartmontools yet?


Non-removable SSD's. Just, no; never, thank you.


Maybe there’s a common application at fault here?


How do you get this print out of the disk usage?


You can use:

  brew install smartmontools
  sudo smartctl --all /dev/disk0

to get this data.


This is total trash, I've had my M1 MacBook Air for >2 months and it shows 0% used.

I've got the 512GB SSD / 16GB ram model.

I'm using it HARD for 12 hours a day so YMMV...


14TB in 2 months here (1TB/16GB)


That is most likely due to swapping.


So glad I read this. I was about to buy a Mac Mini M1. Now I'm holding off until this issue is clarified or solved.


I hope this doesn't get "fixed" in the next update by making it impossible to run smartctl.


I was going to comment, "That's a stupid thing to say" but then realised who we are talking about ...


Why am I surprised this is posted on LTT? He and his fans are so anti-Apple it's nauseating.


Has Apple responded? This is not a small issue..


Built in obsolescence?


Www


This looks overtly malicious. There's really no excuse for people to be seeing over 30TB of writes to their drives if they only use their system for browsing the web or dev work.

My main NVMe drive's stats, after almost a year of ownership, after 3 OS installs (windows and linux, though I moved windows to a smaller less used sata ssd a few months ago) and frequently moving big files on/off network storage, installing games, and dev work for small personal projects etc:

  Available Spare:                    100%
  Available Spare Threshold:          10%
  Percentage Used:                    0%
  Data Units Read:                    200,214,119 [102 TB]
  Data Units Written:                 27,304,663 [13.9 TB]
  Host Read Commands:                 616,298,176
  Host Write Commands:                387,970,043


It is just a bug that will be fixed. This literally happens with all kinds of systems and software.

In case some SSDs have already taken serious damage, they will probably be replaced for free, as usual.


I really don't understand how people tolerate these massive stacks of closed software that tend to have extremely opinionated and forced automatic updates, much less pay a premium for it. If I started a company today everyone would probably get something like the system76 laptops.




Nice, LTT made it to the #1 of HN. :clap:


And this is Exhibit A on why I don't want any computer I spend a lot of money on to have a non-replaceable SSD. Inevitably, it will turn into a brick, or at least a poor excuse for a laptop that only runs off external drives.


I wonder if it's swapping a ton, especially on machines with 8G RAM?

8G is pathetic for 2021 anyway. Would never go under 16G for a serious machine.


This might be the case, as my 8GB M1 MBP has seen a lot of wear on the SSD. That being said, the machine has been super fast, and I’m running Docker all day, Emacs, Spotify, Node, Firefox with 5-10 tabs. I’ve almost never run into issues; I just close apps when I’m not using them.


Perhaps not a "serious machine" (that's subjective), but it is well specced for its intended audience, and reviews are generally very positive.


Have to agree, I am using a Thinkpad T420 released in 2011 with 16GB. My upgrade will hopefully have a 32GB capacity.


I have 32 GB and rarely use the second 16.


I have 16GB, I haven't done much today, I use Safari with 7 tabs and have Spotify and GitHub Desktop open.

Memory used: 10GB

Uptime: 1d

I don't know, it feels like OP is right. 16GB is not enough in 2021, especially since my computer from 2013 already had 16GB.

Unfortunately this is where Apple decided to make money, so we have brand-new 8GB $1000 devices in 2021. Thanks Apple.


With 24GB I rarely needed it all, but a good portion was cache/buffers, i.e. ~10GB. Of course you can just swap to SSD and get pretty much the same performance... until you don't.


So I have a MacBook with 16GB RAM for work, and an M1 MacBook with 8GB. Normal usage on the Intel MacBook is ~10GB; on the Air, with a similar workload, it's ~5GB. Not apples to apples for sure, but there is some magic going on with the M1. Perhaps the different instruction set?


On the other hand, I bought into the hype that the 8GB was fine for pretty much everyone.

I had an M1 Air with 8GB of RAM and ended up having to buy an M1 Air with 16GB, since the swapping during my normal workflow made the machine unusably slow. A dozen Chrome tabs, an Angular 2+ application being served, WebStorm, a GUI Git client, Messages, and a few other little things.


Both machines are from Apple, so I'm afraid that this time it indeed is an Apple to Apple comparison.


I wouldn't bet on it being just the instruction set itself, but it is true that, given the chance, x86 compilers will often go mental with the SIMD extensions, which can lead to lots of fairly long instructions rather than a lower-throughput but much shorter block.


I was worried until I noticed the words "Hard Drive" in the title.


Terrible title as this is actually about SSDs.


And the actual article says "Solid State Drive" - could be posted differently here to dodge the character count limit.


Dang probably changed the URL. The original URL did say "hard drives".


M1 Macs all have SSDs.


Poorly worded - this problem applies to SSDs.


Ok, just my perspective, which you don't need to share. I am typing on a Lenovo P51, not really a new thing, but I have two TB SSDs, 2 slots for normal RAM (32GB currently), an i7, no overheating, a great keyboard and battery, and the simple option to just dismantle the lower part (or, if needed, the whole laptop) and blow all the dust out or vacuum-clean it. Running Linux, where all the hardware (except the fingerprint reader, which I would not use anyway) is supported.

Once something dies I can replace it at any time on my own, and the prices for replacement parts are actually going down, not up. It is "a tad" heavier, but on the other hand I would carry the backpack with me regardless of weight (and now I can skip a fitness session :D).

Why on earth are you even buying Apple laptops? Ok, I do understand it for non-technical people, but we are on HN. I just don't understand the reasoning behind not getting yourself a proper laptop instead, with all the convenience that comes with that. The "it's beautiful" reason?

I would imagine you actually do work on your laptop, and looks are not your primary criterion when buying a work rig (although I have always loved the boxy Thinkpad designs).


My perspective that you don't need to agree with, is that the P51 is a hideous monster that weighs exactly twice as much as my MacBook, the asymmetric keyboard + touchpad would drive me crazy, it doesn't run macOS and has worse battery life. Also my fingerprint reader works, just like every other part of the machine, along with all the software I use. There's no "except".


What a pointless discussion you’re bringing up. Your question is not debatable because in the end it’s all a matter of personal preference.



