Optimizing Linux with cheap flash drives (2011) (lwn.net)
62 points by edward on Jan 9, 2016 | 17 comments



Also you can add noatime,nodiratime,discard to the options in your fstab config. But if you don't have SATA 3.1 or higher, don't use the discard option; instead add fstrim <path> to a cron job or rc.local. The other options I mentioned turn off recording of access times to files/dirs.
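Something like this in practice (the device, mount point, and file names are just placeholders, not a recommendation for any particular setup):

    # /etc/fstab -- assuming an ext4 root on /dev/sda1
    /dev/sda1  /  ext4  defaults,noatime,nodiratime  0  1

    # /etc/cron.daily/fstrim (hypothetical script), instead of the discard mount option
    #!/bin/sh
    fstrim /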

And you can set the noop scheduler using echo noop > /sys/block/<device e.g. sda>/queue/scheduler. Whether that's the best option for you depends, but if you have an SSD it might be.
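Roughly, for a hypothetical /dev/sda (the change doesn't survive a reboot):

    # the active scheduler is shown in brackets
    cat /sys/block/sda/queue/scheduler
    # switch to noop for the running system only
    echo noop | sudo tee /sys/block/sda/queue/scheduler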

Finally if you're using full disk software encryption, do check if your SSD supports hardware-level encryption instead and if your BIOS allows you to set its key.
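One quick way to see what the drive reports (just a sketch; output varies by drive and hdparm version):

    # the identify dump includes a Security section for drives with
    # ATA security / self-encryption support
    sudo hdparm -I /dev/sda | grep -i -A8 security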


I've always felt adding things like noatime or relatime was a bit too hacky. I'm not sure the actual consequences of access times not being properly recorded are well understood, and that scares me a lot; it's not clear exactly which software is affected (and how). I don't use Mutt... but what about Samba? I have several folders shared on my LAN via Samba, with things like Access databases and Excel spreadsheets... could someone elaborate on this?


I've been using noatime forever (since 2002 or so) on my laptops/desktops and, IIRC, on all the servers I set up myself; a couple of other servers I just checked have relatime (the Debian default?). I've never had an issue, although all of those machines only ran software of my choice, and thus a relatively limited range.

I even started somewhat relying on atime being the (otherwise non-existent at the time) "inode creation time" when using 'noatime', and was a bit upset when I realized that mounts created with "mount --bind" would lose the noatime setting (an explicit "mount -o remount,noatime" is required on the new mount point).
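Something along these lines, with placeholder paths:

    # a bind mount does not inherit noatime from the source filesystem
    mount --bind /data/share /srv/share
    # re-apply it explicitly on the new mount point
    mount -o remount,noatime /srv/share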

I also used Samba around 2002, but am not sure whether I used noatime already with it.


I've used it for years and never noticed a single problem. Mutt seems to be the only program that anyone ever mentions with an issue (it apparently uses the last access time to determine if a folder has received new mail since you last looked at it). Since I'm not a mutt user, I've no idea if anyone has bothered to fix this.

IMO a file's last access time is inherently unreliable anyway; a file could have been read by a backup process or file indexer at any time. Your file browser might well peek at every file in a folder to make thumbnails or determine file types.

You likely won't see any problems with Samba or with office documents; these all work fine on filesystems that don't have last access times at all (and Windows has disabled last-access-time updates by default since Vista, apparently).


Any reference for the 'no discard' advice? I used it for a while on an Intel 520 80GB SSD. It was fine at first, but when I reached low free space (~1GB) the SSD became sluggish, which isn't unexpected; even after freeing 4GB, though, it was still slow. Manually trimming reported '0 Bytes trimmed'. Only after remounting the disk without `discard` and then running `fstrim --all` did the SSD get decent performance back.
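Roughly, something like this (the mount point is just an example):

    # stop issuing a discard on every delete...
    sudo mount -o remount,nodiscard /
    # ...then trim all mounted filesystems that support it
    sudo fstrim --all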


I'd guess your SSD was doing a synchronous discard which would be slow. I know anything less than SATA 3.1 does it synchronously, and maybe there are other situations too. E.g., maybe async trimming requires a decent amount of free space?


It was fine before I reached low free space, and I don't have SATA 3.1, not even 3, possibly 1 lol, so it was synchronous?

I guess the controller needs some temporary space to do its magic and below that point it will struggle.

The odd thing is that freeing space didn't restore normal behavior until an actual trim was "forced".


noatime implies nodiratime; you don't need both.


I wonder how things have changed since this article? It talks of a big divide between cheap flash drives (USB sticks, SD cards, etc) and 'real' SSDs (e.g. the non-removable 2.5" hard drive replacements).

At the time, all but the worst 'real' (for want of a better word) SSDs had enough local RAM and firmware smarts to do proper I/O ordering, buffering and manipulation. That effectively hid all the complexities of the flash storage, allowing any OS to work well.

The cheap USB sticks and their ilk couldn't do this because of their lack of RAM, decent firmware and all associated money-saving build choices. But as the hardware gets cheaper, I wonder if this is still true? Do USB sticks and SD cards nowadays have more RAM & better firmware?


I have a lot of development experience in this area. I have done SSD/flash firmware with RAM between 16KB and 8GB. Each of these approaches gives different performance characteristics, but you can implement hybrid algorithms in between, for example with 32KB or 256KB, to add more performance features. The hybrid algorithms are the most complex. It is certainly feasible to do this in single-chip products in USB sticks; I know, for example, that there are some USB drives with SandForce controllers. But the reality is that the USB stick market is quite price competitive. They tend to use bargain-basement-quality NAND, and people rarely write the drives to full capacity before tossing them into their desk drawer. In summary, it is technically feasible, but the market is price sensitive.


There's been a lot of blurring the lines on the hardware side. There's now a whole category of "portable SSDs" using USB3 to SATA bridge chips to make low to mid-range internal SSDs into external drives. There are also several internal SSD controller designs that can provide far better performance than a typical USB stick even without any external DRAM on the drive.


Could you give any links to the newer controller designs, please? The improved performance without DRAM sounds interesting.


Silicon Motion's SM2246XT is a 2-channel DRAM-less variant of their SM2246EN 4-channel mainstream SSD controller. The -XT is used in both SanDisk's SSD Plus internal SATA SSD and their Extreme 500 Portable SSD: http://anandtech.com/show/9847/sandisk-extreme-500-portable-...

Phison's new S11 controller is also a DRAM-less design that will be used in low-end internal SATA SSDs.


I was investigating industrial-class SD cards (needed to pick one for a product running in a harsh environment) and used this content back then; however, in all my benchmarks I did not see a meaningful performance difference with the tuning he recommended.


By coincidence, I was searching for articles on how to optimize an SD card for a Raspberry Pi and found this one.

My RPi has a somewhat expensive SanDisk Ultra 32GB microSD card, but even then it was still slow to respond on random I/O (like installing a new package). I decided to use F2FS instead of ext4, but it didn't do much better: for some reason, the RPi kernel disables the majority of F2FS's performance enhancements by default. You can re-enable them in /etc/fstab, but after an unclean poweroff my filesystem was completely messed up; the RPi wouldn't even boot. This is unacceptable.

So I decided to go back to ext4, but try to optimize it for flash media. This article put me on the right track, basically thanks to this part:

> When a filesystem wants to optimize its block allocation to the geometry of a flash drive, it needs to know the position of the segments on the drive. On partitioned media, this also implies that each partition is aligned to the start of a segment, and this is true for all preformatted SD cards and other media that require special care for segment optimizations.

Searching Google for "aligned partition Raspberry Pi", I found the articles below, which really helped optimize my SD card's performance:

* https://thelastmaimou.wordpress.com/2013/05/19/optimizing-ar...

* https://blogofterje.wordpress.com/2012/01/14/optimizing-fs-o...

After aligning my partition, the RPi seems faster and behaves much more like a traditional HDD (though not as fast as an SSD, of course). I didn't disable ext4's journal, though, since I want to reduce the chance of ending up with an unbootable filesystem.
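For the record, the kind of recipe those posts describe, as a rough sketch: the numbers assume an 8 KiB page and a 4 MiB erase block ("segment"), and /dev/mmcblk0 is just a placeholder.

    # start the partition on a 4 MiB boundary (sector 8192 with 512-byte sectors)
    sudo fdisk /dev/mmcblk0          # set the first sector to 8192

    # tell ext4 about the geometry: stride = 8 KiB page / 4 KiB block = 2,
    # stripe-width = 4 MiB segment / 4 KiB block = 1024
    sudo mkfs.ext4 -b 4096 -E stride=2,stripe-width=1024 /dev/mmcblk0p2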


Someday maybe we will have small, pocket sized computers with enough RAM to run all our processes with space to spare. There will be no need for swapping. There will be fast recompilation. And no need for shared libraries of functions. Each program will contain only the functions it needs, no added cruft.

Excuse me, I was dreaming. Sometimes I have the most absurd dreams. Hello reality, good to be back.

The SD cards I use in my RPi are always mounted read-only. Not sure why I do that. Stupid I guess.


Does this advice also apply to technically 'inferior' TLC NAND SSDs or does it only hold for eMMC drives used in single board computers?



