
The 8TB WD SMR drives I have use a 40GB chunk of the surface for non-SMR reads and writes as a buffer. Altering anything in an SMR track requires copying the data from that track and nearby tracks into the buffer, then laying them back down into the SMR region sequentially, like erasing a flash block before writing. Unfortunately, this means the drive's performance takes a dump from 200MB/s down to 300KB/s if you do enough transfers to fill the 40GB buffer, or attempt multiple transfers at once.
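
As a toy model, the drive-managed write path behaves something like this (a sketch only; the sizes and the pick-any eviction policy are stand-ins, not WD's actual firmware logic):

    # Toy model of a drive-managed SMR write path. Sizes and eviction
    # policy are illustrative assumptions, not the real firmware.
    ZONE_SIZE = 4          # blocks per shingled zone (tiny, for the demo)
    BUFFER_CAP = 8         # blocks of CMR staging area (the "40GB buffer")

    zones = {}             # zone_id -> zone contents (rewritten whole)
    cmr_buffer = {}        # (zone_id, offset) -> pending block

    def write_block(zone_id, offset, data):
        if len(cmr_buffer) >= BUFFER_CAP:
            flush_one_zone()                    # slow path: zone read-modify-write
        cmr_buffer[(zone_id, offset)] = data    # fast path: CMR append, ~200MB/s

    def flush_one_zone():
        zone_id = next(iter(cmr_buffer))[0]     # pick a zone with staged blocks
        zone = zones.setdefault(zone_id, [None] * ZONE_SIZE)  # read whole zone
        for (z, off), data in list(cmr_buffer.items()):
            if z == zone_id:
                zone[off] = data                # merge staged blocks into it,
                del cmr_buffer[(z, off)]        # then rewrite it sequentially

Once writes arrive faster than zones can be flushed, every new block pays the read-modify-write cost, which is where the ~300KB/s figure comes from.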



I stopped buying WD after they slapped the Red label on their SMR drives. I’m sure there are good uses for them, but being a member of a RAID ain’t one.


I run a bunch of Seagate 8TB SMR drives bought a few months after release, and the ZFS raidz2 pool has grown over the years.

They are wonderful for write-once/read-many storage of large files. Due to the sheer number of drives, the write speed isn't a major concern, as the pool can saturate the 20Gbps Ethernet bond to the machine.

Reads are in the multi-GB/s range though, so long as they are sequential.

They would not be my first choice, but for specific uses like backups and media storage they are a good value.


For low usage like on a NAS, they are fine for me because they usually last me about 5 years, maybe more if I don't replace them with larger versions. They're about $50 cheaper than other, better-performing models. They've also outlasted every Seagate drive I've had in the same size and price category. Of course I would never use them for anything other than a NAS.


For low-speed writes -- e.g. downloading things over the internet and saving them off -- but not for trying to back up multi-terabyte data to it.

If they increased the buffer size to, say, 4TB to make it easier to import data, it might be worth it. But if it takes several days to write 32GB of data to it, it's not really going to be worth it to most people.

I've wondered why WD/Seagate don't invest in a consumer solution where drives are literally just plug-and-play together and give you RAID without the hassle of a RAID controller or software. E.g. an 8TB drive connected to an 8TB drive is a mirrored RAID. SMR drives would serve as "cold storage": once written to, they would never be rewritten again.


That'd be a leap from native USB/SATA/SAS chips to a full-blown *nix OS with a microcontroller and a secondary bus on it to handle drive-to-drive transfers.

If you want that, I think they do have in-box solutions with two or more drives, and a while back they sold their own NAS drive system. Not sure what they have today, but there are plenty of free or free-enough solutions out there these days.


FYI, all modern hard drives have multi-core ARM or RISC-V microprocessors built in. Usually one core manages the motor control and signal decode from the R/W head, and another core runs an embedded OS and manages the data interface to the host computer. Folks have managed to get malware running on the hard drive's embedded CPU(s) [0] as well as full-blown Linux distributions [1]. So your average hard drive definitely already has enough grunt to get the job done; it just needs firmware written to expose the functionality.

0: https://hackaday.com/2015/06/08/hard-drive-rootkit-is-fright...

1: https://spritesmods.com/?art=hddhack


I think what's important is that most SMR drives can only write a few MB per second anyway once their buffer gets saturated. That's well within Wi-Fi/100M Ethernet speeds.

I feel like 10Gbps SuperSpeed USB hardware is more than fast enough as a connection. The hardware is relatively cheap to build; it's just the software.

I'm not saying you need a super high speed processor or memory either, since the SMR drive itself can serve as (slow) memory for the CPU. The trick is mostly figuring out how to replicate blocks across the RAID, and then mapping the files to RAID blocks.
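
The mirroring half on its own is almost trivial; a minimal sketch (the device paths and error handling below are hypothetical):

    # Minimal two-way mirror sketch: same bytes to both drives, read from
    # whichever one answers. Paths and layout are hypothetical placeholders.
    def mirror_write(path_a, path_b, offset, data):
        for path in (path_a, path_b):
            with open(path, "r+b") as dev:      # raw device or image file
                dev.seek(offset)
                dev.write(data)

    def mirror_read(path_a, path_b, offset, length):
        for path in (path_a, path_b):
            try:
                with open(path, "rb") as dev:
                    dev.seek(offset)
                    return dev.read(length)
            except OSError:
                continue                        # fall back to the other mirror
        raise OSError("both mirrors failed")

The hard parts are everything around it: detecting a stale mirror, resilvering, and the file-to-block mapping.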


I have a Seagate 8TB SMR drive that I use for movie rips from my Blu-rays. In Australia "format shifting" is allowed if you keep the original format.

Linear writing and reading of 30GB files is a great use case for it.

Edit: Actually I think we now have an equivalent of the DMCA so breaking the encryption on the Blu-ray would be infringing that. Oh well.


If you're not "allowed" to watch a movie you paid for on a TV you paid for, something is deeply wrong with the law.


They still have valid use cases like backups or unedited video footage. It's just kinda lame that the manufacturers don't market them as "slow backup devices" with all limitations clearly listed, and that you have to find it out the first time you use it.


When I have used them for backups, the 40GB buffer quickly fills and then I'm stuck with speeds slower than my internet connection until the buffer empties, which it will only do if I kill the existing transfer. SSDs can dump 40GB of data very quickly. Annoyingly, since the r/w head has to do its back-and-forth dance between the buffer area and the SMR areas, this condition even impacts read speeds. Consequently I wouldn't even use them for backups, nor would I buy them again. I've set them up as Storj drives, which they seem to handle reasonably, and I am thankful to be done with them otherwise.

If you're considering one, I would pay special attention to the buffer size, and ensure that all the transfers you want to do to or from the drive are significantly smaller than the buffer to ensure reasonable performance. That excludes most video too. Storj files are typically just a few megabytes, and typically arrive at a frequency of just one or two per second, which the drive can handle.


I had that problem as well (8TB Seagate). It would write some data, then get completely stuck to the point where Windows would report an I/O error. So I wrote a small tool that writes data in smaller chunks, monitors the write speed and allows throttling it if needed.

Weirdly enough, just using the tool instead of copying files with Explorer somehow stopped the weird hanging from happening, even without having to enable the actual throttling. Probably some bug somewhere along the driver/firmware stack triggered by the write block sizes.
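
A minimal version of that idea looks something like this (a sketch of the approach, not the actual tool; the chunk size and throttle default are arbitrary):

    import os, time

    CHUNK = 8 * 2**20          # 8MB write blocks (arbitrary choice)

    def throttled_copy(src_path, dst_path, max_mbps=50):
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            while True:
                chunk = src.read(CHUNK)
                if not chunk:
                    break
                t0 = time.monotonic()
                dst.write(chunk)
                dst.flush()
                os.fsync(dst.fileno())   # push it to the drive, not the page cache
                # throttle: enforce a minimum wall-clock time per chunk
                floor = len(chunk) / (max_mbps * 2**20)
                spent = time.monotonic() - t0
                if spent < floor:
                    time.sleep(floor - spent)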

Overall, I wish the drive vendors would expose some API to directly manage the SMR/CMR areas via software, just like flash memory chips do. That would make the job of appending new backups and overwriting the old ones actually manageable, with predictable and consistent timing.


It also seems like a potential opportunity for hybrid flash hard drives, where the flash could take over the role of the CMR buffer region and reduce the amount of back-and-forth required of the drive's r/w head, which should considerably increase performance.


This could even happen in software with separate drives.


I assume Apple’s ‘Fusion’ drive tech is still active in their OS for backwards compatibility. It’d be interesting to try it on one of these drives.


It's probably sensitive to exactly how the file is opened. SMR drives need files to be written append-only. If Explorer isn't properly communicating to the OS that it's going to do that, for whatever reason, the driver would kick the writes back to the random-access area, which would slow things down.

SMR drives aren't designed for having data shuttled between areas like that. They're meant to be used such that you write in long streams directly to the shingled areas. The slowdowns are clearly due to the abstraction mismatch getting in the way.


> Overall, I wish the drive vendors would expose some API to directly manage the SMR/CMR areas via software, just like the FLASH memory chips do.

They do, on drive models that are sold to the customers large enough to have the resources to re-write their storage stack to handle zoned block devices. The drives sold at retail will continue to pretend to be ordinary random-access block devices for the sake of backwards compatibility.
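
On Linux, for instance, such drives appear as zoned block devices, and you can poke at them from userspace (the device name below is a placeholder):

    import pathlib, subprocess

    dev = "sdb"  # placeholder: substitute your actual device
    zoned = pathlib.Path(f"/sys/block/{dev}/queue/zoned").read_text().strip()
    print(zoned)  # "host-managed", "host-aware", or "none" (drive-managed)

    if zoned != "none":
        # util-linux's blkzone lists the zones and their write pointers
        subprocess.run(["blkzone", "report", f"/dev/{dev}"], check=True)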


Are you sure the drive wasn't just DOA?


Got 2 of those. Same behavior, otherwise both work just fine.


I was about to comment that this might not be a huge issue in network storage. Throwing a pair of 1TB SSDs as write cache in a Synology is pretty painless. Then I remembered that SMR drives don't like RAID, soo yeah.

Maybe you can make it work in clustered file systems like Ceph if you make sure you have a big enough SSD-based write cache.


Given that 40GB is like $6 of SSD, it wouldn't be hard to provide an actually useful SSD buffer.

I got a Lenovo laptop for the parents a long time ago that had a hybrid hard drive, with maybe 8 or 16GB of built-in SSD. Man, that thing was just absolute trash: so slow, with ghastly long boot times and time to launch a browser.

Ideally an SMR drive gets an SMR-capable file system, which can write data into linear logs and then try to batch-rewrite the metadata. Even pathologically terrible random writes can be somewhat coped with via an accommodating fs.
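
A toy sketch of that log-structured idea (the file names and batch size here are made up):

    import json, os

    LOG, INDEX = "data.log", "index.json"
    index, pending = {}, 0

    def append_record(key, payload):
        global pending
        with open(LOG, "ab") as log:
            offset = log.tell()
            log.write(payload)           # purely sequential: SMR-friendly
        index[key] = [offset, len(payload)]
        pending += 1
        if pending >= 1024:              # batch the metadata rewrites
            flush_index()

    def flush_index():
        global pending
        with open(INDEX + ".tmp", "w") as f:
            json.dump(index, f)
        os.replace(INDEX + ".tmp", INDEX)  # one metadata rewrite per batch
        pending = 0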


One flash chip isn't going to get you a buffer with higher throughput than a hard drive. It just gives you lower latency.


An 8-channel SSD can do over 10 GB/s so... no? A single flash channel should give 1 GB/s or more.


The SSDs that go that fast also have multiple dies per channel. And write speed is the relevant metric here, but SLC caching isn't useful unless you have flash capacity to spare.


Any flash being used as a cache for a HDD had better be SLC, and even then it introduces a longevity problem. Perhaps a good use for Optane.


I sure hope it honors fsync!


Not sure, but interestingly SMR drives support TRIM commands: https://superuser.com/questions/1407990/what-does-trim-on-an...



