Western Digital unveiled its SanDisk 1TB SDXC card prototype (sandisk.com)
148 points by jonbaer on Sept 20, 2016 | 164 comments



Its performance is going to be glacial. Example: I have a 1.2 TB Fusion I/O flash drive on my workstation; it gets about 2 GByte / second sustained, and it has heat sinks and extra power connectors for a reason. Extrapolate down to a dinky piece of plastic that has about the same amount of storage; trade-offs were made.

And you should look really, really hard at retention when it is not powered (do the bits leak away, past recovery by ECC, when it's been on the shelf for six months?) Also look at reliability (transactional behavior) when it experiences unplanned ejects or power loss.
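
One cheap way to check this yourself: checksum everything when you archive the card and verify after it has sat on the shelf. A minimal sketch with coreutils (the mount point is hypothetical):

    # when archiving: record a checksum for every file on the card
    find /media/sdcard -type f -exec sha256sum {} + > ~/sdcard.sha256

    # months later: verify, printing only the files that changed
    sha256sum -c ~/sdcard.sha256 | grep -v ': OK$'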


> And you should look really, really hard at retention when it is not powered (do the bits leak away, past recovery by ECC, when it's been on the shelf for six months?)

Aside from the components physically degrading (or environmental factors like radiation), I've never heard, read, or experienced this issue. Why do you believe this SD card would be susceptible to data loss after as little as six months?


Flash is actually super lossy (especially non-SLC technology), it only appears to durably store your data because of error correction and redundancy.

This is something I never realized until I worked on an embedded system where every bit error showed up as a log message. It made me think, "How can computers work at all?" The answer is math and abstraction layers that work a little bit better than the average web framework.


Wear-levelling and bad-block management are not magic.

The failure modes of FLASH are well known; they don't necessarily need abstraction layers, and they do work very well and reliably.


I think when someone on HN refers to a specific piece of tech as magic, they're not saying that it's literally magical.


Right, and usually SD cards don't have anywhere near the level of FTLs that SSDs have. In fact it's likely there isn't much of an FTL at all, rather just a simple ECC on the flash pages. Lose more bits in a page than the ECC can correct, and the whole card is basically done because it can't relocate sectors. SD isn't meant to be primary storage. It's meant to be used more like floppies/USB sticks.

I can't tell you how many Android phones I "fixed" over a couple of years by replacing the SD cards. I actually had a SanDisk 128G fail on me a couple weeks ago that was just sitting in a Dell/Windows tablet with little more than a few movies copied to it. It didn't last six months. I guess something in Win10 must have been rewriting a date/time stamp or something similar to the same sector for long enough that the card started hanging on reads, until it got bad enough that Windows would choke and I noticed it.

Back about 10 years ago, I had an ARM device with an SD slot that I was writing firewall logs to. The SD cards in that machine would last ~2 months before they died. Eventually I replaced the SD cards with a USB attached SSD in an enclosure, and that fixed the problem.


I used the word math, not magic.

There are many abstraction layers involved. For example, your Linux application isn't bit-banging the NAND chips directly. Your SoC knows there's flash attached, and handles it accordingly. Or, the Linux MTD layer handles it. Then, there are the flash-specific filesystems, like ubifs.

All in all I find it pretty interesting, because if you "echo foo > /dev/mtdX", you're not going to find any sequence of flash cells that contain the bit pattern 0x66 0x6f 0x6f 0x0a for very long after you do that write. And yet everything works anyway. There is a _high probability_ that you will be able to read it back, but it's never guaranteed.

Nothing is ever guaranteed. All we can do is increase the probability of success.
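
If you want to watch these layers at work, mtd-utils lets you drive raw NAND by hand. A rough sketch, assuming /dev/mtd0 is a scratch MTD partition you can destroy:

    # DANGER: erases the whole partition. Assumes /dev/mtd0 is scratch NAND.
    printf 'foo\n' > /tmp/foo.bin
    flash_erase /dev/mtd0 0 0              # NAND must be erased before writing
    nandwrite -p /dev/mtd0 /tmp/foo.bin    # -p pads the write to a full page
    nanddump -l 4096 /dev/mtd0 | hexdump -C | head   # read back through the driver's ECC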


Flash storage management is still hard to get right, especially when the power can be removed at any time. Making data structures resilient to arbitrary power removal while having good performance is quite a feat.


> that work a little bit better than the average web framework

Do you have a reference for this? Which aspects did you compare, or were compared?


I think that was intended to be a joke, as when I think of the average web framework, I see buggy junk.


Multiple-bits-per-cell is astonishingly involved; with three bits the electronics has to distinguish between eight voltage levels in the face of process variations and environmental conditions that vary wildly. There aren't that many electrons in these cells to begin with, and the charges leak over time. I think of the engineering that goes into this stuff as "heroic." 4 bits per cell and you're talking full bull-goose crazy. My hat's off to ya (no, I'm not going to buy any of that).

So you ECC the shit out of things and hope. Or you use less ECC, get more "checkbox capacity" on the box the consumer sees, and most people won't experience failure until they throw the device away. It's all about margins.

SSDs, even the expensive ones you buy, don't have a super long shelf life and are not what you want to use for backup media.


This, a million times, this.

It's a stunning paradox that densities and adoption are growing dramatically while raw retention gets worse with every generation (an astonishing amount of smarts goes into mitigating media deficiencies).

I'm not too concerned (assuming the standard backups) about devices in continuous and powered-on use, but for data at rest, powered off, this is very scary.


It gets even more scary: So you back up all your data to disks? Sure, they're pretty cheap and you can buy a terabyte for fifty bucks, but guess what the firmware for those drives is stored on. Hope the flash they used wasn't cheap (but if it's a consumer drive and it cost you fifty bucks, it probably was).

For long life I'm betting on DVDs (refreshed every few years). Easily half of the 15-year-old discs I've been keeping are now bad; copying them forward every few years is a good idea. It might be time to go back to tape, but it's expensive. Pretty much everything sucks at retention.


Or just pay $100/yr to back it up to Dropbox (or even cheaper AWS Glacier)?


Because in a 3-bit flash cell, there are on the order of 10 electrons separating one charge level from the next. If a couple too many decide to quantum-tunnel their way to freedom, the bits will flip.


Flash is very heat sensitive. It doesn't take a lot of thermal energy to let those trapped electrons migrate.

Lower temperatures take more time but can still cause loss; higher temperatures take less.


Semi-dumb, semi-serious question: what's the gradient on this heat sensitivity? How much difference does it make to (e.g.) keep a flash drive in a freezer?


Somewhere there are graphs or tables of retention vs. read/storage/write temperatures.


SDXC U3 cards can easily write at 90MB/s, which is obviously nowhere near 2GB/s of a fusion drive, but it's definitely good enough for recording 4K video and taking large pictures - which is what this card is going to be used for. No one expects it to replace your SSD.


Worth noting that hardly any 2.5" HDD is faster than 100 MByte/s, at around 100 IOPS. This SD card could well come near that.


I wish, but I have yet to see an SD card exceed 10MB/s in practice.


Kingston UHS3 gets over 90MB/s in sequential read/write, and over 10MB/s in random read/write:

http://www.storagereview.com/kingston_sdhc_sdxc_uhsi_u3_revi...


What device and card are you using? The MacBook Pro will do 90/40

http://images.anandtech.com/reviews/mac/retinaMacBookPro/qui...


How did you test 2GB/sec? I have a FusionIO ioDrive2 1.2 TB card in a 32-core Xeon E5 server with 1600MHz RAM and I get only 850MB/sec:

    # for i in {1..4} ; do ( time sh -c "dd if=/dev/zero of=/fusionio1/ddtest.$i bs=1M count=4000 oflag=direct" ) ; done
    4000+0 records in
    4000+0 records out
    4194304000 bytes (4.2 GB) copied, 4.90524 s, 855 MB/s

    real    0m4.908s
    user    0m0.006s
    sys     0m0.785s
    4000+0 records in
    4000+0 records out
    4194304000 bytes (4.2 GB) copied, 5.05399 s, 830 MB/s

The server has 128GB RAM and the card is in a PCIe 3.0 slot. Also, the FusionIO has 20PByte write endurance, which the SanDisk card obviously doesn't need.


That's suspiciously low.

Some questions:

1) What happens if you use bigger block sizes, say 16M?

2) Which filesystem are you using? Could it be fragmented? Can you test against the raw disk to eliminate any filesystem influence? (See the raw-device sketch below.)

3) Is dd running on same NUMA node (CPU socket) as your ioDrive2 PCI-e link?

4) To expand on 3, is there a chance of QPI saturation (traffic between CPU sockets)? Have you ensured all the software uses CPU-local RAM whenever possible, rather than the other socket's RAM?

5) Are you sure all PCI-e lanes are active? (Try lspci)
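
For 1 and 2, a quick raw-device check might look like this (destructive, and the device name is only an example; Fusion-io cards typically appear as /dev/fioX):

    # DESTROYS data on /dev/fioa: sequential write straight to the block device
    dd if=/dev/zero of=/dev/fioa bs=16M count=500 oflag=direct

    # for 3/4: pin dd to the NUMA node the card hangs off (node 0 here)
    numactl --cpunodebind=0 --membind=0 \
        dd if=/dev/zero of=/dev/fioa bs=16M count=500 oflag=direct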


Also: Did you low-level format to 4K blocks, or is it still using 512-byte blocks? Various options for the kernel module also have a significant effect, as do BIOS settings (C-states, etc.).

I got 2.5-2.9 GB/sec (iirc) with a FusionIO ioDrive2 Duo. It has been more than a year so I do not remember all the details.


Check the power status of the card. Make sure it's getting enough juice.

http://sirsql.net/content/2015/02/12/fusionio-and-the-go-fas...


Weird; even on an old PCIe 2.0 Opteron machine, I easily get 1.4 GB/s from an HGST NVMe card. I've gotten 800 MB/s from a single 12Gb SAS SSD. Something's probably wrong in your setup.


How do you measure the write speed?


Same as you, good ol' dd. Not using direct I/O, though. But I watch iostat while testing, and the results are consistent.


Nothing fancy, just multiple copies of large files at the filesystem level, making sure that the cache wasn't giving false results. The transfer rates and the I/O counters matched pretty well. This was on a vanilla Windows 10 workstation.

We run a bunch of these things, and regularly bump up against the PCIe bus limits. There's something going on with your setup.


Speed is obviously one. I have a large microSD card that I have permanently in the SD card slot of my MBP with a BaseQi caddy. I wouldn't use it to read-write work data, but it works well as a (mostly) read-only store for MP3s.


Considering the low price of SD cards, I was thinking about doing the same thing: buy a 128GB SD and use it for cold storage


Read elsewhere in this thread. Don't rely on cheap flash for archiving, you'll lose it. Think on the order of a year or so.


Anecdotally, I can relate to this. I bought a 256GB SD card to store my music on and kept it plugged in to my laptop. It died just about a year later, and showed reliability issues before that.

(Of course, my "backup" was my iPod Classic, which was stolen from my car shortly before the SD card died, but that's another story.)


USB3 'plug & stay' drives seem like a better option for expanding laptop storage than SD cards these days.


I'm thinking the early uses will be primarily video cameras and similar recording devices that need small, easily swappable "drives" that can handle large files. When you're on a shoot, you don't want to keep swapping storage media every 30 or 60 minutes. In our little studio (university setting so not as high end as a pro shoot) we use swappable drive bays in a recorder. Those just hold standard SSDs and can be popped out and plugged into an edit station for capture and transfer. Still not cheap though and ours are typically 512GB drives.


> When you're on a shoot, you don't want to keep swapping storage media every 30 or 60 minutes.

You really should, though. Not in the filming domain but in stills, I've shot airshows in the company of professional[0] photographers who changed their cards for fresh ones after every act. No point losing a whole day's shooting to one bad card.

Apparently this is also common at weddings, though even more so as the photographer might be taking shots of the same scene on two different cameras and then changing cards in both. Just in case.

[0] as in, they are paid for their output


> Apparently this is also common at weddings, though even more so as the photographer might be taking shots of the same scene on two different cameras and then changing cards in both. Just in case.

More common is the requirement to have a camera that supports dual cards, so you can either write to both cards, or write JPG to one and RAW to the other, so you'll never lose everything.


Absolutely. I will typically take 4-6 cards to shoot a wedding, though I could buy a 64/128GB card and shoot everything on one.

Touch wood, though, I have yet to have a CF failure. But I also pay for the most reliable.


When I looked at the overhead for USB3 bulk transfers on Windows the system was spending a lot of time in the I/O and memory system stack (doing lots of page-wiring, and just going through layers).

External USB storage is fine for archival and low-volume use, but not great for anything you want performance out of.

Then again, on a laptop you probably don't have much choice . . .


UASP should help with that - but both the host and device need to support it.


Serious question: why is this considered innovative? 256 GB micro sd cards are widely available[1], and are less than a quarter of the size of sd cards (by area). Surely they can take a 256GB micro sd card's die, copy paste it 4 times, and end up with a 1 TB SD card?

[1] https://www.amazon.com/dp/B01G7L03OS


I think it's being treated as noteworthy due to reaching a milestone. It doesn't feel very long ago that we hit 1GB.

Edit: after a quick Google search: "Jan 29, 2004 - Sandisk ships world's first production 1GB SD card"


You generally can't just stick separate chunks of flash memory together to make one big one, because the controller is the limiting factor. Having a chip being able to manage large amounts of flash memory (having to have its own memory for the allocation tables, wear leveling, and so forth) is where it gets tricky.


But we've had 1+ TB flash controllers in SSDs for several years, too.


Yes, and they're disproportionately more expensive (i.e. more than twice the price of two 500GBs) because of the aforementioned issues. It's also the bottleneck preventing, say, 5TB SSDs that are just 20 smaller ones stuck together, even though you could cram them into a 3.5 inch enclosure.


3.5 inch SSDs are expensive because they are primarily used in very expensive servers, and they aim for maximum performance.

If your main concern is capacity, you can easily use multiple controllers. In the laziest case just put a cheap raid controller in front.


Innovation doesn't mean breakthrough.

Also, innovation isn't just having the idea. Innovation is actually doing it.


Well, I think the main problem is going to be power draw. An SD card has something like a 100mA power envelope. The only way to overcome this is to shrink your parts down so their power consumption comes in under that.

Flash reads are power cheap but flash writes are power expensive.


Adding more capacity at the same speed doesn't require more active area or power draw.


Adding capacity to an SD card means either increasing the density of the flash chip or using more flash chips. In this case the OP was referring to the latter strategy. For additional dies not to alter the power characteristics of the card, the controller chip would need some sort of active power management built in, powering off dies that aren't in use.


Idle energy use for a typical SD card is well under 1mA. Even though the OP was suggesting something a bit more sophisticated than 4 entire separate cards inside one, it would work just fine.


FTA: > The 1TB card is certain to be prohibitively expensive, and at such a large capacity, read and write speeds are going to be comparatively slow

Why would a larger card be slower? Wouldn't it be faster since it can write more flash cells in parallel? Larger SSDs are usually a bit faster due to this.


Another poster referred to this, but the most obvious reason for slow speed is intentional throttling due to thermal concerns.

It might actually have to be simply capped at an "always-safe" write speed due to limitations on the circuitry - is there room in an SD card format for adequate thermal sensors and the logic to limit write speed as the card heats up?


Oh god, we're talking about overclocking SD cards? I can't wait to see experimental results not only in general, but also in edge case scenarios like when encountering counterfeit hardware.


Discussion doesn't seem to be really widespread, but particularly with the M.2 / PCIe SSDs you actually do get some thermal throttling. Here's a review of the Samsung SM951 where in just over 2 minutes of sequential write testing the card reached a throttle temperature of 82C and throughput dropped from 1500MB/s to 70MB/s (http://www.legitreviews.com/samsung-sm951-512gb-m-2-pcie-ssd...). I'd call that a noticeable dropoff in speed. Yes, I realize that article is talking about SSDs not SD cards. The relevant difference here is that the SSD is substantially larger with likely better heat dissipation.

Of course, the maximum bus speed of SD cards right now (including the UHS-II hardware change to add additional contacts) is only ~300MB/s, with a more practical top speed of ~150MB/s to allow both reading and writing. Maximum power consumption of UHS-II SDXC cards is 2.88W, which is actually fairly significant when you consider the volume of the cards - it's not a lot of power, but it's also not a lot of thermal mass.


My guess is that they mean relative to capacity. If the card is simply the same speed as a card 1/4 the capacity, that means you can read out only a quarter as much of the card per unit of time.

Apart from the thermal throttling suggested elsewhere, there's also the bus speed, as well as the controller on the card. It's possible they could speed it up, but it's just as likely that they've just added 4x the same flash with a controller with the same read/write performance.

SD cards != SSDs. The applications they are used in tend to be limited more by capacity than speed, and they evolve accordingly.


I know some DSLRs have dual SD slots; could we see an era of SD RAID? Whether that's all on one card or three separate cards striped with parity?

I'd also think the camera could act as a decent heatsink to disperse heat generated by the SD IC.


You've been able to buy SD and MicroSD raid devices for a while.

Here's one with a SATA interface: http://www.aliexpress.com/item/Del-10-x-Micro-SD-TF-Memory-C...

Here's one with a USB interface: http://alfa-media.com/products/tf%20raid.html

I've never used one, but they look scary.


Good grief, things have moved on somewhat quickly since I last looked. As you say, some of those are scary indeed!


I didn't take the quoted text as a suggestion that the drive would be slower in any absolute terms. 'Comparatively' can be an ambiguous term - compared to what?


I read it to mean compared to smaller SD cards, which is also the only reasonable comparison here. I just re-read it and I actually can't see what else it could be referring to.


Every single SD card I plugged into my laptop was slow, even the Class 10 ones. They also tend to fail faster than HDDs (per hour of usage), so I'm not sure that I want to put all my eggs into a 1 TB SD basket. However, 1 TB is plenty of recording time: it should be OK if you can back it up regularly.

By the way, my laptop has a 1TB Samsung EVO 850 now, plus the original 750 GB HDD and its 32 GB SSD cache, which I keep shut down. They contain the OS (SSD) and data (HDD) from before the upgrade. I should format and reuse them. The HDD could be handy for seldom-used files, and given the amount of RAM (16 GB) it wasn't that slow. Or I could just buy another 1TB SSD, handy for working with docker containers, VMs and the like, while leaving overprovisioned space on the SSD.


> Every single SD card I plugged into my laptop was slow, even the Class 10 ones.

Could be that the interfacing chip in the computer is low-end and bottlenecks them. That has been my experience with some ThinkPads.


Eggs in the basket? How do you guys use SD cards?

I never keep anything on them for long. I can imagine folks who only have a tablet might neglect to move photos off their cameras.


My laptop has 1.5tb of space. You don't know my life!


2TB notebook HDDs and SSDs are available, step up your game.


I'm looking forward to the stories of people actually filling these things, and then losing/corrupting them. I think that is waaaaay too large for the size/speed of SD.


The designed-for use case of an SD card is as a buffer where a photo/video camera saves footage sequentially; then you copy everything to your PC/laptop and reformat the card.

When used as generic storage with a complex write/delete/overwrite pattern, most cards would start corrupting data fairly quickly.


I understand its use case. Now imagine you're on location shooting, and you're fumbling around with tiny SD cards. It wouldn't be the first time I've dropped one. Even compact flash I try not to put TOO much work on one before I switch them over.

To say that every consumer is going to use it only as a temporary medium, I think is overly optimistic.


I think part of the appeal of high capacity cards is that you spend less time fumbling with cards. A 1TB card would probably last an entire day of shooting HD video. As an amateur photographer, I generally keep 2 extra SD cards on hand but they're backups and not used. I don't think I've ever filled up the 64GB card that I shoot with and at the end of the day I move off what was shot. If I'm shooting on vacation I'll generally leave a copy of the images on the card so that I have 2 copies floating around but I've yet to fill up the card.

This card probably isn't targeting the consumer market. I think smartphone cameras have cannibalized much of the Point-and-Shoot camera and PVR market. That being said, I think most consumers purchase one SD card when they buy their camera and it's the only card that camera ever sees.


I get what you're saying, and it is a concern, but there is a decent argument (to me, anyway) that with 1TB you probably won't be taking the card out to be fumbling with it in the first place. Even with on-location shooting, "I filled up a 1TB card, I need to archive it" is probably a good reason for a lunch break.


No one in their right mind would fill a 1TB card before swapping. That's an insane amount of potential loss. I could see it as a secondary backup disk in a DSLR, but that's about it.


If you're recording 4K video, a terabyte is about two hours of recording time. Plenty of folks will fill that before swapping.


Nobody in their right mind would have thousands of photos with no backup, but most people never backup anything.


So what's your argument here, that you agree with me?


He is saying that people will fill a 1TB card before swapping exactly the same way that people often leave thousands of photos floating around without backup.


Not that you're wrong, but [citation needed]. Mainly because you use a microSD card to boot the Raspberry Pi. Also, in my Yearbook class in high school, we almost never formatted(?) our cards. Granted, they were only 32 GB or less to avoid SDXC, but still.


This was mostly from personal experience running a couple dozen Raspberry Pis. To elaborate:

- Cheap cards would start corrupting right in the middle of the first "apt-get update && apt-get upgrade";

- SanDisk Extreme Pro 16GB fared much better, but we still had several failures after half a year;

- Failure mode in both cases: corruption in superblocks and inodes; the journal doesn't help much when recovering (we use ext4).

As a result, we settled on splitting each card into two root partitions, active and standby. To upgrade, we overwrite the standby one with dd, switch active/standby by editing /boot/cmdline.txt, and reboot.
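
In shell terms the upgrade step looks roughly like this (partition numbers are illustrative; p2 active, p3 standby):

    # write the new image to the standby root, then flush it out
    dd if=rootfs-new.img of=/dev/mmcblk0p3 bs=4M conv=fsync

    # point the kernel at the standby partition and reboot into it
    sed -i 's|root=/dev/mmcblk0p2|root=/dev/mmcblk0p3|' /boot/cmdline.txt
    reboot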


I had half a dozen SD cards fail (corrupted) on my RPi over a 4-month period, but it turned out to be caused by an anaemic power supply. I haven't had a corrupted SD card since I got a 2.5 Amp adaptor - this was 18 months ago with the very same RPi.


Quite often with the RPi, the problem is more failed-to-shutdown than FLASH-failure.

People unplug those things without shutting them down correctly.... quite possibly that was the problem, nothing inherent to FLASH or SD cards.


Why were you avoiding SDXC? Was it a technical deficiency in SDXC or a limitation in your hardware?


We did have a few cameras that supported SDXC, but the majority didn't. It was easier to just use SDHC and not have to worry about which card worked in which camera. Besides, 32 GB was plenty because we almost never shot in RAW (high schoolers aren't professionals); I think only me and three others knew about working with RAW.

So to answer your question, space wasn't a premium, and because of the technical and human limitations of SDHC vs SDXC. I'm sure some of those cameras had firmware updates that would add SDXC support, but we didn't need it.


I'm not the parent, but SDXC on my Raspberry Pi causes kernel panics after ~6 hours of operation. It's entirely annoying.


Is it formatted as exFAT? Because it's my understanding that Windows will refuse to format an SDXC card as anything but.


I believe it was ext3 or ext4, but I'd have to look.


I'm excited about this to upgrade my old hard-drive iPod. Replacing the hard drive with an SD card should increase battery life considerably, and this is more than 8x larger than the hard drive it came with.

If it dies, I lose no data since it's just a copy of my music library. And the speed doesn't matter since I only need to sync things once and from there it's just incremental additions.

The reason I consider this is that there don't seem to be any decent music players with large capacity, as everyone moved their music playing to their phones.


I've delved into flash on iPods, though I haven't taken the plunge yet. You might want to look at https://www.iflash.xyz - there are adapters for multiple SD and microSD cards, resulting in ridiculously large volumes. I think RAM, which limits the total number of songs supported, is the next bottleneck.


I've seen those adapters, but I'd prefer to use only one SD card and a simple (read: less likely to have weird edge-cases) adapter. But I haven't taken the plunge myself either, mainly because high-capacity SD cards are still pretty expensive.

I think RAM will be less of an issue for me since I run Rockbox[0] on the iPod. No need to keep the whole database in memory there. I like everything about the iPod except the software (pretty much the same as other apple products :)

[0] http://www.rockbox.org/


That's why many professional (and some prosumer) cameras have dual slots and allow you to write to both.


Yep. My several-years-old prosumer Nikon (D7000) has two slots and I can configure them either as mirrored for redundancy, overflow in case I misjudge how much storage I need, or I can use one for RAW and one for JPEG in case I need to show a preview before I get a chance to process the RAWs.

Typically I just have it set as mirrored since I only shoot occasionally as a hobbyist and a corrupted card is the most likely "fail state" I'd encounter.


I use my second slot for JPG, reasoning that I can get good enough results out of a JPG if I desperately need it - and the cheaper, slower, bigger card I put in the backup slot will hold a couple of days' shooting as JPG and not slow the camera down.


Correct, and expecting most people to actually do that. Not happening.


With HD video capture and other such uses, sure. But what about when we're capturing 8K video in a few years?


Has anyone got any info on the long term reliability of SD cards? Could they be used for archival for 10+ years? What about 100?


SSDs are only rated for one year of retention with power off; I would expect SD cards to be worse. http://www.anandtech.com/show/9248/the-truth-about-ssd-data-...

In theory flash can be used for archival, but it would have to be refreshed periodically.
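
A crude refresh, assuming the archive lives on a raw device you can take offline (and that you have another copy, since this rewrites in place):

    # read every sector and write it back; the controller reprograms the
    # pages, recharging the cells (device name hypothetical)
    dd if=/dev/sdX of=/dev/sdX bs=4M conv=notrunc,fsync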


FWIW, I found 4 old SD cards (16MB - 256MB, various manufacturers) in my attic last week that definitely haven't been used in 5 or more years, and the photos on them all copied over to my PC just fine. I'm guessing the much lower density of the smaller capacity cards helps a lot in this regard.


Yikes. I wonder how many will be burned by that...


1TB (8Tbit) sounds almost certainly like it's going to be at least 2-bit if not 3-bit MLC, maybe even 4-bit. The endurance and retention numbers for 3-bit MLC are already pretty dismal in comparison to even 2-bit MLC, and the only way they've been able to convince consumers to buy this stuff is by increasing the capacity and multiplying by it.

The cells are still intrinsically more fragile, and increasing the bits-per-cell increases capacity only multiplicatively while retention and endurance decrease exponentially; theoretically, to a very rough approximation, 4 bits per cell will have 4 times the capacity but only 1/16th the endurance and retention of 1 bit per cell. In practice, it's somewhat worse.

Instead of a 1TB card using 4bpc flash that may offer 1K cycles per cell (1PB total writes), I'd rather have a 256GB SLC (1bpc, "old-school" flash) card with 16K endurance (4PB total writes). Both take up the same area (thus theoretically cost) and the SLC is in fact simpler in many ways to use because it doesn't really need such elaborate "management", but unfortunately in practice SLC is priced astronomically higher than the technology would suggest.

I'm not much for conspiracy theories, but it sometimes makes me wonder whether the industry is deliberately misleading us into the worse choice in terms of reliability, just so they can sell more. This is one of my big gripes about the flash memory industry --- there is a lot of marketing, a lot of fluff. But if you look at the facts, it's clear that no one weighing the trade-offs logically would ever have thought MLC the better choice. It can certainly be two, three, or four times the density of SLC, but it doesn't even last a half, third, or fourth as long.


From Google's experience with SSDs: high-end SLC drives are no more reliable than MLC drives.

Source: http://www.zdnet.com/article/ssd-reliability-in-the-real-wor...


MLC lasts long enough and it is significantly cheaper. That's very logical.


The problem is "long enough" is not... and the manufacturers are continually trying to make that shorter and try to convince you it's "long enough".

IMHO a few years is NOT "long enough" for a nonvolatile storage device. A few decades is more reasonable.


Flash storage in general is very poor for long-term storage. Current leakage occurs and if they're just sitting on a shelf there's no way for them to self-correct. Your best bet is still tape or optical (think bluray) media.


I'd like to see some solid data on this.

I used to think that DVD+/-R and DVD+/-RW had a shelf life of <10 years (this was ~10 years ago) and I can't recall anyone asserting they lasted longer.

After running a long term program where backups were written to DVD+RW discs, and seeing them last > 10 years, I was impressed.

Now I'm finding sources that say that the post-write self-life of optical media is on the order of 10s to 100s of years.

Anyhow, it just makes me wonder what the practical shelf-life is, and what the underlying media "does" over time :)


It breaks down like any material. The easiest samples are the REALLY cheap CD-R discs you could get in the late 90s. If you left one of those on your dash, it would borderline melt in the sun... and for sure the plastic would start peeling off after a couple of hours.

Same thing with newer media - if you keep it in a climate controlled environment the sky is the limit. If you abuse it, it's not going to see anywhere near the quoted life.


On the other hand, I've never gotten writable optical media to last longer than three years, despite storing them in a cool dark closet.

Did you compare checksums on the data you wrote?


On the earliest data, nope! Very likely I had some or many flipped bits.

I did have the backups on multiple discs and comparing the two they had the same data so I guess that's almost as good.


IIRC, RWs last longer than plain Rs. Something to do with there being a second layer between the storage surface and the environment.


Any recommendations for tape drives these days that will still be supported 10 years from now?


An LTO drive is good long term since it's guaranteed to write 1 generation back and read 2 generations back. Very easy to use once you get the routine down.

As always, remember to do a restore to prove your backup is an actual backup.
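
With the standard Linux tape tools, that round trip might look like this (assuming a SCSI tape drive at /dev/st0 and a hypothetical /data/archive):

    tar -cvf /dev/st0 /data/archive   # tar strips the leading '/' on create
    mt -f /dev/st0 rewind
    cd / && tar -dvf /dev/st0         # --diff: report any file that doesn't match disk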


Thanks. I checked the prices and a LTO drive is... expensive. I would have thought a tape recording device would be cheap by now.


Nope, but they do last a long time compared to other drives. We have stuff that got retired after 10 years (still working, we just upgraded standards). LTO-7 is out, so you might start picking up deals on older models.


I restored my 22 year old BBS off Colorado QIC tapes by jury rigging an old mini-ATX board with a floppy tape drive and installing Win98 on it. They read perfectly. Of course you can always pay someone to dump the data for you, but where's the fun in that?! I think any off the shelf modern tape will be just fine in 10 years.


Ha, just tried to read a Colorado tape backup from '98 (the tape might even be from '96; HP T1000 drive) and it worked like a charm. I didn't even realize it was 18-year-old storage until reading your comment.

Even at floppy speed, copying the 400MB didn't take that long, and it's lovely to hear the cute drive noise (for retro lovers).


LTO is the industry standard. You'll have no issues finding an LTO tape drive 10 years from now on ebay, and if for some reason you did, Kroll will still be able to recover it.


There is some evidence to suggest that these DVDs have good archival support, and I can guarantee you that DVD readers will still be abundant 10 years from now:

http://www.mdisc.com/


I had about 1000 DVDs in storage of backed up satellite data. At one point I had to restore the backup using the DVDs and it was about ~8 years into the mission so I was worried that the data would be no good on the earliest discs.

It turned out to be totally fine, as far as I could tell. None of the early data had SHA/MD5 sums (I added this later), so there's no guarantee the earlier discs weren't somewhat corrupted, but they were definitely readable.

Back then, I had assumed they wouldn't have that long of a life, but as you seem to have found, and I discovered, once written, optical media seems to have a very long shelf life.


Most optical media degrades almost immediately, depending on the storage environment, lasting as little as two years before starting to degrade or become unreadable (depending on the dye and substrate). The M-DISC is an archival format, designed to last a minimum of 100 years when stored under reasonable conditions, and up to 1000 years if stored in archival (cool, dark) environments.

They are essentially carving physical holes into a stone-like substrate.

The verification tests are really quite impressive: http://www.esystor.com/images/China_Lake_Full_Report.pdf


I would think you could get basic support for any device from Linux. Question is really, why not offshore the device support to someone else entirely?

Hosted backups are likely at least as reliable as anything you can do. Right?


Store both the tape and the device to read it.

PCI/IDE/SATA interfaces are so ubiquitous that even decades from now I fully expect you to be able to use them. If nothing else, via some kind of adapter.


Decades is optimistic for a tape reader or any electronics really. Caps dry out, rubber parts degrade and RoHS solder doesn't help the life expectancy. It might still work, it might not.


Anecdotal evidence: as an amateur photographer, I find SD cards are the least reliable storage. (They are flash memory, after all...)


How did this manifest?

SD cards that just weren't recognized over time?

Do you have any other details? Did you stick to expensive brand name SD cards? Did you write to them often, or read from them often? Super curious what the usage pattern was.


Yes, they're not recognised after a while.

To avoid mechanical failures I unload my files straight from the camera via USB and erase the card from the menu.

Here are some things I've noticed:

- They failed more often when I was shooting video in full HD or 4K (lasting less than a year). When I'm shooting films I can erase a card 6-8 times in a weekend.

- For photos I have cards that last two to three years, shooting in RAW and erasing the card approx. twice a month.

- At first I tried Samsung Pro, but those failed just as often. I'm now using Transcend Extreme Pro, but in terms of reliability they're the same thing as the budget Transcend, imho (the speed is better; that's the only reason I buy them). I had bad experiences with SanDisk cards.


Depends on how often you write to it


I wonder how well would it perform when used in place of laptop's system disk (i.e. with tons of small files and very random read/write access patterns).


What's the seek time? I use tiny USB keys as a live image source, even as a stateful system half the time, and it's a bit touchy. When the cache gets full, the kernel gets stuck in a very long sync, so much so that systemd starts to complain. On older (say, 2013) USB keys it will even trigger kernel error messages and lockups.
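
One mitigation that can help with the long sync stalls is shrinking the kernel's writeback window, so a slow device never accumulates gigabytes of dirty pages (values are illustrative):

    # cap the dirty page cache at 16MB, start background writeback at 8MB
    sysctl vm.dirty_bytes=16777216 vm.dirty_background_bytes=8388608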


Abysmally


Unfortunately, this has been my experience with all SD cards I've tried. Even ones sold as high-performance are just annoyingly slow to use as a root FS.

They also seem to be extremely unreliable when used with even a modest write load, often failing within a year.


As a root FS, agreed. As an expansion, I have had really good luck with the 256GB microSD card in a low-profile SD adapter on my MBP. When I'm doing screencasts or whatever, I happily dump my recordings out to it and, as silly as it is, it's one of my favorite pieces of tech right now.


On the other hand, what if you paired it with an Apple Watch running Windows 95?


This brings back memories of developing 1GB CompactFlash card firmware at a previous job (maybe 12 years ago). That amount of capacity felt unbelievably huge at the time. In fact, the first cards I worked on had only 4MB!


> (maybe 12 years ago)

I've observed in the past that storage capacity per drive/medium seems to grow at roughly three orders of magnitude per decade. I'm looking forward to my 1PB SD card equivalent in the late 2020s.


Mainly tongue-in-cheek, but still:

https://xkcd.com/605/


Heh, though in this case we have a few more data points. But of course it'll break down at some point.


My personal laptop has a 256GB SSD as OS/apps drive. I treat this as expendable.

I have replaced the DVD drive with a 1TB SSHD. This is my 'storage' drive. There's a directory with stuff I need to keep, which is all on Dropbox and backed up to CrashPlan. The rest is just 'cache' (e.g. music files I can easily replace).

Looking at newer laptops, there's no DVD drive any more, so I'm waiting for 1TB SD cards (at a reasonable price) so I can have my 'storage' drive. This is great news for me, though I may have to wait some time for prices to be reasonable.


That's nice and all, but I've had to throw away too many SD cards. I was happy to pay Apple the extra cash for a 512GB SSD.


What's the write speed on these things? It needs to be much faster than current SD card write speeds for 4K raw and high-FPS video.


"SanDisk's 1TB SD card has more storage than your laptop".

It does not. Just saying.


I think that this is the first storage device to use the 10nm silicon process. The same dies will be used to make 512GB microSD cards. Perhaps someone from SanDisk can confirm?


AFAIK there are zero sources reporting SanDisk uses 10nm for this SD card.


With portable storage like that, I really could use a computer sans the internal disk.


With on the fly encrypted rolling core OS fetched from the network.


> SanDisk's 1TB SD card has more storage than your laptop

If you take the HDD out of your laptop, the HDD has more storage than your laptop. If you put it back in and use it, then it doesn't have more storage than your laptop.

If you put a 1TB SD card into your laptop and use it, then it doesn't have more storage than your laptop.


Title is not going to be right very often.



Actually, it does. 1TB > 250GB in that particular case.

This is one of the reasons I opted for a slower but much larger drive in my day-to-day work laptop.


That 1TB drive is actually inside of my laptop, so I have >1TB in total.


Hard to tell from an image that labels it 'external'.


Whoops, didn't notice that in the screenshot.

It makes more sense in person, trust me :v


Not really.

One of the nice things about the Inspiron 3800 (and doubtless a few other models) is physical capacity for both a 2.5" SSD and an mSATA ... so you really can wander around with 2TB internal non-spinning disk.

(Good for bragging rights, but an expensive option.)


Definitely, that's why I love my T420 (got OS X on that). A nice SSD for the OS and other important stuff + a big HDD to put all the data on is still the best option, in my opinion.


No it doesn't.


Er, my laptop actually has a 1TB SSD. Am I missing something?


Url changed from http://www.theverge.com/circuitbreaker/2016/9/20/12986234/bi..., which points to this.

None of the tech website articles on the story seem to add any info over the corporate press release, so we might as well link to that.

This topic excited a lot of interest earlier today and set off HN's flamewar detector (more like an overheated discussion detector). That was too bad, so we've rolled back the clock on the original post and merged in the comments that were posted to a duplicate thread.


16GB should be enough for anybody - Google Nexus Team.


Except that the Nexus 6P came in 32/64/128 GB configurations and the Nexus 5X came in 16/32 GB configurations.


It took them until late 2013, and they still don't allow SD cards do they?

They also don't allow their apps to be moved to SD, which is especially aggravating for the crappy apps you can't uninstall on phones with small built in drives.


AFAIK Android 6.0 and up lets you 'Format SD as internal storage', and from then on it's, well, internal storage. I remember messing around with it on my Nexus Player a while ago. In that particular case, though, it did NOT handle unexpected power-downs at all well. I suspect it was a bug with remounting the encrypted filesystem on the card at next boot (where a Linux box would normally run fsck on it and then remount). I ended up doing some digging and filing a bug but never really got motivated enough to stick with it.

FYI, at least Samsung seems to be realizing their mistake, as their latest models have the microSD card slot again after having abandoned it for some time.

I also have to concur with what others have said here - I've done my fair share of Raspberry Pi and ODROID-XU4 system image tinkering and come to the conclusion that microSD cards suck big time as a general purpose read/write storage device. I wouldn't feel comfortable having my phone's internal storage running off one actually.

Now if phones came with eMMC slots.... :)


Thanks, I'll give it a try. Data loss isn't really a concern because I mainly want it for stuff I don't mind losing and can easily replace (audio books, Google Music cache, etc).


And it's now almost 2017 when Apple just made 32GB the minimum storage for their iPhone 7.


And Google is going to announce its new phones on Oct 4th. If they are still 16GB then you have a point. With the new Android update system, where the phone has two system partitions and loads the updated version on reboot, they will need more space anyway.


2GB is more than enough for a system partition. You can fit two into a 16GB phone. The only real consideration is app and media size, and google's historically been on the low end of that.



