Huh. I assumed that it would have two columns of heads on opposing corners. Twice the throughput, half the seek time.
Instead it still only has one column, but it's divided vertically to give more flexibility in positioning. I understand how that could help with random-access IOPS, but how does it double the max bandwidth as they claim?
It's still the same number of heads, isn't it? When doing sequential reads, isn't that the limiting factor?
I guess I've always assumed that HDs wrote with just a single head: moving to a specific cylinder, using a specific head, and then writing the appropriate sector (CHS). But I guess it would theoretically be no different to write the stripe across all the heads in that cylinder, though for efficiency maybe the sector size would have to expand by NR_HEADS times?
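To make the CHS picture above concrete, here's a sketch of the classic CHS-to-LBA mapping (the geometry constants are hypothetical, not from any real drive):

```python
# Classic CHS -> LBA mapping. Geometry values below are hypothetical
# examples, not a real drive's layout.
HEADS_PER_CYL = 16      # heads per cylinder (one per platter surface)
SECTORS_PER_TRACK = 63  # sectors are 1-indexed in CHS addressing

def chs_to_lba(c, h, s):
    """Convert a (cylinder, head, sector) triple to a logical block address."""
    return (c * HEADS_PER_CYL + h) * SECTORS_PER_TRACK + (s - 1)

# Note the ordering: all sectors on one head's track come before the next
# head is used. Striping across heads would instead interleave consecutive
# logical blocks across the heads of a cylinder before the arm ever moves.
assert chs_to_lba(0, 0, 1) == 0
assert chs_to_lba(0, 1, 1) == 63            # next head, same cylinder
assert chs_to_lba(1, 0, 1) == 16 * 63       # next cylinder
```

The assertions show why a single-head layout still avoids seeks for sequential reads: the head number only increments after a whole track is consumed.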
I guess this goes back to various diagrams I've seen over the years.
Hard drive tracks are really narrow. When you have one head aligned with the track on one platter, the rest of the heads on other platters will only be close to the corresponding tracks on those platters, not fully aligned.
Now that hard drives are using multi-stage actuators (arms with elbows and wrists), it's theoretically possible to align multiple heads to allow writes to be striped across multiple cylinders. But I suspect those advances have instead been used to enable even tighter track spacing (and overlapping, for SMR drives) because SSDs have forced hard drive manufacturers to prioritize density over performance.
I've long assumed we got to the point where the surface of the drive is uniform recording material, and "tracks" are just the parallel bands the disk makes by recording data. There's no difference in the platter material between two tracks, just the gap the heads leave to avoid interference. Conceptually like audio cassette tape: it's all magnetic, but the audio is stored in separate tracks.
Is that wrong? Is there some physical coating or gap in the recording media between tracks?
If that’s not wrong, wouldn’t always reading/writing the same track across multiple heads force them to stay in alignment?
I guess I always assumed all the heads were independent data streams, just forced to move physically in parallel.
You’re right that reading or writing the data in parallel across multiple platters might make more sense since the heads are always locked together anyway.
I have no idea how it actually works. That had never occurred to me.
In a traditional drive, only one head is active at a time. In these drives only two heads are active. I don't think multiple heads on the same arm can be used at the same time due to micro-misalignment of the tracks.
Well that would explain it. I've been trying to Google it, and everything I've found matches what you're saying. Thank you.
You'd have to make sure that your data is always spread across both sets of platters to take advantage of the double throughput, though. You couldn't get 2x throughput if it was all on platter 1.
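The placement point above is just arithmetic, and can be sketched like this (the per-actuator bandwidth figure is a made-up example, not a spec for any real drive):

```python
# Effective sequential throughput of a dual-actuator drive, as a function
# of how the data is split across the two actuators' platter sets.
# PER_ACTUATOR_MBPS is a hypothetical number, not a real drive spec.
PER_ACTUATOR_MBPS = 250

def effective_throughput(mb_on_set_a, mb_on_set_b):
    """Both actuators stream concurrently, so total time is set by the
    actuator with the larger share; throughput = total data / that time."""
    total_mb = mb_on_set_a + mb_on_set_b
    time_s = max(mb_on_set_a, mb_on_set_b) / PER_ACTUATOR_MBPS
    return total_mb / time_s  # MB/s

assert effective_throughput(500, 500) == 500   # evenly split: full 2x
assert effective_throughput(1000, 0) == 250    # all on one set: no gain
```

With an even split the two actuators finish together and you see double the bandwidth; with everything on one set, the second actuator sits idle.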
Would the misalignment issue you mention exist if we were always using multiple heads? If things were ALWAYS done in parallel across the heads, wouldn't the tracks always have the same micro-alignment as well? Wouldn't the problem only exist if a track on platter A and a track on platter B were written separately?
IIRC, hard drives have had the problem of thermal expansion vs track density for decades now.
I helped build a small recording studio around the turn of the century, and at that time "AV rated" drives were a thing that demanded a premium price.
These differed from other drives in that they were capable of continuous use, whereas other drives might briefly go out to lunch periodically to re-do their thermal calibration.
This was considered important by some, since it was critical that the process not "miss" any data due to the hard drive being late to the party during any part of a recording or mixdown session. These were once new potential problems in the recording space (previously, we used tape -- a linear medium for a linear event).
We don't talk about tcal these days and terms like "AV ready" have dropped from the data storage lexicon.
I can only assume that it ceased to be a practical problem somehow. Maybe tcal happens fast enough now that any writes can be caught in cache, or maybe tcal happens invisibly as a continuous process, or maybe data rates for spinning rust have improved enough that we're no longer so close to the edge of hardware performance that it ever matters, and/or maybe operating systems and recording software have improved enough that it's just not an issue worth discussion.
(These days, cutting a music track of ridiculous complexity on a singular MacBook is a complete snoozefest for the hardware. But it hasn't always been this way...)
I assumed that too. Probably cheaper and more space efficient to have just one actuator position in the HDD enclosure though and they split it vertically.