Backblaze Storage Pod 5.0 (backblaze.com)
338 points by ingve on Nov 10, 2015 | 216 comments



Interesting. With this information and some idea of what they pay for electricity, support staff, drive replacements, and maintenance, you could probably figure out their break-even mark.

Their cost for the pod (which they say includes labor) comes out to $0.044/GB. The cost of redundancy is 3 of 20 drives, which brings the cost to about $0.0518/GB of usable data. They are planning to charge $0.005/GB/month on the B2 service this is made for. That's 10-11 months to break even on the initial cost. Add electricity + maintenance costs (of which I have no idea what they are) and you get the true break-even number. My guess is that it'd be 1 year. So however long each pod lasts past 1 year is gravy (aside from the ongoing electricity + maintenance costs).
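If you want to play with the assumptions, the arithmetic is easy to script (a rough sketch; plug in your own electricity/maintenance guess):

    # Back-of-the-envelope break-even using the numbers above.
    pod_cost_per_gb = 0.044        # $/GB raw, includes labor (per the post)
    parity_fraction = 3 / 20       # 3 of 20 drives are redundancy
    b2_price = 0.005               # $/GB/month

    cost_per_usable_gb = pod_cost_per_gb / (1 - parity_fraction)
    print(round(cost_per_usable_gb, 4))             # 0.0518
    print(round(cost_per_usable_gb / b2_price, 1))  # 10.4 months to break even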

Amazon S3 must be making a killing. As a customer you must be paying for the cost of the "pod" every 2 months.

Please excuse me while I buy stock in Amazon/Box/Dropbox/Backblaze/etc....

----

As an aside: as a personal-backup user, if you store over 1TB you are getting storage at a cheaper cost than B2 users. If under 1TB, you'd be better off using B2.

Seems to me that the average storage per user must be well under 1 TB on their unlimited plans.


Excellent analysis. There are some other costs involved (bandwidth, switches/cabinets, non-pod servers, etc.), but pods & drives dominate the costs, so your math is good.

You're also right that 1 TB is the approximate breakeven between Backblaze per-GB cloud storage & unlimited online backup by cost and most people are below that. However, the main reason for people to use the backup service is that we take care of all the backup functions (encryption, dedup, compression, restores, etc.)

> Please excuse me while I buy stock in Amazon/Box/Dropbox/Backblaze/etc....

Alas, Backblaze stock isn't yet available for sale ;-)

BTW, thanks for being a customer!

Gleb


> Alas, Backblaze stock isn't yet available for sale ;-)

YET...or is it? /runs over to SecondMarket.


Any chance you're thinking about a B2-based backup client for Synology or the like? I'm eager to back up those sorts of things to Backblaze without going through the iSCSI route Marco is taking.


One thing to keep in mind is that Backblaze keeps 1.15 copies of your data, while Amazon S3 and the like keep multiple full geographically redundant copies. For example, Microsoft Azure Cloud Storage keeps 3 to 6 copies of your data, depending on the tier [1]. When you compare that to the 1.15 copies in Backblaze B2, you start to understand where the pricing difference comes from.

For most use cases this doesn't matter and the price savings are well worth the reduced redundancy, but Backblaze B2 isn't well suited to being the canonical store of your data.

[1] - https://azure.microsoft.com/en-us/pricing/details/storage/
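For the curious, the "copies" number falls out of the erasure-coding parameters: Backblaze's vault architecture post (linked elsewhere in this thread) describes 17 data shards + 3 parity shards, i.e. roughly 1.18x raw storage per logical byte, versus whole multiples for full replication. A quick sketch:

    # Raw bytes stored per logical byte: erasure coding vs. replication.
    def overhead(data_shards, parity_shards):
        return (data_shards + parity_shards) / data_shards

    print(round(overhead(17, 3), 2))  # 1.18 -> Backblaze-style 17+3 Reed-Solomon
    print(overhead(1, 2))             # 3.0  -> three full copies (e.g. Azure LRS)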


> full geographically redundant copies of your data

In the case of Amazon, I wouldn't be so certain that they're geographically redundant copies. I raised this issue on HN a few months ago and someone replied that within an availability zone the data is "a bunch of (datacenters) that are close to each other". https://news.ycombinator.com/item?id=10231677

I like how for GRS Microsoft specifically says "a second datacenter hundreds of miles away". I have yet to find such clear statements from Amazon.


Amazon plays games with terms like "durability", meaning that the data exists. That doesn't mean that it is accessible by you or available in its entirety.

Microsoft is definitely more straightforward in its claims. O365 for my org was in the Midwest, Mid-Atlantic, and Southwest, giving us a really good continuity story.


Around here in the talk: https://youtu.be/JIQETrFC_SQ?t=14m38s James Hamilton talks about precisely what the definitions of AZs and regions are :)


Given block-level dedupe, Microsoft might have 6 copies of your data, but each block is shared between 50 users in each region. Adding a 51st person calling on that block costs nothing.
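The usual mechanism behind that sharing is a content-addressed block store; a toy sketch (illustration only, not a claim about Microsoft's actual implementation):

    import hashlib

    blocks = {}      # content hash -> block bytes, stored once
    refcounts = {}   # content hash -> number of users referencing it

    def put_block(data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key not in blocks:            # only the first uploader adds bytes
            blocks[key] = data
        refcounts[key] = refcounts.get(key, 0) + 1
        return key

    put_block(b"popular shared chunk")   # user 1 pays the storage
    put_block(b"popular shared chunk")   # user 51 just bumps a refcount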


Plus depreciation on equipment. While the drives are probably long gone after 5 years, it's possible the racks could survive much longer.


This seems obvious, as the largest MacBook or MacBook Air one can buy is 512GB.

Also, the B2 service doesn't come with a backup client, AFAIK.


It's astonishing that Backblaze would release its drawings as open source. Good for you!

But there's no explicit license declaration anywhere in the drawings zipfile and in fact some of the drawings claim "PROPRIETARY AND CONFIDENTIAL THE INFORMATION CONTAINED IN THIS DRAWING IS THE SOLE PROPERTY OF BACKBLAZE MANUFACTURING. ANY REPRODUCTION IN PART OR AS A WHOLE WITHOUT THE WRITTEN PERMISSION OF BACKBLAZE MANUFACTURING IS PROHIBITED".

Maybe it's just boilerplate? Or maybe you really don't want to face competition in your specific industry. Regardless, an official open source license (even with commercial restrictions) would be much clearer than this.


Yev from Backblaze here -> That is boilerplate for the most part, but it's open source, have at it!


Ok, I will email my attorney that "Some guy named @YevP on HN told me it was okay." ;)


Having sold a company to IBM, I can attest that this issue would be a Big Problem. {Edit: I mean for any company that used Backblaze diagrams labeled "proprietary."}


We are talking about the company that got a license to use JSON for evil.


They should have sold those (framed of course) to raise money for the JSON folks (expenses and such).


A big problem for BB if they were to have open source-licensed tech and got acquired by a megacorp, or a big problem for a "kinda-sorta-licensee" of their tech once acquired?


A problem for a startup using these documents without a clear license.

It would not be a problem for Backblaze being acquired.


It was a joke, emphasis on IBM "Big" stuff...


That was the pun, yes, but my comment was about a real issue.


Why, are you planning on reproducing the drawings? Copyright on the drawings has nothing to do with producing the objects described therein. You don't need a license to cook from a recipe.


Sounds legit to me...


I think you are missing the point. What you say here (in HN comments) doesn't matter at all. You should fix the license accordingly - or not, if this suits you better.

I agree with the parent though: if I were in business and wanted to use your drawings, I wouldn't touch them with such a license. Even for a personal hobby project I would bug you for (official & written) permission before using any of the diagrams.

Other than that, it is nice reading about the technical decisions that go into building such racks. Kudos for sharing the knowledge!


I know, just funnin' :) We're taking a look at it and will update the post/docs accordingly once we figure out which license means less work for us :)


What license is it under?


There's no trade secret here, since the drawings are published. You have the right to build anything from the drawings, provided the designs aren't patented, which is another issue entirely.

Copyright here only covers the actual drawings of the product. By default, the author holds the copyright. That means you can't publish your own book with those drawings, for example.

But to actually build the chassis, go for it.


"Think free as in free speech, not free beer."

It's not under any license; it's just 'free'. We've talked about putting the Storage Pods under a license, and may at some point, since we get this question periodically, and picking a license that people are familiar with may make it clearer that we really are giving away the design.


Seems like the Creative Commons licenses were designed for this use case: easy to understand formalizations of just what rights if any are reserved. Maybe pick one of those?


I might be wrong, but I'm fairly certain you have rights that you need to explicitly give up in the form of a license in order for it to be "free".


It's a list of commodity parts and some drawings. If you use this information to build your own pod, who would know and who would care?

If you start selling them as your own product, maybe that would be an issue.


I was wondering about this too. Lists of commodity parts might be treated like recipes, which are not subject to copyright per http://www.copyright.gov/fls/fl122.html, but the drawings are a different matter.


That quote refers to something very specific and you may not want to align yourself or your thoughts about licensing with that movement.

The creative commons and OSI are two excellent places to start looking at licenses. creativecommons.org and opensource.org


Not trying to align with any movement. Just saying that we give the design away. Appreciate all the feedback and we'll re-look at adding a license.

To be clear on our philosophy - we don't want to restrict people from building these for their own purposes, modifying them, giving away the design, selling the boxes, etc.


I'd suggest using the following Creative Commons license:

http://creativecommons.org/licenses/by/4.0/


Great work Backblaze! I really appreciate the open, knowledge-sharing nature of your hardware designs. This is one of the main reasons I am a Backblaze customer. I must say, however, that the lack of an open-source, headless Linux client is seriously pushing me away; it's something many people have been waiting years for, especially given that the licensing model is designed to back up a single machine only.


Yev from Backblaze -> We're working on it, in a fashion. With Backblaze B2 you'll be able to push from a Linux machine using CLIs/APIs and get those computers backed up as well! Slightly different, but it's the best way for us to help Linux users!


It is probably unfair to ask, but is there a way to get an early invite for B2 (I already signed up but can't contain my excitement; email: HN username at gmail)? Also, have there been any comparisons of B2 with Amazon Cloud Drive's unlimited-everything plan for $59 a year (break-even compared to B2 at around a TB)? They do provide an API, although I have yet to try it out.


Worth pointing out (for those of you not in the storage game) that these appliances, while cool, are pretty far from state of the art. We're (Dropbox) packing up to an order of magnitude more storage into 4U. And costs aren't close, let alone the addressable throughput given that you guys just went to 10G.

Serious kudos though are due for the Backblaze blog. You're all at least talking about what you're up to, which is awesome both for education around building large storage systems and establishing transparency with your customer base.

Unfortunately, a lot of the bigger clusters' teams won't/can't talk about what exactly they're doing and how they're doing it for trade secret reasons. If you want to learn about it, you have to come work for one of us. :-)

Also, the orangered hardware looks awesome.


> Worth pointing out (for those of you not in the storage game) that these appliances, while cool, are pretty far from state of the art.

So basically you decided to go into this thread, post something that isn't possible for anyone here to corroborate, and call the original post "far from state of the art"... to what end? What was the purpose of your post? Just to toot your own horn? I don't get it.

If you want to talk about how awesome or state-of-the-art your company's setup is, then great, do that. But your actual post? It just comes off as childish, like someone showing off that they can do something while you're the person who has to stand between them and the people watching to say you can do it too, and better. Surely we've all seen those very awkward moments in sitcoms.

> We're (Dropbox) packing up to an order of magnitude more storage into 4U.

Are you sure? They can pack up to 180TB into 4U, so you're saying Dropbox can pack up to 1,800TB into 4U?


I don't work at Dropbox, but I do work in storage. I've seen similar systems pack ~250-500TB into a 4U box. 60 drives x 6TB drives = 360TB. Not sure how you would get anywhere over 1000TB with today's spinning drives. There is a vendor linked in Backblaze blog post (45drives) which sells 60 drive enclosures, which they somehow list at 480TB but I'm not sure how the math works out there. However, keep in mind that usable capacity is going to be less. Erasure coding has significantly less overhead than traditional raid, but there is still a cost (depends on chosen protection set).

http://www.45drives.com/products/storage/xl60.php


Supermicro has a 72-drive server and a 90-drive JBOD.


We used these and affectionately called them "the mullet": business in the front, party in the rear.

http://www.supermicro.com/products/chassis/4u/847/sc847de26-...


> packing up to an order of magnitude more storage into 4U

Where are you buying 40 TB hard drives? I'd love to get some ;-)

We actually don't claim these are 'state of the art', more 'state of the inexpensive'. You can buy faster performance storage servers, more redundant servers, etc. But I believe these are one of, if not the, lowest cost storage servers per GB. (And of course, open source.)

BTW, I'm a fan of Dropbox (use it personally) and, in general, think there is a ton of value in developers, organizations, and companies sharing what they've learned about storage. The better and lower cost it gets, the more interesting things other people can do with it, the bigger the industry, and the more everyone benefits.

(full disclosure - cofounder of Backblaze here)


Hi Gleb!

> Where are you buying 40 TB hard drives? I'd love to get some ;-)

Me too. :-) Not 40TB, but larger than 4T and quite a few more of them.

> But I believe these are one of, if not the, lowest cost storage servers per GB.

Yep, I meant price per GB as well.

> I'm a fan of Dropbox (use it personally)

Awesome, thanks! I've not used Backblaze personally, but I know some friends who do and they're happy customers. Your guys' blog and rapport with the technical community is amazing.


I can't edit the parent anymore, but it has been fairly pointed out that my tone of piggybacking on Backblaze's great blog post is a shameful bit of thunder-stealing. The technical community owes a great debt to them for their transparency about building storage clusters.

I get excited about large storage systems since that's what I've been focusing on for the last few years, and my enthusiasm overran my tact/prudence. Sorry Backblaze.


That statement carries zero value without any proof. Backblaze is doing an excellent job documenting their process, and shouting 'we are doing 10 times better', 'costs aren't close', and 'far from the state of the art' is pretty lame if you're not willing to back those claims up. I'm a big fan of Dropbox, but this was below the belt.


Hmm, maybe I was being tone deaf?

I've repeatedly tried to state in subsequent children in this thread that it seems as though Backblaze made the right set of tradeoffs given their previous gen, and their time to recoup their investment in this, etc.

I was also (somewhat selfishly, I suppose?) trying to tap into the excitement of a group of readers who were thinking about big storage, and let them know that the possibilities are even greater given greater investment.

The responses I've been getting (including yours) lead me to believe that my comment is being seen as a slight against the set of tradeoffs that Backblaze has made, and that really wasn't how I intended it.

Edit: I do disagree about it providing zero value, though. We've had a good subsequent exploratory discussion about ways to optimize storage density and (to some degree) cost. And I hope we're not at the point where everything said on the internet demands "proof"; I'm not sure a blog post would provide that, either!


The way to tap into that excitement is by making a blog post of your own highlighting what makes DB unique and special and how you do your thing, not by making disparaging comments about companies active in the same space. I'm sure your post would get just as much or even more attention as this one.

It's very easy to do what you just did and quite hard to do what Backblaze did. Essentially, if you're not willing to do what Backblaze did here the graceful thing would be to either stay quiet, simply compliment them on their achievements or, and most useful, to be specific about what could be improved.


Okay, fair point. Given empirical evidence wrt my blogging discipline (and the fact that I probably can't), it's unlikely that blog post will ever happen.

I'll leave this thread alone because there's some useful technical information in here, and amend with an apology.


> these appliances, while cool, are pretty far from state of the art. We're (Dropbox) packing up to an order of magnitude more storage into 4U.

Blog post or it didn't happen :)

Seriously though, there seems to be some really cool engineering at Dropbox, first with the Go usage (https://twitter.com/jamwt/status/629727590782099456) and now with your storage capacity. We're all craving more infrastructure details, to see what you consider state of the art and how you use the different pieces of software and languages!


What order of magnitude? There are 60-drive chassis in 4U you can get [0][1], but even a 4TB -> 6TB drive swap + 15 more drives isn't exactly an order of magnitude.

And they have two 10Gb ports, though I'm sure you can work with them and get more.

It's disingenuous to call this not "state of the art" when it's really quite close, and it obviously meets a Backblaze design goal.

[0]: http://www.newisys.com/Products/4600.shtml
[1]: http://www.aicipc.com/ProductDetail.aspx?ref=RSC-4H


> What order of magnitude?

Well, this article describes storage pods with 180TB of storage.

HGST have a 10TB hard drive measuring 26.1mm * 101.6mm * 147mm [1] and a 4U chassis measures 178mm * 437mm * 521mm [2]

If you had 4U of hard drives with no space for power, cooling, interconnect etc., you could fit 10 terabytes / (26.1mm * 101.6mm * 147mm) * (178mm * 437mm * 521mm) = 1.04 petabytes into 4U.

I don't know whether six times counts as a single order of magnitude, let alone multiple orders of magnitude, and the 10TB hard drives I linked are "enterprise" so there might be a substantial price premium.

Samsung claim to have a 16TB 2.5" SSD in the works [3] - although details are pretty scarce at the moment. If its dimensions match normal 2.5" SSDs at around 100mm * 70mm * 7mm, that would give 16 terabytes / (100mm * 70mm * 7mm) * (178mm * 437mm * 521mm) = 13.2 petabytes in 4U.

Of course, the assumption that you could use 100% of the space in a chassis for drives is pretty unrealistic. And nobody's seen the Samsung SSD working - could be a 16TB sticker on a 1TB SSD for all I know :) Filling it with 4TB SSDs would give you 3.35 PB, and it actually seems possible to get those [4].

If you don't care about $/TB, it's probably possible to get density a single order of magnitude higher.

[1] http://www.hgst.com/products/hard-drives/ultrastar-archive-h...
[2] http://www.supermicro.co.uk/products/chassis/4U/842/SC842TQ-...
[3] http://arstechnica.co.uk/gadgets/2015/08/samsung-unveils-2-5...
[4] https://www.sandisk.com/business/datacenter/products/flash-d...
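The volume arithmetic above, scripted so you can swap in other drives (a rough sketch with the same numbers; packing, power, cooling, and interconnect ignored):

    # Drive volumes that fit in a 4U chassis volume, ignoring packing losses.
    def max_tb(drive_mm, drive_tb, chassis_mm=(178, 437, 521)):
        dv = drive_mm[0] * drive_mm[1] * drive_mm[2]
        cv = chassis_mm[0] * chassis_mm[1] * chassis_mm[2]
        return (cv // dv) * drive_tb

    print(max_tb((26.1, 101.6, 147), 10))  # 1030.0 -> ~1 PB of 10TB HGSTs
    print(max_tb((100, 70, 7), 16))        # 13232  -> ~13 PB of 16TB SSDs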


> HGST have a 10TB hard drive

"Archive drive" => shingled writes. Are people really using these things for online workloads? Might make sense for Backblaze's original "write and forget" backup workload but not very well for B2 or Dropbox


A 96 bay 4U chassis is readily available http://www.raidinc.com/products/object-storage/ability-4u-96... but that's still not a magnitude.


A couple years ago I found out Google was already using 8TB+ drives in production before they hit the wide retail distribution market, so I would reckon that many companies with deep pockets and inside vendor relationships can get access to some very high-density disks that folks like Backblaze may not be able to get yet. There are also possibilities for better density through solid state that may not make much sense on the raw $/GB basis that drives Backblaze, while tiered storage may be essential for how Dropbox operates. Also, I imagine it'd be possible to centralize and strengthen cooling while shoving a bunch of disks into a 4U box to increase raw drive density. For example, you could use a variant of Peltier cooling to chill drives together off a single metal plate, then repeat a few times in a chassis.

So in short, I can think of improvements but most would come at the cost of higher capex to do so, and that could negatively impact the $ / GB numbers that Backblaze is trying to keep down. Perhaps there's not much consideration for leasing datacenter real estate or power costs or something into the equations but without some reliable numbers it's all speculation to me.


Heh, any place where I have to request a quote before they show me a price makes me nervous :D


It should if you're thinking of buying one-offs. But at bigger scales like yours, you can either invest in a crew to design chassis, or in purchasing people to deal with other salespeople.


That's just a storage enclosure. You'd need a separate server with SAS connectivity to use it. It still may be worth it $/GB-wise depending on cost.


They have an Intel Atom add-on module thingie, but the CPU can only really drive 2x4Gb best case; almost all of these will need separate software running on Xeons.


Intel Atom is an odd choice, but I like the Ethernet choice.

Gonna look more into how they lay the disks out. I wonder what the serviceability of the drives is.


Oh nice, I hadn't seen that! Need to look more at its data sheet.


Maybe his definition of an order of magnitude is different from anyone else's? I'd like to hear how many drives and what size he's talking about.


You can do a lot more than 60, and you can get a lot bigger than 6T.

> It's disingenuous to call this not "state of the art" when it's really quite close, and obviously is meeting a backblaze design goal.

So, I never said it didn't meet their design goal; it looks like quite a nice system, and it's better than their previous gen. Looks like they're doing things right, looks like solid engineering.

But I stand by the statement that, in the absolute sense, this is not quite the current benchmark on density or $/GB in the industry. That was the only point I was trying to make to the HN audience in general, and it's not intended as a critique of Backblaze specifically.

But there are very different amounts of resources (I'm assuming) invested in achieving those benchmarks than Backblaze can likely commit.

Ergo, Backblaze probably completely made the right decision for Backblaze. Both of these things are possible.


> you can get a lot bigger than 6T

Come on - most companies reasonably can't. There's a WD 10T SMR/helium drive, and Seagate is selling that crap 8T video drive. Unless you're talking SSD 3D/TLC density (more $), or the vapor-ware 20T drives that some storage vendors hint at but won't let me file a PO for, I literally don't know what you're referring to (now I'm getting to where I can't name names).

The reason I'm complaining is that you don't state:

- what the current state of the art is that they're not meeting

- what they could do better with their design goals given that information.

You merely state that it's just not state of the art, and to come work at dropbox. This is what I mean by being disingenuous.


> You merely state that it's just not state of the art, and to come work at dropbox.

That's about all I can do. However...

> This is what I mean by being disingenuous.

I don't think disingenuous is what I'm being. Maybe I'm not being as forthcoming as you'd like. Maybe my comment wasn't useful without details I'm not providing. But I'm not lying or misrepresenting anything.

So let me try to be helpful without using proprietary information. Using information people have unearthed in this thread, we can do some math to prove I'm not completely bonkers:

    96 unit chassis * 10T SMR = 960T, which is > 5x the density in 4U
SMR $/GB is quite good if you work with the right vendor and make the right tradeoffs.

If you could manage that 960T with the same compute resources, you'd save a lot on compute, since that's currently ~30% of their cost.

With custom enclosures and hardware, you can do more still. You can have deep relationships with vendors and have them customize firmware for your particular use case. You can take their discards that didn't quite meet spec. You can buy crazy cheap stock with high failure rates b/c your distributed system is insanely good at repairs. You get access to stuff that's not being sold yet.

However, let me state again--this would take a lot of work. You need to mold your software around your hardware profile with very small tolerances. You might need to move some I/O stuff in-kernel, and/or move the IP stack out of kernel. You might need to reinforce your DC's raised floors to handle the weight (spoiler: you definitely do unless you planned for these types of machines ahead of time). You need great operations, and really thorough hardware quals b/c your failure domain is huge. Network traffic will go through the roof if a box fails, since you're repairing so much data.

Dozens of people on a handfuls of teams will spend time attacking this problem from different angles.

So, again--I don't necessarily think Backblaze did anything wrong here. Doing all this might be a crazy idea for them and not worth it. My statement was: it's not state of the art in terms of $/GB or density.

But some shops spend so much on storage that they can afford to invest really, really deeply in optimizing storage cost. That's where the state of the art is.


Interesting, and thanks for the enlightening comments. I proposed similar things to a few people, especially the partnership with hardware development, working around faulty (cheap) stuff, custom firmware, and kernel I/O modifications. I expected each of these in a high-end solution.

My architecture didn't have an IP stack at all: guards on the front end acted as the IP gateway while internally using a much simpler protocol that still worked over Ethernet. Inspired by Active Messages and FLIR protocols of the past. TCP/UDP/IP is unnecessarily complicated, while simpler message passing is easy in software and in FPGA hardware for offloading onto cheap FPGAs. A side note: using one or more I/O coprocessors, like mainframe Channel I/O (see Wikipedia), to get huge throughput might be cost-effective depending on the hardware selected, especially if using an embedded SoC for I/O offloading.

Note: Octeon II's can do many Gbps of line-rate processing for three digits and much simpler internally than a lot of stuff.

The SASD project for secure hard disks also suggested putting logic in the HD or SATA controllers rather than the system. Made me wonder about putting I/O management and most storage logic in custom PCI cards with lots of SATA connectors. Basically, embedded systems with enough CPU to manage whatever you throw at them, running just what's good for that layer, at less watts/money than whole servers. Seems like at some point one could port that layer to straight hardware for great performance/watt, although with high initial NRE. eASIC can do that relatively cheaply, though, with the Nextremes, if a design is FPGA-proven.

That was just exploration on paper so I'm not sure how practical it would be. My focus was high security systems and storage which necessitate better isolation. Hence, PCI cards rather than servers. Interesting that there was a lot of overlap, though, between what I thought made sense and what you all ended up doing. Very interesting. Guess I'll keep proven tactics in mind in future designs.


Wow, very cool!

So, a candidate for offloading that's really commonly considered is "scrubbing": essentially, protecting against passive bit flips by checking your data against your own checksums.

If you have some N TB of data under management, it's really expensive to be striping these large serial reads all the way to an application's userland. So finding ways to keep these as low-priority "background" reads (that always yield to interactive reads) in the scheduling sense that a.) notify the daemon in userland if they fail and b.) without requiring that daemon to be in the data plane, is high-reward. Ideally, the daemon can stay in the control plane so it can manage/report accounting on scrub scheduling and time since last-scrub per disk region, or whatever.

You can offload it to a kernel module to avoid `copy_to_user` (if you can't DMA) and/or context switching. Or even offload it to hardware--some custom host adapter, possibly, using custom ATA/SCSI commands to control it and query it. (and the `sg` driver).
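A minimal userland sketch of that scrub loop (illustration only; load_crcs and requeue_repair are placeholder names, and a real system would push the reads down the stack as described above):

    import time, zlib

    CHUNK = 4 * 1024 * 1024  # scrub granularity

    def scrub(path, expected_crcs, pause=0.05):
        """Low-priority pass over a device; yields indices of corrupt chunks."""
        with open(path, "rb") as f:
            for i, want in enumerate(expected_crcs):
                data = f.read(CHUNK)
                if zlib.crc32(data) != want:
                    yield i           # tell the control plane; data never leaves
                time.sleep(pause)     # crude "yield to interactive reads"

    # for bad in scrub("/dev/sdb", load_crcs()): requeue_repair(bad)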


"So, candidate for offloading that's really commonly considered is "scrubbing". Essentially, protecting against passive bit flips by checking your data against your own checksums."

"Or even offload it to hardware--some custom host adapter, possibly, using custom ATA/SCSI commands to control it and query it. (and the `sg` driver)."

Now you're thinking. Chips like the Octeon already do hardware acceleration of checksums and possibly other I/O boosters. So did mainframe I/O. There are many implementations for FPGAs and ASICs. IBM's Chipkill technology, with similar schemes implemented in Oracle SPARC CPUs, does something similar for RAM in combination with ECC. So an ASIC with several SATA connectors and logic for this should be able to do it. It might handle, partially or totally, the protocol for processing the data, too.

Wanted to see if I was really thinking ahead by Googling for academic papers. If it's a good idea, there's usually a few people that have done it for years already. Turned up some hits, including on your Reed-Solomon.

FPGA-accelerated decision-tolerant coding for reliable distributed storage https://dl.acm.org/citation.cfm?id=1763276

FPGA-accelerated, flash store for analytics http://people.csail.mit.edu/wjun/papers/fpga2014-wjun.pptx

DB acceleration http://www.cse.buffalo.edu/~vipin/papers/2010/todd1.pdf

Filesystem in GPU. Shows GPUs or DSPs can be used for this. Probably Adapteva's chips, too, especially since they might negotiate volume deals. https://people.csail.mit.edu/idish/ftp/silberstein13asplos-g...

Tilera chips were used in a 100Gbps NIDS, as I recall. I expect your protocol and algorithms to be simpler. ;) http://www.tilera.com/products/

So, it goes to show that custom, FPGA, or better-suited chips can handle that part at great performance per watt. Cost varies with implementation strategy. Nonetheless, simple functions over a streaming data model greatly benefit from this approach.


If you've got custom drives with custom firmware, why does the scrub need to leave the drive? Couldn't they have a command that kicks off the scrub in-drive, with only a list of failed blocks returned?


Are we talking about firmware as drivers executing on the host, or the microcontrollers on the drives? My proposal works in an adapter between the two, or could replace the HD microcontroller if you had a willing vendor. Doing it on the host eats host CPU, requiring more powerful, more energy-hungry processors given the number of drives and the amount of data.

The idea is to move straightforward, possibly parallelizable algorithms off general-purpose, legacy CPUs onto cheaper, more efficient ones better suited to the algorithm. It wasn't uncommon for FPGAs to get an eightfold performance boost on stuff like this in various experiments.

In my field, high-assurance INFOSEC, the main advantage is the use in inline media encryptors. I really just knocked off the specs and ideas of NSA's Type 1 IME, given it was the best:

http://m.nsa.gov/ia/programs/inline_media_encryptor/

Also, the Orange Book required full mediation, and even the SCOMP system mediated hardware I/O. Copying these lessons from the past made my design immune to DMA, firmware, and SMM attacks that were later discovered. Like above, offloading gets a performance boost because the inline I/O unit handles interrupts, prevents many cache flushes, and supports hardware accelerators. Also, keygen and key storage stay off the main server. Fun times.


So you are using SMR drives. Wow, good luck with that. All the flash front-end caching in the world won't improve the miserable performance of those drives. I guess that's the only way to justify moving out of AWS?

I'd really like to see some usage stats on your SMR clusters because so far the universal verdict is that they're terrible for anything other than WORM.


Host Managed (or Host Aware) SMR disks are fine. You need to do your own zone management using ZAC/ZBC, and throw out your filesystem. The disks all have a conventional section near LBA 0 for metadata management (e.g. indices).

Usual tradeoff--more work in software, but far less $.
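A toy model of what that zone management looks like (illustration only; real host-managed disks are driven with ZBC commands like REPORT ZONES and RESET WRITE POINTER, e.g. via libzbc):

    class Zone:
        def __init__(self, size):
            self.size, self.write_ptr = size, 0   # sequential zones are append-only

    class SMRDisk:
        def __init__(self, nzones, zone_size):
            self.meta = {}                         # "conventional" region near LBA 0
            self.zones = [Zone(zone_size) for _ in range(nzones)]

        def append(self, z, data):
            zone = self.zones[z]
            if zone.write_ptr + len(data) > zone.size:
                raise IOError("zone full: open a new zone instead of rewriting")
            off = zone.write_ptr
            zone.write_ptr += len(data)            # writes land only at the pointer
            self.meta[(z, off)] = len(data)        # index lives in the conventional region
            return off

        def reset_zone(self, z):                   # analogous to RESET WRITE POINTER
            self.zones[z].write_ptr = 0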


Ah good, now we're talking specifics. Does REPORT ZONES not point to this conventional section always? I cannot wait for a FORMAT ALL ZONES or something command to control the number/amount of conventional sections at the sacrifice of capacity, though it looks like it's not coming.

The power guarantees in the ZBC spec (as of r03) seem not to be honored by some drives. Writing less than a full zone seems to be a bad idea.


> Ah good, now we're talking specifics.

Well, specifics within a hypothetical. Hypotheticals are always game to discuss. :-)

> Does REPORT ZONES not point to this conventional section always?

Yep, REPORT ZONES includes the conventional zone.

> I cannot wait for a FORMAT ALL ZONES

I agree that would be nice, but there are no indications that anyone but the vendor will control conventional vs. sequential layout, sequential sizes, or maximum # of explicitly open zones, anytime soon (absent asking for special favors for a large batch order).


SMR drives may have poor write performance, but it's still good enough. Read performance is good. Anyway, I believe the network is the bottleneck, not the drives.


That fits the WORM profile that I suggested. During normal operation, the disk IOPS is the bottleneck. The network only becomes the bottleneck during repair/failure states. Even scrubbing falls under disk IOPS and not network. At least that's what I've found in EC clusters. Replicated clusters might be different.


I, for one, believe what you are saying. There have been murmurs for several years that big outfits like Dropbox/Amazon/Google have moved to use some very fancy storage tech.

Just to ponder the available confirmed info - let's add Samsung's 16TB SSD to a 48-disk 1U chassis and multiply that by 4...

I'd say just with a lot of cash and stuff you can buy now, you can pass that order of magnitude pretty easily. If you make your own... I can easily see the densities knocking Backblaze's pod out of the water.

As for costs: massive orders, preferred rates, custom designs and symbiotic relationships with manufacturers, coupled with a software architecture that complements all these hardware gains, could conceivably bring costs down to levels unheard of outside the big cloud service providers.

The conditions and technology available right now put what you are saying within the realm of the 'adjacent possible' if you have enough money to throw at the problem. With the kind of clout Dropbox et al have, I can well believe you're getting incredible storage densities at incredible knock-down prices.


> up to an order of magnitude more

> 5x the density

So half an order of magnitude, which is much more believable?


> an order of magnitude

Many years ago I liked to say that, and a friend would call me out. So I would promptly change my statement to "a binary order of magnitude".

> 5x the density

Maybe he should be saying "a quinary order of magnitude"? :)


> 5x the density

> so half an order of magnitude

Almost 70% of an order of magnitude, actually (log10 5 ≈ 0.7).


Nope, that was just for this example; custom enclosures and larger disks can push quite a bit higher than this.


> Seagate is selling that crap 8T video drive

Slightly off-topic but could you go into some detail on what you think the drawbacks are for the Seagate 8TB SMR drive? I've been thinking about getting a few for large files while keeping them online-accessible, I've found it difficult to get the thoughts of grizzled IT folks on the matter :-)


I would like that info as well.

I couldn't find any reliable data, but I did find a lot of reviews that said the 8TB drives tend to die within a few months; I'm holding on to the 6TBs for now.


I'd say it's more that Dropbox is getting nervous.

Dropbox wants all this trade-secret nonsense.

And everyone else gives them their data.

Which they promise they won't read.

Give me a break. BitTorrent Sync and Backblaze pods in-house for the win.


You can fit 72 x 8TB pretty easily into 4U with off-the-shelf hardware.

With custom-built 2.5" SSD enclosures you could probably fit more in the same space.


Can you link to that off-the-shelf hardware? I'm curious.



Surprised at the layout of the drives. Would make it a huge pain to actually replace failing disks - but I like that they put a decent Xeon on there.

The connectivity to the drives is really quite bad, but now I'm just being picky. Thank you for the link, really appreciate the follow up!


This is the Quanta version, which is a bit better from a build standpoint: http://www.qct.io/Product/Storage/Storage-Server/4U/New-Quan...


Hey! Yev from Backblaze here -> If you're still reading the thread, sorry things ended up getting a bit heated in the comments here. We like Dropbox! Our hope is that more and more companies will realize that it's a lot more entertaining to open up and chat about this stuff. Hoping Dropbox joins in soon! It'll make for a lot of interesting conversation. Either way, hope you don't feel too put out!


You're running 4U Quanta T21P-4U boxes but with 40gig OCP mezzanine networking now?

I hope you guys aren't using SMR drives...

Edit: Also, when are you going to move out of AWS? I really would like our/your clients to use IPv6.


Now that's a backhanded compliment.


But Backblaze is for backup. Dropbox.. isn't.


I concur. I tried on several occasions to use Dropbox and Google Drive for my backup needs, but the initial sync was so large (numerous files) that the Windows software would crash (memory). No 64-bit client on the horizon, so...

I found Backblaze and never looked back.


I thought Dropbox used Amazon S3, not their own storage?


I guess they realized S3 was too expensive, eventually? :)


Yev here -> #TeamOrangeRed


Put up or shut up. ;)


You guys are able to fit 1.8PB into a single 4U? Seriously impressive..!


Only way I know how to do that is all flash storage...


So what's your performance and public cost breakdown, and where's your open-sourced design available?


I find these breakdowns fascinating. As an enthusiast PC builder I occasionally consider running my own data-systems or servers... In the end though I'm glad I have guys like you putting these together: clearly an enormous amount of thought going into it. Thanks for the write-up.


Yev from Backblaze here -> You're welcome! We're always pretty stoked to do these, even though it does indeed take a lot of man-power to put together. The nice thing is, it's actually helpful! A lot of organizations, companies, and research labs use them, and that makes us feel warm and fuzzy.


Do you find any internal benefits from this stuff as well? For example, having it be better documented, or finding areas of improvement as you go through the write-up, or things like that? Not that you need it to be directly helpful, warm and fuzzy can certainly be enough....


Good question! As part of the process for developing these, we spend months and months using Agile/Scrum to iterate and come up with a "final answer" (though we're constantly still iterating). So it's not so much that we gain anything from writing the posts; the posts are more of a culmination of months of effort. We do end up talking about the pods more closely when we're writing the posts, and as part of that some minor changes do happen, but not that many.

The sinister benefit is that the more folks read the posts, the more it gets shared (presumably) and we hope that can lead to some new folks finding out about our online backup service, and therefore signing up. Which would make our finance department feel warm and fuzzy too!


I imagine the marketing angle must be pretty big. Not only making people aware of your service, but convincing people to use it. A big question when selecting any online backup service is, "are these people I'm trusting with my data complete idiots?" Your posts should really help convince a lot of people that you are, in fact, not complete idiots.

For what it's worth, I've been a happy customer for years and try to spread the love whenever I can. My mantra is: data that's not backed up doesn't exist, and data that's not backed up off-site is not backed up. Thanks for providing a great service for cheap.


We have gotten some other benefits as well. One of our senior support reps decided to join us after reading an IAmA where we talked about the Storage Pods. We also recently hired a senior sysadmin and the head of Facebook's Prineville data center, both of whom came to us as a result of reading our storage posts.

Also, when we work with contract manufacturers on building storage pods, talk with data centers about space, etc., it's really convenient to point to public posts to explain what we're talking about rather than having to put NDAs in place.

Overall it results in a lot of awareness and goodwill. It's incredibly gratifying after the team spends many late nights putting these together.

Gleb from Backblaze


The way you guys keep blogging, it's simply an enormously positive experience. I've been a vocal advocate of Backblaze online and offline. Lots of even cheaper services come and go, but I keep relying on and pointing to Backblaze.


You'd think it would be HUGE but it's not, just a side-benefit. We'd love for it to be bigger, but, it's kind of up to the people. If you're a happy Backblaze customer though...might I refer you to our handy dandy referral program: https://www.backblaze.com/refer.html

/MarketingOff

*Edit -> man, we have to optimize that page for our new design layout...

/yells at design team.


Thanks for the link. I was actually going to write something in my comment like, "hey, you guys should have a referral program." Then I realized I ought to check to see if you already did first.


We do, but man do I gotta get that page updated. Looks like we forgot about it while updating the site. Sad panda bears.


Can you guys enable "non attached" external disk backups? Put a cap on the size or make it a different tier.


We're sort of doing that with B2, read more about it: www.backblaze.com/b2 :D


Looks nice!

Does it have any kind of integration with stuff you've uploaded through the regular Backblaze app? (e.g. moving stuff over to B2 once it has been uploaded as a regular BB backup)


We're treating them like two different products, so no integration yet, but if we see that a sizeable number of folks want to do both or transfer their data over, we might go down that path.


I think it would solve the "user-accessible UI" problem for making the initial move of the data to storage.

I can write some script myself to move over my stuff directly, but even as a programmer I'd rather not :-)

I think your plan for it is that third parties might take on that task. So, maybe Arq will.

https://twitter.com/arqbackup/status/650634416067342336


Arq says they have no plans to do it in the near term, but hopefully once we open it to the public they'll want to integrate. Worst case, if we make a bajillion dollars, we'll hire some more devs to whip something together :P


I use a Backblaze box as an Archive Team staging environment! Thank you!


Very cool. I remember an article I read about a veteran music journalist who applied for a job and was told he had no experience. Turned out every magazine he worked for over 10+ years had shut down and there was not a single online reference of his articles.

Love that you're fighting the good fight (along with the Internet Archive) to keep a history of the Internet.


From 2000 through 2006 I freelanced for a middle tier publisher of computer magazines. Wrote 4-5 articles a month, probably 1/2 million words. The publisher had a paywall for their online versions of the print magazines, but really didn't put much effort into promoting it since print had been so lucrative. Over the last three years they shuttered the majority of the print magazines (and their corresponding online editions), and now it's all gone. The wayback machine/Internet Archive only has a limited number of pages available. Sad to realize that a lot of the work we do (whether it's writing or creating systems) is ephemeral...


Awesome! <3


It looks quite an enjoyable process too. Design iteration is a fantastic time to really question assumptions, and reducing the number of fans for example does just that.

The best I have is 9 addressable disks plus a ludicrous CPU and GPU in a Silverstone desktop case, not very exciting but still going against the grain.


I'm particularly excited about this version because it is the underpinning of B2 (which required higher performance), but we were still able to keep the costs low.


^ -> Gleb is Backblaze's CEO. (I too, work for Backblaze).


Please make a Linux version of your client. Even a CLI based tool that works in background will do.

From afar it looks very compelling, but unfortunately I can't use it, because no Linux :(


Didn't they mention in the B2 announcement discussion that there's a Linux client for that? Or, more accurately, a Python library used by a simple Python CLI?

I just found this [1], which looks like what you want. It's not a library, but it is only 245 lines of Python, so I doubt it's that complex.

1: https://www.backblaze.com/b2/docs/b2-python-pusher.html


We've got that coming up with Backblaze B2, we have APIs and CLIs that you'll be able to use for Linux builds and servers! If you go to: https://www.backblaze.com/b2/docs/ you can read our docs!


Please! I'd love to switch from Crashplan for my Linux boxes back to rsync/rsnapshot to a cloud provider of some sort.


The bad thing about your good mission is that the big guys are starting to buy cheap HDDs, which means prices will go up :(


Prices for the HDDs themselves? Doubtful, but maybe. The thing about consumer-grade HDDs is that demand for them is quite price elastic, meaning if consumer HDD prices go up too much, consumers will just stop buying them (we saw this happen after the Thailand Drive Crisis). Hopefully what it'll mean is that drive manufacturers will instead start making lower-cost "professional" drives for this exact purpose: cheaper than "Enterprise Drives" but slightly "better," justifying a slightly higher price than their consumer counterparts. Hopefully...


I still haven't been able to work out if prices ever went back to pre-Thailand prices. It pretty much coincided with the beginning of the Aussie dollar becoming worthless so that's skewed things a bit.


Well, at least in the States it's gotten pretty close to those levels, if not slightly below them. Not sure how that relates to the Aussie dollar though :-(


And the producers will ramp up production, and prices will go down again. As a bonus we may see increased investment in spinning rust.


If anything, the added volume is probably good for keeping prices down.


The main problem I have with Backblaze and AWS is that there is no budget limit on transfers.

I would like to tell the provider: limit downloads to $500, then reply 509 to further requests.
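Server-side, that's a small per-request check; a sketch (the price and read_object helper are hypothetical, and 509 is the informal "Bandwidth Limit Exceeded" status):

    PRICE_PER_GB = 0.05   # hypothetical egress price, $/GB

    budget = {"limit_usd": 500.0, "spent_usd": 0.0}

    def read_object(size_bytes):             # stand-in for the real object read
        return b"\0" * size_bytes

    def handle_download(size_bytes):
        cost = size_bytes / 1e9 * PRICE_PER_GB
        if budget["spent_usd"] + cost > budget["limit_usd"]:
            return 509, b""                  # refuse once the cap is hit
        budget["spent_usd"] += cost
        return 200, read_object(size_bytes)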


Backblaze B2 will have that, we'll have spending thresholds in place.


I notice that they're using a 500GB 5400RPM boot drive.

Is there a need for that large of a boot drive? I would have expected that an SSD boot drive would be preferable.


Yev from Backblaze -> We don't need the boot drive to be that "large", but it's more of a pricing thing. An SSD, even at 256GB would be more expensive than those 5400RPM drives, so we just go with what's cheap :)


Reliability has a cost, too. You can get a 60GB SSD for cheap (Amazon suggests it's about the same price as the slow 500GB HDD), and the SSD will fail at a much lower rate than that HDD.


Yea fair point, might just be easier to order the 500GB ones through our suppliers along w/ our hard drive purchases, but yea, we might go down to SSDs soon. We're constantly iterating on these things.


When you have another 60 HDDs in the same case, I guess you think about reliability differently.


The 60 HDDs aren't single points of failure. {Edit: I mean for the server, not for the whole system.}

And a RAID 1 pair of HDDs for the system disk is more expensive than a small SSD, more fussy, and the SSD is still less likely to totally fail.


It doesn't sound like the boot drive is a single point of failure since data is stored Reed-Solomon coded in chunks across many pods. If one data drive fails, the whole pod has to go down for maintenance for it to be replaced. The only difference is that you get to choose when to take a pod down for maintenance to replace a data drive.


That's the situation I was talking about, yes. It's much cheaper to go into the datacenter once a month to replace failed data disks than to have to go in to promptly replace any system disk HDD that fails, in order to not have 60 idle disks.


Sure, that would make sense if they only had personnel in the datacenter once a month, but: "we replace about 10 drives every day" [1].

[1] https://www.backblaze.com/blog/vault-cloud-storage-architect...


To clarify, I'm not saying that they wouldn't be better off with an SSD boot drive. I'm arguing that having an HDD boot drive, given their setup, isn't awful.


What about network boot? Or do you find that to be too unreliable?


Not 100% sure if we've given that any thorough testing, but the ops team is looking at most available options before building these out. That said, if they haven't tried it, they'll read this and give it a go :)


Another option -- check out the very low profile USB thumbdrives (the one I've got sticks out about 4mm or so). Or see if you can get motherboards with an internal USB or SD socket. Maybe combine this with PXE boot for initial setup (the pod gets built with a blank internal thumbdrive, and on first boot it installs the Linux kernel and initial ramdisk image on the thumbdrive). Edit: It looks like the motherboard you use does have a USB header on the motherboard, where you can plug in a boot thumbdrive.


Supermicro also sells a SATA-plug-mounted storage device (a SATA DOM); it would have enough room for a kernel to be loaded.

Though I think a PXE or similar boot option would be a smarter move: just enough to bootstrap something that runs in RAM. (Alternatively, a minimal kernel plus dedicating a section of the disks in the chassis to holding the running software - 500MB x 45 with decent levels of erasure coding is a fairly minimal slice. The kernel could then pull down the code after first boot, or for updates.)
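For the PXE route, a minimal pxelinux.cfg/default along these lines boots a kernel plus an all-in-RAM image over the network (file names here are placeholders):

    DEFAULT pod
    LABEL pod
      KERNEL vmlinuz-pod
      APPEND initrd=pod-initramfs.img ip=dhcp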


Amazon apparently sells a bunch of 64GB and 120GB SSDs for less than the $53 cost of your boot drive.


I really love these, and I desperately wanted to be a Backblaze customer, but I was put off by the abysmal desktop software :(

Storage execution seems top-notch. End-user software deserves some love.


Abysmal compared to what?

I have experience with the Mac client, and it's rock solid, and very well executed.


The Windows client is bad. Can't speak about the OS X one.


With Backblaze B2 you'll be able to do it yourself -> www.backblaze.com/b2 ;-)


I hope the memory is ECC, but don't see a mention in the article. Is it? At least the CPU (and I think motherboard) support it.


RAM -> Hynix 8GB PC3-12800 DDR3-1600MHz ECC Registered CL11 240-Pin


These posts set the standard for how hardware info should be communicated. Not only the specs with elaborate detail and supporting images, the costs broken down, and the rationale with explanations of what worked and what didn't. Great stuff!

[edit: spelling]


Wow, all the components are crazy overpriced. You can get the same system built for probably 1/3 the price with the same level of quality.

Examples: the motherboard for $500+, the fans for $20 each, an on/off switch for $25, 8GB of DDR3 RAM for $90???? etc.

The saving grace is the hard drives that are decently priced and make up the bulk of the cost.


Links? We love getting feedback, especially if we're overpaying for a particular component.

Gleb


I guess it depends a lot on the requirements. Are you guys set on using ECC RAM and a microATX mobo? It would be much cheaper to use some sort of gaming CPU + non-ECC DDR4 RAM.

For example, microatx mobo: http://www.newegg.com/Product/Product.aspx?Item=N82E16813157...

some non-ecc DDR4 ram: http://www.newegg.com/Product/Product.aspx?Item=N82E16820231...

and a cpu: http://www.newegg.com/Product/Product.aspx?Item=N82E16819117...


For a use like this, ECC RAM is worth the money.



You might have missed the fact that the motherboard they use has built-in dual 10G Ethernet: http://www.supermicro.com/products/motherboard/Xeon/C600/X9S...

Once you account for the cost of a 10G NIC and the fact that having it integrated frees up another expansion slot, I'd say it's a pretty good deal.

Also, I'm not sure if there are any non-Xeon processors that support ECC.


An extreme example of a non-Xeon that supports ECC: Atom C2750 :)


So this is cheaper than even Glacier? And it supports client-side encryption? And it will have a Linux client (CLI is good enough)? If that's the case, I'm sold.

In fact, do you have the storage located in various locations? If so, can I store a paid copy in a physically separate location, just to be safe?


Just to be clear, there are 3 different things:

* Backblaze Online Backup - $5/month unlimited backup for Mac/Win; automatically does client-side encryption

* Backblaze B2 Cloud Storage - $0.005/GB/mo for object storage (lower cost than Glacier, but we don't yet offer a client, though others are building)

* Backblaze Storage Pod - the hardware that underlies both of these, which we also open source for anyone to use

Gleb from Backblaze


> automatically does client-side-encryption

But server-side decryption :(

I have been looking into Backblaze every time it was mentioned on a podcast, and as far as I can tell you still cannot download your backup and then decrypt it on your own machine:

https://en.wikipedia.org/wiki/Comparison_of_online_backup_se...

Such a pity. I'm still stuck with CrashPlan (yuck!) for this one reason. I've been evaluating Arq, but I'm not yet confident enough to move all my/relatives' machines over.


Looks like CrashPlan does support client-side encryption? Is that true? Does it require a local mirror disk/folder like Dropbox does?

I'm really more interested in long-term backup storage than folder-sync storage; cold storage is fine, but Glacier seems a little hard to use and retrieval is expensive.


Good to know; Seafile-like client-side encryption/decryption is key indeed. Hope Backblaze can do that one day.


Naive question: wouldn't it be possible with SSDs (which are mainly just chips, right?) to make the drives as bare boards and put them in 1U/2U/4U chassis? I mean, who needs the HDD dimensions when you're fitting the drives inside server units anyway?


Yev from Backblaze -> Not naive! You could definitely reconfigure the design to hold SSDs, but that would add cost, which is something we're really concerned about. But the short answer is yes, you can use SSDs, though it increases the cost per GB.


I just posted a link to an interesting blog post detailing the risks of building a storage pod [1]

[1] https://news.ycombinator.com/item?id=10541055


Isn't that basically long-form restatement that you should know what you need before spending money? The storage pods are pretty openly targeted at large-scale operations where the lower storage costs pay for a dedicated ops team and things like redundancy are handled at the software layer. Dinging them for not having hardware RAID is like buying a semi-truck and complaining that it doesn't fit in your garage.


The post itself makes the same points.


Yes, but the headline and tone are different until you slog through to the discussion & the comments making similar points.


Great to see a smaller, very open company remaining competitive against the giants in the space like Amazon.

I'm particularly excited to see how their B2 service will stack up to Glacier for half the cost.
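
To put that in rough numbers: the Glacier figure below is an assumption derived from the "half the cost" claim; only the $0.005/GB price comes from Backblaze.

    B2_PER_GB = 0.005       # USD/GB/month, Backblaze's published B2 price
    GLACIER_PER_GB = 0.010  # USD/GB/month, assumed from "half the cost"

    for tb in (1, 2, 10):
        gb = tb * 1000
        print("%2d TB: B2 $%6.2f/mo vs Glacier ~$%6.2f/mo"
              % (tb, gb * B2_PER_GB, gb * GLACIER_PER_GB))
    # 10 TB: B2 $ 50.00/mo vs Glacier ~$100.00/mo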


Yev from Backblaze here -> We're pretty excited about it. Beta users are providing some good feedback too! We think it'll definitely be the right choice for a lot of folks!


Hi Yev! I commented on a previous post about Backblaze cloud storage, asking whether it was at a point where it could replace S3 for static assets. https://news.ycombinator.com/item?id=10259507

"Is this going to work for static asset hosting such as for low cost FE assets? Can this be set to serve an index file from the root directory? Doe's it provide you a public URL to your (stealing s3 terms) bucket? Just curious as to why I would migrate from S3 for FE assets to Backblaze."

Was wondering if any progress has been made since then, especially relating to serving an index document from the root dir.

Thanks!

edit: Awesome blog post btw



I keep checking my email for the invite link ;)

I think it'd be pretty interesting to expose it as a Windows filesystem with Dokan [0] and I'm sure Jeff from ExpanDrive [1] is looking at it too!

0: https://github.com/dokan-dev/dokany

1: http://www.expandrive.com/


Or you could get a Supermicro 6048R-E1CR72L with 72 drive bays, or something similar, for $3.5K and save yourself all the pain.


Maybe releasing the plans will lead other folks to buy it and drive costs down.


Why are they not offering 8TB drives? It's late 2015.


The most important thing for us is Cost per GB. Right now the 4TB drives we're using are still much cheaper than the 8TBs, so we go with the 4s, but the Pods work with 8TBs if you want to drop the dough on them!
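
For illustration, here's the kind of math behind that choice; the drive prices below are hypothetical late-2015 street prices, not Backblaze's actual numbers.

    # Hypothetical late-2015 street prices, not Backblaze's actual costs.
    drives = [("4TB", 4000, 120.0), ("8TB", 8000, 500.0)]  # (name, GB, USD)
    for name, gb, price in drives:
        print("%s: $%.4f per GB" % (name, price / gb))
    # 4TB: $0.0300 per GB
    # 8TB: $0.0625 per GB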


I buy ~20 drives/month (a drop in the bucket compared to you guys), but I find 6TB drives are fast, reliable, and cost the same per TB as the 4TB drives (so, taking the NAS/Pod cost into account, cheaper per online TB).

Are 4TB drives significantly cheaper for you? Or is there some other constraint at play against the 6TB?


This makes Google Drive and iCloud look expensive.


typo: "There are two Power Supplies, each with their own wiring harness."


It's wonderful what you guys are doing. Thank you for open sourcing your work.


You're welcome! Our pleasure :)


Local private cloud storage is going to be huge. Any new startups in this space hiring?


> Local private cloud storage is going to be huge

We're not really in that space, but we're always hiring :D


>5 Port Backplane (Marvell 9715 Chipset) $43.85

Where exactly does one buy this?


Love the rundowns; sad that their service is just so disappointing.


I've used BB for years on multiple machines and have been more than satisfied, in fact I recommend it to everyone. I'm not affiliated with them at all, just a happy customer. Confused as to why someone would find their service disappointing but to each their own.


Would you recommend it to someone running Linux? As far as I can tell, you have to use their proprietary backup software, which only runs on Microsoft Windows and Mac OS X.


Yev from Backblaze -> Currently we don't have a Linux interface, but when we launch Backblaze B2 you'll be able to use APIs/CLIs to push your Linux data to us.
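
For the curious, a minimal sketch of what that might look like from a Linux box against the B2 beta HTTP API. The endpoint names, headers, and response fields below are my reading of the beta docs, so treat them as assumptions rather than a supported client:

    # Sketch of a B2 upload; endpoints/headers per the beta docs as I
    # understand them -- treat as assumptions, not a supported client.
    import hashlib
    import requests

    ACCOUNT_ID, APP_KEY, BUCKET_ID = "...", "...", "..."  # your credentials

    # 1. Authorize the account to get an API base URL and auth token.
    auth = requests.get(
        "https://api.backblazeb2.com/b2api/v1/b2_authorize_account",
        auth=(ACCOUNT_ID, APP_KEY)).json()

    # 2. Request an upload URL for the target bucket.
    up = requests.post(
        auth["apiUrl"] + "/b2api/v1/b2_get_upload_url",
        headers={"Authorization": auth["authorizationToken"]},
        json={"bucketId": BUCKET_ID}).json()

    # 3. Upload the file, sending its SHA-1 so the server can verify it.
    with open("backup.tar.gz", "rb") as f:
        data = f.read()
    resp = requests.post(
        up["uploadUrl"],
        headers={
            "Authorization": up["authorizationToken"],
            "X-Bz-File-Name": "backup.tar.gz",
            "Content-Type": "application/octet-stream",
            "X-Bz-Content-Sha1": hashlib.sha1(data).hexdigest(),
        },
        data=data)
    print(resp.json())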


It takes months to get everything backed up, and now just the file lists (or whatever it is) are taking up 8GB+ on my disk (I use a smaller SSD as the primary drive, so space is precious). I contacted support, and their only solution is to create a new account and go through many more months of backing everything up again.


Does someone from BB want to jump in here and explain why this guy's "file list" could be 8GB and how re-starting the sync will fix it? Sounds like something went awry.


This is what nathan from support told me when i contacted them:

"As part of our backup process, Backblaze will run a checksum against each file before uploading it. This requires the entire bzfileids.dat file to be loaded into RAM. After a long time, or if you have an extraordinarily large number of files, the bzfileids.dat file can grow large causing the Backblaze directory to appear bloated, and use more RAM (potentially slowing down your system). In your case your bzfileids are extremely large, likely because you have a lot of files and an older backup."

Here's my Backblaze folder: http://imgur.com/ohqUERr (I have my temporary data drive set to a secondary drive.)
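
For anyone wondering how a "file list" reaches 8GB: if the client keeps a fixed-size record for every file it has ever seen and never prunes old entries, the index grows with the age of the backup rather than with what's currently on disk. A hypothetical sketch of that failure mode (the real bzfileids.dat format isn't public):

    # Hypothetical grow-only file-ID index; the real bzfileids.dat
    # format isn't public, this just illustrates the failure mode.
    index = {}     # path -> file id, loaded entirely into RAM at scan time
    next_id = 0

    def remember(path):
        global next_id
        if path not in index:   # new and renamed paths get fresh entries...
            index[path] = next_id
            next_id += 1
        # ...but entries are never pruned, so the index only grows
        # with backup age, not with current disk usage.

    # At ~100 bytes per on-disk record, roughly 80 million accumulated
    # entries would already put the index file in the 8 GB range.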


> it takes months to get everything backed up

What kind of internet connection are you on? I recently reinstalled my Mac from scratch (the El Capitan upgrade went haywire) and had to redo my Backblaze backup. I uploaded the 500 GB to Backblaze in under 24 hours.


That means that you consistently had ~6MB/sec, or 50Mb/sec of upload bandwidth. That might be possible if you're on a really good 100Mb connection (or 1Gb connection), but that's not common in most of the world yet.
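
The quick arithmetic behind that estimate:

    # 500 GB uploaded in 24 hours, expressed as a sustained rate
    gb, hours = 500, 24
    mb_per_s = gb * 1000.0 / (hours * 3600)
    print("%.1f MB/s = %.0f Mbit/s" % (mb_per_s, mb_per_s * 8))
    # 5.8 MB/s = 46 Mbit/s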

Also, could it be that the "re-upload" was able to detect your older copy of the files and reuse them? I can rsync terabytes in minutes over slow connections when the other side already has the files ...


Details? I don't work for BB, but I have workeded in the backup industry for some time in my past, and I'm always curious.


Not the OP; I have used BB for 5 years (and am still using it; the service is totally fine):

The initial sync is a nightmare. The upload is on the slow side. My biggest gripe so far is that I just cannot tell the client that some files are more important than others. I would love my PDF invoices and changed KDBX files to be synced before it uploads the uncompressed Blu-ray rip of Guardians of the Galaxy. On the other hand, there are people who produce a lot of critically important raw video and will probably have the opposite preferences.

I can understand why they have not implemented a priority interface. It is hard to get right.


What's the point of backing up a Blu-ray rip? Either you have the source material, in which case it's almost always going to be faster to re-rip than restore from backup, or you downloaded it once already (for, ahem, free), so why not download it again and not pay the cost of storing that data?

Also, I would hope no one doing mastering is uploading their raws, in real time, to a service like Backblaze. On-site Thunderbolt / NAS solutions are almost exclusively used for RAW backups in studios, with offsite copies stored at regular intervals via good old sneaker-net.


I just needed to think of something big. Anyway, there is big stuff that is not that important - let's say a couple of gigabytes of logs that are nice to have.

BB's folder and file-type structure makes it relatively hard to back up just some files of a given type. And some stuff, like tars and zips, can be arbitrary in size and importance.


We definitely focus on ease-of-backup vs. choices-for-backup. One of the initial things we found was that the "pick and choose files & folders to back up" approach is what kept most people from setting up a backup at all, because they were confused by the process.

Having said that, we offer the ability to exclude folders and file types if you have certain things you want to back up later.

(Not disagreeing with anything you're saying...just fyi.)

Gleb from Backblaze


I know from a product design and engineering standpoint why you have made the service that way. Every user has a unique best backup strategy - it is a nightmare to accommodate them all. Just providing the simple second-best strategy to everyone is a good decision. And as I said, I know what I have purchased and I am fine with the service. I stopped juggling directories and file types after a couple of reinstalls, though - those settings are not transferred on backup transfer, and setting them all again is too much of a PITA.

Backup transfers are also a minor pain point - the error horse (on freshly installed BB on Windows), and when, how, and why it disappears and allows you to do the backup, is still a mystery.

I guess that the BB cloud storage will solve some of the customization problems, though - if you are enough of a power user, you could write a custom strategy on top of the API.


Out of curiosity, which OS and file system are you running, Gleb?


Are you asking about him personally, or about the Pods?


> What's the point of backing up a Blu-ray rip?

What's the point of backing up anything?

> Either you have the source material, in which case it's almost always going to be faster to re-rip than restore from backup

Sure, it will be faster, because it will be a local operation, because you probably keep the original near the same place you keep the main rip. That also means that the original source and the in-use rip are vulnerable to being destroyed together in a site-affecting event (fire, flood, robbery, etc.), which makes having a remote backup valuable.


FWIW, I'm in the UK and initial backup was great and fast (on a 1Gbps upload connection, I was getting maybe 10-15MBps upload consistently, which isn't bad considering it's across the Atlantic!). The biggest thing for this has been the multithreading that BB have introduced recently - setting 10 upload threads makes the backup super-snappy.

WRT priority backup - I ended up just excluding the "big folders" manually (reducing backup size by 90%), then including them when the initial backup of the important files was complete. Also, the client backs up small files first, then moves on to bigger files, which is a nice touch - generally I'd care more about losing a large number of small files than a few huge ones.

Not affiliated with the company, just love their product!...


In my experience the Backblaze client uploads smaller files before bigger ones, in effect prioritising your documents over large video files or archives.
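
Something like this ordering, presumably; an illustrative sketch only, not Backblaze's actual client logic:

    # Illustrative smallest-first upload queue; not Backblaze's
    # actual client code, just the ordering described above.
    import os

    def upload_queue(root):
        paths = [os.path.join(d, name)
                 for d, _dirs, files in os.walk(root)
                 for name in files]
        # Small files (documents, key databases, ...) sort first;
        # large videos and archives land at the back of the queue.
        return sorted(paths, key=os.path.getsize)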


Takes forever, and now the file list takes 8GB+ of space - support says the only way to fix the problem is to create a new account and go through the months of backing everything up again.


If your upload takes months, I'm not sure that _any_ online backup is right for you. I did a fresh install recently, and I've got 84 GB backed up. It took me a couple of days, and that's on a cheap ADSL line in the Netherlands.

In your case, I guess it would be much better to look at other solutions. For example if you have a Mac, get an extra Time Machine drive and put it at your employer's place. Every week, bring your laptop there and back it up.



