A case for moving away from the cloud and embracing local storage solutions (golivecosmos.com)
154 points by katrinarodri 11 months ago | 153 comments



Syncthing is a dream.

Photos sync'd from phone to laptop making pics taken instantly available to share via desktop apps. Also add in the NAS and you have three copies before you consider RAID or BTRFS or anything.

Replace Dropbox with syncthing and you have multiple folders and can choose which folders go to which devices.

Put a NAS at a second location and you have the ability to power up and power down the remote NAS whenever you want to make a backup and syncthing will handle the rest.

Add in any one of the UIs over the files on the NAS and you replace most of the household use of things like Google files or drive.

About the only thing I would like and don't have, is syncthing with an object storage backend for cloud based folders fully under your control.


> Photos sync'd from phone to laptop making pics taken instantly available to share via desktop apps.

This isn't my experience at all. I find that normally Syncthing has stopped working, and I have to open up the app, find the remote host (which says "Disconnected" next to it with no further explanation) and tap around the UI at random until it decides to connect and sync.


Garage (https://garagehq.deuxfleurs.fr/) gets pretty close for object storage. It’s built with mixed high/low-latency links and replication between multiple hosts in mind. Unfortunately it’s not really built for end users, but for devs, so there’s no UI or anything like that.


Yup. Garage is painless to set up and, unlike Minio, is designed for "small scale", self-hosted use cases where your nodes may not have identical storage and compute. One current caveat of Garage, compared to Minio, is that anonymous access is not supported. (Patches welcome!) This means you can't easily generate or share links to files in a bucket.

I personally use Garage for locally testing things that run against the S3 API. It's very easy to run the single binary directly. No need for containers, but of course that is also an option, and it's very easy to containerize since you realistically only need to run a statically linked binary and mount a TOML file somewhere.
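For local testing, the setup really is just a binary plus a small TOML file. The sketch below follows the shape of the Garage quickstart config; the exact key names (e.g. `replication_mode` vs `replication_factor`) vary between Garage versions, so treat these values as assumptions and check the current docs:

```toml
# garage.toml - sketch of a minimal single-node config for local testing.
# Key names follow older quickstart docs; verify against your Garage version.
metadata_dir = "/tmp/garage/meta"
data_dir     = "/tmp/garage/data"
db_engine    = "lmdb"

replication_mode = "none"          # single node, no replication
rpc_bind_addr    = "[::]:3901"
rpc_public_addr  = "127.0.0.1:3901"
rpc_secret       = "<hex secret, e.g. from `openssl rand -hex 32`>"

[s3_api]
s3_region     = "garage"
api_bind_addr = "[::]:3900"        # point your S3 client here
root_domain   = ".s3.garage.localhost"
```

Then `garage -c garage.toml server` starts the node, and any S3 client can be pointed at port 3900.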


What I like about Dropbox is all files are available (browsable) from their ios app. Do you happen to use a similar app for remote browsing?


I use Tailscale to make my home network available from my phone, and then an app called Cx File Explorer to be able to fully navigate the file systems involved and view or download any file from it.

I also use Plex to get all media on my phone, and that's actually my primary interface and so I very seldom ever need to ensure I'm connected to Tailscale or browsing, as what I need to access via phone is not usually files, it's usually media. If it is files, I tend to have a laptop on me and that also has Tailscale, so also benefits from just accessing it all as if it's local.

The only part of this I'm likely to change is Tailscale, as I really dislike not being able to register an account with my email; I don't have a Google, Microsoft or Apple account, and don't wish to use GitHub for this - and I haven't seen a super simple self-hosted OIDC solution for my own email (I wish Fastmail would provide one).


This is my main gripe with Tailscale too, at least for personal use. What are you thinking of using instead? Headscale?


I've been looking at https://www.defined.net/ as an alternative.

And also saw this on HN a month or so ago https://blog.janissary.xyz/posts/tailscale-oidc-authelia-car...


That blog post is quite handy, thank you.


I've never used it, but there's a syncthing app that doesn't sync anything but allows you to browse what's in the share: https://f-droid.org/en/packages/net.syncthing.lite/


Actually I am on iOS and this is an Android solution. But from one thing I found another and finally got to Resilio Sync (https://apps.apple.com/us/app/resilio-sync/id1126282325) which might be a good solution.


I tried Resilio (formerly BitTorrent Sync) years ago and it was quite slow and unreliable, but indeed there are no good Linux + Mobile solutions in this space, so you have to sacrifice something.


Not entirely true, Syncthing works perfectly for Linux + Android


This thread is about remote browsing. I already replied to your comment above, so you should be aware Syncthing doesn't support this.


This is abandoned - the github project is archived and no releases since 2019.


Couldn’t you do an independent “NAS to Glacier” (or similar)?

Genuinely interested, as I was planning to spend some time this Xmas season on something along these lines and I am missing the same piece.


I'm currently using a remote NAS as my backup to my local NAS.

At 35TB of data it will take a lot of time and money for me to back this with S3 or Glacier... so mostly my object storage scenario is to accelerate and make more available a working set of files I use everywhere and often, rather than to backup / sync the majority of files.
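A back-of-envelope calculation shows why 35TB in the cloud hurts. The per-GB-month rates below are assumed list prices for illustration, not quotes; check current pricing:

```python
# Rough monthly bill for 35 TB of backup storage at assumed list prices.
data_gb = 35 * 1024  # 35 TB expressed in GB

s3_standard_per_gb = 0.023      # assumed S3 Standard rate, $/GB-month
deep_archive_per_gb = 0.00099   # assumed Glacier Deep Archive rate, $/GB-month

s3_monthly = data_gb * s3_standard_per_gb
deep_monthly = data_gb * deep_archive_per_gb

print(f"S3 Standard:  ${s3_monthly:,.2f}/month")
print(f"Deep Archive: ${deep_monthly:,.2f}/month")
```

Even at archive-tier rates that's a recurring bill forever, before the substantial retrieval and egress fees if you ever need the data back.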


You know… I deal with complexity, redundancy, unexpected errors, hardware specs, software updates, security issues, automated exploits, and so many other things, all day. I just don’t want that at home.

I’m sure it’s nice to feel in control of your stuff, but I’d rather do something with my family, read a good book or go for a walk instead of having more of the same shite I put up with all day at work.


That's what I used to think. Then you read horror stories of people being lifetime banned by a Cloud provider with robot-only support.

Also, do you really want all your personal documents and stuff accessible on your phone to anybody who grabs it while you walk around with it unlocked on a call? Yes you might be using some security locks on folders but are you 100% sure you didn't miss any path to anything interesting?

I'm slowly working towards local storage first, partial one way syncing from phone, deliberate cloud sharing of limited stuff etc. and I'm having no regrets.


> people being lifetime banned by a Cloud provider with robot-only support

Just don't use GCP then?

Every time I've worked with Amazon (from 5 person companies to 2k+ global corps) it's been extremely easy to contact an actual human with very minimal costs.

Currently we have 3 AWS people on a Slack channel and we can ask them questions at any time; they've even given suggestions on how to optimise costs and have recommended AGAINST using some AWS services because they wouldn't fit our use-cases.


I feel like people just have wildly varying experiences with these hyperscalers.

For me, I've always had a point-person inside google I can ask, and I'm not a big customer or anything.

However AWS has always been either sales guys who very much did not give a solitary shit about me or robot support. FWIW I'm based in Scandinavia so I wonder if it's a geographic thing.

However one thing is absolutely certain: you are giving up control. After that it's just about trust. As we know about trust: it's exceedingly fragile and difficult to cultivate. One wrong move and you'll realise how powerless you were the whole time, and that's a gut-wrenching feeling.


Scandinavia here too. Our support folks are based in Stockholm, AWS has some sort of permanent presence there IIRC.


AWS support has been solid in my experience.


Add me to the list of people who have no problems contacting AWS. They've always been plenty responsive and have no issues setting up calls when needed.

I've heard of larger companies that can't get anyone, and it must be that they don't know who to call or what they want. All of my AMs have been super responsive... and these are in shops that are anywhere from 3 to 200 people.


It could be that I actually know what I'm doing most of the time, so generally can bypass level 1 and level 2 support.

I've talked to product engineers occasionally.

Maybe they have a bit on the contact database that says "non-bozo?"


> extremely easy to contact an actual human

That's the opposite of my experience with Amazon, and I even worked for a company that underwent a "partnership certification" process or w/e it's called. (I.e. we had to take classes on how to use a bunch of useless AWS crap and then test out on that, just to be able to put our product on marketplace).

You are either exceptionally lucky, or maybe Amazon's support is geographically locked (to your location)... or a bunch of other reasons I can think of. Anyhow, even if what you say is true today, there is no guarantee that it will stay true tomorrow. If Amazon finds it expedient to cut down on support, they will.


It sounds like you may be burned out. I was in a similar position until I changed fields entirely and took a long break from computer work. Eventually (over a year) my passion for technology slowly returned. It helps that I have more time with my family now and I don't have to learn proprietary cloud APIs or Kubernetes.


I wouldn’t say burned out per se - I started paying more attention to my body and mental health, and that noticeably requires me to take a step back from engulfing myself in technology.

It’s good to hear the passion returns eventually though. All the best to you.


I understand this but at the same time I don't have project managers at home (other than GF), Jiras to track, people to pry answers out of, meetings, code reviews, difficult jackasses, performance reviews, etc. I rediscover the reason I went into tech before corporate drudgery drained all of the fun. I can just build things and shape my tech to my household's needs while getting the feeling that I've built something and practiced my craft.


Don’t get me wrong I still have side projects to tinker on and that’s fun and all! But those projects aren’t something I depend on or that will cause me trouble if they don’t work, are destroyed, or hacked.

To me, hosting my data myself would mean a huge serious commitment to make it run and keep it running, and that’d take all the fun out of it. If I update a dependency of a toy project and everything stops working, I can choose to go bug hunting - or just drop it for a week. If my family photo library disappears due to a hardware failure, my home project manager would rip my head off (if I’m lucky).


It's not as bad as all that. The stuff you control at home doesn't have to support hundreds/thousands/millions of end users, it's just for you, and it doesn't need to be publicly accessible. We're talking quarterly maintenance at most.


Why not a billion micro clouds, with minimum standards like "anything you put here has appropriate backup with a RTO of ~2 days"?


Ok. So now your house is on fire. How do you retrieve your data from the melted local NAS? Or a burglar comes and empties it.

Don't get me wrong, I'm full "move away from the cloud" in the "proprietary cloud" way.

But getting some drive space at a different location is important if you care about your data.

Could be at a relative's or friend's place (so almost no per-month cost) or on a cloud drive (I personally use the Hetzner 10€/m 5TB drive), depending on your choice of backup solution.

But it's important.


Ok. So your credit card expired, and the provider cleared your cloud data. How do you retrieve it?

See, each has issues.


Or you don't read the small print and Backblaze deletes the backup of your external disk if you don't connect your computer for X days. Happened to me; very fucking pissed. Even when you're an "IT guy" and think you've got it all covered, it can still blow up.


Do you have any pointers to such fineprint? I can't find it nor did I notice it when I just very recently began using Backblaze.


yeah here https://help.backblaze.com/hc/en-us/articles/217665398-Backi...

> Backblaze will backup external USB and Firewire hard drives that are detached and re-attached as long as you remember to re-attach the hard drive at least once every 30 days. If the drive is detached for more than 30 days, Backblaze interprets this as data that has been permanently deleted and securely deletes the copy from the Backblaze datacenter

It's a real fucker if you have your stuff in storage for a while, then reformat the drive and try to download everything from Backblaze (what I do every time I get a new computer), only to discover it's not there. Left a REAL bad taste.


It's because the service is billed as a "backup" not as "cheap cloud storage".

If they allowed people to back up unlimited USB drives and network shares, /r/datahoarder would be all up in their business with 1PB of storage.

This is why they limit it to only stuff you're actually using.

If you want generic cloud storage with infinite retention, they have B2


They mention in their docs that this is why they forbid (not limit) NAS. Afaik you could still connect as many 8TB USB drives to your machine.


Boot your windows machine using an iSCSI storage block supplied by your NAS. I’ve wondered if this would work with Backblaze or not. I don’t really care if I lose my plex media so I’ve never tried it.


And as long as you keep them connected regularly, they are backed up.

What you can't do is connect a 20TB USB drive, back it up and put it in your cupboard for a year and expect the data to still be there.


As is clearly stated in their FINEPRINT.

/s


I mean, they also email you after two weeks and then again at 21 days, so it's not like they're hiding this from you.


They email about a "missing computer", which I'm aware of and is fine, but I'm not aware of them mentioning deletion of data on the external disks. Fully admitting I may have missed it.


It takes up to a month to recover from some situations. You may even have heard about the infamous one three years ago.


I don't really get that example. Assuming that you're using the cloud solution as a backup, you'd still have the original and can restart the backup, either on the same or a different cloud provider.

If your house burns down there's a good chance that you're losing both the original and the backup.

Ideally you have the original, a local backup and a cloud backup. The cloud backup can be updated a few times a day, and the local one every 10-15 minutes.
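That cadence is easy to express in cron. A sketch, where the paths and the cloud-backup script are placeholders for whatever tools you actually use:

```
# crontab fragment (sketch): local backup every 15 min, cloud backup 3x/day.
*/15 * * * *    rsync -a --delete /data/ /mnt/local-backup/data/
0 2,10,18 * * * /usr/local/bin/cloud-backup.sh   # e.g. a restic/rclone wrapper
```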


This is not a good example. One of them is under your control, the other isn't.


I read many posts of people complaining about some cloud service deleting their data for "no reason" (= user error, mostly not reading emails) every month, while I can’t even remember the last post of someone losing their data due to a fire (except OVH bursting into flames ;) ). So I think it’s a valid argument to be made.

Bottom line: have a backup, and utilize cloud as well as local storage.


I thought I stated in my comment that you should have both, sorry if it was not clear


You missed a step here.

1. my credit card expired

2. I've started getting warning emails about data being cleared, and giving me a few weeks to fix this (because my provider does want me to keep paying.. it's their business after all)

3. I put in a new credit card and it all works again

Compare it to: you came back home and your data is gone. Or you power up your PC after power outage, and the disk fails. Or even worse, you are sitting in front of your computer, and the cryptolocker/ransomware window pops up.

(And one of those is much more likely to happen than the others. I don't think I've ever heard stories about cloud data deleted for non-payment, but I've had plenty of first-hand experiences with disk failures and second-hand stories about ransomware.)


Well it's an important point, but I don't think the author of OP argues you shouldn't do that.

By the way: have a look at restic, it's great (if you don't require a GUI).
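For anyone who hasn't tried it, the core restic workflow is only a few commands. A sketch, where the repository path and password handling are placeholders:

```shell
# Sketch of a minimal restic workflow (paths and password are placeholders).
export RESTIC_REPOSITORY=/mnt/backup/restic-repo
export RESTIC_PASSWORD='choose-a-real-passphrase'

restic init                  # one-time: create the encrypted repository
restic backup ~/Documents    # deduplicated, incremental snapshot
restic snapshots             # list stored snapshots
restic check                 # verify repository integrity
```

Repositories can also live on SFTP or S3-compatible storage, which fits the off-site backup point being made in this thread.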


yep, agree backups are important and that could've been pointed out in the OP. Any storage plan will also mean including a backup plan so if something goes wrong you can restore.


> So now your house is on fire. How do you retrieve your data from the melted local NAS? Or a burglar comes and empties it.

I fully expect my possessions and my data to be perishable. Losing it in a fire is still better than giving it to others. I will cry a little, when that happens, but as a consolation to myself, I'll say: at least it wasn't stolen from one of those big companies and won't be used to harm anyone.


Why is losing data better than “giving” it (an encrypted version if you care about privacy) to others?


Who says it's going to be encrypted? The service provider may screw up in more ways than one...

The data that may be plausibly stolen from me alone doesn't have a lot of impact if I'm the only victim. It starts to matter a lot more if it's stolen as a part of a larger heist. Eg. if someone knows I'm saving money to buy a particular brand of car, then it doesn't matter really, but if someone knows that among a million people 3KK want to buy that particular make, they may have a valuable insight about making investments into car manufacturing industry. Of course this isn't limited to cars and investments. Suppose, someone stealing data discovers a trend in health problems which would increase demand for a certain drug... same idea.


I have a Raspberry Pi at a family member's house with an encrypted backup on an external HDD that I sync with rsync.

I used hetzner before but the problem was I needed about 1.5TB and their only options in that range are 1 and 5TB. The 5TB option was a bit too wasteful.
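The whole off-site sync can be a single cron-able command. A sketch, where the host and paths are placeholders and the external HDD is assumed to be encrypted at rest (e.g. LUKS):

```shell
# Push the local backup set to the Pi at a family member's house (sketch).
rsync -az --delete \
  /mnt/backup/ \
  pi@family-house.example:/mnt/encrypted-hdd/backup/
```

The `--delete` flag mirrors removals too, so combine it with snapshots or versioning on at least one side if you want protection against accidental deletes propagating.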


I use the same, except I send incremental encrypted ZFS snapshots. These are a game changer, since the target can import them and verify that they work without ever needing the key.
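The raw-send flavour of ZFS replication is what makes the no-key property work. A sketch, with pool and dataset names as placeholders:

```shell
# Sketch: raw (-w) sends ship the encrypted blocks as-is, so the target
# can receive, store and scrub them without ever holding the key.
zfs snapshot tank/photos@2024-06-01
zfs send -w -i tank/photos@2024-05-01 tank/photos@2024-06-01 \
  | ssh pi@offsite zfs receive backup/photos
```

The `-i` makes it incremental against the previous snapshot, so only changed blocks cross the wire.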


yes, a backup solution is an important part of the overall setup. Didn't mean to make a case for a permanent move away from all cloud storage, but wanted to understand the shortcomings of cloud-only storage, especially from proprietary cloud companies. Setting up backups, whether they are with other cloud providers or local NAS, is important to safely store your data.


My local setup - please critique as you like.

For NAS, I am running Windows Server 2022 (I have a rational reason for this). It hosts many things.

* IIS - for my local browser start pages, which I configure with "new tab redirect" plugins

* JellyFin - Basically a personal Netflix for all my ripped media

* DokuWiki, thumbdrive version - for documenting house stuff

* Twonky - Because it's the least-bad DLNA solution I've used so far

* NGINX - provides public reverse proxy function for Jellyfin
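The reverse-proxy piece for a setup like this is a short server block. A sketch, where the hostname and certificate paths are placeholders; 8096 is Jellyfin's default HTTP port:

```nginx
server {
    listen 443 ssl;
    server_name jellyfin.example.com;       # placeholder hostname

    ssl_certificate     /etc/ssl/jellyfin/fullchain.pem;   # placeholder paths
    ssl_certificate_key /etc/ssl/jellyfin/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8096;   # Jellyfin default port
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;    # websocket support
        proxy_set_header Connection "upgrade";
    }
}
```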

I have a backup script which backs the most critical documents (taxes, work code, personal writings) up to S3 and Azure - pennies a month. I have all my stuff on mirrored drives locally.
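Such a script can be a few lines against the two vendors' CLIs. A sketch, where the bucket, container and account names are placeholders:

```shell
#!/bin/sh
# Sketch: push critical documents to two clouds (names are placeholders).
aws s3 sync ~/critical/ s3://my-critical-docs/ --storage-class STANDARD_IA

az storage blob upload-batch \
  --account-name mybackupaccount \
  --destination critical-docs \
  --source ~/critical/
```

For a few GB of documents, even standard-tier pricing on both is effectively pennies, matching the comment above.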

I have been mulling over burning MDISC discs for my larger media (personal photos, video, etc.), and doing something with that. This is partly aspirational - Once-a-year MDISC burning for stuff could be cool, but where do I store those when safety deposit boxes are becoming more and more rare?


Do you store stuff on multiple disks? I struggle with windows as a NAS.

Windows Storage Spaces sucks hard (particularly its RAID5-style parity, which has unacceptable performance; I also had it fail without reporting it, and it doesn't handle parity with column size >8).

The native windows software raid is deprecated, and can't handle disks of different sizes.

I use Stablebit DrivePool for a secondary backup where I don't really need parity (backup of the backup). But it comes with all sorts of issues of its own: some weird file-locking bugs, performance limited to a single disk, and if you set up an SSD cache, it can't handle files larger than the smallest SSD (happened when backing up a vhdx).

For NAS, for the moment I haven't found anything beating linux based NAS solutions (synology style) for simplicity and stability.

For MDISC, call me skeptical. I think in 10-15 years you will struggle to find anything to read a CD or DVD, like you would struggle now to read a Zip or floppy disk. The most durable and cheapest way to do long-term storage is to replicate your data on new media regularly. The price/GB drops so fast that no non-retail form of storage (MDISC, LTO, etc.) will be economical vs buying a big drive from time to time (every 5-7y). You can buy a 12TB drive for $200-300 now.
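A sketch of that arithmetic, using the figures from the comment above as assumptions (midpoint price, midpoint replacement cadence):

```python
# Amortised cost per TB of rotating a big drive every few years.
drive_price_usd = 250.0    # assumed midpoint of the $200-300 range
drive_capacity_tb = 12
replacement_years = 6      # assumed midpoint of the 5-7 year cadence

per_tb = drive_price_usd / drive_capacity_tb    # upfront $/TB
per_tb_per_year = per_tb / replacement_years    # amortised $/TB/year

print(f"${per_tb:.2f}/TB upfront, ${per_tb_per_year:.2f}/TB/year amortised")
```

For comparison, the 10€/month 5TB cloud drive mentioned elsewhere in the thread works out to roughly 24€/TB/year, an order of magnitude more.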


> The native windows software raid is deprecated, and can't handle disks of different sizes.

First I've heard of the deprecation! I'm not sure the second part is true though. I recall during setup my 14tb drives were not exactly the same size, and I found some way around it by just creating a volume of the same size as the first. My OS/Scratch is mirrored 10tb drives, which didn't have the same issue. These were refurbs since I didn't want to break the bank on scratch space.

> I think in 10-15 years you will struggle to find anything to read a CD or DVD, like you would struggle now to read a Zip or floppy disk.

CD/DVD drives were wayyy more prolific than floppy drives. Personal computing exploded in the 90's when optical drives were the standard.

> The most durable and cheapest way to do long-term storage is to replicate your data on new media regularly

Yeah, basically where I'm at. My prior NAS, I went through a few sets of drives. The latest ones I didn't even wait until one of the drives crashed, I just swapped out the hardware once the drives hit 5 years of age. I'd love some backup solution where I can archive stuff, and just leave it for decades, hence MDISC.


> First I've heard of the deprecation

The option to create software RAID 5 from Disk Management has already been removed from Windows 10 desktop (you are supposed to use Windows Storage Spaces, but as I mentioned, it really sucks for RAID 5). I am sure Windows Server will follow at some point.

> I recall during setup my 14tb drives were not exactly the same size, and I found some way around it by just creating a volume of the same size as the first

Yeah, you can create a RAID volume of the min size across all disks. And then you can probably even create another RAID volume with the remainders of the larger disks. But you end up with multiple volumes. More modern solutions (Synology, Windows Storage Spaces) allow you to create a single volume that spans all disks and utilises their whole capacity. Synology does this by creating a bunch of RAID 5 volumes behind the scenes, then joining them with something RAID 0-like (I don't think it's RAID, just some sort of virtual volume); Windows Storage Spaces does it by writing strips of your data randomly across all your disks (the number of columns is decoupled from the number of disks), ensuring that every strip is on at most one disk for single parity.


I have an mdisc drive, and a few mdisc DVDs I haven’t burned yet. Someone on /r/datahoarder noticed they don’t sell real mdiscs anymore: https://news.ycombinator.com/item?id=33593967


Wow, that is evil! I can see exactly the thought process behind this change as well...

People buying M-Discs will not notice for decades that they've been sold plain BluRay discs, until those discs start to decay some day. It's a malfeasance which will only be discovered in the distant future once the discs start decaying, and will be someone else's problem by then. Meanwhile whatever product manager will have earned their promotion for cost cutting and increasing revenue.

tl;dr only buy the M-Discs which are rated for 4x write speed.


I've got nearly 30-year-old optical drives that still work fine today. I can plug a 3.5" floppy drive into my Android phone and it still works. Sure, there's a chance in a decade or so SATA ports disappear, but USB adapters still work. Do you really think in 10-15 years it will be hard to find a device that still talks USB?

ZIP drives had a number of mechanical issues (click of death). That's probably the biggest reason why they're challenging to try and use today. But as it stands right now, I can take a CD-ROM drive from the 90s, use a cheap USB adapter, and plug it in to a USB port on a brand-new Linux or Windows or MacOS device without issue.

And even then, so what. In 25 years when I'm an old man I'll need to plug in my last optical drive into my 8 year old computer to read the disks and move that data off into the four dimensional hypercrystal storage mediums we'll all be using, and then I'll go into my cryogenic storage chamber to sleep for another 100 years and then transfer my data to the next media format when I wake.


> Do you really think in 10-15 years it will be hard to find a device that still talks USB?

It would take some time to find a mini-USB patch cable right now, especially a good one. Sure, now I can find it. In 10 years?

N=1 anecdote: two years ago I helped recover data from a 2.5" USB drive which had been left in a desk for 5 years. I was forced to use R-Studio and still only got about 90% of the data in good enough condition.


A 2.5" HDD isn't the same kind of media as an M-DISC. Magnetic storage will lose its charge over time. I've lost content from bitrot on magnetic media many times as well. Arguing I won't be able to read an M-DISC in 10 years because your magnetic media wore out in 5 years is beyond comparing apples to oranges.

> It would take some time to find a mini-USB patch cable right now, especially a good one.

I've got a pile of them already. I'll likely keep at least one, probably multiple, along with the optical drive/enclosure (of which I do have multiple as well). I imagine if I hold on to a new-in-box USB optical drive it will also probably have a cable. So in 10 years when I need to get the data off the drive, I'll probably just use the cable that's right there with the drive that's currently sitting on my desk.

I can go down the street to Microcenter right now and buy one, they have dozens of cables in-stock in-person. There are other stores around in-person which also still sell them. There's loads of vendors online which sell them new today. To my eyes they're not hard to find today at least. I can't imagine in 10 years I'll have somehow exhausted the cables I currently have and will need to source one and there won't even be a listing on ebay for one when there's probably at least many hundreds of millions if not billions of them in existence at the moment. For reference, every PS3 had a mini-USB cable, and there were >84M of those sold. How many other devices came with a mini-USB cable? How many were manufactured and sold on their own? How many are still being produced?

I can't imagine there being probably near billions of something existing and then disappearing entirely a decade from now, other than some kind of food or truly disposable item.


Hmm, I've seen old HDDs fail to spin up or otherwise have catastrophic malfunction, but never heard of them having real bit rot.

I recently booted up some portable PCs from around 1990 to review their HDD content and scrub the disk state before handing them over as e-waste. Easily 25 years on the shelf. I would think you'd have to do something very wrong like storing the HDD near a degausser to lose the magnetic recording.

I do think that SSDs and particularly cheap flash "thumb drives" are capable of having bit rot when left on the shelf for many years.


Did you actually validate every bit was exactly right on the drive? Did you even have the information to validate that data? Not just that they booted up without a noticeable error, but that every bit was exactly as written 25 years ago? How many drives did you test, how many manufacturers?

Most of the bits will probably be fine. Some of them probably won't be. The rate depends on a myriad of factors. Even within a brand different models will be better or worse, and it's not exactly a metric you'll find listed. And then yes the way they're stored is also a factor and even the environment when the data was written.

I've had drives from the 80s still work and still pretty much have all the data. I've had files get corrupted on modern drives with no other explanation than bitrot. Even just cosmic rays can eventually flip some bits.


Fair enough, I did not have checksums to compare. And I only randomly sampled the contents, not every single file. But yes, it booted and every program I tried executed normally and every data file I inspected seemed valid.

I do have some familiarity with how such old systems would misbehave with corrupt data, such as from bad data cables. I didn't see any of those kinds of symptoms, and so do not think there was any significant storage decay from being on the shelf at least 5x longer than the 5-year interval mentioned up thread. I tend to think that such a failing drive must have had some other kind of damage, and not just some kind of spontaneous demagnetization.


Another thing to remember is that some of the really old disks will have larger magnetic areas, as platter density wasn't nearly as high. As densities increased, the total charge per bit decreased, since there's just less magnetic material available. Just one of the many factors in how likely you are to experience any issues.

Refreshing every five years for magnetic storage is probably overly cautious to me from my own personal experiences. But FWIW if it's data I really care about it doesn't spend years sitting around on just magnetic media. For my own personal risk tolerance putting important data on say an external hard drive and sticking it on a shelf isn't a great way to archive data. I might do it with data I'm OK with eventually maybe losing, but if it's something I know I'll want in 20+ years it gets a few copies with one being on some highly reliable optical media like M-DISC.

Your hard drive might go a long, long time without losing a bit. And maybe if you do lose some data, it's a couple of typos in some help file. Maybe it's some rare code path in an executable that almost never gets called. Maybe it's some b-frame in a video that you'd never even notice. Maybe it's some important bits of the last photo you had of a loved one. Maybe it's some important field in a database. Maybe it was empty space. Who knows.


> Arguing

It's more about winning a jackpot in the storage lottery. That HDD shouldn't have had its data corrupted so badly, yet it did.

> I've got a pile of them already

And I have at least two without actually looking in the 'cables box'. 10-15 years ago I had from five to ten of them at any time, but what about some time later?

> along with the optical drive/enclosure

My desktop DVD drive just quietly died after prolonged period of no usage, just being powered on. You know when I found out? When I needed to read a DVD.

> Microcenter right now and buy one, they have dozens of cables in-stock in-person

Now? Sure. Two years ago I even saw an Apple 30-pin cable in the same type of store (and I checked now, they still sell it, even in two colours). Would there be 30-pin cables on sale in 10 years? I doubt it; even in 5 is questionable.

And now everyone and their grandmother are moving to Type-C for everything. The same store has no Type-A -> mini-USB cables, except the weird one with additional miniJack on the Type-A side.

> I can't imagine

You can't. Do you still have Nokia thin charger? Moto CE-Bus?

And you missed the point arguing about specifically mini-USB. Sure there are a lot of mini-USB cables out there. But at some point most of them would be in the dumpster pile. Just like Wide SCSI and FireWire cables now.


> That HDD shouldn't have had its data corrupted so badly, yet it did.

No, that sounds like just what I'd expect from magnetic storage.

> Now? Sure.

Well yeah, you specifically scoped it to today in your comment with the words "right now". And no, it's not the weird multi-head variety, they have just regular USB-A to mini-USB. And then thanks for showing that even a decade after the dock connector was phased out it's still easy to find the cables in store today. And thanks for pointing out there are multiple varieties of mini-USB cables sold today, really reinforcing the idea that it's still easy to find.

https://www.microcenter.com/product/263914/qvs-cc2215m-06-us...

> Do you still have Nokia thin charger? Moto CE-Bus?

No, but neither of those had even a fraction of the market penetration of MiniUSB. My household maybe had one Nokia charger that wasn't just a barrel (and yes, we still have compatible barrel connectors despite it being nearly 30 years since). Meanwhile my household has probably received close to 50+ MiniUSB cables over the decades.

Are you seriously arguing Moto CE-Bus was as popular as MiniUSB?

> Just like Wide SCSI and FireWire cables now.

Well, if they're like SCSI and FireWire cables today, I guess I shouldn't have a problem. There's FireWire in stock, in person, at Microcenter today, right now, as well. eBay and Amazon have tons of listings for SCSI cables; I could get a dozen in just a few days. So it sounds like it should be pretty easy to find MiniUSB if FireWire is the earlier example.


Sigh.

I explicitly said there is 1 (one) miniUSB cable available at my local MC counterpart, and it's not even a generic one.

Dock connector is rarer than miniUSB (I don't think you would argue that?), and the reason it is still there is probably that it's old stock. As soon as the sales drop low enough, it will be in the landfill, just like USB floppies and miniUSB cables in the stores near me.

miniUSB wasn't the best example, but right now (and that was my point in the first place) it is going through the same process all the serial and custom data ports went through 15 years ago.

You and I wouldn't have trouble procuring miniUSB for a couple of decades (cable box!), but overall availability will decline, and in 10 years an Average Joe will have to go searching if he needs to plug in an old WD Passport from 2005.

edit:

> No, that sounds like just what I'd expect from magnetic storage.

No, that is not what I expect from an HDD. I've seen multiple failures in my life; that one was quite atypical. Most drives from that era are either dead mechanically or fine, but not like that.


Sigh.

I guess Average Joes just can't figure out the complex ordering on Amazon or eBay. Because by your own examples of things that were actually common (as in even remotely as common as MiniUSB), they're on there. You need a 25-pin serial cable? How long? What color? You need a SCSI terminator? You think those were even remotely as common as MiniUSB? How many are on eBay right now? There are at least five pages of listings, and I imagine there were many, many times more MiniUSB cables out there.

And even then, that's fine. It's still not going to stop me from playing back my optical discs. They made optical drives with IDE, SATA, MiniUSB, micro USB, USB-B, eSATA, SCSI, etc. What are the odds that all of these, along with all potential adapters, disappear?

Sigh.

> I've seen multiple failures in my life; that one was quite atypical.

I've seen that exact same kind of failure close to a dozen times in my life, mostly from a select few models IIRC. Then again, my sample size of magnetic drives is probably >300 or so, a bit atypical I'd imagine. It's not an extremely rare thing to happen, especially to low-power 2.5" portable drives powered by USB.

And finally...

Sigh.

https://www.ebay.com/itm/325911999375


Honestly, for me, JBOD works fine on my media PC. With some manual re-balancing about once a year, it works fine for me.


Thanks for sharing your elegant and versatile setup.

A thought for the discs and safety deposit boxes,

I live in a rural area and have an account with a credit union that services several labor unions. They’re rooted in the community, profitable, and I don’t see them changing anytime soon. 3”x10” boxes are $20/yr.


To expound on safety deposit boxes, since GP was asking for critiques: from what I’ve read they can be prone to accidents, theft, and damage, and a big reason people have them is for showing off. My understanding, which entirely comes from this NYT article, is that banks hate them.

Safe Deposit Boxes Aren’t Safe, NYT, 2019: https://archive.is/NfnL6


This risk is a bit overhyped. Things can happen, yes, but it's pretty rare, and if you're doing regular weekly offsite backups to your safe deposit box, you're not going to suddenly discover that it's gone.


Agreed on Safe Deposit boxes.

I would probably invest in a real fire-rated safe. "Real" defined as a safe over $1,000, not the consumer crap that gets shoveled out.


Slightly off topic: on the client side, how do you use Jellyfin? I tried both the browser and some of the iOS apps that just wrap that browser interface, and it never worked well. Perhaps it was also an issue with the Jellyfin server itself. I went back to Plex only because it was plug-and-play for me. Just seeing if you have any additional tips here.


Browser, WebOS, Android clients. They all seem pretty great to me.

The Android client has an issue talking to a port other than 8920. There are multiple people debating on GitHub whether it is a bug or not.


Infuse [0] is very nice (subscription, but the price is reasonable).

[0]: https://firecore.com/infuse


I am very honestly curious, what's the reason for Windows Server 2022?


A few reasons, in order of my priorities:

- Win10 / 11 seem to be more and more crabby about talking to SMB shares, so I reasoned I needed to be able to exert more control over the file server. I can't tell you how many times with my old NAS I found myself running "net start mrxsmb10" after a reboot.

- Much of my professional life has been on the Windows stack, so when things aren't working right, I can troubleshoot it a lot more efficiently. Think: Sysinternals tools.

- I get free windows licenses through my MSDN through work, so it didn't cost me anything.

- It's nice to have an RDP host I can use to have some separation from my work stuff.

- I had some existing software licenses on stuff that only runs on windows: PlayOn, Twonky, TVersity. It turns out these older versions of PlayOn and TVersity are mostly useless these days.


I run a 48 TB ZFS pool (Z2; 24TB usable), and an offsite, offline 48 TB ZFS pool (mirror).

My primary server is Proxmox, where I have 8 unprivileged LXC containers with Docker nested inside (since ZFS 2.2.0, Docker on unprivileged ZFS works without workarounds).

My backup is ZFS-send plus Borgmatic as a fallback, until I trust the ZFS-send approach that I just added (I usually have "migration phases" for new features that may last 1-2 years).

I run GitLab, Nextcloud, Funkwhale, InfluxDB, Grafana, Home Assistant, Mailcow-Dockerized (as an email-management and collector solution) and other things. None of this is public; it's all on a local VLAN and VPN. The good thing with Docker + Watchtower is that I only have very little (<30 mins) administration work per month, and most of that is testing new features. It has worked reliably for the past 5 years.

For connecting households (my brother's family etc.), I use OPNsense and pfSense boxes.

My server is powered by a 90 kWp solar array - I don't have a (house) battery yet, but that will be added in 1-2 years. Current energy consumption is about 250 watts - but I also have an old Xeon from 2013.

The main cost (after the initial investment) is hard drives and (of course) learning time. It pays off in other areas, too. I just got a job that I think I only got because of my self-hosting experience.

The biggest convenience is that I can sync all devices and all data (e.g. Nextcloud auto-upload from phones, DAVx5 for calendar/contacts) to a single, secure and private place that is then automatically backed up. I don't know the correct word, but maybe "liberating" describes the feeling best.

All of this does not require super-secret knowledge, just some motivation (and you should not be completely broke). I started in 2017 with the plan to learn Linux and get away from FAAN(M)G and I am happy I started this endeavor back then.


That all sounds great, but what we desperately need is instructions on how to do things like this for the less technical, and easy maintenance… I think those are the limiting factors for widespread adoption.


I agree. We need more and better instructions and easier-to-set-up tools - all of this got a big boost with Docker (Compose). On the other hand, we need to pair the less technical with the power users and help each other. I am responsible for about 8-10 people. This frees them from cloud providers and helps me build better and more robust solutions.


You’re doing great work then, well done!

I run a synology nas with docker compose for some services and it works well. With a young family though when something does go wrong it can throw out a few days of my (very little) spare time.


Yeah, I have a family myself. This only worked for me because I have the habit of going to bed early (like 8 PM) and getting up at 4-5 AM. This gives (or: gave) me about 1-2 hours before breakfast to learn Linux and DevOps.


I once had a plan for a website where we’d all post our self-hosting architectures as a kind of blueprint of known-good configurations, plus some instructions to build them… another project dropped due to lack of spare time.


Having used FreeNAS and TrueNAS Core and TrueNAS Scale, honestly it's not that complex and doesn't require any technical knowledge to get a basic system set up with backups to a cloud location of choice. The docs are very straightforward apart from a few specific terms you might need to Google but help tooltips generally do the trick, and everything that regular users might need is accessible through a trivial web UI.


There definitely is a market for someone who wants to make a full-on at-home cloud like that a product.

Just a fancy box, you give it an internet connection and power and then it Just Works. 100% local.


I spent a good number of years thinking this was the right solution and heading towards it. I had drawn up designs for my house, solar installation and half a rack of kit. This was to run a domain for my immediate family and all services including media storage and streaming. Solar was going to be 100% grid-independent, i.e. not some crappy provider-based install. The infra was all targeted at Debian.

Then I got two sick parents and a divorce on my hands.

Then I realised exactly how much time, energy, money and headspace managing all this crap really took out of my life, because I had to spend it on other stuff instead - not out of choice but necessity. One of the absolute killers wasn't really the cost and complexity but the time: curating my data and dealing with it is incredibly time-consuming if you have a lot of it. One of the worst sub-parts was the question from the kids - "I want this music" - and then going to get it, dealing with it, and distributing it to everyone. This would take hours a month to manage.

I came out of the other side of this somewhat unscathed thankfully and with a much larger pile of cash than I thought I was going to.

So a year-long project took place. I combed through several TB of data from myself and my parents and did a huge minimisation effort. I deleted all the fuzzy ass photos, pictures of stuff no one cared about, all the films I'd already watched, deleted all the crap music, deleted scanned paperwork dating back to 1989 I'd never need, everything. I also did a physical clean-out at that point. Then I scanned all the family photos as a backup.

The end game was that everything everyone in my family ever did, or found value in, could be crammed into 200GB of iCloud with 40GB left over. Apple got to handle the music too. So that's how it rolls. I have no infrastructure other than a laptop, router and offline backup drive now, no huge energy dependency (my energy usage is stupid low), and my domain sits on AWS/CloudFront and Apple iCloud+.

There's a dependency in one form or another, so I'll pick the one that makes my life easier. Feeling liberated, for me, is not having the weight of this on my shoulders personally and not leaving that weight on my kids' shoulders one day to pick through and deal with. I've been through too many dead people's stuff to want to do that to someone else now.

If I want to learn something, I'll do a formal higher education thing, because I have the time to do it now!

Edit: notably there was an advantage here at two points. Firstly my internet connection was down for 2 days when someone cut the cable. I did not lose any services because I could 4G tether. And secondly I moved house and had several months of hell trying to get an internet connection sorted out. Again, 4G was fine. I was pulling 100 gig a month over that 4G working from home and there was no noticeable material difference for me.


I know what you mean. I cannot really comment (I have a sick parent now, too, unfortunately).

Just one answer to this:

> One of the worst sub-parts was the question from the kids: I want this music and then going to get it and deal with it and then distribute it to everyone. This would take hours a month to manage.

There are a million ways to set this up, I know. This is how I did it: people have their "music" folder that is synced to the Nextcloud server. This folder is then bind-mounted (read-only) into the Funkwhale Docker container inside an LXC container. On file updates, Funkwhale scans the files and adds any new music to people's libraries, so they can immediately listen to their new music. There is no human intervention required - zero admin work (beyond the initial setup).
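For illustration, a read-only bind mount like the one described above might look like this in a docker-compose file - this is a hypothetical fragment; the image name and host path are my assumptions, not the commenter's actual config:

```yaml
services:
  funkwhale:
    image: funkwhale/all-in-one:latest   # illustrative image name
    volumes:
      # Nextcloud-synced music folder, mounted read-only so Funkwhale
      # can scan it but never modify or delete the originals:
      - /srv/nextcloud/data/alice/files/music:/music:ro
```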

I followed this design principle throughout all my services, and it is pretty much "if I have no time: leave it running and it will run 12 months+ without requiring work".

I am at a stage now where this really _saves_ me time. All the document organization, paperless office etc. free up my spare time so I can play with my kid, help my parents etc.

I simply lost all trust in cloud providers when Amazon once said I could back up my photos for free (5TB). The upload took one month. 12 months later, they said they would deprecate the service. I had to pull everything down again. Never again.


Sorry to hear about the sick parent.

Regarding the music, it's mostly obtaining it in the first place where the issues appear: it's difficult to get (buy / warez / rip), and I have people with little to no technical ability or interest using it. Ergo Apple Music worked nicely.

Wait until it all breaks and you spend all night up because the kids are complaining that they can't get to their music (been there). I can point at Apple now and say it's their fault ;-)

As for trusting the cloud, you don't have to. Just have an exit plan. For me it's a backup drive, and Spotify should Apple go to shit.


Could you elaborate on what you mean by off-site and offline mirror? How do you go about maintaining that?


Good question. This is a bit tricky, and you may argue about the term "offline". My offsite ZFS array sits at a remote location (my parents' house, 100km away - just enough distance for a nuclear strike). It is automatically turned on using a Shelly Plug S on a timer, once per week. It then connects via VPN automatically to my main site, runs some checks for ransomware (like a changed file count), pulls ZFS updates and then shuts down again.
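The "changed file count" ransomware check could be sketched like this - a minimal illustration only, not the actual script; the state-file name and the 10% threshold are my assumptions:

```python
# Sketch of the "abort early on suspicious change" idea: before pulling
# backups, compare the source's current file count with the count from
# the last run, and bail out if too much changed (mass encryption by
# ransomware typically touches almost every file).
import json
from pathlib import Path

STATE = Path("last_count.json")   # assumed state-file name
THRESHOLD = 0.10                  # assumed: abort if >10% of files changed

def count_files(root: str) -> int:
    return sum(1 for p in Path(root).rglob("*") if p.is_file())

def safe_to_pull(root: str) -> bool:
    """Return False (skip this backup cycle) on a suspicious mass change."""
    current = count_files(root)
    if STATE.exists():
        last = json.loads(STATE.read_text())["count"]
        if last and abs(current - last) / last > THRESHOLD:
            return False  # suspicious: refuse to replicate this cycle
    STATE.write_text(json.dumps({"count": current}))
    return True
```

Aborting (rather than auto-deciding) matches the comment's "my script currently just aborts early" behaviour: a human then inspects before replication resumes.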


If you’re sending zfs snapshots (and not deleting them), doesn’t that give you protection against ransomware? If so, a high number of files changed might be exactly when you want to snapshot and replicate, to minimize the worst case outcome (that the ransomware gets root and zfs destroys your local copy).


Yes, but if your hypervisor is compromised, ZFS snapshots could theoretically also be deleted/modified/etc. - I wanted to cover this scenario. This is still work in progress, and my script currently just aborts early in case of anything suspicious. Also, the file-change check only applies to my Borgmatic solution; ZFS only works at the snapshot/dataset level.


Sorry to be disparaging, was this generated content? The points made are so hand wavy, and address barely any of the topics they're headlining, and pretending there are no similar issues with local storage.

This is very questionable quality, relying on just one piece of reasoning repeatedly.


Security and cost tend to be less of a win locally than you might expect. I'm a huge advocate for going local and owning your stuff, but it depends on circumstances.

2 TB from Dropbox is $120 per year. A 2-bay Synology is $448. Brand-name 2TB hard drives are ~$50, so all in you're looking at ~$550 for a reasonably nice home NAS. That will probably use around ~20 watts on average, or 1 kWh per 2 days, working out to 175 kWh per year, or ~$52 if you're paying $0.30 per kWh in a high-cost-of-energy place like California. Assuming you get ten years from the NAS box, that puts you at ~$1,070 for the NAS versus $1,200 from Dropbox...

So you get into big questions of who you trust to last longer: the hardware or the current storage-provider pricing. Also, which one do you think will take less of your time to manage? You should still 100% have an offline backup for either one, so that's another $50 in both cases for an external HDD that you store in a closet.
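The back-of-the-envelope math above can be reproduced as a quick sketch - every price and the $0.30/kWh rate are the comment's illustrative assumptions, not universal figures:

```python
# 10-year cost comparison using the figures from the comment above.
YEARS = 10

# Cloud: Dropbox 2 TB at $120/year.
cloud_total = 120 * YEARS

# Local: 2-bay Synology ($448) + two 2 TB drives (~$50 each),
# drawing ~20 W continuously at $0.30/kWh.
hardware = 448 + 2 * 50
kwh_per_year = 20 * 24 * 365 / 1000        # ~175 kWh
power_total = kwh_per_year * 0.30 * YEARS  # ~$526
nas_total = hardware + power_total

print(f"cloud: ${cloud_total:.0f}")  # cloud: $1200
print(f"nas:   ${nas_total:.0f}")    # nas:   $1074
```

At these assumed prices the two land within roughly 10% of each other, which is why the decision ends up hinging on trust and time rather than raw cost.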


I set up storage and replication for my father between Florida (primary) and a midwest state (secondary) using two Synology NAS devices: a 30TB corpus (VMs and install images - he likes to tinker with tech, being retired - as well as lifelong personal data) he was carrying with him on external drives, with ~50TB usable after accounting for RAID whatnot. To store this in Backblaze B2 was going to be $180/month ($6/TB). He breaks even in ~6 months at ~$1200 for the equipment outlay. Power costs are immaterial outside of California (less than 10 cents/kWh at both sites, rooftop solar on both properties, YMMV).

I really wanted the cloud solution to win, because I am not around often and don't have time to troubleshoot mouse trap tech setups. I can twiddle cloud primitives from anywhere, which is very appealing for obvious reasons. But in this case, the math was clear.

(no affiliation with Synology or Backblaze, but a big fan of both, infra/devops engineer in a past life, I also pay for Dropbox personally)


Synology is mostly web-based configuration or SSH… maybe install Tailscale so you can remote-admin it easily?


Indeed, we use Headscale [1] to maintain a Wireguard mesh. Synology NAS devices can act as Wireguard endpoints, but we also pick SMB routers that can do the same. I'm partial to Glinet at the moment (GL.iNet GL-MT3000 (Beryl AX) [2]), but the recent HN thread on the UniFi Express [3] has me keeping an eye on it (no Wireguard support I believe).

[1] https://github.com/juanfont/headscale

[2] https://www.gl-inet.com/products/gl-mt3000/

[3] https://news.ycombinator.com/item?id=38504027


Or just set up OpenVPN; it really only takes a few minutes on, say, pfSense/OPNsense.


I'm quite a fan of self-hosting, but I can certainly acknowledge that sometimes cloud solutions are better. But it's not fair to only compare to a Synology NAS. If you're fine just running a standard Linux distro, rather than Synology's software, you can do much better in terms of hardware, for much cheaper. An Optiplex is ~$100-150 on eBay, has better hardware, and you can either just save that money or invest it in more storage.

It's worth acknowledging that it uses more power, but still, that's $300 less than the Synology up front. Some extra power draw doesn't add up that fast. Plus, IMO there's a lot of value in being able to control it all yourself. If something breaks (which isn't that common, assuming this simple setup) you can fix it yourself, and you have full control over your data.

I'd like to emphasize that it's worth taking into account the time and stress to run a NAS, it's not a perfect fit for everyone, even technical people, and priorities vary from person to person, but assuming you're just running a basic NAS with Syncthing, especially if it's all in Docker, it's very simple and low-maintenance. Hell, use an RHEL clone and set up watchtower, and you can have automated updates for a decade - that is, until it's not supported anymore.

Disclaimer: I'm not familiar with Synology NASes, I haven't used one myself.


If you repatriate content locally, do you need a NAS?

I see the alternative to the cloud as being ~$50, or more realistically ~$100, for a usable external drive. No power consumption when it's not in use. Higher longevity than a NAS with the same drives. I don't think the alternative is a self-hosted cloud with requirements of being online, reachable by multiple devices, etc...


It depends on your use case. If you just need storage as a backup for a single device, then there's no need for a NAS, but if you're backing up multiple devices and/or want to share data between devices, then a low-power NAS is a good option.


Yes, it's good to remember that use cases differ, and that not everyone has a few TB to share between multiple devices. Of all the people I know who still use cloud storage, none of them pay for it because it's not needed: the free quota (15GB) is largely sufficient for a sync-everywhere strategy where all files are always copied everywhere, thanks to Syncthing for example. Even for photos this is doable.


For the home NAS, assume a 5-6 year lifespan for the drives, so over those 10 years you will spend another ~$100.

Of course this assumes you just need storage, while a NAS can provide other services, allowing you to consolidate calendars, shared note-taking, password managers, etc.


Are the economies of scale really that aggressive here? A feature-rich cloud solution ending up cheaper than self-hosted by around 10% just using napkin math?


No one would spend $450 on a NAS and then put 2 TB drives in it. My NAS was $450 and has three 18TB drives with another slot not yet used. The drives were $250 each or so on sale. So for $750 + $450 I have an always-on home server that runs my home automation, DNS ad-blocking, local webservers, a media server with hardware transcoding, an IP-cam-to-HomeKit adapter, a Nintendo Switch homebrew e-shop, a software controller for my Omada network, a Tailscale node to access my home network from anywhere, and many more services, AND it has _36 TB_ of storage with parity, so it’s safe from a single drive failure. I replicate everything important to an 18TB external drive on a PC with Backblaze Personal, which gives me offsite backups of 18TB+ for $60 a year. Nothing is really competitive with this if you have a lot of data you want to keep (my wife has a lot of video content for her work.)


This also adds to the cost. You start looking at $1-2k in equipment minimum for a good amount of space, and it adds up fast. It's the best way to get that much storage, but it's not cheap.


If you go cheap, you could probably cut the cost of self-hosting in half in terms of obvious cost. The cheaper you go the higher the maintenance tends to be though, so it may not be a real win.


Don’t forget to adjust for inflation.


One of the big factors in this is how the costs go up! Will the cloud provider raise their price? Probably. Will your cost of electricity go up? Probably. Becomes a highly dimensional decision with no single obvious optimal answer, as a lot of low probability high risk events will dominate cost. If you're scared about house fires, the expected realized lifetime of the hardware goes down. If you're scared about OVH fires... likewise.


> a one-time payment for an external hard drive may be a more economical alternative over time.

If you care about your data, you are playing with fire.

Drives do not store data forever. Data must occasionally be read and rewritten - migrated from old media past its lifespan to new media - to maintain it.

Good storage software, with the ability to write your data with either mirroring or striping of some sort, is able to routinely scan your entire data set, detect bit rot, and rewrite sectors that contain bit-rotted data to new media.

You simply do not get that level of protection buying a single external hard drive alone.

Most enterprise storage systems do this. Most cloud storage does this. It’s worth paying for if you have data that is valuable.


I don't have the impression that anybody here is discussing a single external hard drive.

Also, there's nothing particularly "enterprise" or technically exceptional in backup software that can read old versions of files (for testing) while writing new ones (for current backups) and can copy old backups to new storage locations.


I use the 'bitrot' app to store checksums of files and to check for bitrot. It works by checking for data changes that aren't accompanied by a file-modification-time change. With this you don't need something like RAID or ZFS to detect bitrot.

https://github.com/ambv/bitrot

But of course you still need backups. The way to use 'bitrot' in combination with backups is that you don't back up the bitrot DB file. Instead, you run 'bitrot' separately on the main disk and the backup disk.

As for fire-proofing my data: I store a disk at an off-site location (parents' home), and I regularly swap my main disk with the backup disk.
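A minimal sketch of that detection idea - not the actual 'bitrot' tool's code, and the database filename is an assumption: a file whose content hash changed while its mtime did not is a change the user didn't make, i.e. likely rot.

```python
# Keep a checksum + mtime per file; flag files whose hash changed while
# their modification time stayed the same (silent corruption).
import hashlib
import json
from pathlib import Path

DB = Path("checksums.json")  # assumed database file name

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: str) -> list[str]:
    """Return paths that silently changed since the last scan."""
    old = json.loads(DB.read_text()) if DB.exists() else {}
    new, rotten = {}, []
    for p in sorted(Path(root).rglob("*")):
        if not p.is_file():
            continue
        key = str(p)
        new[key] = {"sha": sha256(p), "mtime": p.stat().st_mtime}
        prev = old.get(key)
        if prev and prev["sha"] != new[key]["sha"] \
                and prev["mtime"] == new[key]["mtime"]:
            rotten.append(key)  # content changed, mtime didn't: likely rot
    DB.write_text(json.dumps(new))
    return rotten
```

Run it periodically (e.g. from cron) on both the main disk and the backup disk, each keeping its own database, as the comment suggests.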


I have used local drives for my data for decades. I have data since 1995. Nowadays I also have a remote offline backup somewhere. It seems to work for me so far.


With your setup, is that having your data on a single local drive at any time, or was it generally copied onto a few?

aka "making sure you're not hosed if a drive goes bad"


It started with floppy disks in a large box. They actually went bad quite often. From the moment we had a 20MB hard disk[1] for about €400, everything went on that hard disk; I think we made copies on the floppy disks just in case (not sure though). It was not important: private stuff, school stuff, programming for fun.

After I got a Windows computer, I started hoarding stuff on the hard drive. I don't think I bothered to make copies. When I started doing paid stuff, I bought an external drive for backups. I have been doing that since, in various configurations. My configuration is now: all computers in the house back up to a central server in the house. I rsync to an external HD from that server every now and then. The external HD is stored at a different location.

I have realized that the data is the most vulnerable when I am doing the backup. If the house goes up in flames at that moment, I lose everything. I should get a second HD to prevent that.

[1] https://www.computinghistory.org.uk/det/47582/Atari-SH205-Ex...


I have a cron job running snapraid scrub, and a SMART daemon with email alerts. Once I had to quickly buy an HDD (next-day Amazon) because one was dying.

Big appeal of cloud storage is that the data won’t go up in smoke if there’s a fire. But everything else you can do if you’re patient enough.


> Big appeal of cloud storage is that the data won’t go up in smoke if there’s a fire.

Unless it’s OVH…


Security: with local storage it's on you to make sure that your NAS or whatever is constantly up-to-date with security patches. And that's assuming your NAS provider keeps providing patches. It's on you to do all necessary monitoring for intrusion detection etc.

Reliability: with local storage it's on you to keep backups, make sure those backups work, replace failing drives, don't accidentally wipe the drive, and make sure you have distribution across geographies. Good luck with that.

Cost: sure, local wins here.

If systems administration is your fun hobby, or you are incredibly cost-sensitive due to large scale, by all means manage storage locally. Otherwise pay professionals to do it.


> Reliability: with local storage it's on you to keep backups

With cloud storage it's still on you to keep backups, never forget that.

First, the cloud company may spontaneously lock you out forever with no recourse. You better have backups.

Even if that doesn't happen, they might lose your data. If you are a personal (non-commercial) account you very likely don't have any SLAs, so anything can happen. Even on commercial accounts, SLAs ultimately only provide liability, not a guarantee of service.

So it's always on you to have backups.


My experience setting up a Linux NAS server on an HP ProLiant N54L is that once you’re past the initial setup cost (doing Linux sysadmin things, which is very time-consuming and sometimes frustrating) it’s basically done.

Sure you’ll have to apply updates and reboot from time to time, but that’s almost nothing.

As for security, set things up correctly (proper Samba auth) and use firewalls. UFW on the box. I have a Linux router and only computers on the right Ethernet ports can touch the NAS. Done.


Security: with the cloud nobody cares; things leak all the time without much consequence

Reliability: unless you’re a multi-billion-dollar customer nobody cares; Google can simply ban your account and they won’t assign a human intern to perhaps reconsider dropping all your data, because the effort would never pay

With local storage there is at least one person who cares about those problems. With the cloud there is no one.


>Reliability: unless you’re a multi-billion-dollar customer nobody cares; Google can simply ban your account and they won’t assign a human intern to perhaps reconsider dropping all your data, because the effort would never pay

Because of course it just has to be Google that you use for your cloud storage? We can't consider any of the dozens and dozens of cloud alternatives also available with much better customer service involving real humans?


This is starting to become one of my pet peeves.

People just tout "lol every cloud provider only has AI support with no humans" - when it's just Google - a trillion-dollar company - skimping on user support.

Meanwhile there are dozens of alternatives, small, medium and large where you can contact an actual human and get help. No algorithm will ban your account automatically without the right to object.


Security: you don't need a NAS because your files don't need to be accessible "online" -- they're already local to your own computer.

Reliability: with local storage, you have the ability to make backups without re-downloading multiple terabytes of data each time.

And there's definitely no way your files could just spontaneously disappear from cloud storage, either:

https://www.forbes.com/sites/jaymcgregor/2023/11/29/google-i...


All of these “cloud is better” reasons have proven to be no better than some kind of local NAS + backup, but a lot more expensive.


for 99% of applications, especially with proper system design, this stuff is super easy now with containers and automated backups

if you’re storing lots of user-generated data, you probably should have just scrapped the idea


It's definitely worth people considering getting local storage. Personally I like the combo of onboard storage, sync to cloud storage, and routine backup to external storage.


You can deal with this problem only if you are very disciplined about it, like GenX and prior generations were with their physical photo albums and paper financial records.

At some regular cadence (say weekly or monthly, whatever your balance of loss tolerance vs. discipline is), you will have to organize (tag) and archive your digital assets (photos, finance/health records, etc.), compress and encrypt them, and then do an offline disk backup as well as upload them to a cloud storage like S3. You will have to keep your encryption keys, S3 credentials, etc. safe and secure. And you will have to do it with serious discipline, at the cadence you decided, or at least upon completion of major life milestones, to save your digital life.

Then, every once in a while, at least once a year, you will have to download your entire archive, decrypt and decompress it, and verify that your system of tools and processes still works.

If you skip any of these steps, you are likely to find that you have screwed up and lost your digital life irrevocably.

Btw, just like your GenX parents/grandparents kept physical photo albums and file folders, you can keep the digital equivalent on BD-Rs.
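The archive-and-verify cadence described above could be mechanized along these lines - a hypothetical sketch with function names and file layout of my own choosing; the encryption step (e.g. gpg/age) and the S3 upload are deliberately left out:

```python
# Bundle a folder into a dated tar.gz and write a SHA-256 sidecar next
# to it, so the yearly "does my archive still verify?" check is trivial.
import hashlib
import tarfile
from datetime import date
from pathlib import Path

def archive(src: str, dest_dir: str) -> Path:
    """Create dest_dir/backup-YYYY-MM-DD.tar.gz plus a .sha256 sidecar."""
    out = Path(dest_dir) / f"backup-{date.today():%Y-%m-%d}.tar.gz"
    with tarfile.open(out, "w:gz") as tar:
        tar.add(src, arcname=Path(src).name)
    digest = hashlib.sha256(out.read_bytes()).hexdigest()
    sidecar = out.with_name(out.name + ".sha256")
    sidecar.write_text(f"{digest}  {out.name}\n")
    return out

def verify(archive_path: Path) -> bool:
    """Recompute the archive's hash and compare against the sidecar."""
    sidecar = archive_path.with_name(archive_path.name + ".sha256")
    recorded = sidecar.read_text().split()[0]
    return hashlib.sha256(archive_path.read_bytes()).hexdigest() == recorded
```

Encrypting the resulting tarball before it leaves the machine, and keeping the keys somewhere that survives the same disaster as the data, remains the user's job, exactly as the parent comment warns.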


You should do this even if you use Dropbox or some other service provider. There’s still data loss potential. Google has lost entire Gmail accounts for example. The durability isn’t 100% after all… those extra .9’s aren’t to cover durability issues in the architecture, they’re to cover “oopsies” where the provider loses your stuff.


Organizing and keeping a photo album seems a much lower barrier and mental load for most people than the steps in paragraph 2. Let alone paragraph 3.

I am all for it, and practice something similar. I don't know anyone else though.


How is this reaching the front page? I assume everything I read is to some extent generated content, but I could generate an article on this topic in under a minute with more depth and nuance.

"The cloud may be a seemingly secure space, but storing your photos and documents on your computer provides an extra layer of control and security. There is less concern for unauthorized access or data breaches because your files are physically with you. "

Regardless of what the audience is here, this is just nonsense.


Yeah, gonna jump on the "this seems like autogenerated blogspam" train here.

The whole golivecosmos site is filled with similar articles that just link to other places with very little actual substance.

Also the submitter's history seems like they've only posted stuff from the same domain...


It's a marketing blog for a service that relies on storing files locally, so that sounds about right. The HN username also matches the blog's author, so it does feel like they're posting exclusively for self-promotion.


I don't disagree with the premise, but the author's case is deeply flawed.

> Instead of paying a monthly subscription to store and access your files, a one-time payment for an external hard drive may be a more economical alternative over time.

Cloud storage has much, much more redundancy than a single external hard drive, and the author discounts the value of that redundancy.

> Reliable access

Have fun getting grandma to cycle her drives when the drives inevitably fail! Also, have fun setting up networked access over the Internet, without resorting to a drive provider's portal (misses the point of getting off the cloud), dealing with the sorry state of static IP addresses for residential connections, helping grandma set up a domain and DNS, hoping nobody attacks your exposed network, etc.

Non-starter.


Unfortunately, cloud storage really does have reliability issues. A recent case in point, though there are many more: https://www.forbes.com/sites/jaymcgregor/2023/11/29/google-i...


So in other words, Google in particular has a reliability issue (big news flash there). Unless you're nitpicking for its own sake, you presumably know there are many other cloud options out there, with which you can even use your own encryption layer (think Boxcryptor, Cryptomator, rclone, etc.) for maximum privacy. It's neither expensive nor hard to run simultaneous, fairly simple backups of one's device data with multiple providers that each have better customer-service reputations than Google.
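For anyone curious what that looks like in practice, one way (not the only one) is rclone's "crypt" backend layered over an ordinary cloud remote. A sketch of the config file — the remote names and bucket are made-up examples, and the password value has to be generated with `rclone obscure`:

```ini
# ~/.config/rclone/rclone.conf (example)
[s3remote]
type = s3
provider = AWS
env_auth = true

[encrypted]
type = crypt
remote = s3remote:my-backup-bucket
password = <obscured value from `rclone obscure`>
```

Then `rclone sync ~/Documents encrypted:documents` uploads only ciphertext, and repeating the pattern with a second remote at a different provider gives you the simultaneous multi-provider setup.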


> It's neither expensive or hard to run simultaneous, fairly simple multiple cloud backups ...

Yep, that would potentially be a suitable workaround. Not sure I'd agree that the pricing isn't expensive, but that's probably one of those things that depends on each user's circumstances.


Describing the costs of long-term local storage as "a one-time payment for an external hard drive" shows a lack of understanding. I want to keep my important photos for decades, which is beyond the life of a single disk, or even a RAID. For most people cloud storage is a rational choice.


Run your own cloud (specifically OwnCloud). The setup takes effort, but afterwards it's easy. I run it on a microserver at home, with port forwarding for access.

All the benefits of "cloud", but you know where your data is. Do remember to make backups, but you need those for commercial services as well.
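For anyone wanting to try the same, a minimal single-host sketch (assuming the official owncloud/server Docker image, which serves on port 8080 by default; the host path is an example):

```shell
# Run OwnCloud in a container, persisting data on the host and
# restarting it with the machine.
docker run -d --name owncloud \
  --restart unless-stopped \
  -p 8080:8080 \
  -v /srv/owncloud/data:/mnt/data \
  owncloud/server
```

Then point the router's port forward at 8080 on the microserver, ideally with a reverse proxy terminating HTTPS in front of it.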


Considering OwnCloud’s current breach, following its recent acquisition, you might want to check out the NextCloud fork, which seems to be actively worked on.

https://nextcloud.com/blog/security-statement/


NextCloud is certainly also an option.

That said, the security issue wasn't really the fault of OwnCloud. They use a Microsoft library, which itself included a further library. That downstream library had the security issue. Unfortunately, it happened to interact especially badly with the containerized version of OwnCloud.

That kind of downstream security issue can bite any project. I don't blame OwnCloud in the slightest, and they were very quick to acknowledge the problem and post the solution.


In the past I've been hacked via some OwnCloud LDAP vulnerability.

Do not allow access to OwnCloud without a VPN!!


I don't doubt your experience, but I wonder what the issue was. The only LDAP-related issue I can find (CVE-2021-40537) also requires compromised admin credentials to exploit. Of course, with compromised admin credentials, all bets are off anyway.

I know it is only anecdotal, but I have been running OwnCloud for many years now, available without a VPN, with no security problems.


What this article fails to address is that there isn't a simple solution for users to have a "local cloud". A NAS or a hard drive attached to a computer will always be beaten by having all of your photos/software with you wherever you go.

There needs to be a better way for personal devices to access personal or home networks before personal cloud solutions can become accessible to the average person.


Most popular NAS units have built-in services for remote access that regular users can use.



