Interview with CEO of rsync.net: “no firewalls and no routers” (console.dev)
654 points by dmytton on March 18, 2021 | 334 comments



I wonder if they have any sales to large enterprises or similar institutions.

In my experience, the larger organizations will have a "security" questionnaire required of their vendors, and the person administering it is a droid, incapable of evaluating whether the questions, originally written in the mid-00s and only updated for buzzword compliance since, are applicable to modern security practice today, or to the particular product/service/vendor in question. And no firewalls or routers would be massive, disqualifying red flags on such a questionnaire.

Never mind that a KISS setup tends to bring security because of its minimized attack surface. In the minds that write and administer those questionnaires, security only comes from sufficient amounts of the right kinds of complexity.

I'm sure it can be done. IIRC, Cloudflare doesn't use any firewalls, and they do some big business. It just isn't easy to get past the droids programmed to ensure that all pegs shall be properly square, IME.


"I wonder if they have any sales to large enterprises or similar institutions."

Yes, certainly.

We frequently fill out very detailed checklists and questionnaires related to our quality policy, standards, internal policies, etc.

We're also very honest about how we approach these issues:

https://www.rsync.net/resources/regulatory/pci.html

... and they generally appreciate the honesty.


It’s hilarious that the first "vulnerability" in the example report[0] linked on this page is basically "SSH is accessible". Well… duh!

[0] https://www.rsync.net/resources/regulatory/PCI_usw-s005_repo...

EDIT: It’s marked as "PASS" though, so it’s all fine, just funny.


I once had someone report responding to ping as a vulnerability. For the public-facing firewall.

We sent them back a list of prominent servers that respond to ping.

Including the web server of the expensive agency that had produced the report. And whose web server had an expired SSL certificate.


Talking about PCI compliance I always remember this: https://serverfault.com/questions/293217/our-security-audito...


Oh wow this is next level though.


Well, PCI compliance is different from regular server administration (a lot of it being smoke and mirrors, yes).

I do not believe ICMP (ping) is an automatic-fail condition for PCI (at least for certain SAQ levels that I'm familiar with) - however they do show up as warnings, particularly if you can get a timestamp response (to be used in timing-based attacks).

PCI prefers systems that handle CHD be "invisible" to the outside world, in an attempt to hide the systems an attacker might take interest in. Not always feasible (eCommerce, for example), but you gotta jump through the PCI hoops if you don't want to be stuck holding the bag if there's some breach.
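
For anyone curious what the scanners are flagging: suppressing ICMP timestamp responses is a one-liner on most hosts. A minimal sketch, assuming plain iptables on a Linux box (adjust for nftables or whatever your hosts actually run):

    # Drop ICMP timestamp queries/replies (types 13/14) so they can't be used for timing inference
    iptables -A INPUT  -p icmp --icmp-type timestamp-request -j DROP
    iptables -A OUTPUT -p icmp --icmp-type timestamp-reply   -j DROP
    # Ordinary echo-request/echo-reply (ping) can stay, per the thread above
    iptables -A INPUT  -p icmp --icmp-type echo-request -j ACCEPT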


PCI compliance is to reduce the chances of legal liability. Better security is sometimes a side-effect of that compliance.


To reduce the liability of the credit card company maybe, by making the process so complex and onerous that it is virtually impossible to complete a survey without some error or omission that would almost assuredly be used as a reason to invalidate any liability for the credit card company should some bad event take place. Source: have had to complete PCI surveys from multiple vendors.


I used to get so tired of having to write up explanations of why my FreeBSD server couldn't possibly have failed a security check for a Linux vulnerability, or that the web server they were complaining about didn't actually exist, or a million other retarded false positives every quarter. Thank goodness I don't deal with PCI any longer.


For the last security report I had to deal with for a client, the main vulnerabilities were reported against a Google site that was merely linked to from the clients site. Not PCI compliance, so more flexibility in dealing with their incompetence, thankfully.

They reported a number of purported (non-existing) "vulnerabilities" against said Google site that included that it stopped responding to their probing soon after they started hammering it with sketchy requests... They did, to be fair, point out that this could be a defence mechanism, but dinged it for preventing them from checking for other vulnerabilities.

At least I didn't have to explain why that one was nonsense - it was rather obvious to my client that the agency they'd hired were being idiots. It's not like it was difficult to see either - the domain name of the site they'd hit had "google" in it.


Sounds like a scan mis-configuration on your client's part. All PCI vuln scanners I've used require you to specify IP addresses and Domain Names you want scanned, and do not follow on-page external links.


Not the client. Third party agency hired to assess the security, and who clearly did not apply any critical thinking before sending it off. And as I pointed out, not PCI.


This was not for PCI compliance.

And the system by definition could not be invisible - the ip in question was in DNS and was what you'd connect to the web servers on.


Tenable keeps telling me that something "interfered" with its scans for PCI compliance - isn't the point of a firewall/IPS system to do that? But I don't even have it on for the IP in question, so I basically have to just edit my network range to remove it. It really is bullshit.


My parent org is starting to take their vuln scan results and report them to c levels.

When they told me, I informed them I stopped using their vulnerability scanner years ago because they would not allow me to change anything in it, including exclusions for ICMP timestamps or other vulns I've mitigated while proper fixes were in the works.

So I rolled my own and use that to audit my systems. They don’t care because “policy”. My C-levels will just ask and then promptly disregard all future reports, adding to the noise.


I did a job where I was given access to a server in the form of a set of credentials for an HPE iLO, which was accessible over the Internet. From there, we could use the remote console to log on as root.

HPE iLO doesn't support MFA or any form of public key authentication, and its security history is much worse than SSH's. It requires several ports open, and the old version they had required Java plugins on desktops and all sorts of nonsense. Using it outside of emergency repairs is a terrible experience due to console refresh lag and the fact you can't copy + paste.

The reason I had to do this insecure and annoying process is that a PCI assessor had told them it would be a hard fail to have port 22 open on the Internet, but this would apparently be fine.


I don't know... I mean, maybe the security posture is to annoy the hackers into giving up?

("This thing requires a Java applet and is slow as hell. Screw it, let's just pwn the bank across the street")

I'll call it Security by Inconvenience.


That reminds me of a post[0] on alt.sysadmin.recovery.

The hackers were annoyed by the compromised machine so they installed security updates and did other system administration tasks.

[0] https://groups.google.com/g/alt.sysadmin.recovery/c/ITd7OlMr...


Amusing, but today there are two kinds of hackers: people who manually run a campaign like in your story, and the endless hordes of bots that automatically exploit systems to turn them into botnet slaves or cryptolocker hostages.

You can’t inconvenience a bot.


I vaguely recall some kind of malware that upon infecting a system scanned the system for other malware and removed/disabled it. The motives were far from pure, obviously. (Although there also was a case, I think, of a piece of malware specifically created to ensure "infected" system had up to date AV software and were up to date update-wise. We sure live in strange times.)


That's why your security solution should include a mix of every known technology; hackers need to know everything from COBOL to Rust.


Such strategies are remarkably effective, and arguably describe all security in a nutshell.

Every time I notice an obscure feature in a Google product or service and go "hm, I wonder if that could be exploited", I then always go "...meh, it'll take too long and require too much concentration to figure it out."


No, not at scale. You may be discouraged, but there's someone who will have lots of fun breaking that specific feature.

See for example @jonasLyk, who spent the last half a year (?) trying to abuse almost nothing but alternate data streams and junction folders in Windows.


Wow, this guy is mad

https://twitter.com/jonaslyk

I can't help but picture him as a sysadmin walking away from a bunch of servers that are mysteriously 40% faster than ever before, but then he gets stopped at the door of the datacenter by some unimpressed looking lawyers who glare at him until he puts everything back

Thanks for the reference, and fair point, yeah that's not how it works at scale.


The new iLO has an "HTML5" and a Java console. But 3 wrong passwords and you're blocked for 10 minutes (by default).


PCI QSAs are notorious for being complete jackasses. You have to be very careful about vetting them. That isn’t the dumbest thing I’ve heard like that!


Too annoying. I would’ve shut off SSH for the “assessment” then moved it to a different port after.


SSH doesn't have to be on port 22. you can leave an empty honeypot on port 22 and run SSH on whichever port you like
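
A minimal sketch of that, assuming stock OpenSSH (the honeypot on 22 is whatever you like - an empty listener, fail2ban bait, etc.):

    # /etc/ssh/sshd_config -- run the real sshd somewhere other than 22
    Port 2222
    PasswordAuthentication no
    PermitRootLogin no

Moving the port adds no real security, it mostly just cuts the log noise from drive-by scanners.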


It irks me a lot that GCP always puts "SSH port open" in Security Command Center as a HIGH vulnerability while I'm running private subnets, FFS.


I was just lamenting the state of these forms today. I feel like many of the questions are variations on the theme of “what antivirus software do you use to protect your basement mainframe from your corporate network traffic?” “N/A” isn’t a long enough answer, so it feels like you need to keep explaining “the Internet”, the lack of a centralized corporate network, SaaS, etc.

Is there a modern, no-nonsense guide to filling these out honestly without telling the person doing the checkbox checking that their form is dumb?

I realize there’s a lot of rent seeking and money to be made by consultants in this space - I’m looking for the GitHub published guide or wiki to help smaller no-nonsense shops navigate the phrasing and map these vendor security questionnaires to “modern” technology.


This is going to look like I'm a plant with the timing of my comment and this "Launch HN" post that just made it to the front page, but with that out of the way: https://news.ycombinator.com/item?id=26513040

Looks like Stacksi is aiming to fix this pain point.


> Our platform only answers on port 22 with OpenSSH.

I do security and I title this "Most secured platform in the world."


Oh yeah? Well I run one where no ports are open. In fact, I haven't even connected it to the network.


Wasn't that the joke about how the original Windows NT server got its C2/ITSEC rating?


I haven't heard that joke!


I'll raise you-- I have one that I keep powered off...


I can top that: I don't have one.


That's nothing. I don't have thousands.


What are you two even talking about?


The three golden rules of computer security: do not own a computer, do not power it on, and do not use it.

https://en.wikipedia.org/wiki/Robert_Morris_(cryptographer)


A compo what?!


Been a happy customer for a while now, really love what you’re doing. I wish more companies were as direct and competent.


This is extremely honest and transparent. In addition to being good marketing, it probably attracts customers who won’t make BS support requests.


FYI, your "pricing" link at the top of that pci.html page 404's. The pricing link works from other pages however.


I see that that has now been fixed - thanks for pointing it out.


If we’re discussing website bugs, the “SSAE16 / SAS70” page[0] mentions 2015 as “upcoming”

[0]: https://www.rsync.net/resources/regulatory/sas70.html


The “burger menu” does not work on iPhone. Works from the front page.


Man, everything about your service is simple and direct! Amazing.


Isn't the nice thing about having an access fortress, that you can monitor the access more simply? Or is it just as simple for you to monitor access to all the identically configured machines? I suppose it might be.


I believe the usual name is a bastion host, not an access fortress.


Anything is possible if you are willing to get a little dirty and negotiate with other humans.

We initially had some troubles navigating these waters in the financial sector, but once we were able to convince 1 big customer to try our system on a trial basis, everyone else started to play along really nicely. No one wants to be the first one to try a new thing and get burned by it.

In 2021, you can sometimes leverage things like technological FOMO to make a business owner believe that they are going to lose out on future business value relative to the competition, whom you might frame as being willing to take on a bigger technological risk. And indeed, smaller clients in our industry are willing to overlook certain audit points (at least temporarily) in order to compete with bigger players.

Some might not like it, but being able to engage in the sales process and bend some rules occasionally is absolutely required to play in the big leagues. Once you are in, it's a lot easier to move around. No one has a perfect solution and everyone knows it. It's just a matter of who is the better sales person at a certain point.


I used to (late 2000s) work for a tiny, tiny company that was courting a customer in the mobile banking space. They wanted us to tick boxes. So we bought a box (some sort of Fortinet) that said it was a firewall and IDS. Plugged it in, used it as our new router. "Cost of doing business."

Could we have argued with them during the sales process? Only if we wanted to lose the sale. The Fortinet was cheap compared to the value of the contract.


Cost of doing business, or ... introducing new Fortinet vulnerabilities into your infrastructure?

I know you mentioned 2000s, but it's funny that these contractually obligated boxes might introduce more worry: https://www.bleepingcomputer.com/news/security/fortinet-fixe...


I guess you can still tick all the compliance checkboxes if you install the contractually obligated Fortinet device outside your DMZ and internally treat it as no more trusted than the rest of the internet...


Exactly. My first thought was: did they install Solarwinds as well?


Which is exactly what Kozubik was talking about!


lol. anti-viruses are the virus. the ultimate virus. don't execute any binary and you'll be fine.


Iptables didn't count?


And you can update it at its own rhythm, potentially different from your upgrade path. And you can make them tls-end for you. Your customer might even have 3000 of those and already know how to keep them happy running. Not so bad.


> And you can make them tls-end for you.

Nothing says end-to-end security like terminating TLS at a network choke point so intruders can easily snoop all traffic.


Case and point "SSL added and removed here! :)"

https://blog.encrypt.me/2013/11/05/ssl-added-and-removed-her...


FYI the common idiom is "case in point," rather than "case and point."

The phrase represents this idea: I have a case to make (or an argument), and there is a single element that is conclusive enough to make the whole case in and of itself.

You might say, "I can address this entire case in a single point." This shortens to "case in point." The implication is that the single point you make is enough of an argument to prove your whole case.


What is the threat model there? What if the system can't be upgraded, for reasons? What if your service/gateway is just behind the 'network choke' (who said you had to have only one?)? Are you paying to upgrade everyone and their perfectly working mainframes or Java 8 apps to TLS 1.3? How do your intruders come in? They have to break the appliance? What are the chances you have better tuned/set up your TLS terminator or FW than network security 'experts'?


The threat model is that any foothold an attacker gets behind your TLS terminator potentially allows them to snoop plaintext traffic - which likely includes login creds and auth tokens for all those mainframes and Java 8 apps.

(Note, I do exactly this a bit myself - terminate TLS at Elastic Load Balancers - and I feel a little dirty about it every time I'm reminded... I sometimes wonder if I spend more time ensuring VPCs are appropriately isolated and keeping instances running untrusted or less-trusted code out of VPCs with production customer data flying around unencrypted, than I would spend setting up encryption of data in flight everywhere. The big inertia holding that back is that we have so much legacy stuff running on things like Grails 3 and Java 8 that the benefits of starting to "do it right" are not going to be fully realised for many years while those old platforms still need to run, and the added complexity of running two differently architected platforms is a big issue... I know what we should be doing, but the path to get there and the expense of travelling down it are high. We'll get there in "drip feed" mode, where new projects and major updates to existing projects will do it right, but I'll be astounded if we don't still have some old untouched Java 8 or Grails 3 running in production in 5 years' time...)


You don't have to forward traffic on as plain HTTP - ELBs & ALBs will happily forward traffic to HTTPS endpoints. That gives you an encrypted backend, but still allows you to manage the certificates & TLS policies in one spot. The backend servers can happily run on self-signed certs and the load balancer won't care.
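
A sketch of that with the AWS CLI - the name and VPC ID here are hypothetical; the relevant bit is that the target group protocol is HTTPS, so the load balancer re-encrypts before talking to the backends:

    # Target group that speaks HTTPS to the instances (self-signed certs on the
    # backends are fine; the load balancer does not validate them)
    aws elbv2 create-target-group \
        --name app-backends-tls \
        --protocol HTTPS --port 443 \
        --vpc-id vpc-0123456789abcdef0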


Take a look at AWS Nitro instances - they give you "free" IPsec-type network encryption, with some caveats I forget.


The main caveat is that it only applies between Nitro instances: not AWS services like load balancers or databases, which would be one of the areas where it’d be most useful for anyone with legacy apps.


We ditched the TLS-termination-in-nginx strategy and shifted to an NLB with TLS termination in the app. We used Let's Encrypt because we needed something dynamic so we didn’t have to manage certs. Well, we received one of those CISO forms from a big enterprise, and one of the questions was: do you use Let's Encrypt? We said yes, and they responded that it wasn’t a permitted TLS solution. For what it’s worth, I was happy with it.


Could you share what firms you're working with now so I can skip doing business with them?


Throw a rock and you will hit one


Might be more useful to get a list of companies that don't have requirements like this.


> In my experience, the larger organizations will have a "security" questionnaire required of their vendors, and the person administering it is a droid, incapable of evaluating whether the questions, originally written in the mid-00s and only updated for buzzword compliance since, are applicable to modern security practice today, or to the particular product/service/vendor in question.

And in many cases on the vendor side it's some dude from sales filling it out... so it's pretty noisy on both ends.


Which seems somewhat questionable. It seems like you’d want someone to fill it out that had a less direct interest in a closed sale. Getting hacked is one thing but getting caught lying on a security evaluation could really harm the company’s ability to secure future enterprise customers.


> Getting hacked is one thing but getting caught lying on a security evaluation could really harm the company’s ability to secure future enterprise customers.

“Could” being the key term here: unless it’s epically bad and unequivocal, there are pretty good odds they could skate a long time without anyone asking, and even if they did, it might end up simply being that they blame the long-departed sales guy and patch things up with the CIO over martinis.


> I'm sure it can be done. IIRC, Cloudflare doesn't use any firewalls

This is a little disingenuous because their product is a modern firewall. It drops packets and conditionally allows sessions to your backend.


While I agree with your assessment that Cloudflare serves as a modern firewall, I disagree with your assessment of disingenuousness. My lament was mainly at the more robotic and/or malicious "auditors" and "compliance officers" who will mark one or more line items as a critical failure if you don't have one or more Cisco Firepower/Juniper SRX/Fortinet Fortigate/Meraki MX/Barracuda NG/etc in your network such that its position in the network diagram means that it protects you from Evil Hackers. And AFAIK, while cloudflare's service is a sort of firewall, they don't use any traditional layer 4, layer 7, or 'next-generation' firewall appliances to get that result.


> any traditional layer 4, layer 7, or 'next-generation' firewall appliances to get that result.

Cloudflare’s product is just a web application firewall as a service. It’s not magic. The major difference between Cloudflare and a pizza-box WAF is the globally distributed nature that absorbs DDoS in the same product. The things it does to the packets are the same though.


Cloudflare not using any firewalls seems like a strange concept, considering they literally sell firewall-as-a-service.

https://www.cloudflare.com/waf/


A WAF is not the same thing as a general-purpose firewall. Think of it as a web proxy with filtering capabilities.
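
To make that concrete, a toy sketch of "web proxy with filtering" in nginx (hostnames, cert paths, backend address and the rule itself are made up; real WAFs use maintained rule sets like ModSecurity/OWASP CRS rather than hand-written regexes):

    server {
        listen 443 ssl;
        server_name app.example.com;
        ssl_certificate     /etc/ssl/app.example.com.crt;
        ssl_certificate_key /etc/ssl/app.example.com.key;

        # crude request filtering: refuse anything that looks like a PHP probe
        location ~* \.php$ { return 403; }

        # everything else is proxied through to the real backend
        location / {
            proxy_pass https://10.0.0.10:8443;
        }
    }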


Yeah, I find myself kinda annoyed at the term WAF, as it overloads the term "firewall". But your description is quite accurate. Whether you're doing your filtering with an expensive F5 Big-IP with its nifty glowy logo on the front bezel, an haproxy instance from some WAF-as-a-service vendor, or Nginx and some plugins running on a VM, any of those, done right, can serve in a WAF role.

I say that now, but I wish I understood that a few years ago when, faced with the WAF line item on an audit, I promptly went "WTF is a WAF? *googles* No, we don't have a WAF." Could have saved myself some security audit pain.

On the flip side, I did manage to get some upgrade budget out of failing that battery of line items.


It seems to me the basic definition of a firewall is:

"A firewall is a network security device that monitors incoming and outgoing network traffic and decides whether to allow or block specific traffic based on a defined set of security rules."

https://www.cisco.com/c/en/us/products/security/firewalls/wh...

"usually firewall \ ˈfī(- ə)r- ˌwȯl \ : computer hardware or software that prevents unauthorized access to private data (as on a company's local area network or intranet) by outside computer users (as of the Internet)"

https://www.merriam-webster.com/dictionary/firewall

This and other "web application firewalls" seem to meet these kinds of definitions. Add to that the fact that more and more "traditional" firewall appliances are adding behavioral filtering and have had application tracking for decades, and we're far from "firewall" being limited to only a layer 3 device.


What would they need a firewall for? They have full control over the entire environment. They can (and should) just filter host-side.


Host level filtering doesn’t make it “not a firewall”. If they drop packets in the NIC before hitting userspace (they do this), that’s a firewall. Iptables is a firewall.
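
Host-side, that can be as small as a default-deny nftables policy - a sketch, assuming a Linux box whose only public service is SSH:

    nft add table inet filter
    nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
    nft add rule  inet filter input ct state established,related accept
    nft add rule  inet filter input iif lo accept
    nft add rule  inet filter input tcp dport 22 accept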


I am confident 99.9% of the people insisting on having you tick the checkboxes have no idea what "they drop packets in the NIC before hitting userspace" means...

Pretty sure getting your PM to just drop the firewall icon into their Visio diagram is a better way to meet stupid compliance requirements than explaining the difference between user space and kernel space to a just-graduated big4 consulting company intern "auditor"... /s


Exactly.


> In my experience, the larger organizations will have a "security" questionnaire required of their vendors, and the person administering it is a droid, incapable of evaluating whether the questions, originally written in the mid-00s and only updated for buzzword compliance since, are applicable to modern security practice today,...

They may even be aware; they are just bound by their company's ruleset...


> ...incapable of evaluating whether the questions, originally written in the mid-00s and only updated for buzzword compliance since, are applicable to modern security practice today...

You just described my workplace. We have some rules that nobody understands and nobody remembers where they come from, but we have to follow them blindly. For example, they require that any access to the web services should go through a VPN, which would be fine if:

- The VPN actually worked, but it doesn't.

- The servers already use TLSv1.3, all the services require user authentication, and there are 3 layers of firewalls and an integrated virus scanner in front of the services.

- We are an international project with people from 10 different organizations in 6 countries on 2 continents, and it's really difficult to impose these kinds of rules.

So for example, I'm managing a GitLab instance that I can't use myself. I can only SSH login from a very specific computer to manage it, but I can't upload my own code from my office computer.

And I don't want to go into their blind devotion to the firewall and their concept of one way connections...

So I'm just letting time go by, until everybody is so angry they are finally forced to change. Doesn't help that this is Japan, the epitome of rigidness and "even if it is broken, don't fix it".


> Never mind that a KISS setup tends to bring security because of its minimized attack surface

Security is also about depth. You should assume breaches can happen and have another level of defense.

That does increase the attack surface, but it's a much better approach for imperfect beings.


In my experience that approach actually ends up leading to weaker security. When you have 5 or 6 security layers it's not clear which ones are important; people get confused about which parts can be safely bypassed and how, and you end up with Swiss cheese where sooner or later all of the holes line up. Having a really clear distinction between public and private services works better.


In a way yes. I'm sure everyone here has heard "This service can only be accessed on VPN anyway so we don't need authenticated access or care about security in the service itself".


On one hand, a firewall that accepts incoming port 22 connections isn't that different from only having port 22 listening.

On the other hand, a firewall is an explicit declaration of the ports you want open and who you want them open to, which seems like, at the very least, a useful thing to do. If nothing else it seems like defense in depth. I'm not sure I buy that a system designed around "default deny" is an increase in security complexity - certainly it's complexity that would hurt availability, but complexity that would hurt security?

Either way, the real security comes from monitoring the reality of what ports are actually open/listening and verifying a person's assumptions about their systems.
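
That last part is cheap to do - a sketch (hostname hypothetical; only scan machines you own):

    # What is this host actually listening on?
    ss -tlnp
    # What does it look like from the outside?
    nmap -sT -p- backup01.example.net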


> [...] but complexity that would hurt security?

Higher complexity = larger attack surface.

For example, if they used a firewall with one of Cisco's infamous backdoors.

https://www.zdnet.com/article/cisco-removed-its-seventh-back...


Isn't a firewall largely a defense-in-depth thing? If everything is working perfectly and the firewall gets compromised, then yes, you'd just be listening on port 22 with SSH anyway, so it wouldn't really matter. But if something went wrong on the system behind the firewall (configuration mistake, software defect, malware, hacking) while the firewall itself was still secure, it would limit the damage that could be done.


A FreeBSD machine running as a firewall might make sense without significantly increasing attack surface. But inviting a new hardware product into your infra is a significant increase in attack surface. SolarWinds is a very fresh example of a catastrophic supply chain attack, and a clear example of why it might not be a defense-in-depth thing.

If your network is secure, and a well-configured machine only listening on port 22 is pretty secure, you have to ask how the production machine will interact with the outside world. Well, every update is an inverse remote code execution. You are getting remote code from an external location and then running it directly in production. So while you might trust FreeBSD's package manager, do you trust Cisco? Do you trust SolarWinds? Even if you do, it's hard to argue that your attack surface hasn't been increased.


I generally agree with the sentiment, but it's not strictly right.

For example you have your main machines set up securely, and requests go through the firewall. Of course a compromised firewall doesn't make it easier to compromise the main machines. But because your users are going through the firewall, _they_ might now become vulnerable to some classes of redirection attacks.

Even in a scenario where you're using the firewall as a passthrough, you're still looking at a scenario where (for example) your DNS entries are now pointing to a machine you have less control over. It might not mean that HTTPS doesn't work anymore, but that (combined with some other mistake) might be enough.

One potential class of vulnerability might be related to recent git client issues: the software your client is using might have an issue that would be a security issue when connecting to an untrusted source. You wouldn't try to get a keylogger on your clients' machines, and the software is always pointing at your own domain etc etc. But the firewall vulnerabilities have opened up that angle of attack!

It's definitely a balancing act, and dependent on how much you trust each layer of your stack.


They just don't have a dedicated firewall. That doesn't mean they can't run ipfilter or whatever on the hosts themselves, with default deny.

In fact he mentioned port knocking. You pretty much need some kind of host firewall for that.
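
Right - on FreeBSD that's a handful of lines of pf. A minimal default-deny sketch (the port-knocking piece would be layered on top, e.g. with a knock daemon rewriting an anchor; not shown here):

    # /etc/pf.conf -- default deny in, allow everything out, SSH only in
    set skip on lo0
    block in all
    pass out all keep state
    pass in proto tcp from any to any port 22 keep state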


Disney, 3M and ARM are on the website.


> I start the day with a short walk outdoors. I don’t want the first thing my eyes see to be print, and I don’t want the first thing my body does to be sitting. So I walk a bit.

I like that. I like that a lot. That's a very enviable practice. I think I know what I'll be experimenting with this next week.

Did not expect to read that article and have the most stand out thing be a routine change I'd want to copy. You never know.


It’s one of the best “lifehacks”: walking, biking and the like in a traffic-free or low-traffic environment, just to clear the mind. By the end of the walk you’ll be full of ideas, or just ready to start the day. It’s coffee but without the stimulants :)


Get a dog and you won't have an excuse for skipping a morning walk.


Nice article. rsync.net is one part of my personal computing setup that I never even think twice about. It's simple and it works, and that clearly applies to the infrastructure too. I use ZFS locally and it has made managing my own data strangely pleasing, and it's nice to have the same system on my off-site storage too.

On the laptop-front, I find myself drifting towards a similar setup to John. I have a hefty workstation laptop but the battery life is dire and it weighs a ton, so I pretty much just run it as a headless machine next to my server now. I'm planning on picking up a Pinebook Pro as an "outdoors" machine to just remote in. I also find myself extremely unwilling to arse about swapping multiple machines on my monitors so being able to keep my work machine separate and secure but operate it from my desktop is a nice compromise.


> rsync.net is one part of my personal computing setup that I never even think twice about

I've been using them in a small but important-to-me way continuously since 2008, and I have occasionally forgotten the service needed maintaining at all - at one point I forgot to pay them for an embarrassingly long time after a credit card expired, and they kept my storage going for me until I finally got myself in order. Please don't try that.

(My first contact with them was in 2007, to ask whether they supported pushing directly from git - the answer was no, though they added the feature a few years later - a bit ironically, I've never used it)


RE: git ...

We just added git-lfs / LFS support. So now, when you do things like:

  ssh user@rsync.net "git clone --mirror git://github.com/LabAdvComp/UDR.git github/udr"
... you can successfully pull over LFS assets, etc.


Oooh, nice!


I would love to use this simple setup as well. It's too bad ZFS snapshots cannot be sent and stored encrypted. I would love to use rsync.net, but the idea of having my data sitting on someone else's computer in plain text feels wrong.

So instead I have to use restic, which re-implements many features of ZFS and this also feels wrong.
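
For what it's worth, the restic setup against a plain SSH/SFTP account is pretty small - a sketch with a hypothetical host and repo path (restic encrypts client-side, so the provider only ever stores ciphertext):

    restic -r sftp:user@usw-s005.rsync.net:backups/restic init
    restic -r sftp:user@usw-s005.rsync.net:backups/restic backup /home /etc
    restic -r sftp:user@usw-s005.rsync.net:backups/restic snapshots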


You can 'zfs send' to a (special kind of) rsync.net account.

We support encrypted zfs[1][2][3] and raw-send, etc.

The pricing is the same, but there is a 1TB minimum because we need to give you your own VM (bhyve) and we have to burn an IPv4 address for you, etc.

[1] https://www.rsync.net/products/zfs.html

[2] https://arstechnica.com/information-technology/2015/12/rsync...

[3] https://www.servethehome.com/automating-proxmox-ve-zfs-offsi...
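
For reference, the raw-send flavour of this looks roughly like the sketch below (dataset names and host are hypothetical). With 'zfs send -w' the stream stays encrypted end to end, and the receiving side never needs the key:

    # initial full send of an encrypted dataset
    zfs snapshot tank/data@2021-03-18
    zfs send -w tank/data@2021-03-18 | ssh user@usw-s005.rsync.net zfs receive -u backups/data

    # later: incremental raw send
    zfs send -w -i tank/data@2021-03-18 tank/data@2021-03-25 | \
        ssh user@usw-s005.rsync.net zfs receive -u backups/data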


Sounds like a good opportunity for an IPv6-only version at a discount/lower minimum. Many people (Google says 45% in the US) don't need servers to have v4 these days.


> The snapshots are immutable (read-only) and cannot be altered in any way. In this way, your rsync.net account protects you from ransomware or malicious parties.

Is this still true for these special ZFS enabled accounts?


Not /u/rsync, but I have one of these accounts. The snapshots are immutable (as are all ZFS snapshots), but you have the ability to run `zfs destroy` on them, so there is a risk there. (When they're doing the snapshots for you, you don't have that ability, but then you just have a filesystem, with no access to the underlying ZFS.)

My solution to the `zfs destroy` risk is to make my backups pull-based, where rsync.net connects inbound to my production server, and rsync.net specifies the necessary commands on the production box to grab the raw encrypted streams. That eliminates the ability of an attacker that is on the production server to run arbitrary commands at rsync.net.

There is still a small risk of data destruction if an attacker gets your rsync.net credentials, but those can be protected via off-line storage and secured workstations, which works pretty well.
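
A rough sketch of the pull direction, run from the rsync.net side (host, dataset and snapshot names hypothetical; in practice you'd script the snapshot selection rather than hard-code it):

    # rsync.net connects *in* to production and pulls the raw encrypted stream,
    # so nothing on the production box can issue 'zfs destroy' against the backups
    ssh backup@prod.example.com \
        "zfs send -w -i tank/data@2021-03-18 tank/data@2021-03-25" \
        | zfs receive -u backups/data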


While that sounds like a workable solution, out of curiosity does rsync.net support multiple users and OpenZFS' delegated permissions for more fine-grained control? They're pretty useful, and amongst other things can ensure any given user can create/clone/send/receive/etc, with per-file system capability, inheritance and so on.
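
For anyone unfamiliar, delegation looks like this - a sketch with hypothetical names; the interesting part is what you deliberately leave out:

    # let a backup user receive streams into its own tree...
    zfs allow backupuser create,mount,receive,hold backups/data
    # ...but never delegate 'destroy', so that user cannot remove snapshots
    zfs allow backups/data    # prints the current delegations for review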


Is this VM like a DigitalOcean or Linode VM with storage attached and the customer is fully responsible for it or is this VM managed by rsync.net like the normal storage accounts?


No. Yes.


Could you allocate the VM on demand, xinetd-style?

(you could route the ssh traffic similarly based on login)


I have a local ZFS backup server which sends encrypted incremental snapshots to my rsync.net account, no problem. You can't mount the encrypted snapshots since FreeBSD ZFS doesn't support that yet, but I don't need that (and it would defeat the security point anyway).


This seems strange to me - any version of ZFS recent enough to support receiving encrypted snapshots should support mounting them (which, AFAIK, means "OpenZFS-running FreeBSD implementations" - I don't believe the non-OpenZFS, integrated implementation ever got encryption support of any kind).

I've seen several people report using OpenZFS encryption on FBSD on various mailing lists, so I'm 95% sure it's not secretly broken on there.


Unrelated, as an aside ...

I really am enjoying the developer Q&A interviews that console.dev is putting out.

They're very much like the "usesthis"[1] profiles but more in-depth and with more interesting details ...

[1] https://usesthis.com/


It was an interesting read!

I did a usesthis a little while ago. https://usesthis.com/interviews/matt.lee/


John's usage reminded me of something I read in Rob Pike's "Uses This" interview[1]:

"I want no local storage anywhere near me other than maybe caches. No disks, no state, my world entirely in the network. Storage needs to be backed up and maintained, which should be someone else's problem, one I'm happy to pay to have them solve."

[1]https://usesthis.com/interviews/rob.pike/


The main downside of this is if you lose internet access you lose access to most of your data. I intentionally have a local NAS so that I don't have to worry about outages.


Essentially a Chromebook


It’s worth reading the rest of the interview, I find Rob Pike has a very interesting/unique take on the current landscape given his involvement with Plan 9:

> Now everything isn't connected, just connected to the cloud, which isn't the same thing. And uniform? Far from it, except in mediocrity. This is 2012 and we're still stitching together little microcomputers with HTTPS and ssh and calling it revolutionary.


If you want to see more on this theme, the Upspin docs[1] are a really interesting read. This 2017 talk[2] that Rob Pike gave on it is also really good.

[1] https://upspin.io/ [2] https://youtu.be/ENLWEfi0Tkg


Thank you! I hadn't seen this.


Too bad development has stopped.


Less so now that ChromeOS runs Android apps and Linux in a VM. Both, unfortunately, are heavily tied to local storage, so there's no real sense of being able to log into another Chromebook and have those ready and waiting for you. Android apps will reinstall, but not always (read: almost never) with app data as before. Linux apps are still 100% manual.

ChromeOS was great for 'switch to a new machine and log in' use... but it's so much more complex than that now. Only the most locked down managed devices wouldn't have to worry about anything left behind on a device before someone abandons it.

At this rate, short of a native package manager and repo for installing applications to run on ChromeOS (instead of a containerized Linux distro on top), ChromeOS might as well be considered a distro in and of itself. Especially with CloudReady being installable on non-Google-sanctioned hardware.


Even with just ChromeOS, as far as I can tell any files you download go to a local folder that's not backed up. If you want anything to be safe you have to manually copy it over to the google drive folder.


All of that is optional and manageable via Workspace admin console.


This was a pleasure to read. I've been an rsync.net customer for ~6 months now, and am using Borg to send de-duped, encrypted backups to rsync.net from a few on-premise linux systems. As compared to other similar backup systems I've used, it's been a pure pleasure to implement and maintain.

Thank you for your great product and support, John!
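
In case it's useful to anyone considering the same setup, the shape of it is roughly the sketch below (host and repo path hypothetical; rsync.net's own docs cover the exact invocation they expect, e.g. the remote borg binary path):

    borg init --encryption=repokey-blake2 ssh://user@usw-s005.rsync.net/./backups/borg
    borg create --stats --compression zstd \
        ssh://user@usw-s005.rsync.net/./backups/borg::'{hostname}-{now}' \
        /etc /home
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
        ssh://user@usw-s005.rsync.net/./backups/borg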


Meta: I really dislike the style of console.dev. The article is shunted to the left, leaving the rest of the screen real estate to be taken up by an admittedly pretty but unnecessary piece of digital artwork. This - https://ibb.co/nzbFxjW - is what the article looks like on my ultrawide, which made for very uncomfortable viewing.


I use a simple bookmarklet in such situations:

    javascript:(function(){w=window;d=w.document;b=d.querySelector('body');s=d.createElement('style');s.innerHTML='body{margin:0 auto!important;width:100%;max-width:120ch;font:normal 18px/1.5 Helvetica,Calibri,Arial,sans-serif;color:#333;background:#f9f9f9;}main,#main{margin:0 auto!important;}';b.setAttribute('style','');b.append(s);})();
It's pretty basic - changes body width to 120ch, centers body and main, and updates body font. Despite some issues, works great for a one-click fix.


Not sure why you couldn’t just resize the window here?


I can, however I don't think having to resize your browser window to comfortably view an article is a very good UX, especially when it could be rectified by positioning the content in the middle of the screen.


Right, but can you comfortably view anything over there? Why maximize the browser in the first place?


That's a bit of a chore with a tiling window manager if you generally have the browser on its own workspace - you either need to spawn new windows around the browser window to push it into the geometry you want, or add borders to it.


It's not a great suggestion for, say, iPad users either...


An iPad isn't nearly big enough to have this problem.

This is what I get for iPad emulation: https://i.imgur.com/FriTf6X.png https://i.imgur.com/gLhqvFO.png

It looks just fine.


What would you prefer? Having an entire paragraph of text on a single line? Your monitor is the wrong shape.


That's a pretty unhelpful way of thinking about this.

Any web designer who doesn't, in 2021, understand and make allowances for 4k/5k ultrawide monitors as well as phone-sized screens in portrait mode isn't doing their job right.

The problem here is not the monitor shape or the user's browser window width; it's the CSS (and maybe HTML) and the lack of understanding of how to use it properly (or, more sympathetically, perhaps a conscious choice on the part of the people paying for the website not to allocate enough budget to cover all of their competent webdev's suggestions?).


The designer here did make allowances for "ultrawide" monitors. He stopped the lines of text becoming unreadable by wrapping them at something reasonable.


There are precious few reasons to run a web browser full screen on an ultrawide. Most web sites don't even go that wide, it's just a ton of wasted screen real estate.


I wonder what % of users:

1. Have an ultra wide monitor.

2. Regularly use their browser in full width mode on it.

Pretty much everyone I know with an ultra wide snaps their browser to the left or right side - having a single browser window that wide is not very useful, and makes lots of websites look weird.


I’m inclined to agree. Even on a normal sized monitor I almost never fullscreen my browser.


Yes, I want an entire paragraph on a single line. If I wanted the content to be narrower then I'd make my browser narrower.

http://motherfuckingwebsite.com/ looks great on an ultrawide. It's pathetic that so many websites don't.


I had to disable CSS entirely to get the text readable.


On my phone it looks like the screen is cracked.

I was going over the lines with my finger to see if I could feel them. Thought it was internal. Wasn’t until it switched to another app that I saw them move.

Scared the shit out of me.


> "I have a early-2009 “octo” Mac Pro [...]" > > OS: macOS

Does this make anyone else a bit uncomfortable?

I don't think MacOS is still receiving security updates on that hardware. I'm all for using old hardware for as long as it keeps working, but I would never browse the internet with a vulnerable OS on a vulnerable processor (spectre etc...)

Or am I missing something?


"Or am I missing something?"

Yes, one minor thing ...

Although you are correct that Apple is not officially supporting the latest versions of OSX on that hardware, there is a trivially easy hack of the system that will allow you to load newer versions of OSX.

So, like many of you, I am not running Catalina but I am running an updated, patched version of OSX.


I have a late 2008 MacBook Pro running Catalina that I still use daily. As far as I can tell it still receives security updates.

There's a simple patcher you can use for these old macbooks:

http://dosdude1.com/catalina/


Neat, does that include System Integrity Protection and Authenticated Root Volume on your hardware?


from what i remember open core doesn't need to disable sip to function, just to install. the firmware is the actual firmware upgrades released by apple [everyone was shocked they supported this] that come embedded in the osx updates. my updates worked fine.

I'm pretty sure sip gets re-enabled each boot, but to check for authenticated root volume i think i need to install the g20 to run csrutil or whatever it is.


happy to hear that I'm not the only one [sys admin type person] doing this.

although i use Windows, i do have Catalina installed [and Debian for the triple boot]. also using open core. I'm pretty sure i downloaded a copy of osx from one of their repositories 0.o I'm super lazy, it's really not that hard.

my average cost for hardware since i bought my Mac is now less than 400/year CDN. is it worth it? while I'm slightly concerned about the security [I'm probably the biggest risk anyways since I'm not confident in my knowledge of secops], i get 95 fps playing pubg, can edit in 4k, run 100+ tracks in Cubase, and run 3 different OSes or as many vms as you'd like [which i think can also run bare metal vm on the 144 firmware upgrade]. on top of that the case still looks good and I've kept at least 50+lbs of ewaste out of landfills or whatever... seems pretty worth it [hopefully no one ever tries to steal pictures of my cats]

[we could also get into a discussion about the right to repair bill in the EU, talking this way]

do you game? i feel like that might have been intentionally left out of the interview?

what info would you keep unencrypted on your servers?

how much does a colo cost for a 2u server typically? how about back in 06?

is rsync a good solution for video files backup? what are the benefits over say, running a home server and keeping physical backups at your friends house or iron mountain or something?

can rsync use 'live' encrypted data? in other words, how do you encrypt/decrypt on the fly? say for streaming an mp3 or something? [not that you would do this if you were paying per GB...]

please excuse my ignorance. I'm not a real sys admin, just an old wanna be hacker that could never get his shit together.


For many versions of MacOS you can just edit a plist file to get it to install on unsupported hardware. When I've done this there were no stability or performance issues, but YMMV depending on what OS and hardware versions you try.


Browsers have put in patches for Spectre. I turned off the Spectre and Meltdown mitigations in my OS because I wasn't willing to live with the performance hit for a scenario that is unlikely to befall me. I think it's fine if the Mac Pro is using a completely up-to-date browser and isn't installing new random applications.


>> I would never browse the internet with a vulnerable OS on a vulnerable processor (spectre etc...)

You might be paranoid. I've been browsing on a few 2008/2009 obsolete Macs for a while, on the highest OS that they will run.

Eventually they'll be a pain to use because of browser incompatibility, pages will get even more bloated and these machines will run them even slower.


They're possibly using something like a dosdude patcher or a modified bootloader to run an OS like Catalina on it.


I have trouble understanding why people go through these hoops.

Yeah, I get it, people love their Mac's... but the company that produces them actively undermines your ability to continue using perfectly good hardware past what they feel is "profitable". This leads to huge efforts to hack/reverse the updaters, or alter newer OS versions to trick them into installing, etc.

I'd personally jump over to some system that doesn't hate its users nearly as much. But, that's just me.


Linux just isn't plug and play enough yet to make the switch less painful than dealing with the pain-points created by anti-consumer practices by Apple and Microsoft on MacOS and Windows, even for technically literate people.

I made the switch a year ago after having reached my breaking point with Windows, and it still was a massive pain and a daily loss of performance. For comparison, I also rooted my Android phone and installed LineageOS without Google services, which crippled it significantly, and even that wasn't as much of a pain as using Linux on my workstation.

People often say (not talking about you, just something I see on HN often) that it's easy nowadays and anyone can use it but it's not been my experience and I think it's the very attitude that keeps it from being a commonplace OS for the consumer market. I keep a list in a file I call "linux sins" but without having to look at it you can figure out the problem by just googling any benign problem someone might encounter on their OS and checking the answers. Do the answers start with "Click there" or "Open your terminal"? I don't see the situation changing since people who develop for linux generally refuse to acknowledge the problem.


Fair criticisms. We're still waiting for the fabled "year of the linux desktop".

Although, I feel the specific issues you raise are less of a problem on a desktop-focused distro like Ubuntu or Linux Mint. Those distros really focus on a complete desktop experience, and really try to never require a user to drop into a shell to get anything done. So, perhaps it's a case of people using the "wrong" distro for their needs?


I'm afraid the issues I describe have been with Ubuntu.

Here's the first line from my "linux sins" file as an example: https://askubuntu.com/questions/1151283/disable-nautilus-cac... If you copy a large file to a USB drive on either Ubuntu or Mint the progress bar goes to 100% instantly and closes and the actual transfer of the file is done in the background without the knowledge of the user. And the answer is "It's your fault, just try to eject the drive until it works."

And even beyond the OS, the whole software ecosystem is broken. It's impossible to find simple, working UIs for the most basic pieces of software; everything goes through the command line.


Fair enough, but I'd just like to point out that specific issue you linked to happens on Windows too (and almost certainly MacOS as well).

It's just how device writes work, and is why Windows users have been told for years to select their device -> Eject instead of just yanking the USB drive out when Windows says 100%.

So, not exactly a fair criticism in my opinion, but your overall point stands - Linux can be rough around the edges for some use cases.


> I'd just like to point out that specific issue you linked to happens on Windows too

The poster of the question explicitly states that this behavior does not happen on Windows using the same hardware. And indeed, Windows doesn't cache as aggressively as Linux does (which is one of several reasons why Linux tends to have better disk performance and less risk of disk fragmentation), so no, by design, this issue is more pronounced on Linux.

The actual reason why Windows users are told to explicitly eject instead of just yanking the device is because there are various background processes that might be writing to the device (particularly relevant if you're using ReadyBoost or whatever it's called), not because of file copy progress bars being entirely unaware of the OS' caching mechanisms.


It doesn't happen on Windows because the cache is made to be small enough that the caching and flushing happen at the same time regardless of the size of your RAM. So your transfer progress bar will end at approximately the same time as the actual transfer. I don't use MacOS but I assume they have the UX & UI figured out as well. That's not the case on Linux, the progress bar will disappear in seconds while the transfer can last hours.

And, I say this with no ill will toward you, I'm not trying to be antagonistic, but you're having the same response as all Linux users I encounter online. You're denying the problem even exists, saying it's not fair and it might be rough for some use cases? This is transferring a file to a USB stick, this is a very basic use case, and the UI is broken and the UX is dogshit (excuse my French). If we can't admit there is a problem we're never going to get around to fixing it.


I have had gripes about usb drive writing in the past, but what you describe does not happen in my install of Ubuntu 20.04.


It depends on the size of your RAM as far as I understand the issue.
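
Right - the writeback cache scales with RAM by default (vm.dirty_ratio is a percentage of memory). If the honest-progress-bar behaviour matters more than throughput, you can cap it in bytes - a sketch, with purely illustrative values:

    # cap how much dirty data can pile up before writeback kicks in / writers block
    sysctl -w vm.dirty_background_bytes=16777216   # ~16 MB: start background writeback
    sysctl -w vm.dirty_bytes=50331648              # ~48 MB: block writers beyond this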


It's not out of some love for Macs. I have a 2008 MacBook running Catalina, and it's simply because the cost of replacing it is >0. If this works and works well (and it does), then why would I get rid of it? Just to spite Apple, which doesn't care either way?

I also have a 2005 car that still runs - should I get rid of it because the company that made it stopped providing any kind of support for it a long time ago? Or, you know... keep using it because it works?


Maybe it was easy for you to modify your OS to continue updating, or you downloaded some ISO of Catalina someone else pre-hacked for you - but it was certainly a non-trivial effort for whoever figured out how to trick the OS into installing and/or updating.

It just seems like wasted effort, since the company all this effort supports has made it clear they do not want you to have this ability, and can at any moment make future updates break everything all over again, leading to a new effort to reverse engineer the changes.


So I don't agree, and I will use the car analogy again - old cars are not "supported" in any way and yet many people keep them going. There's serious engineering effort to make the parts, to write new software, to improve existing firmware, etc. By your logic, that's also "wasted" effort since the manufacturer chooses to abandon cars after just a few years, so why would you keep them going?

I feel the same way about computers - like, who gives a damn what Apple thinks. I have a laptop that is still going because people keep making it compatible. That's a good thing, not a bad thing.


The difference there is you're not violating some TOS or EULA by replacing parts on your classic car, and when you change your oil (do OS updates) there's no chance of suddenly your transmission refusing to allow you to shift gears until you perform more heroics and disable the artificial limitations.

Very few non-classic and/or popular cars receive massive aftermarket support for all parts - often the aftermarket supports parts that are in common with a lot of vehicles or are vehicle-agnostic (such as belts, etc), and in some cases you're plain SOL (try replacing an airbag on a 1993 Dodge Caravan, for example - all you can find are OEM used ones pulled from junkers).

I think your comparison would be more apt if, say, Ford disabled all vehicles that were 10 years + 1 day old. While Apple isn't disabling your OS, they leave you exposed without security patches, etc... - making it approximately the same.


>The difference there is you're not violating some TOS or EULA by replacing parts on your classic car, and when you change your oil (do OS updates) there's no chance of suddenly your transmission refusing to allow you to shift gears until you perform more heroics and disable the artificial limitations.

How long do we have to wait for the early Teslas to be considered "classics", because they're doing worse than this already...

"Self driving? No, that was only licensed to the original purchaser, you need to pay us $8000 now because we just remote disabled it when we worked out you bought this Roadster second hand. Hope that helps, have a nice day - Elon"


I agree with this. It's why a hackintosh has never appealed to me.

However, in this case, the tweak I needed to do to the mac pro was so trivial as to be (essentially) cost-free. No need to alter the installer, etc.

It pleases me to be (re)using this machine for over 12 years now - especially given what a triumph of workstation design these mac pros were ...


My wife's MB Pro faced an upcoming support reckoning with Apple, and I just tossed Linux on it. Problem solved ;)

I've also used FreeBSD on (non-Apple) laptops in the past. It actually worked ok, I even had wireless working (this is very hardware dependent though, and things may have gotten worse over the years for all I know).

Based on the rest of your profile I think you might enjoy switching that workstation away from OS X to FreeBSD. Of course, it means some tinkering and looking for new tools to replace the ones you use now, but the tinkering is half the fun... :)


I like to get that kind of use out of my machines, though I upgrade my workstation on a more regular basis (the last one went a full 7 years with nothing new but a RAM upgrade and an SSD midlife) - you come to identify with the hardware after a while; it takes on a life of its own.

Since I'm (excluding Win10 for gaming when I rarely have time) exclusively a Linux user I get to use the old hardware for other purposes at the end until it finally becomes either useless or lets out the magic smoke (as my 2004 R50e Thinkpad finally did - man I miss those keyboards, so much better than the T470P (which itself is excellent)).

It paid off just recently: I had a 2012 Vostro 3750 kicking around, and when schools went into lockdown, a quick wipe and Fedora install made it a perfectly serviceable machine for my step-son to do his remote learning on - there was an irony in running MS Teams on Linux on a machine that wouldn't have run the current generation of Windows 10 and Teams anywhere near as comfortably.


My last personal desktop was about 11 years old when I retired it. It had an AMD Phenom II 965, just to emphasize its age.

It started life with Windows 7 (Win7 was like a month old at the time) and was subsequently upgraded to Windows 8, then Windows 8.1, then finally Windows 10 (and all its "feature" updates) until it was retired. It ran slower than a new system, but fit my needs perfectly.

If Microsoft had arbitrarily decided I wasn't allowed to run Windows 10 on that hardware, it's very likely I would have installed Linux or BSD - after all, the hardware was a non-trivial investment and discarding it purely to please some company really rubs me the wrong way.

So, I guess I can sort of understand why people jump through these hoops... although personally I would just move onto some other OS that doesn't undermine my ability to operate my personal computer.


Hah, I am still occasionally using my AMD Phenom II 955 as a gaming PC... I admit it is now powered off more than half the time.

Anyways, similar story: I'm not about to put up with Microsoft telling me my machine is too old to use; that just promotes e-waste.


It probably helps that it sounds like you mostly use it as a dumb terminal. If you had a compute-intensive workload, it would make sense to upgrade, since even a quite powerful 12-year-old desktop is likely outperformed by a current-model smartphone for many types of work.


Do you really need to pivot into Apple bashing on this thread? It’s not really on topic or needed.


Every operating system/hardware combination has its own pros and cons. For you, it seems the cons outnumber the pros when it comes to macOS and Apple hardware. Fair enough. For me, I see no major reasons to consider anything else than Mac. I really enjoy using both the OS and the hardware. To each their own.


As opposed to how “easy” it is to install Linux this doesn’t seem half bad.


What do you mean?

You download an ISO, put it on a USB key or burn it to a CD, and install it like you would Windows10 or any other OS.


You download an ISO, put it on a USB key or burn it to a CD, and install it like you would Windows10 or any other OS.

If only it was that easy all the time.

I have an old laptop (2017) that I wasn't using for anything else, so I tried putting Linux on it. Nope. I went through five distributions before I found one that would finally work. And then, it was not really usable.

The whole reason people use MacOS is because they know what to expect. Linux is still a crapshoot.


YMMV. I've installed several distros on my old laptop (2010) and never had any issues. Currently running Manjaro on it.


I have one of the 2008 Mac Pros running the k8s control plane at home. Those things just never die.


I really appreciate the information and would love more detail on your architecture in particular. I also definitely enjoyed the information on your personal process and flow. Some of it seems interesting to adopt, some of it seems bad for me (though I can see why you do it), and some I do in similar ways but a bit differently - e.g. I use git and online services to make sure that any of my computers is completely replaceable and that I can pick up work at any moment from any one of them.

I can't agree more with the "no firewalls" approach to things, though I prefer to call it "host based" firewalls as it scares people less! I'm glad you've had no compliance/audit pushback on that, I architect things similarly and have had success pushing back on the requirement as well.

I'm very surprised by the l2 switches and actually choosing to run completely unmanaged switches. I assume you're running all 10G or more? Maybe I'm overthinking the complexity of your network, but I would be lost without snmp counters on my switches, and running switches+networking in fully l3 mode has some great isolation benefits, especially if you want full switch-level redundancy.

Do you have some more details on your data architecture? I'm very curious how do you do data direction/redundancy/sharding and balancing customer data across servers. I'm not trying to pry for things you consider secret but I think you have a very similar architectural mindset and I'm curious how you solve these things.


A simple layer 2 network topology only works in very narrow use cases (like this one). But a "dumb switch" means you also lose a lot of observability and it's very difficult to apply consistent network acls.


Agreed - we are, in a sense, "cheating" because our product is so simple that we do have one of these "very narrow use cases".

The benefits are tremendous, however, and go beyond day to day operations. A dumb switch has no credentials to protect and there is almost zero attack surface.

Further, if our switch dies we can immediately replace it with any other dumb switch that just happens to be lying around.

If you read failure studies - like those in the excellent Charles Perrow book _Normal Accidents_[1] - you see that in many cases there is a very special component that fails and everything goes to hell when they can't find a replacement for it.

So, while I can't encourage everyone to use dumb, unmanaged switches (because not everyone can) I can encourage everyone to remove as many very special components as they can.

[1] https://en.wikipedia.org/wiki/Normal_Accidents


I work in industrial automation and I couldn't agree more about dumb devices. There are a lot of nice specialized products, but if something breaks you first have thousands of pages of manuals and, only maybe, an identical spare part. The guy on the night shift also has to know that special part well .. and so on. Dumb devices you can replace easily, without any problems, tomorrow or in 10 .. 20 years.


This aptly describes why I do not want a smart home even as a tech professional and why I drive a generic Toyota with easily replaceable parts.


How are you providing network level redundancy with dumb switches? My only guess is that the ISP is already doing HSRP/VRRP on the gateway and you can setup multiple NICs/switches with something like CARP and being careful not to make L2 loops.


Why would they need nework-level redundancy? This is a backup service, and should not have production load on it at any time. I'd rather see a system with a dumb switch and the risk of a 3-4 hour outage if it fails than a smart switch that can then be cracked. (Even then, 3-4 hours is a stretch, as all the remote hands has to do to replace a failed switch is put in any other dumb switch.)


Your reasoning would apply when there is a low number of users doing not-so-urgent, not-so-important things. However, the service here is provided to a lot of users at the same time, spread globally and all with their own applications and problems, and accordingly the total impact could potentially be much bigger.

Having the handicap of needing to wait 3-4 hours before being able to access your backups in an emergency could make a night-and-day difference for continuity.

So I would argue it has nothing to do with "being a backup service", but rather that their users could afford a 3-4 hours of waiting. Or that they don't think like that.


Charles Perrow only died recently, very sad.


Yeah, in a previous career I was a networking engineer. I ran into a number of folks who ran pure (or largely) layer-2-only environments.

In the right situation it's doable and potentially highly desirable due to the simplicity, but requires a lot of discipline by everyone involved, and the right conditions to make it work.

It was a design I supported and thought it was a great idea for the right situation, but I also was hesitant to introduce it to anyone but the 'right customer'.... who probably already knew what they needed to know about it.


That was a nice read! Good to read about something simple after a day working with AWS and their managed magic.

Scrolling through the cert pages, though, 2015 seems to be in the future?

> We personally toured every single major datacenter in Hong Kong and Zurich to choose the facilities that best met our old-fashioned standards for datacenter and telco infrastructure. The same will be true of our upcoming Montreal location in Q4, 2015. https://www.rsync.net/resources/regulatory/sas70.html


I go by the rule that if something is not secure enough to plug directly into the Internet, it is not secure. That doesn't mean I'll necessarily do that, but that should be the bar.

The only exception is special purpose backplane networks that are designed explicitly to be isolated. These are basically data busses for clusters, not user-facing networks.


Big fan of rsync.net but the firewall comment caught me a bit off-guard. The benefit of a firewall is that it's an isolated system which - apart from port blocking - guarantees a certain level of traffic logging and known-good state.

If you have everything on one host, I'd say your overall setup on that host becomes much riskier: you only need to get hit by one successful exploit chain and all logs on that host can no longer be trusted.


On a reasonable-size setup, I would expect that the logs are exported to dedicated log storage (log-only machines) as part of an effort to preserve accurate log files even in the case of a successful attack on one of the hosts. It is not especially hard to ensure that, for example, a record of an SSH login attempt gets recorded to an external server before the request is authenticated. So if you have (for example) an SSH account and a local privilege escalation exploit, there is still some evidence in the logs.

In the past, the benefits of a firewall were more clear-cut, but these days I think that it’s reasonable to have “defense in depth” without using a firewall as part of your solution.
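For illustration, a minimal sketch of that kind of off-box auth logging with a stock syslogd (the loghost name is hypothetical) - sshd's auth messages get shipped to a separate machine in addition to the local file:

    # /etc/syslog.conf (excerpt)
    auth.info;authpriv.info        /var/log/auth.log
    auth.info;authpriv.info        @loghost.example.net

    # pick up the change (FreeBSD-style service management)
    service syslogd restart

Anything fancier (TLS transport, queueing) usually means swapping in rsyslog or syslog-ng, but the idea is the same: the record leaves the host as it happens, before an attacker who gains local root has a chance to scrub it.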


The firewall is still helpful in case they hire a new person who opens a port and forgets to close it one day


"Steve, did you open a port? We only use SSH. What's going on?"



Yes, people make mistakes. That doesn’t mean you need a firewall.

In order for someone to accidentally delete a production database like in the linked article, many people have to make mistakes.

> The firewall is still helpful in case they hire a new person who opens a port and forgets to close it one day

Let’s talk about this scenario a bit.

What does it mean for someone to “open a port”? Really, what it means is that someone is running a program on the machine which listens to a port. But, why should anybody be running services on production machines manually, except in an emergency?

Normally, any changes you make to production machines go through some kind of configuration management system. You can’t just SSH into one of the prod servers. It doesn’t matter if you are an intern or if you’re the CTO. You don’t have the credentials. Nobody does.

Instead, if you want to run a service on a production machine, you have to make that change in source control, send the change to somebody else for review and approval, and once it is approved, submit it. Your configuration management system then pushes this change out to production systems according to the script.

Of course, not everyone works this way. Not everyone can work that way. But many companies do have tight controls over the production environment and the decision to forego a firewall isn’t unreasonable.
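As a complement to that kind of change control, a periodic listener audit can catch the "forgotten open port" case without any firewall. A minimal sketch (FreeBSD's sockstat; the allow-list and alert address are assumptions):

    #!/bin/sh
    # alert if anything other than sshd/syslogd is listening on this host
    unexpected=$(sockstat -46l | awk 'NR > 1 && $2 != "sshd" && $2 != "syslogd"')
    if [ -n "$unexpected" ]; then
        echo "$unexpected" | mail -s "unexpected listener on $(hostname)" ops@example.com
    fi

Run it from cron and the window between "someone started a daemon" and "someone noticed" shrinks to minutes.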


A firewall doesn't stop that, it just means they have to run some of the commands on a different machine.


The pricing model doesn't make sense to me. Their prices start at $0.025/GB/month, so renting 1TB of storage for a year would cost $300 - at that price, I could just buy my own disks and run ZFS myself. I kinda hoped they could offer lower prices using economies of scale. I checked the prices for Tarsnap, expecting it to be cheaper - it's actually 10x more expensive! Maybe someone can explain what I'm missing.


> Their prices start at $0.025/GB/month, so renting 1TB of storage for a year would cost $300

I came to the same conclusion. I was aware of rsync.net and tarsnap, and have checked their prices in the past, but for raw storage it's simply not competitive. Some of the other features they offer might make it worth it though if you need those.

Personally I just need a place to dump a backup of my family photo albums and documents. A full backup is around 1TB (deduplicated; somewhat larger raw), and for that there are much cheaper solutions.


Sure, you could purchase your own drives. But then it wouldn’t be offsite. And it probably wouldn’t be on a redundant internet connection. And most likely not have redundant power and cooling. And, and, and.

Self-hosting at home (or in the office) is a great option for some if you’re not worried about needing an offsite backup. For those that do care about this sort of thing, though, the extra you pay to have someone else manage the thing is well worth it.


It can be off-site if you have a friend with Internet where you can stash that disk attached to a Raspberry Pi. Quite a few people do that, and it means you're storing terabytes not at the steep $300/year but at roughly $30/year per terabyte. I can eat healthily for a month from that difference; that's worth a few minutes of effort setting up port forwarding and installing ZFS if that's your thing (personally I'd leave it plain ext4 and dump something encrypted like Restic on it, taking all of 5 minutes to set up from downloading the OS to Pi Reporting for Duty).

Calculation: Pi 40eur, 1TB external disk 50eur, typical lifespan of a disk 5 years (excluding infant mortality, which falls under the mandatory 2-year warranty for new electronics), ~8W power draw is ~€15/year. Let's say you also need to replace the Pi after 5 years just for good measure. That's 15+(90/5)=€33/year for 1TB, which gets cheaper per terabyte with bigger or multiple drives.
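Not the parent's exact setup, but a minimal restic sketch along those lines (hypothetical host and paths) - encryption happens client-side, so the Pi only ever sees ciphertext:

    # one-time: create an encrypted repository on the friend's Pi, over SFTP
    restic -r sftp:pi@friends-house.example.net:/mnt/backup/restic init

    # nightly, from cron: back up the photo/document tree
    restic -r sftp:pi@friends-house.example.net:/mnt/backup/restic backup ~/photos ~/documents

    # occasionally: thin out old snapshots
    restic -r sftp:pi@friends-house.example.net:/mnt/backup/restic forget \
        --keep-daily 7 --keep-monthly 12 --prune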


Offsite - no server needed - not paying for electricity.

DIY is often cheaper if you ignore the cost of “doing it yourself” - and securely storing an offsite server is more than just the cost of a disk.

Native ZFS is also a feature for those who can use it.


DIY is way cheaper, true. But for comparison, AWS is ~$100/TB/month.


However, also for comparison, Backblaze is ~$5/TB/month.


For a backup use case, I haven't been able to beat the value of a Microsoft premium-whatever family plan. Routinely on sale for $60/yr, it gets you 1TB of backup plus an Office license for 6 users. Obviously you don't have as much control over it as "bare" cloud storage but it's hard to match the price.


AWS Glacier is ~$4/tb/month. Getting data out of there costs extra, but for backups of last resort you don't expect to ever need, that may be a workable tradeoff.


IIRC rsync.net doesn’t charge for bandwidth in any direction which for some use cases is nice - a set and forget billing type.


Really well done interview, some real interesting bits in there.

One part concerned me though: the interview mentions "we own (and have built) all of our own platform", but it fails to mention a few critically important parts of a storage platform, the first being encryption. How are personal files being handled? Is encryption being used? Are you able to access this data using a shared key?

There's also contingency: what happens if critically important data is stored on your platform? On your website you mention:

"We have a world class, IPV6-capable network with locations in three US cities as well as Zurich and Hong Kong"

however, it fails to mention whether replication is done across these locations. If drives are stolen from your datacenter, or mechanical failures beyond your control happen, how will you be able to recover from physical failure if you only appear to be serving from a single location?

Excuse me if I'm wrong but I couldn't find anything concrete in either the interview or your website. The premise of the platform seems quite well aligned with keeping alive the UNIX philosophy, and reminds me of Tarsnap.

Either way, well made interview and interesting approach to a storage platform.

As a sidenote, what keyboard are you using? It seems really interesting and you failed to mention it in the interview :)

EDIT: It appears that you offer a Geo-Redundant Filesystem as a separate product; maybe you would want to make this a bit more visible on your website beyond only the FAQ and order pages. Either way, it seems like a sufficient option, though that does still leave the topic of encryption. As mentioned, traffic is encrypted using SSH of course, but is the data itself encrypted on your platform?


"How are personal files being handled? Is encryption being used? Are you able to access this data using a shared key?"

We give you an empty UNIX filesystem. So, if you push up files over rsync or sftp, they will sit here unencrypted.

However, there are now excellent "tools like rsync that encrypt the remote result with a key rsync.net never sees" - chief among them being 'borg'[1]. Other options include duplicity and restic - all of which transport over SFTP.

So it's up to you and you have total control. If you want ease of use and you want to browse into your account (or one of your immutable daily snapshots[2]) and grab a file over SFTP you probably don't want to encrypt everything on this end.

On the other hand, if you want a totally secure remote filesystem that is nothing but encrypted gibberish from our standpoint, you should use 'borg'.

"Are you able to access this data using a shared key?"

We are running stock, standard OpenSSH and you can, indeed, use an SSH keypair to authenticate with. In fact, you have a .ssh/authorized_keys file in your account so you can specify IP restrictions and command restrictions as well ...
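For illustration only, the kind of per-key restrictions a stock OpenSSH authorized_keys file supports (the network, key and forced command here are made-up examples, not anything account-specific):

    # limit this key to one source network, disable forwarding/pty, and force a single command
    from="203.0.113.0/24",restrict,command="borg serve --restrict-to-path /data/borg" ssh-ed25519 AAAAC3...example laptop-backup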

" ... how will you be able to recover from physical failure if you only appear to be serving from a single location?"

A standard rsync.net account has no replication. We are the backup and your account lives in, and only in, the specific location you choose when you sign up. However, for 1.75x the price (ie., not quite double) we will replicate your account, nightly, to our Fremont, CA location.[3]

"As a sidenote, what keyboard are you using?"

It is a Keytronic E03600U2.

[1] https://www.borgbackup.org/

[2] We create and rotate/maintain snapshots of your entire account that are immutable/readonly - so you have protection against ransomware/mallory.

[3] ... which happens to be the core he.net datacenter - one of the nicest and most operationally secure datacenters I have ever been in.
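A minimal sketch of the 'borg' approach described above, against a generic SSH/SFTP account (the account name, host, paths and encryption mode are hypothetical; rsync.net's own borg page covers the specifics):

    # one-time: create a repo whose encryption key never leaves your machine
    borg init --encryption=keyfile ssh://user1234@storage.example.net/./backups/borg

    # nightly: create a deduplicated, encrypted archive of your home directory
    borg create --stats ssh://user1234@storage.example.net/./backups/borg::home-{now} ~/

    # thin out old archives
    borg prune --keep-daily 7 --keep-weekly 4 ssh://user1234@storage.example.net/./backups/borg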


Since I've had a handful of users ask about cloud storage for Snebu, Would you be interested in adding Snebu as a supported protocol? It should be similar to how you currently support Borg. For Snebu, the client runs find and tar, sending results via ssh to the snebu binary on the remote host. And more recently client-side public key encryption support has been added via a client-side filter called "tarcrypt". Ideally, a customer would use Snebu to back up to a local device on their network (for example a Raspberry Pi with a large USB drive attached), and then use Snebu's efficient replication to send deltas to the cloud-hosted server. Client files are stored individually (deduplicated) on the Snebu server, and metadata is in an SQLite DB (advantages over Borg is more open standards for the data storage and public-key encryption, disadvantage is file-level instead of block-level deduplication and a project that isn't as widely used).

If you are interested, I would be more than happy to have an extended discussion with you going over implementation options, and updating the client side script to make it work better with your service. (https://www.snebu.com, https://github.com/derekp7/snebu, and the tarcrypt extensions to tar are described at https://www.snebu.com/tarcrypt.html).


Once ZoL hits in 13 are you planning to give users direct access to ZFS for encrypted filesystems? My goal is to have a remote ZFS host I can push my snapshots to without loading the keys remotely. That would give me emergency access to the files if I load the key (less preferable), but mostly the ability to receive all the remote filesystems+snapshots to local storage with the flexibility of ZFS tooling. Right now I encrypt the incremental snapshot streams and archive them on traditional backup systems which doesn't allow the same flexibility or assurance.

I'd be happy with a socket/pipe to 'zfs recv zpool/benlivengood/data' that I could throw send-stream data at once a day or so.
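For reference, a sketch of what that would look like with OpenZFS raw sends (the remote host, target dataset, and whether the receiving side permits 'zfs recv' at all are assumptions):

    # snapshot and send the encrypted dataset without ever loading the key remotely
    zfs snapshot zpool/benlivengood/data@2021-03-18
    zfs send -w zpool/benlivengood/data@2021-03-18 | \
        ssh user1234@storage.example.net zfs recv -u backups/data

    # subsequent nights: raw incrementals
    zfs send -w -i @2021-03-18 zpool/benlivengood/data@2021-03-19 | \
        ssh user1234@storage.example.net zfs recv -u backups/data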


Thank you for clarifying your points, as I've said in my previous reply I do appreciate the simplistic approach.

Also, I mean no offense - the entire platform seems very sturdy, though it leaves some questions which aren't immediately apparent (which may just be me).

If I wasn't content with my current backup solution I would seriously consider yours, and I wish you guys the best of luck. You're one of the few keeping simplicity as a key value.


The phrase 'Cloud storage' conjures distributed replicated fault tolerance within a region to provide high availability and strong durability against datacenter disasters (fire, electrical/mechanical failures etc)

and cross geographic region replication to protect against natural calamities (earthquake, tornado, floods etc).

It also conjures a managed service with object-level (volume, directory, file) metadata, versioning and strong identity access management capabilities.

rsync.net doesn't seem to do any of these and charges 0.5 cent more per GB/month. What's the secret advantage I'm not seeing?


"and cross geographic region replication to protect against natural calamities (earthquake, tornado, floods etc)."

As I mentioned - you can have that. That "geo redundant" service is managed by us and requires no intervention on your part. It costs 1.75x more.

"It also conjures a managed service with object-level (volume, directory, file) metadata, versioning and strong identity access management capabilities."

We give you an empty UNIX filesystem that you access over (Open)ssh. Whatever metadata and identity management comes with that (or with overlay tools, like borg or restic) you may use as you see fit.


I've used rsync.net in the past- it's essentially "filesystem as a service." You, the customer, use it to back your own software that handles the encryption and replication. Their website has some how-to guides for some common software, or you can roll your own with the rsync protocol.

Notably, their website only claims transfer encryption, not encryption at rest. You can of course encrypt your files yourself with your own keys.


Not having data encrypted by default is concerning; however, I do admire the simplistic approach of handling your own dataflow and tools, for sure.


> Not having data encrypted by default is concerning[..]

While I agree in general, I think rsync's case is special: Unless the file encryption on their side is somehow derived from the SSH connection (so the files are only readable by your connection and while you're connected - is such a thing possible?), it would mean that they have to store the encryption keys somewhere. The far better approach is to treat them as completely untrusted and only store content you locally encrypt before sending it over. That way you don't have to care about them encrypting your data, it's completely in your control. I use restic for that. Works great.


Agreed - They can't be compelled to give up what they never had and it means as a user you can control exactly how your content is encrypted.


If it's such a concern, why wouldn't you be sending them encrypted in the first place?


> How are personal files being handled? Is encryption being used? Are you able to access this data using a shared key?

Personally, I feel like if you're going to encrypt your data, you should be encrypting it on your end, before sending it to some backup provider who may or may not be keeping your data secure.


You can do zfs send to rsync.net and I believe that allows you to zfs send an encrypted dataset without sending any keys. But I’ve not checked.


You write that you have no routers, though your primary US location is connected to a "quintuple-homed network" and all global locations are at least triple-homed.

What does that mean exactly? Is your IP provider quintuple-homed? Or are you running a bit more complicated setup than you explain but the gist is that you have no particular routing mechanisms?

What does that say regarding your high availability? If one of your locations is down, then it's definitely down until being fixed?

Anyway, that was interesting, just curious about the fact of having no router at all. Thanks!


The primary US location, in San Diego[1], gives us a managed, blended bandwidth product which is, in fact, quintuple homed and has been since we moved in (2001).

So we have a dumb switch in our rack, but they have routers.

In 2021 that's a weird bandwidth product and a weird setup but in 2001 it was "normal" and we just stay with that setup out of inertia (and the fact that we can't connect to he.net in San Diego).

A similar setup exists for us in Zurich with init7.

However, you are correct and we need to edit that FAQ language: our geo-redundant site in Fremont does not work that way.

(I will note that it has been 11 years since we put that location in place (he.net in Fremont) and it has zero minutes of downtime)

A tremendous amount of complexity and attack surface are eschewed by living with that setup and we're always looking for new ways to make that tradeoff.

[1] Castle Access datacenter on Aero Drive. It is now a KIO-managed datacenter.


Good old Castle Access - and they really seem to take the “Castle” part pretty seriously.

We were there at a similar time - I probably saw the rsync.net servers.


I read it not as there are no routers anywhere, but that they've abstracted the problem of running the routers to their upstream hosting/colo/datacenter provider. Obviously there are routers and their systems are connected to somebody's ASN, or you wouldn't be able to reach them over the Internet.


Interesting interview - thanks John! Didn't know there was a UFS2 "phase" before ZFS... I wonder how much time those fscks took! :-)


They took forever ... and then they bombed out due to lack of memory.

Not lack of physical memory, but lack of ability to address it as the UFS2 tools, like fsck, were not written to handle billions of inodes ...

We really can't thank Kirk M.[1] enough - he wrote custom patches to ufs and fsck just for our (dirty) filesystems and, as I mention in the article, eventually gave us the push to migrate to ZFS.

[1] https://en.wikipedia.org/wiki/Marshall_Kirk_McKusick


Wow, what a cool story. Having Kirk McKusick write patches for you is almost like that old Weird Al song:

> I’m down with Bill Gates

> I call him “money” for short

> I phone him up at home

> And I make him do my tech support


@rsync

If you had to do it all over again, what would you do different (if anything)?

E.g. product/positioning/tech-stack/employees/business-decisions


That's a really good question ...

In terms of product / tech-stack I don't think I would change anything.

In terms of marketing and word of mouth I think we should have given away hundreds of free accounts in the early years (2006-2010) rather than trying to chase them down as paying customers. I believe we had a lot of decent word of mouth but I don't think I appreciated the power of influencers and their ability to amplify a message.

As for business decisions, I continue to wonder how much business we miss due to not having a Canadian location and we have considered deploying in Montreal for years now but have not pulled the trigger. I don't know if a Canadian location (but still a US company) solves the regulatory requirements of Canadian customers.


I think giving away free accounts as a targeted marketing campaign (think - free accounts for the ZFS developers) can be worthwhile - but trying to have a free plan whilst offering the level of support that makes it worth paying for can end up being an exercise in futility.

Even though I love free plans I think it’s better for small startups to grow organically with “cheap and easy to cancel” instead. Or offer credits for new users.


A Canadian location does solve the regulatory requirements of Canadian customers. Even federal government agencies in Canada no longer have issues using US-based hyperscalers (e.g., AWS, Azure, GCP) that have Canadian datacenters.


>but have not pulled the trigger.

Is this a ( lack of ) capital issue or simply an uncertain sustainable revenue stream issue?


The replacement for Spectacle, FYI, is Rectangle, which is almost identical but still maintained.


Jealous of how well you seem to be able to keep to KISS as a principle.


The number of "simplicity? what's that?" brain-implosions in this thread is kind of hilarious, though at the same time a little concerning.


This was maybe the first service I've seen that was somewhat complex but whose 4-line main page header text clearly explained what the tool does - the subpages are also great, low-key, great reads. Kudos to whoever copywrote the site.


Yes, the site is really well written, and I absolutely love its "no bullshit" approach which obviously extends to the product itself (and the CEO, from what I can tell). Signal to noise ratio is great. I wish more companies were like this.

Having said that, I do think the site would really benefit from a new paint job. A good UXer can make it so much more aesthetically pleasing, while still retaining its simplicity and quick load time. It doesn't have to be fancy. Just static HTML with elegant styling and a few minor tweaks.

For example, I was really surprised to see big name clients such as Disney, 3M, ARM & ESPN hidden a few clicks away (behind a button which wasn't very informative, from what I remember). Same for being in business for 20 years. A good UX/product person will tell you to put this front and center in your landing page, and rightfully so.

@rsync: I love what you're doing, but please get a UX person involved :)


Question for rsync:

You said: This might seem odd, but consider: if an rsync.net storage array is a FreeBSD system running only OpenSSH, what would the firewall be ? It would be another FreeBSD system with only port 22 open. That would introduce more failure modes, fragility and complexity without gaining any security.

You seem to suggest the big firewalls do not bring any value to the table. I always thought they had more "intelligence" - dropping sessions based on some bad patterns, guarding against DDoS (to some extent), etc.

Are you saying BSD is as good as these expensive boxes? Does it apply to SSH only or HTTP(s) and some other traffic as well?


OpenSSH and the OS are pretty much the best place to harden your SSH connection; no need for a firewall.


I wish all SaaS services were like rsync

- No nonsense description of what they do

- Clear and simple pricing

- Simplicity as a core feature

Big fan. Look forward to using your services in the future.


My first experience with rsync.net was very disappointing. To this day they still advertise “append-only mode” support for restic at https://www.rsync.net/products/restic.html.

Their support people confirmed it doesn’t work (though they didn’t seem to understand why it would be fine for them to support it as advertised...) yet 6 months later they still advertise that they support it, even when I have e-mailed to remind them (and it still doesn’t work either) :(


The tl;dr as to why it doesn’t work is that they blanket forbid calling “rclone serve”, which is required for “append-only” support in restic.

This doesn’t make sense given that the specific invocation of “rclone serve restic --stdio” doesn’t open any network sockets; it’s no less safe than e.g. “tar”.


I don't care for newsletters on tooling, but these Q&A interview posts are good -- immediately went in search of a twitter, couldn't find due to difficult naming, but want to follow to keep up from time to time

https://twitter.com/consoledotdev


Is rsync.net related to rsync the project?


No, there is no relationship.

However, in 2005 or 2006 when we spun out of JohnCompanies[1] and incorporated under the name "rsync.net" I requested, and was given, explicit permission to use the name and domain by the maintainers of rsync.


So how is this sharded? Or how do you load balance a customer to the correct server if there's no router?


Thanks for the interview, I was pleasantly surprised to see how simple the network architecture is at rsync.


I wish I had a personal use case where the pricing of rsync.net made sense. It looks like a great service. For now, I use Backblaze Unlimited. I realize they are not the same service, but Backblaze works for my personal stuff and the price is great.


I like Backblaze, don't get me wrong, but my issue with them was their software is Windows Client and Mac OS only... No Linux or Windows Server offerings... My desktop runs either Windows Server 2019 or Linux... I haven't run a desktop-class version of Windows on a physical workstation in years... As an aside, I use rsync.net and their Borg Backup option[1] for backing up my Linux box, and Windows is backed up to that Linux box too... works well... Borg can be gotten to work with B2[2], but it's a bit more messing about...

[1]:https://www.rsync.net/products/borg.html [2]:https://medium.com/@mormesher/building-your-own-linux-cloud-...



People think of backblaze as a backup/archive but it is explicitly not an archive - it is a backup and if you delete a file on your system it is purged within 30 days from the copy.

Most people don’t really think about this and expect that any backup is ALSO an archive. You can get burned by this and have to use B2 or whatever it is instead.


What? That article is about how it utterly failed as a backup, with files that were still on drives or had just been on drives but couldn't be downloaded.


Thanks for sharing this.


No problem. I hate the thought of data loss. They may be better now, but who knows, it's worth being sure.


I bought an rsync.net account a few years back when John made it known on HN, and have used it solidly as a backup for my .. wristwatch!

I have a LILYGO for which I coded up a time-tracking app, which basically creates an event log whenever I tap it, wherever I go - and when Internet is available, it squirts the log over to some text files that live on rsync.net ..

Pretty neat to be able to do this without much of a desktop or mobile phone in the way, I have to say. I wonder if there are more opportunities for this kind of IoT service out there .. it sure was fun to get this working without REST ..


I always liked this set of marketing materials. But I also see where they conflict with my experience. "You may visit our datacenters any time you like for a personal tour and inspection to satis[f]y whatever due diligence requirements you may have" probably appeals to many customers, but for my dollar I would prefer a datacenter that nobody may enter.


Presumably the racks are in cages and visitors aren't able to actually touch them.


> I would prefer a datacenter that nobody may enter.

If a disk break, who changes it?


Not tourists, clearly.


With regard to the iOS import/export mentioned, does anyone have any more recommendations? (I'm not familiar with the mentioned option, nothing against it, just seeking out all options)

Simple file system interface to all devices first, then any further software interfaces on top only if desired.

Thanks for making the option available for remote storage John!


I used to run Linux for everything but I’m having to use Windows these days. What would it take to get rsync.net playing nicely with Windows? I’m imagining Windows Subsystem for Linux (Ubuntu) with duplicity installed in it? Are there any major hiccups to that sort of setup?


From the standpoint of random access "browsing" over SSH/SFTP, you could just use filezilla or WinSCP or ... psftp.exe.

However, if you want a backup process then you will, indeed, need to find some way to run 'borg' or 'restic' or 'rclone' on Windows.

I've never used WSL so I can't comment, unfortunately ...


Rsync.net is amazing for Linux servers. For windows servers backups are complex and expensive. I tend to offload that to a cloud provider like Azure. Onsite I rotate hard drives. But for desktop users backblaze does everything I need.

If anyone has a recommendation for backing up Windows servers I'd love to hear it.


It looks like rsync.net is indeed compatible with Windows, just perhaps not out of the box. Keep in mind that SSH on Windows is somewhat new and I haven't really tried it with a service like this yet.

If you can get command line access to rsync.net with openssh and either CMD or Pwsh, then robocopy can forklift your stuff. This is without even getting into the weeds of the fact that WSL exists...

I am also seeing that some documentation exists for pointing Veeam at it, which is my preference. I don't run any bare-metal computers that aren't hypervisors, and using Veeam to back up my VMs, be they Windows or Linux, is what works for me.


WSL is tricky for backups because cron jobs don't always run (although it's possible to run WSL commands through Windows Task Scheduler). Rclone, restic, and kopia are useful tools with official Windows builds.
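As an example of the no-WSL route, restic's sftp backend works the same on Windows as anywhere else, as long as an OpenSSH client is on the PATH (account, host and paths below are hypothetical):

    restic -r sftp:user1234@storage.example.net:backups/restic init
    restic -r sftp:user1234@storage.example.net:backups/restic backup C:\Users\me\Documents
    restic -r sftp:user1234@storage.example.net:backups/restic snapshots

Scheduling that through Task Scheduler avoids the WSL cron problem entirely.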


rclone should work, afaik.


Why would I buy 1TB for $20 per month here instead of getting 6TB for $8/month from Microsoft?


Technical excellence and out-of-this-world support. I say this as a very happy rsync.net customer for over 10 years.

Though if the descriptions of the service on their web page do not make you salivate, perhaps it's not for you.


The limitations on OneDrive are considerable as it’s built on top of Sharepoint somehow.

But rsync.net is a backup product and OneDrive is a large file sharing drive.

Attempt to use that 6TB in backup situations and you may experience issues.


Reading this takes me back to when I started playing with storage professionally.

For me it was in 2004, also using 3Ware controllers. I was running on RedHat (before RHEL) and XFS before it was common on Linux, and similarly had memory issues when trying to repair filesystems.


I like the interview a lot, but oddly this one here is the part that I liked the most:

> I start the day with a short walk outdoors. I don’t want the first thing my eyes see to be print, and I don’t want the first thing my body does to be sitting. So I walk a bit.


//I try to maintain an “Hedonic Fast” Monday through Wednesday so, on those days, I am only looking for truly actionable headlines and comment threads that are relevant to my businesses//

This is a smart move!


For people who use rsync.net, is this something that can replace Dropbox for multi-machine syncing? For all its flaws, Dropbox does allow me a semi-seamless transition between my laptop and my workstation.


I think Syncthing could be better for those purposes. I use that for syncing my home directories (config files and essential work files, keys and so on) between my desktop and laptop.

I have a copy running on my NAS to always have a copy available, one on my laptop, one on my desktop, and I was thinking about having one on my phone to run only when I'm charging (so I don't kill my battery).


Can you compare Syncthing and Dropbox? I have been a (paying) Dropbox user for many years but the product is not that good, especially on Linux. I would love a more reliable alternative.


Syncthing is "just" a syncing daemon with a simple status web dashboard.

The pros (at least for me):

- You host/own it

- It's not centralized: every node can sync with the others

- Seems to be fast (the more nodes, the merrier)

The cons:

- Setup is slightly more difficult (you need to share some keys and ips)

- No iOS client (not an issue for me)

- You can't share a folder or file via web link

So it's great for syncing folders between systems, but it doesn't substitute the job that something like Seafile would do (managing permissions, collaboration, web file sharing, fancy web ui...)

There's a bit of info in the faq: https://docs.syncthing.net/users/faq.html


As far as I know, no, unless you manage aspects of that yourself via git-annex or something similar.

My setup to do this is that I run my own nextcloud server, which handles the computer and phone etc. syncing, then nightly that's backed up to a small computer in my house (I just use rsnapshot for that), which then backs itself up to rsync.net (using plain old rsync.)
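Not the parent's exact commands, but the last hop can be as simple as this (hypothetical account, host and paths); -H matters because rsnapshot uses hard links between snapshots:

    # push the local rsnapshot tree to the remote account
    rsync -aH --delete /srv/rsnapshot/ user1234@storage.example.net:rsnapshot/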


happy customer here.

I do a simple rsync of my precious but not too sensitive data, daily.

and for the more sensitive stuff, gpg before sending daily as well; the copies will add up but I prefer it that way.

10/10 great business
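A rough sketch of that gpg-then-send pattern (recipient key, host and paths are hypothetical):

    #!/bin/sh
    # encrypt locally to your own public key, then ship the result
    out="sensitive-$(date +%F).tar.gz.gpg"
    tar -czf - "$HOME/sensitive" | gpg --encrypt --recipient you@example.com > "/tmp/$out"
    rsync -a "/tmp/$out" user1234@storage.example.net:archives/
    rm -f "/tmp/$out"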


I do get the "no separate firewall" reasoning, but I'm paranoid enough that I'd at least want some PF rules just in case some daemon gets started by accident.


Oh I'm pretty sure there is a firewall configured on the nodes themselves (customers get shell access) and he just meant that there isn't a separate firewall box in front of the servers.


Correct. The storage arrays themselves have a (modest) ruleset which, among other things, locks them to TCP22 only and disallows broken/impossible things like xmas-tree packets.

Simple stuff.
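Not rsync.net's actual ruleset, but a minimal pf sketch of that kind of policy (the interface name and the decision to allow all outbound are assumptions):

    # /etc/pf.conf - sketch only
    ext_if = "igb0"

    set skip on lo0
    scrub in all

    # drop impossible flag combinations such as "xmas tree" packets
    block in quick on $ext_if proto tcp flags FUP/FUP

    # default deny inbound, allow only SSH in; allow outbound with state
    block in all
    pass out on $ext_if keep state
    pass in on $ext_if proto tcp to ($ext_if) port 22 keep state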


7 daily snapshots: So I could sync my hard drive over, wait for a snapshot, delete everything, and keep the space, and use the snapshot as backup?


Used them in a startup for a long time, was very happy, excellent support, good pricing. Would always use them again. (used the Swiss location).


"I initiate my work in the terminal by port-knocking".

Guess you don't need a firewall when you have no open ports?

Haha yes! Guess I'm not the only one...


Question to rsync: Which HDs does rsync use? Is there a preferred brand? Which brands have been found to be most reliable?


Firefox Reader mode makes this page more readable. Otherwise 50% of your screen space is taken up by non-informative graphics.


Don't know if running a dumb switch connected to your ISP is the best infosec policy:

https://blogs.cisco.com/manufacturing/the-top-5-reasons-to-a...


I'm not sure those reasons really apply to their case.

Especially since they're running the boxes that it's connected to.

They can do resiliency, network segmentation, and monitoring on their platform.

What's a Cisco box going to do for them?


Dumb switches will blast packets to all interfaces that are connected. If there's a machine on the switch that's in promiscuous mode, it can see all the packets on the local network (including the backups coming in from customers).

Managed switches typically have ACL support. I get the KISS principle, but this setup seems to be trading security for simplicity.


The first paragraph is incorrect. A hub will "blast packets to all interfaces that are connected". A switch, even a dumb one, still switches packets. Broadcasts and frames addressed to unknown destinations will flood out all ports, but not unicast frames with destinations currently in the MAC table.

It is true that an attacker could flood the MAC table, spoof their MAC, etc, after compromising a layer-2 adjacent host and use that to manipulate traffic flows. That's somewhat disturbing, but no Customer backup data should be hitting their network outside of SSH anyway. I think the potential is more for DoS than compromise of confidentiality or integrity.

I really admire rsync.net's simplicity, but dumb switches give me the willies. I feel blind not having per-interface counters, at the very least. If nothing else, I'd like to be able to reconcile the counters coming from my OS interface with the switch in troubleshooting scenarios.


> it can see all the packets on the local network

I'm sure those packets (consisting entirely of OpenSSH) will be very useful to them


Don't be so sure :)

Quantum computing is improving every day, and new methods of defeating RSA are being researched:

https://eprint.iacr.org/2021/232


OpenSSH now uses elliptic curves, not RSA.


> Dumb switches will blast packets to all interfaces that are connected

Multicast and broadcast, sure, but dumb switches will still keep a MAC-address-to-port mapping. If the router sends to 52:54:00:ad:fa:a7, the dumb switch will remember that's on port 7 (having seen traffic from it recently - if only an ARP reply) and only send the packet to port 7.

Hubs (remember them!) will blast every packet to every port.


They only support SSH (legacy FTP was sunset a year ago), so there's nothing to gain (except for maybe the volume and IP of the customer) by observing other traffic. Which happens to be the same information you can observe anywhere in the path from a customer to their machines.


> including the backups coming in from customers.

Which are encrypted in flight...if they aren't then anyone on the 30 machines between customer and final destination can also see the backups coming in from customers.


True, but the packets in flight can take different routes. If you have a machine on the switch, you know you've captured all the packets that were in flight. This makes it easier to break the encrypted packets.

It's a choice--everything in security is a risk-management assessment, but I'm surprised rsync.net was able to get so many security certifications with this setup.


> If you have a machine on the switch, you know you've captured all the packets that were in-flight.

Same applies if someone takes over the firewall, the machine on the last hop before they hit port 22.

In a world where stuff like this https://www.helpnetsecurity.com/2020/09/01/zero-day-cisco-en... routinely happens there is a benefit to forgoing all of that when it makes sense.


# tcpdump -i eth0

tcpdump: eth0: You don't have permission to capture on that device

(socket: Operation not permitted)


You may mean repeater hub.


The only "security risk" i see there is number 1, and that is all to do with physical security.

> Disadvantage #1 – Open ports on unmanaged switches are a security risk

Why? Is there something that would prevent an attacker with physical access from unplugging an existing cable? Does the average managed switch config have MAC limits and auto shutdown if a link is lost for just a few seconds? MAC limits are easily bypassed, even without (permanently) disconnecting the legitimate device, by inlining an active device and maybe some MAC spoofing.

I don't include 802.1x or automatically shutting down a port that loses an uplink as a "simple and effective security precaution"; it would be a right pain in many situations. Is the latter even a feature? I certainly haven't come across it (unlike normal port security like limiting the number of MAC addresses, which just adds overhead with limited effective security).

> Disadvantage #2 – No resiliency = higher downtime

If my device has one ethernet cable into one switch, how does that help? If my unmanaged switch goes pop, I have a spare that I can put in and be back running in a minute. My managed cisco edge switches take 10+ minutes just to reboot.

If my device has two ethernet cables, one into one unmanaged switch, one into another, losing that switch isn't a problem.

> Disadvantage #3 – Unmanaged switches cannot prioritize traffic

Correct they can't. Managed switches without qos set up can't prioritise traffic either. If your switch is dropping packets, you don't have enough bandwidth. I've seen packet loss when sending 500mbit down a 1G uplink on managed switches, even on QOSed traffic. Indeed I've seen higher priority traffic drop and lower priority not drop. QOS isn't trivial. Ultimately it comes down to how big your buffers are whether your packet gets through or not, so your application should cope with some loss, and if you get too much loss you need more bandwidth. If you have 48 devices connected at 1Gbit each, each firing 100mbit of traffic every second, all bang on the second, with a 10gbit uplink, on paper you only need 4.8gbit of uplink. You'll also need a 600MB packet buffer and expect a lot of delay on your packets, whether you have managed or unmanaged, QOS or no QOS.

> Disadvantage #4 – Unmanaged switches cannot segment network traffic

Correct, but then if I have 8 desktops in a cluster why wouldn't I pop in a desktop switch with 8 1G ports? I want them all on the same vlan anyway.

> Disadvantage #5 – Unmanaged switches have limited or no tools for monitoring network activity or performance

They don't, but again do I want that for a specific use case?

If I want a managed switch (which I usually do), then I'll spec a managed switch. It's unlikely it will be cisco. If my requirements don't need features of a managed switch then I won't bother.

I find it interesting that there's no mention of preventing broadcast storms, or IGMP snooping - both of which are far more useful for a typical edge switch than qos.

Personally, I tend to use managed switches - indeed I just bought a couple of 24 port TP Link POE switches for an event I'm planning. I'm not 100% sure I'd go for an unmanaged switch in rsync's case, but from your list

1) Doesn't apply -- servers are in a secure location

2) Doesn't apply -- servers are either single connected (so need a physical visit, and replacing an unmanaged switch is far quicker and easier than a managed switch), or they're dual connected to two different switches

3) If they're doing inline management then you might want to carve out a small part of your uplink to prevent yourself from being DoSed by a dodgy server (if your server is saturating your uplink bandwidth and your ssh session can't establish, that could be an issue; if you've got OOB access on a separate link though, not a problem, and clearly they don't have that problem)

4) Doesn't matter -- they don't want different vlans

5) They presumably measure the bandwidth use of each of their servers. The question thus is "does the ISP give me logs I can rely on for the wan". Personally I wouldn't, but I can see the idea

Spanning tree: Secure network, they aren't going to connect one port to another to cause a storm

IGMP: They presumably aren't using multicast for anything major so bitrates would be very low even if they were there

Reasons to use a firewall or a switch with an ACL in this specific case that I can think of:

1) 2 points of control -- a zero-day on freebsd's firewall could open a port to an unintended source which was listening but blocked by iptables (or bsd's version). If you had a non-bsd firewall it's unlikely the same zero-day would work

2) Port 22 is only open to a specific IP range, again there's a zero-day, and TTL of outbound packets is high enough to establish a session

Reasons to use a managed switch even ignoring firewalling:

1) Reliable traffic stats -- you could guess at these by summing the uplinks of all the connected devices although some packets will be dropped and some may be going to other devices on the network

Reasons to use QOS on a managed switch:

To allow in-band management if something goes wrong. A separate ilo/ipmi/kvm connection would be better for that though.

I don't think they'd need features like span ports (I personally use them all the time, and fibre taps, but I have a different use case which is UDP heavy and loss-intolerant)


> Correct they can't. Managed switches without qos set up can't prioritise traffic either.

> If your switch is dropping packets, you don't have enough bandwidth.

This isn't true: there exist more bottlenecks than just bandwidth. E.g. try sending 10-byte packets instead of 1500-byte packets and watch as your switch starts dropping due to CPU exhaustion.

> Ultimately it comes down to how big your buffers are whether your packet gets through or not

not really, traffic prioritisation is about deciding which packets you drop when hitting your limits (or close to), not making sure that you never drop anything

obviously if you're never hitting any bottlenecks: the prioritisation does nothing


Dunno how you'd make a 10-byte packet; the smallest valid ethernet frame is 64 bytes, and I'd expect my switch to forward those at line speed just fine, and drop any runt packets just fine too. Maybe you could hack a network driver to deliver some really nasty frames, but that doesn't seem a likely situation for rsync's use case -- not compared with a switch failure from other causes.

The point about QOS is that it often isn't necessary because you shouldn't be hitting those limits, and if you do you often don't care (because you've got half a dozen identical desktop computers talking to an unmanaged network not doing any relevant dscp marking). In rsyncs case the traffic they're sending is all ssh traffic - what's going to be doing the tagging and differentiation?


> not really, traffic prioritisation is about deciding which packets you drop when hitting your limits

But everything is the same: ssh traffic for backups. And both ends do congestion control.

I don't care if nightly backups take 1 or 2 hours.


802.1x is trivially proxied anyway, unless you don't reconnect when the link is lost. So an attacker with physical access is going to be able to inspect your packets regardless.


The beauty of SSH-only is that you can assume that all of your traffic is being inspected all the time, but you have a protection against that: ssh-encryption and key fingerprints.

If you wanted to confirm ssh host-key validity, I'm sure rsync.net would perform an out-of-band verification. When they emailed me a request to do some server maintenance, I asked for a verification, and they placed a GPG-signed confirmation on their web-server for me to verify.
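A sketch of that kind of out-of-band check (the URL and filenames are hypothetical, and it assumes their signing key is already in your keyring):

    # fetch the notice and its detached signature, then verify
    curl -sO https://www.example.net/notice.txt
    curl -sO https://www.example.net/notice.txt.asc
    gpg --verify notice.txt.asc notice.txt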


Would be interesting to see those shell scripts for sending SMS via Twilio.


I'm not sure what John is using, but they have a very simple example in their documentation. Go here and then click on "twilio-cli" in the right code type selector:

https://www.twilio.com/docs/sms/send-messages


Note that twilio-cli is a totally overweight, unnecessarily complicated node.js app. If you just want to send SMS from the command line, the curl code is much, much cleaner.


I have not used twilio-cli for anything ... I just write my own scripts with curl - here is my basic 'sms' command:

https://0x.co/6K37UZ
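Not the linked script, just a generic sketch of the curl approach against Twilio's Messages API (credentials are assumed to live in environment variables):

    #!/bin/sh
    # usage: sms +15555551234 "array 7 is degraded"
    to="$1"; shift
    body="$*"

    curl -s -X POST "https://api.twilio.com/2010-04-01/Accounts/${TWILIO_SID}/Messages.json" \
        -u "${TWILIO_SID}:${TWILIO_TOKEN}" \
        --data-urlencode "From=${TWILIO_FROM}" \
        --data-urlencode "To=${to}" \
        --data-urlencode "Body=${body}"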


Not everyone can share text using their very own pastebin. That's neat. Cheers.


Clean, simple architecture...a sysadmin's dream.


+1 on having your laptop as an ephemeral device


Yeah, assume it’s disposable not just for theft but because upgrading might be impossible and even repairs are very expensive (compared to a desktop).


rsync.net rules


Hetzner has a similar product at better pricing that I have been using as a minimalist Dropbox alternative

https://www.hetzner.com/en/storage/storage-box

Access via rsync/sftp/scp


It seems a very similar product, also offering ZFS snapshots, but I like the fact that rsync.net snapshots are immutable: you can browse them but there is no way to delete them without contacting support (and the CEO once posted he would review every such request). It makes me feel more confident about my backups if someone were to get hold of the cached credentials from my backup software.


Nice to read they've baked this in. It is a barrier that can let one recover even after attacks from insiders.


Your link is dead for me. Maybe you wanted to link to this? https://www.hetzner.com/storage/storage-box

Hetzner is throttling bandwidth after traffic exceeds ~5x the storage capacity while rsync.net doesn't seem to. Hetzner also only supports a very small number of snapshots in total while rsync.net supports more per day.

I don't think Hetzner and rsync.net are really competing with each other. rsync.net's focus is more on business customers, while Hetzner targets private customers.


I tried to view the link. Got a site not found error.


Also, Borg backup.


rsync.net was the first storage provider to support Borg out of the box and also has a special tier for Borg users (which was later expanded for restic and some others iirc).

I also like that their Europe location is in Switzerland. I think it's useful for a number of reasons to store critical data in more than one jurisdiction.


I think they need to hire someone that is strong on the security side of the business, for two reasons:

* he appears unaware of the role of hardware firewalls in mitigating DDoS by efficiently handling a lot of active TCP sessions (they have specialised hardware for this purpose)

* he is describing in great detail a lot of information that a phisher or other type of attacker could use to target him


You can't protect from a DDoS with a hardware firewall: a DDoS consists of so much bandwidth that your network hardware simply cannot handle the incoming traffic before any filtering happens. Your expensive hardware firewall can protect from DoS attacks, but those don't happen much anymore because DDoS attacks are really cheap.


In any situation where you need to create a filter rule that needs to run fast against a lot of TCP sessions a hardware firewall will do it faster than your general purpose server. It's not about protection but mitigation.


You can protect yourself from certain types of things (SYN flood) with a firewall, though.


It's easier to protect against SYN floods if you terminate the connection.


I read it this way too. Having the OS kernel process all the Internet junk traffic (including all the IoT worms) wastes CPU cycles that could be used serving traffic to legitimate users. It is better to offload the packet filtering to a dedicated device with OOB management. If the host were flooded, one might not be able to activate a host-based firewall to fix things in such a DDoS condition.



