I self-host everything I can at home, so this list might be a bit too exhaustive, but there wasn't any scope on this Ask HN, so...
Overall hardware platform:
4 pcengines alix boxes for openbsd router/firewall appliances
3 supermicro Opteron servers with KVM/corosync/Sheepdog/csync2 for hosting all VMs
Software:
PF + CARP + pfsync + OpenBGPD for routing
Unbound + NSD + Bind for DNS
SSH/OpenBSD IPsec/Apache Guacamole for roaming and permanent site-to-site VPN (pcengines ALIX hosted at my in-laws in Japan)
Apache + Lets Encrypt + awstats + relayd for serving web pages and analysis
ZoneMinder for video monitoring. Tied into legacy security system for automation
Postgres for database work. Some mysql/redis
NetDisco + Nagios + NagVis + NFSen + MRTG + Smokeping + PNP4Nagios + NUT + Splunk + Racktables for monitoring. All configs are dynamically generated from netdisco db
OpenSMTPD + Citadel (webcit) for email delivery and webmail
Minetest server for kids. We use this tons as a family, and the kids spend lots of time modding. TW2002 server. TShock server.
OpenELEC for diskless netboot KODI machines around the house
Samba4 Domain controller + NFS for sharing files in different applications
SVN for source control and Config diffs for all servers/tools/network devices
Asterisk via FreePBX / NCID for all phone/CallerID services, including remote handsets at VPN locations
And that's just the ones that I really enjoy using off the top of my head. I hope to find lots more things to try in this thread. Metabase already looks like an awesome candidate!
Wow that is a long and awesome list! If you don't mind me asking, how much did it cost you to setup / maintain it? Also what is the main reason driving you to self host so much stuff?
It was cheap to set up; all the cost is in the time it takes to learn to implement everything in a secure and performant fashion.
Hardware actually cost money. Here's a breakdown:
> 4 pcengines alix boxes for openbsd router/firewall appliances
These were around $120 each with 4GB flash storage at the time. They're half that now. Low power, no cooling required, x86, 3 ethernet ports. You could buy an APU2 now for more power.
> 3 supermicro Opteron servers with KVM/corosync/Sheepdog/csync2 for hosting all VMs
I used cheap cases, eBay MB/CPU/RAM, tiered storage (green/black/ssd) to keep costs down and infiniband for 10gbit interconnects ($15/card on ebay!). I made sure to get quality components (esp MB/power supplies). One of the servers is also my desktop. I'm guessing they were about $800 each. Having the 3 node cluster is nice. When we had a forest fire threaten our town and we were ordered to evacuate I just grabbed one box and all my data was already replicated to it. When we returned home I plugged it back in and it re-synched back up.
Maintenance has been a non-issue. I haven't had any components die except the occasional HD. Power costs are the main thing. I estimate about $500/yr at $0.10/kWh.
> PF + CARP + pfsync + OpenBGPD for routing
I use a local indy ISP that gives me a bunch of static IPs and lets me route a /29 with BGP. I know them fairly well, and get a sweet deal. Doesn't cost more than a regular consumer connect, though it is slower.
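For anyone curious what announcing a /29 with OpenBGPD looks like, it's a very small config. This is only a sketch; the ASNs, addresses and prefix below are placeholders, not my real setup:

```
# /etc/bgpd.conf -- hypothetical ASNs/addresses
AS 64512
router-id 203.0.113.1

network 203.0.113.0/29          # the /29 we announce upstream

neighbor 198.51.100.1 {
        remote-as 65000
        descr "upstream ISP"
}
```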
I've used OpenBSD since the early 2.x days, and find it very easy to administer. The release and documentation quality are second to none, and I've found the community to be very helpful as long as you've tried to help yourself first.
> Unbound + NSD + Bind for DNS
I keep my Bind server vlanned off and serve everything out thru unbound/nsd. Both of those programs are very easy to set up, the real beast being bind. I know there are better alternatives out there, but I know Bind well and have lots of custom config I don't want to throw away.
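As a rough idea of how the split can work: Unbound answers clients directly and hands internal zones off to the vlanned-off Bind with a stub zone. Addresses and the zone name here are invented for illustration:

```
# /var/unbound/etc/unbound.conf -- sketch only
server:
        interface: 10.0.10.1
        access-control: 10.0.0.0/8 allow

stub-zone:
        name: "home.example"
        stub-addr: 10.0.90.2    # Bind, on its own VLAN
```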
> SSH/OpenBSD IPsec/Apache Guacamole for roaming and permanent site-to-site VPN (pcengines ALIX hosted at my in-laws in Japan)
If I were to pick one outstanding program on this list it'd probably be Guacamole. Pure HTML5 rdp/vnc/ssh/telnet/etc client that is seriously amazing. I've set it up at a half dozen places now, and it's never so much as hiccuped.
OpenBSD IPsec is VERY easy to set up, especially if you've had nightmare experiences with other packages!
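For reference, a site-to-site tunnel in OpenBSD's ipsec.conf really can be this short. The subnets, gateways and key below are placeholders:

```
# /etc/ipsec.conf -- site-to-site sketch
ike esp from 192.168.1.0/24 to 192.168.2.0/24 \
        local 203.0.113.1 peer 198.51.100.1 \
        psk "replace-with-a-long-random-secret"
```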
> Apache + Lets Encrypt + awstats + relayd for serving web pages and analysis
Apache is the old standard, and awstats is cool for keeping tabs on what is going on in the logs (geoip as well).
Let's Encrypt was amazingly easy. I'm using certbot and set it up in under an hour. I'm forcing SSL on all my web services now.
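In case it helps anyone, the certbot flow is roughly the following (the domain is a placeholder):

```shell
# issue and install a cert via the Apache plugin
certbot --apache -d www.example.com

# renewal runs from a cron/systemd timer; verify it works with:
certbot renew --dry-run
```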
Relayd is another "so simple and it just works" package from OpenBSD. I use it as a front-end load balancer.
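A minimal relayd.conf for that kind of front end looks something like this (hosts and ports here are hypothetical):

```
# /etc/relayd.conf -- sketch
table <webhosts> { 10.0.20.11 10.0.20.12 }

relay "www" {
        listen on 203.0.113.1 port 80
        forward to <webhosts> port 8080 check http "/" code 200
}
```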
> ZoneMinder for video monitoring. Tied into legacy security system for automation
On Debian (my Linux distro of choice), this was simple to set up, with Perl scripts to integrate it into my DSC security system. Auto arm/disarm camera recording and relay light control required the IT serial integration board and the programmer's manual for the system.
> Postgres for database work. Some mysql/redis
I've been using postgres forever, so setup and use are second nature. An absolutely incredible piece of software engineering.
> NetDisco + Nagios + NagVis + NFSen + MRTG + Smokeping + PNP4Nagios + NUT + Splunk + Racktables for monitoring. All configs are dynamically generated from netdisco db
This is another stack I've set up at many locations (including businesses). They are a real timesink to integrate together. I have MANY custom scripts to make the config generation from netdisco work properly, but once set up you have total insight into every aspect of your network (and I forgot to list RANCID!). Netdisco/NFSen on their own are still a killer combination, and work as well or better than packages that cost tens of thousands of dollars. I'm happy to help anyone trying to set these up if you PM me.
> OpenSMTPD + Citadel (webcit) for email delivery and webmail
Citadel is maybe the weakest thing I have in my stack. I'm looking at the other webmail solutions in this thread carefully.
> Minetest server for kids. We use this tons as a family, and the kids spend lots of time modding. TW2002 server. TShock server.
When your kids are asking to learn Lua, you know something is working!
> OpenELEC for diskless netboot KODI machines around the house
Amazing and easy to set up if you already have your own DHCP server you can modify. Just need tftp and NFS after that. Using OLD desktop PCs for this works great; I'm using cast-off dell gx290s.
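The DHCP side amounts to pointing PXE clients at your tftp server. With ISC dhcpd, for example, it's just two lines (the IP and filename below are assumptions):

```
# dhcpd.conf fragment -- sketch
next-server 10.0.0.5;        # tftp server holding the boot files
filename "pxelinux.0";       # bootloader the clients fetch
```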
> Samba4 Domain controller + NFS for sharing files in different applications
I've been using Samba4 since pre-alpha (TP series) when you had to run your own LDAP server. Things are so easy now it's hard to overstate. Using Bind makes it a bit trickier since I need to add some magic entries, but if you use the built-in DNS it's a single Python script between you and a full SSO AD domain.
> SVN for source control and Config diffs for all servers/tools/network devices
I found SVN config to be a bit of a head-scratcher. I think this is another one where other tools are probably better nowadays. I'm looking at some of the other things people are suggesting.
> Asterisk via FreePBX / NCID for all phone/CallerID services, including remote handsets at VPN locations.
Another timesink. PBXs are hard to configure, and I'd move to another system if there were something less esoteric.
As to why I do it? I find it satisfying to learn how things work, like the idea that I'm master of my own destiny and know how my data is being used
How do you secure your publicly accessible IPs at home from the web? I see you use static IPs; I guess it's no different than renting a server? You probably have separate networks (local home vs. serving).
Looking at Netdisco I have 12 VLANs. ISP1, ISP2 (from Dad's house across the valley), OSPF backbone, DMZ, LAN, Guest, Cameras, Servers, Japan, Kids, Management and Test. Everything passes thru PF and only exposes services as required. The Guest VLAN, for example, has no way to get to the rest of the VLANs. Camera VLAN can only talk to the DVR software, etc.
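As a sketch of what that isolation looks like in pf.conf (the interface names and subnets here are invented, and remember PF's last matching rule wins):

```
# pf.conf fragment -- sketch only
guest_if = "vlan60"
rfc1918  = "{ 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 }"

pass in on $guest_if                        # guests get the internet...
block in on $guest_if from any to $rfc1918  # ...but nothing internal
```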
The important thing with securing my public IPs (as with securing anything) is to understand the surface area and minimize risks. With the number of services I'm exposing, I have to be careful. First thing is to keep all OSs up to date. OpenBSD every 6 months and Debian on an ongoing basis.
Next, whenever possible use single-purpose proxies that have been well audited. In my case OpenSMTPD, Unbound and NSD protect the always popular mail and dns servers from attack.
Keeping complicated things off the internet is important. Big CMS packages are constantly under attack. I only run either static pages on my webserver, or very carefully audited custom PHP pages (written with an attacker's eye to exploit injection)
When I expose something more complicated like Guacamole or Citadel that might end up with a known hole in the login screen, I put it behind both SSL and an HTTP simple auth login prompt. It's ancient, and so well tested that it is unlikely to end up with an exploit against it. Or at least less likely than the app it's protecting. I've actually been toying around with doing 2-factor auth on these services with a dynamically generated simple auth sent via SMS bridge or IFTTT...
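The basic-auth wrapper is just a few lines of Apache config. The paths and location below are examples:

```
# create the password file once:
#   htpasswd -c /etc/apache2/.htpasswd someuser
<Location "/guacamole">
    AuthType Basic
    AuthName "Restricted"
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user
</Location>
```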
On my internal network, I have my family running Linux wherever possible, and keep Java and Flash off the Windows PCs. Everyone runs with a nonprivileged account and the OS/software packages/PDF readers are all kept up to date. I think the wife and kids' PCs are probably the most likely things to be compromised on the network, so I keep a close eye on them.
I also take the reactive approach of log alerting, change monitoring and general diligence to the state of my network. If you know what has changed in the last 24 hours it tends to be a small enough dataset to eyeball, and will tend to tip you off when things are wonky. I also have enough logging and data sources (syslog, config diffs, apache logs, nfsen data, mrtg data) that it would be exceptionally difficult for an attacker to wipe out all trace of their presence.
I do not know of a successful attack against my network, tho I'm not naive enough to think that it hasn't happened. I just haven't realized it if it has.
Wow, thanks a lot for the details! This is not my field haha, regarding the tech you mentioned in the beginning.
Curious what your family thinks about having to run Linux. It's my primary OS myself, with i3-wm. People usually complain about the "install apps on the command line" UI, etc. Whatever the case, as supported as Ubuntu is, that UI is so slow/bloated IMO.
I have Windows as well but use VirtualBox and Linux to access the web if it's not just YouTube/Gmail/regular pages, with adblock/uBlock Origin running.
I don't even know how many ports there are on computers, I mean in my experience I've seen up to port 10000, I think you have to enable them? I usually use 21/22/80/443 though I've seen 3000 (websocket), 4200 (angular), 246 or 286 (windows RDP) I don't know...
Windows rdp is 3389, but really, anything can go on any port up to 65535. The common ports like 80 and 443 are just there as "it's common to use these for services, so you don't have to put in http://google.com:80". And there's no real "enabling" them, it's just a question of if your firewall allows it, and if you have something actually listening.
Also, websocket is just a protocol upgrade slapped on http/https, so it normally goes over 80/443. If someone's running it on another port, it's probably cause the server they have on their normal http/https doesn't work well with websockets.
Yeah the 3000 I just recalled that with a tutorial on socket.io I did a couple years back.
Yeah, my bad on the RDP, you know I messed up on that. I was trying to get around firewall rules, edited the registry and changed the port for it... and could not get back in. This was one of Amazon's Windows servers; I mounted it as a volume on another one and could not access the registry to change it back, so yeah... locked myself out haha.
I have a long history of having my family use Linux. I'm a Debian guy myself, so I've always either used Ubuntu or basic Debian. I run dwm on any machines that I put X on (very few, honestly).
I started my Dad on Ubuntu around 2008, and he hasn't used anything else since. Shortly thereafter my Mom, then my Sisters. Most recently my Step-mom, and Grandparents have gone to Ubuntu.
When I got some castoff laptops from work, I turned them into Sugar notebooks for my kids, and once that became too limiting for them I helped them install mainline Debian. Some of them run Gnome and my oldest runs fvwm.
My wife still runs Windows because of inertia, more than anything TBH.
In my experience, I've only had 1 peripheral that someone has bought that was totally unusable due to drivers (scanner), and the only programs my relatives have asked for that weren't available were my kids wanting to play Roblox, which I didn't want them playing anyways. Thanks to Minetest and buying most Humble Indie bundles I actually have a pretty good library of Linux games for them to play, so there hasn't been much bellyaching from them. Well, that and the Windows gaming/home theatre PC.
I've gone to a 2-strikes-and-you're-out policy on Windows installs. If I have to re-install it for you more than once, you're either getting Linux or finding someone else to fix your computer. My Grandparents got caught by this policy, but my stepmom actually asked for "that system that Dad has that doesn't get viruses". Happy ever since.
Most people that aren't highly technical tend to be served by Firefox/Thunderbird/LibreOffice for 99.99% of their needs. It's mostly Facebook these days TBH.
It makes support dead simple. no-ip.com and SSH let me fix almost anything remotely, and no one has ever gotten pwned that I know of.
Other people have answered the port question, but I'll try to go at a slightly lower level. Each open port will have a program running on the host that has opened a listening socket on that port. Netstat can help you find out what is listening, and on what interface/port. As a rule, only root can open ports under 1024, and any well written server will drop all non-required privs. You can check with the ps command. This is somewhat enforced on some OSs, eg OpenBSD with pledge.
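Concretely, the commands I mean are along these lines (the flags shown are the common Linux ones; the BSDs differ slightly):

```shell
# every listening socket, numeric, with the owning process (Linux)
netstat -lntup
# or the newer equivalent:
ss -ltnp
# then check what user a given daemon dropped privileges to:
ps -o user,pid,comm -p 1234    # 1234 = a PID from the listing above
```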
From a security standpoint you should verify that you know and understand every server listening on a socket, which interfaces they are bound to (netstat asterisk means all interfaces), and whether they are exposed to the internet via direct interface/proxy/port forward/etc.
One trick to secure services that need remote access: if the service is only for technical users, you can give them each an account with no interactive shell, and then they can ssh port-forward to the port they want to access. Eg. you can make 3389 (RDP) only listen on the local LAN or the loopback device, ssh to your router with port forwarding local port 3399 (or whatever), to remote IP:3389 and point your local rdp client to localhost:3399. Great for ad-hoc limited VPN type connections. That way you only have to be aware of SSH remote holes, and not the more-likely RDP server.
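Spelled out, that RDP-over-SSH trick is a one-liner (all hosts here are placeholders):

```shell
# forward local port 3399 to the LAN-only RDP host, via the router
ssh -N -L 3399:192.168.1.50:3389 user@router.example.com
# then point your RDP client at localhost:3399
```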
Hopefully if any of this is incorrect, someone will correct me
Man that's cool and you save yourself buying windows licenses.
I'll have to check out that no-ip. So you install an OpenSSH server to SSH into their computers? Yeah, I like that feature myself, too lazy to get up and use the dev desktop.
That's my hope regarding Linux that it's safe, using virtual box and Linux.
Haven't heard of Sugar notebooks.
Yeah, I used to run Linux Mint, then Debian, then Ubuntu, mostly because of their good driver support; a few times I've had laptops with Debian installed that couldn't connect to WiFi right off the bat. Then i3-wm because my computers are generally garbage. At least now both my desktop/laptop have 8GB RAM. Could go higher maxing out Chrome tabs.
Anyway thanks and for the info on the ports, I have used netstat before it's intimidating haha, so many at least when I checked on my windows laptop if I recall right.
Not OP, but canceling Youtube Red, HBO Go/Now, and Hulu easily covers the cost of electricity on a beefy server. Just depends on your priorities. I consider $10/month (what I pay for my desktop-class i7 machine, not GP's setup) to have a place to call home on the Internet (that isn't in the cloud/on hard drive island/somebody else's computer) to be well worth it, and if you're using smaller hardware (e.g. a Pi Zero W) it costs less in electricity than a cup of coffee in a month.
Is KeeWeb new? I went through an extensive search for a new password manager a few months ago when I transitioned from macOS to Arch Linux and yet I've never heard of it. I ended up settling on Enpass which is decent but not perfect. KeeWeb looks nice though, how do you like it?
KeeWeb is a drop in replacement for KeePass, it uses the same format, so you can use the same Android clients to open the same file. IMHO It's a lot better than KeePass, especially if you are in Linux.
KeeWeb is written in JS with desktop apps using Electron. I moved away from KeePass to KeeWeb because, although KeePass was first, it is old now, it was written for Windows and then ported using Mono to Linux.
As it uses Mono for Linux, that generates some issues. For example, I couldn't copy a password from the interface and paste it in a Terminal (I'm not sure if it was because I use Tmux all the time). It handles the clipboard in weird ways. I had to paste it somewhere else, like the browser and then copy it from the browser to paste it in the Terminal. With KeeWeb it works normally.
That last part is what made me finally decide to go for KeeWeb instead of KeePass. It gives you "LastPass" like functionality in the browser while you keep being the one that handles your encrypted DB. And then you can store that file in Dropbox, so that you have access to it everywhere.
FWIW, I use KeePassX [0] on Arch Linux. I also use LastPass (because $work has an Enterprise account) but I prefer using lastpass-cli [1] instead of the browser extensions.
KeePassX has been on my radar. I think I gravitated towards Enpass because the UI is more similar to 1Password.
I originally used lastpass years ago and I tried it again recently and hate it. I didn't try the cli, though. That would certainly be a better option than dealing with their horrible web app.
I'm thinking of self-hosting a password manager like KeeWeb but I'm afraid of my own self-hosted solution not being as reliable (downtimes / loss of data). Do you have any precautions against catastrophic failures?
The self-hosted solution I use for this (switched from LastPass) is pass[0] plus syncthing[1]. Passwords are just GPG-encrypted files, so they replicate seamlessly - much better than the "single monolithic database" approach of things like Keepass which is prone to sync conflicts.
Syncthing mirrors everything between my desktop, laptop, and phone (and there's an Android app[2] that works with OpenKeychain[3] so passwords are accessible from my phone). I haven't done this yet, but it'd be trivial to also run syncthing on a cheap VM somewhere, and replicate the passwords to it (but obviously not my GPG private key) for disaster recovery.
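For anyone who hasn't seen pass, day-to-day use is just a few commands (the key ID and entry names below are examples):

```shell
pass init "0xDEADBEEF"        # encrypt the store to your GPG key
pass generate web/bank 24     # one encrypted file per password
pass -c web/bank              # copy to clipboard; clears itself shortly after
```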
I use Keepass with Dropbox. Keepass keeps the DB file secure with encryption and a high number of key-derivation rounds, so I'm not afraid of a brute force on the file itself. It takes 500ms on my beefy desktop to unlock the DB, and almost 3 seconds on my macbook.
Most services are tied to my email, so I have both 2factor auth AND recovery codes that I have stored in a safe place. Additionally I have the Keepass password written down in a safe (separate) place just in case. This is my backup in case I lose access to my Keepass db.
As one last bit, I have Keepass to auto-lock after a bit of inactivity, so I'm constantly retyping that password. This helps me memorize it.
In many ways this keeps me safer. I stay logged out of most websites by default. It can also protect me against terrible password policies. For example, I once had a bank that limited passwords to 8 characters. I had Keepass remind me to generate and rotate that password every quarter just in case. When Heartbleed dropped, I marked all my passwords in red and only changed them back once I'd updated the password on that website.
I wish I had a success story to tell, but I've increasingly moved away from self-hosting. Whenever something breaks I have to pull myself away from the programming I'm enjoying and go fix it. And if something breaks when I've already had a long day working under a tight deadline for a client, it feels like a disaster.
Also, thanks to the stupid design decisions of most package managers, you will have trouble getting anything done anyway if github.com is down, even if you just get dependencies from there. If that problem affects you, then self-hosting your Git repo may just add a second point of failure.
You want to be set up so you can do a build completely offline of head or any tag. GitHub being down shouldn't make a difference. If the PM is getting in the way of that, then dump it.
Rebuilding the entire toolchain of your language sounds like a colossal waste of time when you just wanted to avoid a few hours per year of developer downtime.
I guess you are referring to the recent incidents with Github. At least when Github (or any non-self hosted service) goes down, _you_ don't have to stop doing the task that your user/clients care about and go fix it, you have a whole team dedicated to that, for free.
I don't know about GitHub or GitLab, but in my entire professional career, I think my total downtime due to failures in locally hosted Git repos is zero. Likewise I don't recall ever having a problem due to locally hosted bug trackers or code review systems. This stuff isn't hard, and we've been able to do it reliably for a very long time.
What certainly has wasted a horrible amount of my time in recent years is working around build systems and package managers that are so badly designed that not only do they have a dependency on some online repository in the first instance, they also make it difficult or impossible to download and cache those dependencies in a supported way so that you can have 100% reproducible builds with nothing but locally hosted resources. Surely this is just about the most basic requirement for a robust software development process?
Since you're looking to collect anecdata, I've had 100% uptime from my self-hosted GitLab instance, which has been online for about a year. GitHub may be on par with that, but it's hard to beat 100%.
I self-host my GitLab, and when there's an update it goes down for several minutes (well, actually I don't know if it's unusable, I haven't tried, but with the backup and the updates it takes a while).
I'm the only one working on it, so that's not a problem for me.
I must have transposed "unexpected" downtime in my head. Yes, I update the box and installation periodically, and yeah, that is time that my GitLab environment is not available, so I guess it's not 100%, but like you I am the only person using it, so it's effectively 100%
You should generate your own electricity too! Sarcasm aside, every client will accept "Amazon is down, millions of programmers are affected". And then I get to head to the pub while the only finger you can point is at yourself. And then cost. I don't know your hourly rate, but an hour of fixing something myself pays for a year of most services out there.
GitLab. I know it has its problems in hosted form, but I stood GitLab up on a Linode about a year ago and have had zero problems with it. Since then, I've grown increasingly dependent on it, starting to use GitLab CI for some basic automation around things like my blog, or managing my Chef environment. I started out by standing it up as a test, and am considering strongly reducing my GitHub footprint in favor of it, if only for the "free" private repos (yeah, I'm paying for a server, so it's not free). I think having GitLab stood up in my life will actually open me up to other stuff that I hope to find in this list, too.
Does Plex count? If so, Plex. I love it, and don't remember how I lived without it.
Fossil, by the one and only D. Richard Hipp, is a superb, dependable, easily deployable tool. I use it for all sorts of projects.
The executable is one file, the repo is another (an sqlite one). Things really don't come any easier than that.
It's basically self-hosted Dropbox, with clients for all major desktop and mobile OSes. I set it up for a little team project. Just one account, and a shared folder where people with a password could upload. I think we will move to individual accounts at some point.
But it supports much more. It has a calendar similar to Google Calendar and I've switched to it. It also has plugins for image galleries, contacts, LibreOffice in the browser, collaborative editing like EtherPad, and so on. I was very sceptical, but it is really well done.
Thank you! I was not ready to make the jump to self hosting my own Dropbox alternative until I saw what a polished program you and your team put together. Great work!
Same set up here, and it's very easy to install and use. Non-technical users have no problem using it either as the desktop and mobile sync clients are intuitive and reliable.
I have mine configured as a public website with most data protected by the inbuilt encryption, and I use EncFS directories to sync more sensitive data across machines. Some things require client-side encryption and this is easy to achieve.
After getting tired paying GSuite/GMail $5/mth per user I figured it's time to get my own email server running again.
Runs on a single $10 Linode instance, pretty easy to set up, super-easy to maintain, does a great job making your emails _not_ end up in the Spam Folder.
Having been through the "hassle to host your own email server" experience, several years ago, I was very skeptical about doing the mailinabox.email setup.
I was very pleasantly surprised, and have been hosting email for one of my domains on it for almost 4 years now. It was very easy to set up, and has been very easy to maintain.
I know this is a stretch but do any other HN users have a Turris Omnia router running Pi-hole? I had Pi-hole running on a Raspberry Pi no problem, but now that my router should be able to run it I'd love to learn how.
The only reference I found online is a single deleted blog post. Can someone please point me in the right direction?
What you can do, which pi-hole does, is modify your local hosts file and redirect ad URLs to 0.0.0.0. I just set it up that way, just as simple if not easier.
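If you go the hosts-file route, generating the entries from a blocklist is trivial. Here's a rough sketch (the domains are examples; real blocklists are much longer):

```python
# Sketch: turn a domain blocklist into hosts-file lines.
def to_hosts_lines(domains, sink="0.0.0.0"):
    """Map each blocked domain to a non-routable address, deduped and sorted."""
    return [f"{sink} {d}" for d in sorted(set(domains))]

blocked = ["ads.example.com", "tracker.example.net", "ads.example.com"]
print("\n".join(to_hosts_lines(blocked)))
```

The output lines just get appended to /etc/hosts.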
Single, large machine with 28GB of RAM, Intel i7 @ 4.2GHz running in my bedroom closet, connected to a high-speed 1Gbps/down network. Whole thing runs Ubuntu 16.
It basically runs almost every service I use:
- Plex. I tried to use XBMC, but Plex just kills it with their mobile app, since I can just continue watching on the iPad. It's like having your private Netflix.
- OwnCloud, a self-hosted Dropbox/Google Drive. I keep my password database (KeePass) here and it nicely syncs across my devices. I also store non-essential photos there.
- cgit, a simple Git server. I used to run GitLab, but this is much more elegant and simple for archiving repos and hosting my personal repos; I don't need much.
- OpenVPN Server, in case I am in a country where there are certain restrictions to what I can access and what not. Also useful in case I need to access some stuff I don't expose over the internet.
- Henk, my personal home-automation system. I've automated various parts of my home, such as lighting, air conditioning, heating, roller blinds etc. It's a bit too exhaustive to outline here. In short, some micro services hooked up over Kafka. I have multiple instances of those services running in some VMs on Google Cloud in case something happens to my home-server. This is probably the most important piece of software I have running. It's fully automated, so if it goes down, I'll lose the comfort of the AC turning on when I am on my way home.
- Camera security system. I used to work at a camera security company. I run their software to monitor my home.
- Transmission, torrent client. I've written some scripts for post-processing downloads. When a movie finishes downloading, it moves it into the right location, looks up subtitles on OpenSubtitles.org and adds it to Plex.
- Nginx + LetsEncrypt for all of that. All of those services have web-interfaces. I run the web servers locally and use Nginx's reverse proxy to expose them on a subdomain. LetsEncrypt certificates for all of it.
I've considered renting dedicated machines, but I don't really feel comfortable not having this on my own servers.
Other tidbits:
- I live in Romania, 1Gbps/down costs about $5/month here. Same goes for electricity, that costs about $15/month for the entire home.
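The Nginx + Let's Encrypt arrangement from the list above boils down to one server block per subdomain. The names and ports here are hypothetical:

```
server {
    listen 443 ssl;
    server_name cloud.example.com;

    ssl_certificate     /etc/letsencrypt/live/cloud.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cloud.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;   # the locally running web UI
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```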
I run my own git server with Gogs, which is written in Go. I set that up 2 or 3 years ago on a $5 DigitalOcean droplet and have never had downtime; I haven't SSH'd into it since the initial setup. I use it all day every day for many projects and have had up to about 5 devs regularly committing to it as well. Zero problems, ever.
There's also Gitea, which is a fork of Gogs after some contributors became concerned with the bus factor, very slow feature development, and occasional disappearance of the maintainer of Gogs. I haven't used it, but that's probably what I'd try first now.
I'm hosting this thing [1] to have a search-by-image capability among my images. Here's a blog [2] which describes how to install it and how I wrote a Common Lisp client for it.
Fossil, yes. Yes, yes, yes.
Citadel and Webcit, oh dear, if only. I use it for purely in-house stuff, and would love to expose outwardly. Alas, it quotes my passwords back to me in plaintext - game over, as far as I'm concerned.
I always wanted to have the habit of taking lots of notes, but I didn't really like having to carry around a physical notebook. I set up etherpad and now I constantly use it to take notes. It has saved me so many times!
Pretty good! I've had boxes with more than 4 years of uptime there. Hardware is solid, the network is nice (at least for the EU), and when it comes to price it's one of the best.
Yeah, I'm not surprised. I stopped using OVH around 5 years ago over this and other stuff (they changed prices and options of an offer while I was already subscribed). My day-job company is still using them, but we're moving to Online because the support at OVH really, really sucks.
Technically it's an email client but since it works in your browser anyway, you can run it on a server as a personal webmail. It can work with a local MTA or regular accounts at other providers for which it provides automatic configuration with ISPDB. It supports all the basic functions and GPG.
One thing that hasn't been mentioned here yet is Emby.
Awesome home media library solution. I finally broke down and bought the lifetime license so I can download media to my tablet and watch it offline.
But even without the license the software is rock solid and amazing.
Other than that my list resembles other lists.
pcengines apu for home router
gitlab (I actually found that gitlab was overkill for personal use so I either use gitlab.com private repos or just git+ssh at home)
nextcloud for family pics
siptrack for password and inventory management
kodi
openvpn to access my LAN
I have a KVM hypervisor at home with a homebuilt NAS for setting up test and PoC environments virtually.
The NAS is Fedora+ZFS+iSCSI with one 4x2.5" SATA 5.25" bay in an external cradle connected over eSATA, and one internal 5.25" bay with 6x2.5" SATA disks. All disks are 1TB, in two separate zpools with raidz.
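For flavor, creating one of those raidz pools is a two-liner (the device and dataset names are assumptions):

```shell
# one pool across four disks, then a compressed dataset for VM images
zpool create tank raidz sdb sdc sdd sde
zfs create -o compression=lz4 tank/vms
```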
I've tried in vain to find an open-source equivalent that has TeamCity's world view regarding snapshot dependencies and VCS triggers, but so far my search has left me empty-handed. That, plus the wide variety of niceties it bakes in (manually cancelling a build that was running for much longer than usual? It will automatically offer to add a note to that build that it was cancelled because it was hung. Etc.) has made it hard to use anything else. The search continues...
There are barely any other programs in existence with quite as many tiny features, let alone open source and for the same purpose.
For those who don't know, they give out $free licenses to open source projects so it's a viable and more customizable alternative to TravisCI for your next Github adventure if you want to try it out.
It's even $free for closed source with <= 20 build configurations. I'm sitting on... 19
Cloudron https://cloudron.io it basically gives you curated docker packages and does all the setup automatically. I use it for gogs, WordPress, email, meemo notes
I've started to use QueryClips for all of my simple querying and sharing. It's got some advantages over Heroku Dataclips, like the ability to invite your colleagues, support for MySQL, etc.
nginx (HTTP), Prosody (XMPP) and uMurmur (Mumble voice-chat) stand out as being very easy to set up (single configuration file, good documentation) and having no failure modes that I'm aware of.
nginx deserves particular mention for handling a HN frontpage crowd on a single-core VM without even blinking.
InvoicePlane is a superb tool that I regularly use for writing, sending and tracking of the invoices that I send to my clients: https://invoiceplane.com/
For news reading: miniflux (I hacked up the CSS to make it legible)
I plan to start self hosting a copy of five-filters rss, which scrapes full text from rss feed articles. It is basically the ultimate ad blocker / AMP replacement.