The author is not wrong: there is a lot of old hardware out there that is still more than serviceable for many things. But the loaded cost (power, maintenance, space, etc.) of running servers at "home" can be way more than the $10 a month for a droplet on Digital Ocean.
What I'm saying is that the math isn't as straightforward as it might appear.
That said, it is now pretty straightforward to build a 'nano-datacenter' consisting of an enclosure, thermo-electric cooling, a rack, a switch, a UPS, a server providing NAS storage, and a couple (or six) servers providing compute. Plug its PDU into the socket where your dryer used to plug in out in the garage.
I have an old Thinkpad T23, it's about 15 years old. It can run Windows 7 (with aero off) perfectly well, and also stuff like Libre Office. Linux is not so good on it, due to having to play with the graphics drivers, which I've never really got working.
It all falls apart when you want to use the Web. A Thinkpad T23 with a Pentium III CPU just cannot render modern websites at acceptable speeds.
Basically if it wasn't for heavy JavaScript websites, most people could still be using decade-old machines for their day jobs.
Indeed! I used to run various services on my older generation hardware, but I realised that I was spending more than $70 a month on electricity to do so!
It's much cheaper to use a couple of $5 VPSes for services.
The only hardware left at home that is on 24/7 is my NAS, which is because 20TB of storage costs too much in the cloud.
You'll need a domain name to associate with it (and to handle email, etc.), and while technically you need five users to get unlimited storage, at the moment that doesn't seem to be enforced and a single user does get it.
Currently I'm storing ~50TB for $10/month. At this price, and with a 100Mbps Internet connection, I only turn on the NAS for local backups.
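For scale, here is roughly what that much data would cost on ordinary pay-per-GB storage; the per-GB rates below are ballpark assumptions, not any provider's current price list:

    # Ballpark: monthly cost of ~50 TB on pay-per-GB cloud storage.
    # The per-GB-month rates are rough assumptions for illustration only.
    tb = 50
    gb = tb * 1000
    for tier, rate in [("archive-style tier", 0.005), ("standard object storage", 0.02)]:
        print(f"{tier}: ~${gb * rate:,.0f}/month for {tb} TB")

Either way it comes out one to two orders of magnitude above $10/month, which is why these unlimited-storage plans get used this way.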
I use an old HP laptop/netbook for my personal stuff at home. Running on it are private git repos, dnsmasq with an ad-blocking hosts file, and Syncthing to keep my files handy at all times. I force it into powersave mode using cpufreqd and throttle the CPU to the lowest step. I haven't put a power meter on it, but I can't imagine it's using a ton of wattage. If it fails, there are a few others in my office to take its place.
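If anyone wants to reproduce that throttling without cpufreqd, here is a minimal sketch of the same idea using the kernel's cpufreq sysfs interface directly (run as root; the paths are the standard Linux ones, and the available governors depend on your cpufreq driver):

    #!/usr/bin/env python3
    # Pin every CPU to the powersave governor and cap it at its lowest
    # frequency step via sysfs. Rough equivalent of the cpufreqd setup above.
    import glob

    def write(path, value):
        with open(path, "w") as f:
            f.write(value)

    for cpu in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq"):
        write(f"{cpu}/scaling_governor", "powersave")
        lowest = open(f"{cpu}/cpuinfo_min_freq").read().strip()
        write(f"{cpu}/scaling_max_freq", lowest)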
I've been advised to remove and recycle the batteries due to safety concerns. Apparently they're just not designed to stay plugged in and charging 24/7. My laptop's batteries are toast anyway, so it doesn't make a difference to me.
It's not so much that they can't be used 24/7, but that Li-Ion batteries really don't like staying at 100% charge (nor 0% either for that matter).
If you can configure your laptop to only charge the battery to ~80% or so, you'll extend the battery life significantly. The optimal range for long-term storage is ~40-60% charge.
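On Linux laptops whose firmware exposes charge thresholds (many ThinkPads do with a reasonably recent kernel), you can set this from userspace; a minimal sketch, where the BAT0 name and the 75/80 numbers are just example values:

    #!/usr/bin/env python3
    # Cap charging at ~80% via the kernel power_supply sysfs attributes.
    # Only works if the firmware/driver exposes these files; run as root.
    from pathlib import Path

    bat = Path("/sys/class/power_supply/BAT0")   # battery name may differ
    start = bat / "charge_control_start_threshold"
    end = bat / "charge_control_end_threshold"

    if end.exists():
        if start.exists():
            start.write_text("75")   # don't resume charging above 75%
        end.write_text("80")         # stop charging at 80%
    else:
        print("This battery/firmware doesn't expose charge thresholds.")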
What are you using for adblocking? I was thinking about setting up pihole, but I already run bind at home for a private zone and I didn't want to deal with setting up forwarders. Are you integrating adblocking with bind by any chance?
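(The route I've been eyeing is BIND's response-policy zones: generate an RPZ zone from a hosts-style blocklist and point response-policy { zone "rpz.local"; }; at it in named.conf. A rough sketch of the generation step, with placeholder zone and file names:)

    #!/usr/bin/env python3
    # Rough sketch: convert a hosts-style blocklist ("0.0.0.0 ads.example.com")
    # into a BIND RPZ zone that answers NXDOMAIN for the listed names.
    import sys

    HEADER = (
        "$TTL 300\n"
        "@ IN SOA localhost. admin.localhost. (1 3600 900 604800 300)\n"
        "@ IN NS localhost.\n"
    )

    def hosts_to_rpz(lines):
        out = [HEADER]
        for line in lines:
            line = line.split("#", 1)[0].strip()
            if not line:
                continue
            parts = line.split()
            name = parts[1] if len(parts) > 1 else parts[0]
            out.append(f"{name} IN CNAME .")      # NXDOMAIN for the name itself
            out.append(f"*.{name} IN CNAME .")    # ...and for its subdomains
        return "\n".join(out) + "\n"

    if __name__ == "__main__":
        sys.stdout.write(hosts_to_rpz(sys.stdin))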
I used to have an old computer and a NAS, but they were too noisy.
I used the progress in CPUs in a different way: I bought one of the fanless ITX-sized computers and attached a reasonably sized WD Passport to it, which is spun down most of the time (things are mostly served from an internal SSD).
I have similarly evolved my setup. Initially it was a couple of 'midsize' towers and a StoreVault NAS; then the computers were replaced with a couple of Shuttle XPC type 'box' computers; then the NAS was replaced with a much quieter FreeNAS box from iXSystems; then the 'services' system (DNS, NTP, LDAP, Nagios, etc.) was replaced with a couple of small ARM machines (initially Raspberry Pis, but migrated to Odroids for performance reasons). And then the last XPC was swapped out for a fanless 'third height' ITX type system that is designed for home entertainment systems.
Realistically all of this could be built into a single hybrid of disk shelf and blade server box. That is something I've contemplated as the 'next generation' of the technology.
As it turns out, forced air cooling isn't as efficient as water is for carrying heat. As a result the most efficient way to cool a server is to put a water filled heat exchanger in front of it and let the computer's fans draw air through that into the chassis.
And that leaves you with how to dump the heat out of the water, and that is most effectively done (at small scale) with thermo-electric coolers. You can scale cooling very effectively by turning on additional coolers. My design is pretty simple: the reservoir tank is on top of the rack, and there is a heat exchanger on the bottom and one in "front" of the CPUs. As they draw air through the heat exchangers the water warms and rises to the top where the reservoir is, and the chilled water in the reservoir naturally sinks back down to the bottom where the first heat exchanger is.
You can think of it like a rack-sized version of the liquid cooler you can put on an enthusiast PC these days. It's quiet, self-contained, and quite efficient, while still being able to use forced-air-cooled server equipment. The goal is efficient cooling in a temperate climate (the California Bay Area) that is quiet and can use surplus/old equipment originally slated for a data center, without violating any of that equipment's assumptions about the environment it can count on.
> As a result the most efficient way to cool a server is to put a water filled heat exchanger in front of it and let the computer's fans draw air through that into the chassis.
What standard are you using for "efficient"? Thermo-electric coolers are neat, but the last thing they are is efficient from a power-usage standpoint.
If you're doing chilled-water cooling, the only real option to consider is a mechanical heat pump (e.g. the ones in a refrigerator), but even that is silly. Hardware doesn't mind being somewhat warm, and using ambient air for cooling is far more efficient than either.
There's a reason thermoelectric refrigerators aren't really a thing. Peltiers are horrible. Mechanical heat pumps easily achieve twice the efficiency or more.
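Rough numbers to make that concrete; the COPs below are typical ballpark figures, not measurements of any particular unit:

    # Electrical input needed to pump a given heat load out of the loop.
    # COP = heat moved / electricity used; the values here are rough assumptions.
    heat_load_w = 300   # waste heat from a small home rack

    for name, cop in [("Peltier/TEC", 0.7), ("compressor heat pump", 3.0)]:
        input_w = heat_load_w / cop
        print(f"{name}: ~{input_w:.0f} W in, to move {heat_load_w} W of heat")

    # The hot side still has to reject load + input: ~730 W for the TEC,
    # ~400 W for the compressor, versus ~300 W for a plain radiator and fans.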
> that leaves you with how to dump the heat out of the water, and that is most effectively done (at small scale) with thermo-electric coolers
This is the part that sounds strange to me. Water cooling has its advantages, but the heat is almost always dumped from the water by pumping it through a radiator that has fans blowing air through it. What's the advantage of thermo-electric cooling here over the traditional method?
> What's the advantage of thermo-electric cooling here over the traditional method?
1) It is quiet
2) It is easily electronically controllable based on the heat load in the water.
The heat sinks on the thermo-electrics are convection cooled. You could blow air over them, but as you increase the thermal gradient between the heat sink and the air around it, you are able to dump more heat anyway. The only parameter you can vary with a typical heat exchanger is "blow more air"; you can't change the working temperature of the exchanger.
That said, the way power plants cool their water is in big evaporative cooling towers, but since I don't need a humidifier for the lab I stick with just convection cooling :-)
I had something similar to this on my old P4x rig: it had watercooling with the radiators mounted horizontally outside the case. I made a thermal controller for the fans, and most of the time they were stopped or at maybe 40% speed; only under heavy loads did they spin faster. Now I have an air-cooled i7 and the fans drive me nuts.
I've spent a fair amount of time this year with liquid cooling designs, and final parts are coming in to do something along the lines of what you're talking about. Do you have any blog posts/videos walking through this? It sounds really interesting and I'd love to see it.
Sadly, many of these old beige boxes weren't very well designed for efficient low power consumption. Always ask yourself whether a Raspberry Pi-style platform, or even a jailbroken old phone, couldn't run that workload just as well on a fraction of the old banger's power budget. Most often the answer is yes.
Out of nowhere: is there any effort to build/tune the Linux kernel (or a GNU/Linux OS) so that it consumes the least energy possible (besides powertop and the like)?
I really would not say that a 2007 Xeon is in any way comparable to something newer. Something that old is just a waste of watt-hours.
But if you skip forward a few years, a 2012-2013 quad-socket, eight-core Xeon machine that was many thousands of dollars is still quite useful. If you want to lab your own xen or kvm hypervisor stuff on a machine with 32 cores and 256GB of RAM, you can do it for under $1000, if you know where to look and can tolerate an extra 500W of load on your electric bill.
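To put that 500W in dollars (assuming a ballpark $0.12/kWh; your rate will differ):

    # Rough monthly cost of a 500 W continuous load; the rate is an assumption.
    watts = 500
    rate_per_kwh = 0.12
    kwh_per_month = watts / 1000 * 24 * 30          # ~360 kWh
    print(f"~{kwh_per_month:.0f} kWh/month, ~${kwh_per_month * rate_per_kwh:.0f}/month")

So roughly $40-45 a month in power alone at that rate, which is worth weighing against the purchase price.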
I've been planning to buy a used high-end Xeon server circa 2015, harvest the CPUs and RAM, stick them on a workstation motherboard, and add some silent cooling solution. That way I'll have a 20+ core, 128+GB workstation for about 2k euro.
Electricity and heat is no issue for me as electricity is included in the rent of my office, and we currently use electric heaters to keep the office warm...
If I were going to spend $2000 to $2500 USD I would definitely build a new threadripper machine with nvme storage for a development system. But if I had a combination of only $750 to spend, free power, and a place to put something very noisy, I'd totally buy a 4U Dell 32-core server off eBay and use that.
I have a circa 2013 low end Xeon Dell tower server and the power usage is quite low. Certainly much less than 500w.
My entire tech power load, which includes my APC UPS, ubiquiti router and access point, 8 port gigabit switch, MoCA bridge, the above mentioned Dell server, and a Mac mini for remote Mac development, consumes about 100w at a steady state as measured at the outlet with a kill-a-watt meter.
At my current utility rates that comes out to be about $8.62 a month - not free but certainly imo a good investment to have local VPN, data storage, ESX and KVM virtual machine hosting.
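For anyone checking the math, those numbers hang together (about $0.12/kWh implied):

    # 100 W steady state at $8.62/month implies the utility rate below.
    watts = 100
    dollars_per_month = 8.62
    kwh_per_month = watts / 1000 * 24 * 365 / 12    # ~73 kWh
    print(f"~{kwh_per_month:.0f} kWh/month -> ~${dollars_per_month / kwh_per_month:.3f}/kWh")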
My take on this is a small SmartOS cluster running on 3 x Thinkpad T430s - each with the DVD drive removed and two SSDs installed as a ZFS mirror.
The whole cluster idles at 150 watts and each node is backed by its own dedicated battery. Upstream I have about 4 hours on an APC unit which also protects the router and modem from outages.
Do you have any guides you wrote or used that you can recommend? I'm looking at getting SmartOS set up at home as well, but I know I've got a lot to learn.
The main issue with repurposing random x86 hardware is power. This means that anything NetBurst-based is just garbage, but on the other hand a random Core 2-based corporate SFF desktop (which you can get for less than 40EUR) is perfectly serviceable as a home internet gateway/whatever server. You want an SFF desktop because it is designed to be reliable, it is usually cheap, and it is relatively quiet.
Anecdotally, I see companies throwing out Dell and HP 4th-gen Core i5/i7 SFF desktops from 2012-2013, straight into electronics recycling, all the time now. One of those with 8GB of RAM makes a totally fine Linux workstation.
The part comparing "Intel Xeon E3 1320 V6" [sic] with older "Intel Xeon E5620" as having the same memory bandwidth, same TDP, more L3 cache, etc., so having "the same performance characteristics" is just plain crazy.
1. Xeon E3 1320 never existed. Was it meant to be 1230?
2. Does DDR4-2400 really have "the same performance characteristics" as DDR3-800 (even after accounting for having 3 channels instead of 2)?
3. How can a 3.5 GHz processor have "the same performance characteristics" as a 2.4 GHz one (nearly 1.5x the clock)? Add IPC on top of that.
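On point 2, the peak-bandwidth arithmetic alone (channels x transfer rate x 8 bytes per transfer) already answers it, taking dual-channel for the E3 since that's what the platform supports:

    # Peak theoretical memory bandwidth: channels * MT/s * 8 bytes per transfer.
    def peak_gbs(channels, mt_per_s):
        return channels * mt_per_s * 8 / 1000

    print(f"3-channel DDR3-800 : {peak_gbs(3, 800):.1f} GB/s")    # ~19.2 GB/s
    print(f"2-channel DDR4-2400: {peak_gbs(2, 2400):.1f} GB/s")   # ~38.4 GB/s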
3) GHz cannot be used as a speed comparison outside of the same generation of CPUs. A 10-year-old 3.5GHz part gets nowhere near the compute power of a modern-day 2.4GHz one.
For a lot of basic operations, this is often false.
I have an application I'm working on and tested it across three different generations of CPUs going ~10 years back.
The code was written in C and the fundamental steps were to: mmap() two arrays of structs from a SATA 3 SSD, compare the two, and write the intersection to a block of memory. The data size of each array was multiple gigabytes and the operation took more than a full second.
The ~10 year old low powered Xeon L5640 beat the higher-speed ~5 year old Xeon E5-2667 v2, and was only slightly beaten by the 3 year old i7-4790k. In short, they all ran at roughly the same speed.
I would say that this is because of the limits of the SSDs, but then I tested by holding that data in memory rather than disk, then ran the intersection. Same result.
This is a very basic operation that tests the performance of the CPU and memory. And for operations like these (i.e. not using SSE2, AVX, etc.), the performance difference perceived by the vast majority of people on a secondary home machine may be negligible.
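For the curious, a rough analogue of that kind of test (not the original C; the file names and the int64 element type are stand-ins, and numpy keeps the per-element work out of the interpreter so multi-gigabyte inputs are practical):

    # Rough analogue of the workload: map two large on-disk arrays and
    # compute their intersection. Files and dtype here are placeholders.
    import numpy as np

    a = np.memmap("arr_a.bin", dtype=np.int64, mode="r")
    b = np.memmap("arr_b.bin", dtype=np.int64, mode="r")

    common = np.intersect1d(a, b, assume_unique=True)
    print(f"{common.size} shared elements out of {a.size} / {b.size}")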
That really depends on your load. I once feared a huge performance regression when a test took 2 minutes instead of 40s. Turned out one instance ran on an i7-6xxx machine, and the other on an i7-9x0. Normalizing for the different clocks, the newer CPU had 20% better IPC.
Disclaimer: I don't recall exact numbers, but they should be somewhat close to those given.
> I would say that this is because of the limits of the SSDs, but then I tested by holding that data in memory rather than disk, then ran the intersection. Same result.
Wouldn't it then just be testing the limits of your memory?
I think he means effective performance, i.e. if the machine is idle 95% of the time with the slow processor and 97% of the time with the fast one, it really doesn't matter.
The Xeon E5620 mentioned is also from 2010, 8 years ago, not 11. Of course a Nehalem-Sandy Bridge era processor is still relevant. Many people are still using those today, given the Moore’s law slowdown.
Reusing old hardware is all fun and games, but please remember to make regular, offsite (online!) backups of your important stuff. Hardware will fail at some point, and I'm not even talking about the obvious risk of house fires/flooding...
This. Having old hardware running various services in the basement is handy and saves money (unless it uses a ton of electricity), but the second stuff breaks down you will be screwed unless you have a backup somewhere.
I have an old IBM System x3550 M3 running in my attic.
It cost me $50 + a new SSD. It's been great fun to play with. 12GB of RAM and 16 threads, so it can do more than enough for me. I found it a bit slow for game hosting, but for the price I can't complain.
I also have a couple of cheap VPS, but having a server on the same network is much snappier and I can throw large files around with ease.
I love old hardware. I have a Pentium II box I've been upgrading for fun. It dual boots Win98 and OpenBSD from two CF cards, and can serve many contemporary application stacks. It's enjoyable knowing these big old chips can still turn out a lot of functionality even if it's just recreational.
I adore Windows 98. I spent most of my time on 95 but 98, especially SE, really polished the experience on a then-modern OS.
I mostly use 98 for nostalgia's sake: playing old games, testing old software, and observing how things used to be.
I find it fun to firewall it off and connect to the internet with IE and old Firefox versions - finding what works and what doesn't. It helps provide some perspective and entertainment. It's incredible how interoperable Win 98 can still be with your modern devices.
Eventually I'll upgrade the Pentium II to an AMD K6-2 and see how it goes!
My new HDD started having problems in just 1.5 years. I just salvaged an old HDD from my dad's 9-year-old laptop (Dell Vostro) and used it.
This old laptop is what I would go to whenever I was between laptops. It's a C2D with 4GB RAM. I installed Debian with i3 and it was good enough for browsing the internet, doing random coding stuff, reading PDFs, and watching movies (although it would run hot). Also I played a lot of Roller Coaster Tycoon[1] on it. Man, I love RCT!! I just might plug an SSD into it during the next windfall.
Anyone know what happened to Edna?
It hasn't been updated in years, unfortunately, and I'm wondering if it's still safe to run online.
https://github.com/thedod/edna
It looks like the AudioStreamer that the OP listed, but I prefer to play my music with VLC (or whichever 3rd-party player).