Yes, it is viable. I have a Lenovo 3000 N100 (2007) still running as a VPN server while I've been overseas for the last three years. A non-tech person living in that house turns it on for me every once in a while, as I haven't configured Wake-on-LAN or anything similar. The only new internal parts are the 2 GB of RAM and the SSD.
It runs CentOS 7, installed well before Red Hat did the CentOS Stream thing. It has a ZTE MF667 GSM USB modem that I use to receive the SMSes local banks still send for 2FA.
It doesn't have a desktop environment, so I only use it via SSH. It updates its own IP address under a free subdomain on freedns.afraid.org using a cron job.
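For anyone curious, that kind of cron-driven update can be as small as fetching your account's update URL. A minimal sketch; the token is a placeholder, and while the URL shape below matches afraid.org's dynamic DNS convention, check your own dashboard for the real randomized URL:

```python
# Hypothetical dynamic-DNS updater for freedns.afraid.org.
# The token below is a placeholder; afraid.org issues each subdomain
# its own randomized update URL, so substitute yours.
import urllib.request

BASE = "https://freedns.afraid.org/dynamic/update.php?"

def build_update_url(token):
    """Assemble the per-account update URL."""
    return BASE + token

def update(token):
    """Fetch the update URL; afraid.org records the caller's IP."""
    with urllib.request.urlopen(build_update_url(token)) as resp:
        return resp.read().decode()

# crontab entry (assumed): */5 * * * * /usr/bin/python3 /opt/ddns.py
```

The server never needs to know its own public IP; the DNS provider just records whatever address the HTTP request came from.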
I have something better than a UPS: a watchdog for my server. It's a reprogrammed wifi plug that cycles the power off and on if it cannot see the server, or doesn't receive a ping every 5 minutes.
And the server, in turn, can restart the wifi router and cable modem if it cannot ping that selfsame watchdog (and 8.8.8.8) at reasonable intervals.
The server blinks the blue LED (aka "Sininen Ledi") so I immediately know the watchdog is watching and getting its nourishment. Sometimes I boot into some other distro and forget to turn off the watchdog, which is annoying.
That's cool.
I used to have a Raspberry Pi hooked up via GPIO to the reset-switch pins of a mining rig back in 2013. The mining software was all buggy then and would crash the rig every couple of days.
When the pi couldn't ping the rig for 30 seconds, it rebooted it.
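A sketch of that kind of Pi-side watchdog. The pin number, rig IP, and timings are assumptions, and the GPIO touch is isolated in its own function so the decision logic runs anywhere:

```python
# Sketch of a Pi watchdog that pulses a rig's reset header after
# 30 seconds of failed pings. RESET_PIN and the rig's IP are assumed.
import subprocess
import time

RESET_PIN = 17    # BCM pin wired across the reset-switch header (assumed)
FAIL_LIMIT = 30   # seconds of failed pings before pulling reset

def host_alive(host):
    """One ping with a 1-second timeout; True if the host answered."""
    return subprocess.call(
        ["ping", "-c", "1", "-W", "1", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

def should_reset(seconds_down, limit=FAIL_LIMIT):
    """Reboot once the rig has been unreachable for the full limit."""
    return seconds_down >= limit

def pulse_reset():
    import RPi.GPIO as GPIO            # only importable on the Pi itself
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(RESET_PIN, GPIO.OUT, initial=GPIO.LOW)
    GPIO.output(RESET_PIN, GPIO.HIGH)  # short the reset pins...
    time.sleep(0.5)
    GPIO.output(RESET_PIN, GPIO.LOW)   # ...then release
    GPIO.cleanup()

def watchdog(rig_ip):
    down = 0
    while True:
        down = 0 if host_alive(rig_ip) else down + 1
        if should_reset(down):
            pulse_reset()
            down = 0
        time.sleep(1)
```

Driving the transistor/optocoupler that actually shorts the header is a separate hardware concern, as discussed further down the thread.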
Ubiquiti sells a nice piece of hardware that is a WAN reboot plug. Unfortunately, there is zero configurability in how it determines whether the WAN is working. I haven't been able to find another purpose-made piece of kit.
My server is so old (2012) that it has a parallel port. If you connect all the data pins together, they supply enough power to operate a relay switching 220V, so the modem and router can be turned on and off without any fancy hardware.
This is all you need to know: connect all the parallel-port data pins together and wire them to a suitable relay. Preferably a changeover (3-pin) relay, so that wifi-power-on is the default state.
$ cat wifi-off
#! /usr/bin/python2
import parallel
p = parallel.Parallel()  # opens LPT1 / /dev/parport0
p.setData(0xFF)          # all data pins high: relay energized, wifi off
$ cat wifi-on
#! /usr/bin/python2
import parallel
p = parallel.Parallel()  # opens LPT1 / /dev/parport0
p.setData(0x00)          # all data pins low: relay released, wifi on
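Tying those two scripts to the ping checks mentioned earlier, the server-side watchdog might look roughly like this. The hosts, timings, and decision rule are assumptions, and the python-parallel/parport access is isolated in its own function:

```python
# Sketch: power-cycle the router via the parallel-port relay when both
# the watchdog plug and 8.8.8.8 stop answering pings. The plug's IP
# (192.168.1.2) and the off-time are assumptions.
import subprocess
import time

def host_alive(host):
    return subprocess.call(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

def should_cycle(watchdog_ok, dns_ok):
    """Only power-cycle when both checks fail (one flaky host isn't enough)."""
    return not watchdog_ok and not dns_ok

def power_cycle_router(off_seconds=10):
    import parallel               # python-parallel, as in the scripts above
    p = parallel.Parallel()       # opens /dev/parport0
    p.setData(0xFF)               # data pins high: relay energized, power off
    time.sleep(off_seconds)
    p.setData(0x00)               # release relay: power back on

# usage, e.g. from a cron job every few minutes (plug IP assumed):
# if should_cycle(host_alive("192.168.1.2"), host_alive("8.8.8.8")):
#     power_cycle_router()
```

Requiring both hosts to fail avoids rebooting the router just because the plug itself is mid-reboot.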
If you ever pursue such a project, please pay close attention to proper circuit design and safeguards around the relay. Otherwise you risk burning out the controlling pins, or the whole system, due to the reverse voltage (measured in hundreds of volts) that the relay's coil generates when the relay is switched off.
The easiest approach is to use one of the commonly available relay modules sold for Arduino and similar microcontroller platforms, as they already contain most or all of the protection mechanisms. Such a module includes the flyback diode in parallel with the relay coil, while an additional transistor and optocoupler give you complete galvanic isolation between the controlling circuit and the relay circuit.
YouTube has plenty of user-friendly resources with schematics and functional overviews, just search for "relay module" or "relay arduino".
You'd think so, but lately I've found it to be quite handy. Over the last couple of days, the UPS has notified me that the voltage coming from the wall is a bit off, oddly high (250-256V). netdata is nice and even graphs it out to me so I can pinpoint exactly when and how much the voltage deviated from normal.
Happens twice a month, for varying reasons. Mostly because the wifi is too congested in the middle of Helsinki.
At one time it happened every day. That was because of an open Telnet port: crooks, mostly Russian, spawned so many open Telnet processes that the machine choked up.
Recommend the Esprimos. Fujitsu is very honest in their energy-consumption white papers, so you can know exactly what they'll draw.
Also, their mainboards (as far as I can tell) have always had ATX screw holes, so you can put them into larger cases. The problem then is that the PSU is proprietary in both size and connector. They've been doing 12V-only to the mainboard for a couple of years now, converting to the other voltages on the board. But replacement parts are cheap and plentiful on eBay, so don't worry too much.
While modern hardware has better performance per watt, what is the total resource cost of buying a new mini server? It might be relatively cheap money-wise, but we have yet to price in the long-term costs of all the gadgets we make. This old laptop is from 2013 and uses between 10 and 30W (highest when the screen is on).
You don't have to buy new, but ~2018 is maybe the sweet spot for price/performance on the used market. My Fujitsu Esprimo has a 6th-gen i5 and idles at 14W with one spinning disk, one SSD, one spun-down disk, plus an additional NIC.
I'm running an Asus N751JX with 16GB RAM. It works perfectly and is a great addition to my small Raspberry Pi 3/4 army.
But a heads-up for everyone doing this: please, please remove the laptop battery before running it 24/7. Otherwise it's a serious fire hazard. The hardware, and especially the battery, is not designed to run under these conditions.
1) Source? Well, as others have written here, I've heard this from every IT department I've worked with. I also learned it the hard way once and got a swollen battery. That was an old, old laptop (from around 2003, IIRC), and old laptop batteries are a story of their own, but since people want to repurpose old hardware, I think this is a reasonable hint. Linus Tech Tips also brought this up in one video.
Also, the newer your laptop the better it might handle it, but I wouldn't bet on it.
2) Why? From my understanding, the problems are the constant on/off charging and the heat (made worse by faster dust buildup: a fan running 24/7 sucks a lot more dust into the case). Especially if you run heavy tasks over a long period, the hardware will heat up to a level it is not designed for.
I will grant that the lighter the load you put on the hardware, the less all these precautions matter. But my principle is: if something is not designed (and likely not tested) for a use, proceed with caution if you understand what you're doing, or leave it be. So if you want to host your static website on it, you might never get anywhere near a problem. But I still wouldn't recommend it.
Lithium batteries have an approximate 10 year shelf life under optimal conditions. Possibly less when heated consistently by a laptop running 24/7.
If your laptop is already 6-8 years old and you want to keep the battery in as a UPS, it may be worth purchasing a new replacement battery and installing that (if it has not already been replaced).
Most laptops only need a small screwdriver and 10 minutes of work to swap the battery, although you will want to check YouTube/iFixit for your specific model first, just to make sure the battery isn't glued in or otherwise difficult to remove and replace.
Depends on the hardware. I run 2x Dell E6330 as server laptops, and their BIOS has an option to change the charging configuration to "primarily always on AC"; some Latitudes are designed to run docked all the time as "business PCs".
While we're here: turn off Turbo Boost in the BIOS to keep them running cooler with the lid closed. I've found it really helps when tasks get a little bursty.
Being left uncharged is actually pretty tough on a battery; for safety and longevity they're best stored somewhere between 40% and 60% charge (that's why they come that way from the factory).
I ran laptops 24/7 on AC with the batteries acting as UPS for over 15 years. Granted, all of them had removable ones with 18650 cells, so no swelling.
The charging method was standardized a long time ago: batteries are charged to ~96%, then cut off; if they fall below ~94%, they are topped up again.
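That charge/cut-off behavior is classic hysteresis; a tiny sketch with the two thresholds taken from the comment (everything else assumed):

```python
# Sketch of laptop charge hysteresis: charge to ~96%, stop, and only
# resume once the level sags below ~94%. Thresholds from the comment.
CHARGE_STOP = 96
CHARGE_RESUME = 94

def charger_on(percent, currently_charging):
    """Hysteresis: keep charging up to the stop threshold, then stay
    off until the level falls below the resume threshold."""
    if currently_charging:
        return percent < CHARGE_STOP
    return percent < CHARGE_RESUME
```

The gap between the two thresholds is what prevents the charger from rapidly toggling around a single set point.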
There is a significant loss of battery life if you do that for years. I used to unplug some of them and do 2-3 full discharge/charge cycles, that usually helped increase the battery life.
Some laptops use the battery as additional power when running at wattage beyond what the AC can provide.
If you want to be safe with your model, you can try removing the battery while the laptop is plugged in. It should not shut off: the circuits for charging the battery and for powering the machine are separate.
I lost more batteries in storage. They still self-discharge, and even though the cells could be revived, the BMS won't let the pack recharge if it falls below a certain voltage, rendering them dead (unless you find it fun to restore them).
I haven't seen soldered batteries; you must be thinking of RAM. Batteries can be hidden behind the bottom plate but are usually connected by an internal cable, and that's all you need to disconnect. That's true even for MacBooks, iPhones, etc.
>>The hardware and especially the battery are not designed to run under these conditions.
What does that mean?
My laptops run 24x7, not as servers but as docked workstations. I have never considered that a safety risk, and definitely have not considered it outside the norm. At work, for a public-sector client, we have two floors of laptops (a few hundred?) plugged in and running 24x7 (the IT department asks that they be left on overnight for maintenance). Sanity or eco-friendliness aside, this is the first I've heard that a laptop cannot/should not be run around the clock plugged in...
Curious. My understanding is that, when connected to power, most modern devices bypass the battery entirely when it's above some threshold of charge state.
I have some old Dell laptops that most definitely don't do that. Left running on AC for several hours, the batteries get very warm due to the constant trickle charging.
My media server since 2014 has been a 13" 2010 MacBook Pro running Snow Leopard with the built-in Apache server configured to show directory listings. Nothing else is needed... Apache automagically streams mp4s across the network to multiple users flawlessly. I never understood the point of installing a custom Linux media server with a slow, funky eye-candy GUI when any OS + Apache is all that is needed.
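For reference, the whole "configuration" described there amounts to a handful of lines; this is a sketch in Apache 2.4 syntax with placeholder paths, not the commenter's actual config:

```apache
<VirtualHost *:80>
    DocumentRoot "/srv/media"
    <Directory "/srv/media">
        # Options +Indexes is what turns on the directory listing
        Options +Indexes
        Require all granted
    </Directory>
</VirtualHost>
```

Any browser pointed at the host then gets mod_autoindex's file listing, and clicking an mp4 streams it (Apache handles HTTP range requests for seeking).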
A simple setup like that works well for "just" media streaming, but quickly hits a wall if you want more features: stuff like library search, subtitles, chromecast streaming, etc.
No, that's all redundant. Put all your mp4s in one folder. All of them. Your browser can search the directory-listing page that Apache generates. Subtitles are built into the container; they have nothing whatsoever to do with the server. Chromecast streaming? You're kidding me. Chromecast uses the same web technologies as a web server and a web browser; there is nothing novel there. And I already said that Apache streams to as many clients as can be thrown at the server simultaneously, without any extra hardware or software. Oh, you want to stream remotely from your home server? All you need is Apache and an IP address. Please stop giving Google money. Apache is free software.
Yes, yes, we're all very impressed with your techno-asceticism, but also some of us just want shit that Just Works, has a remote control that isn't just a wireless keyboard and mouse, and doesn't require us to actively sysadmin our TVs.
> Oh, you want to stream remotely from your home server? All you need is Apache and an IP Address.
Hey, you know what "Apache and an IP Address" won't do? Automatically re-encode my enormous remuxed Blu-ray rips at a lower bitrate so I can stream them over my comically limited upstream to a criminally throttled hotel wi-fi connection.
You're talking about something else: a remote control. When you're streaming, the device you're streaming to controls the audio level, the display ratio, and what you are watching, not the server. The remote device becomes the remote control. You're talking about a smart TV, or a console attached to a TV. Streaming is a different thing.
Apache will stream your stupid 75GB Blu-ray files without any need to re-encode, and you can watch them remotely so long as the client device can decode them, even across a 10kbit connection. So you go ahead and enjoy your wasted processor cycles and higher electric bill, oh, and watching your content at a lower bit rate and resolution. I'll just stream the large media as-is at full quality, without compromise.
You have to work really really very hard to get Apache not to just work, and it takes a slashdot-level event to get it to stop working.
Meanwhile, whatever thing you bought will be abandonware in less than 2 years. And chances are about equal your thing is just running a crappy wrapper and gui for Apache anyway. So the business end of your thing is probably free, yet you paid for it anyway.
I feel like you're intentionally missing the point.
Your "dump all your videos into a big directory and Ctrl-F the Apache directory index" workflow will necessarily limit you to watching on devices that can navigate to arbitrary URLs and play videos from them. In most cases that means a computer (built-in browsers in "smart" devices, be they TVs, consoles, or purpose-built streaming boxes, will often play videos only grudgingly), and I don't know about you, but I don't want to sit and watch TV episodes or an entire movie at my computer desk with its relatively tiny monitor and relatively shitty speakers. I want to watch them on my big TV with the fancy sound system in the living room. So what would you suggest for this use case? Connect my computer to the TV? OK, great, but the video card won't output surround sound over HDMI, so now I have to futz with a bunch of other connections if I want more than just stereo sound. And once that's sorted, how am I supposed to pause the movie when I have to get up to piss, given that the TV remote can't control my computer?
> Apache will stream your stupid 75GB BluRay files without any need to re-encode, so long as the client device can decode it, even across a 10kbit connection, so you go ahead and enjoy your wasted processor cycles and higher electric bill.
Again, missing the point. Yes, Apache will happily squirt those bytes at whatever requests them, but that will not produce an enjoyable user experience (i.e., one where I can WATCH THE FUCKING MOVIE) over a sufficiently slow connection, and I'm not going to re-encode every file I have at a low bitrate on the off chance I want to watch it away from home.
> Meanwhile, whatever thing you bought will be abandonware in less than 2 years. And chances are about equal your thing is just running a crappy wrapper and gui for Apache anyway.
My Apple TV HD in my bedroom is still receiving updates seven years after it was originally released. The Apple TV 4K in the living room is still receiving updates five years after its release. I run Plex to stream my local media to local (and my own remote) clients, and it is still actively maintained after 14 years; and if Plex, Inc. should go under, there are plenty of alternatives that won't force me to regress back to a computer connected to the TV.
Dude. I get it. It's HTTP all the way down. I know that; I'm a professional software developer and have been Into Computers since I was old enough to read. That's also how I know that "just dump your video files into a directory and let Apache serve them and watch them on a computer"--or, yes, "just connect your computer with the directory full of video files directly to the TV"--is a shitty user experience, which is the point you seem bound and determined not to concede, since you are instead laser-focusing on the literal reading of statements like "I'm not going to re-encode every single video file I have on the off chance I want to watch them away from home" (obviously I meant I'm not going to do that AHEAD OF TIME versus letting my Plex server do it on-demand and stream it to me when I'm using a device not on my local network), or splitting hairs about binaural hearing and the utility of surround sound.
In any case, I yield; you have Won the Conversation; congratulations. I'd give you a gold star, but I don't think HN allows emoji in comments.
Whatever works for you. If you have some aversion to hotlinked directory listings and need to see a glitzed up array of tiny movie posters, go with God. I'm glad your stuff works, if it does, and I cry with you if it doesn't.
I just put all my media in S3: local MinIO and cloud (with presigned URLs). A small web app provides any organization required, with a URL handler to open VLC or a music-player app.
I'm using a 2006-vintage Compaq Presario as a home server running NetBSD. The battery holds up for about 30 minutes, enough for an orderly shutdown. The LCD failed some time ago, so I simply removed it. Mostly I connect via SSH; I can attach a VGA LCD for rare low-level sysadmin work.
I still believe the Mac mini is the perfect home server. You can now get them with 10-gigabit Ethernet. They sip power, so a UPS can run them for ages; there is almost no heat; they can transcode media for Plex; they're cheap and tiny; etc. You can also connect one to a TV for watching media that isn't easily accessed from a smart TV or media player. The only con is either macOS, or the state of Linux support. Besides that, they are incredible. If you need more horsepower, sure, they're not perfect.
According to Apple they idle higher(?) than the 2014 model, though this figure doesn't seem to have changed all that much in a decade. 11W idle was already typical of popular PC builds in 2011, and it seems this is still a common value.
Is there any way to attach 4-6 SATA drives over Thunderbolt? One of my main uses for a home server is as a file share, for both device backups but also for photos etc. I’m still running a Gen 8 HP MicroServer because nothing since has really come close in terms of cost and form factor.
My 2012 i7 Mac mini is still alive and kicking for its intended use, which is basically media playing and launching HomeKit automations. Lots of YouTube content these days, but it saw quite a few torrents and Soulseek downloads in its day. I cannot justify replacing it, even if it is running an unsupported OS.
2006 or 2007 (I forget which exact year) Mac-mini-server guy checking in. Sorry, just felt like bragging about an old computer. It had spinning disks, replaced with 2 SSDs.
Gifted to my dad 5+ years ago, he refuses to replace.
Anecdote: he thought it would be a great idea to clean up certain ports when I gave him the mini, and sprayed WD-40 liberally. Then it was all wet, and I suggested he put it under (2 ft away from) a 60W bulb overnight. He's, well, my dad... so he placed a 100W bulb right next to it and forgot about it for an evening. The back panel was black plastic and it melted onto the mainboard. This thing refused to die, and will probably outlive all of us. Anything Apple post-2012 is another story, at least for me.
I wouldn't go back further than the 8th-gen Intel minis that the M1 replaced, just to get a bit of longevity out of them (including macOS support), and their price is close enough to refurbished M1 models that I don't think the Intel minis are worth bothering with.
Old laptops also make killer home routers. I recently replaced my consumer grade wireless router with an old Thinkpad in a router-on-a-stick configuration + PoE switch + dedicated access point and my home network is better for it in just about every way. Added bonus that it can run a simple webserver and run some light home automation tasks without breaking a sweat.
Of course, the downside is that you lose the nice web GUI for managing port forwarding and such, but if you're SSHing in to adjust iptables on your router, you probably don't need the web interface all that much anyway.
It's probably true that an off-the-shelf router uses less power, but if you already have an old laptop lying around, at what point do you break even on the electricity cost vs. buying more efficient hardware? Especially when the difference is on the order of a few watts.
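As a back-of-envelope answer to that break-even question (all numbers are assumptions; plug in your own purchase price, wattage delta, and tariff):

```python
# Years until a more efficient box's purchase price is repaid by the
# electricity it saves. All inputs are illustrative assumptions.
def breakeven_years(price, watts_saved, price_per_kwh=0.30):
    """Purchase price divided by yearly electricity savings."""
    kwh_saved_per_year = watts_saved * 24 * 365 / 1000
    return price / (kwh_saved_per_year * price_per_kwh)

# e.g. a 200-euro mini PC that saves 10 W over the old laptop:
# breakeven_years(200, 10) -> about 7.6 years
```

At single-digit-watt differences the payback period easily exceeds the useful life of the new hardware, which is the commenter's point.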
A killer solution is to run OPNsense in a virtual machine and adjust the virtual-to-physical NIC mappings when moving between hosts. It's super powerful and has plugins galore (HAProxy, nginx proxy, dynamic DNS updaters, OpenVPN, etc.). I run two VMs in high-availability mode and can take down either virtualization host without any downtime (important with WFH).
A USB 2.0 Ethernet adapter maxing out at a few hundred Mbps would suffice for the WAN side for most people, and the built-in gigabit port can be connected to a switch.
I have a similarly spec'ed mini ITX desktop from the same proc generation-- it's an i3 instead of an i5 mobile proc but roughly the same capabilities. I'm sure it sips a bit more power than the laptop but it works fine as a home server.
I'm always a bit surprised by that old machine. It has a slightly-above-midrange GPU from that period, and the thing can still play newer games at 720p, with low-to-mid settings, without much of an issue. It's certainly not setting any records, but for a 10-year-old machine... it tells me we reached a point some years ago where hardware capabilities were just fine for a wide range of normal activities, and that unless someone is using a machine for gaming, large data sets, or compiling lots of code, they'd probably be just fine with a reasonably spec'ed 10-year-old system.
Most of the time I'm called on to deal with a supposedly aging 3-year-old computer for a family member, the slowdown has nothing to do with the hardware getting old. Usually it's accumulated crap running on startup, combined with having bought the computer with the biggest number attached to its storage (ooh, 2TB I'll never use!) instead of a snappy 512GB SSD.
OpenBSD disables all but the first thread of each core (i.e. SMT) on Intel processors by default. I'm assuming that an Intel i5-3320M (2 cores, 4 threads) is too old to have microcode updates addressing the speculative-execution exploits (Meltdown, Foreshadow, Fallout, ZombieLoad, RIDL, etc.), so disabling SMT/HT might be the most secure thing to do by default.
This script produces a good assessment of Spectre-class problems for a wide variety of CPUs. I know these flaws are difficult to exploit, and many people disable the mitigations because of their performance impact.
I have a 2012 HP EliteBook 8560p, inherited from a work decommission, that I used as my primary PC for about 4 years. Even now it serves as my torrent client for public-domain movies, and as my 'I guess I just need to use Windows' box since I built my desktop as a Fedora KDE machine.
It is heavy, built like a tank, and has all the ports I still expect laptops to have (VGA, cat5, USB-A, optical drive), even when those ports are woefully out of fashion. The only problem is that everything hangs off an old SATA II bus, so even the cheapest SSD maxes out the system bus. She's starting to feel like flying an Excelsior-class starship in the TNG era.
Today's laptops lack optical drives, have lots more 'goodies' like IME and Computrace, have soldered RAM (it about killed me to find a 360-degree-hinge 2-in-1 with AMD and unsoldered RAM; I eventually found one by HP, but it shouldn't have been so hard), and are generally infected with phone-itis, where everything has to be skinny, thin, light! More than one micron thick? Old!
But for all that, said laptop is not my home server, nor are one of my many salvaged 'just in case I need a home server' laptops sitting on my shelf in the computer lab.
Instead, I have a random cubicle-farm Dell SFF PC that runs my home server. It does fine until it randomly locks up, probably an overheated mobo, with symptoms like the ones OP mentioned. I keep it mostly because I don't want to play with the configs again! It's all undocumented and manually configured, and I haven't put in the work to clean it up and move the configs into NixOS or config management yet.
The moral of the story is (I guess) to use config management, or else all of your computer-hardware hoarding may be stymied.
I love this. It's weird that we don't consider how computers becoming more powerful and home internet becoming faster have made personally managed servers a more feasible option. How few of today's servers could replace the servers Spotify used when they started up a decade ago?
Actually, I just put a used HP ProDesk 600 G3 Microtower (Pentium Gold G4560) into operation, and my energy meter shows 0.0W most of the time (which can't be true, but apparently it's something <1W) with Ubuntu + TLP installed and the HDD in standby.
Here are my TLP settings:
TLP_DEFAULT_MODE=BAT
TLP_PERSISTENT_DEFAULT=1
# Change some BAT settings back to AC defaults
CPU_MAX_PERF_ON_BAT=100
CPU_BOOST_ON_BAT=1
CPU_HWP_DYN_BOOST_ON_BAT=1
# Disable options that are undesirable for servers
MAX_LOST_WORK_SECS_ON_BAT=0
WOL_DISABLE=0
DISK_DEVICES="sda sdb"
# Leave disk power management to hd-idle ('PM_ON=on' means PM off, obviously /s)
AHCI_RUNTIME_PM_ON_BAT=on
DISK_APM_LEVEL_ON_BAT="keep keep"
DISK_SPINDOWN_TIMEOUT_ON_BAT="keep keep"
I use hd-idle for spinning down the HDD (sdb) because the shucked WD "White Label"/Ultrastar He10 apparently doesn't honor APM or Standby Timer settings set through TLP, and because udisks2 wakes up the drives every 10 minutes, with no configuration possible. Also, smartd (from smartmontools) shouldn't wake the HDD if it's in standby, but it will reset hd-idle's standby timer, so I tell smartd to run every 24 hours instead of every 30 minutes:
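For what it's worth, the smartd side of that can be sketched like this (the device name and file locations are assumptions; on Debian-family systems the polling interval is a daemon flag, not a smartd.conf directive):

```
# /etc/smartd.conf: monitor the HDD, but skip checks while it's in standby
/dev/sdb -a -n standby,q

# /etc/default/smartmontools: poll every 24h instead of the 30-minute default
smartd_opts="--interval=86400"
```

The `-n standby,q` directive is what keeps smartd from spinning the disk up just to read SMART attributes.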
Probably best to measure power on the DC side of the power supply to be sure it's under a watt...
Measuring AC power accurately is very hard. There could, for example, be something else in your town injecting a bit of power at some other frequency (say 1000Hz) onto the AC grid, which your PSU ends up absorbing (due to the filters on the PSU's input). Regular power meters usually measure only the 50/60Hz component of absorbed and reactive power, and can therefore over- or under-read.
Still good to know some desktops are designed with power consumption in mind!
Actually, that's exactly what I did and the energy meter will gladly measure a 1W LED bulb. That's why I think consumption is <1W most of the time. At the end of the day, everything <5W or even <10W would be pretty great.
BTW, there are very short bursts of 1W~10W every second. And FWIW, after ~24h, the energy meter shows 0.0117kWh. Obviously, these measurements are a far cry from lab precision, but they do give me some confidence that I won't really notice the server on my next energy bill.
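That meter reading is easy to sanity-check: 0.0117 kWh over roughly 24 hours works out to about half a watt of average draw.

```python
# Average power implied by an energy-meter reading over a time window.
def avg_watts(kwh, hours):
    return kwh * 1000 / hours

# avg_watts(0.0117, 24) -> 0.4875 W, consistent with "under 1 W"
```

The second-by-second 1-10W bursts average out in the meter's accumulated kWh figure, which is why the long-window number is the one to trust.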
While there are laptops that accept 3 or even 4 SSDs and could handle this task, those are very expensive, so few people will have one handy.
Older ARM SBCs had many limitations in the number of PCIe, SATA and USB 3 interfaces.
Nevertheless, a much better ARM CPU has now become available for cheaper devices (i.e. in the $100 to $200 range): the Rockchip RK3588 (quad Cortex-A76 + quad Cortex-A55 + triple Cortex-M0).
For now, this is the only ARM CPU with decent speed, comparable to or better than the Intel Gemini Lake Refresh CPUs, except for the NVIDIA Xavier or Orin and the Qualcomm Snapdragon SBCs, which are much more expensive than Intel/AMD CPUs plus motherboards with similar features.
There are at least 5 or 6 companies who have announced boards with RK3588, ranging in size from credit-card size, like Raspberry Pi, to the larger picoITX, and up to the miniITX form factor. Hopefully such boards will be available in the second half of 2022.
Some of these boards have up to 4 SATA connectors, besides an M.2 NVME SSD slot, so they could be easily used as a NAS.
According to the published reviews of such a board, the typical total power consumption (without SSDs/HDDs) is around 5 W, and the peak power, at maximum CPU utilization, around 13 W.
There are a lot of different devices that include ARM cores.
Some of them have actually been designed for TV set top boxes as their primary application, e.g. most of the models from Rockchip. These support a lot of video formats in hardware, even more than typical desktop GPUs from the same year. For example the Rockchip 3588, launched this year and used in many single-board computers that have just been announced, is one of the first devices providing a fast hardware AV1 decoder, besides decoders and encoders for all older codecs.
However, some of these devices with ARM cores have vendor-provided video drivers only for Android, to be used in smart TVs, so it may happen that the device used in your Synology NAS actually supports in hardware more video formats than you can use, and you are limited by the available video driver, which is incomplete.
Not sure what generation you're referring to, but my HP Microserver (Gen 8) has a 17W TDP Intel Xeon cpu (E1220v2 iirc) and an HP SmartArray controller, and HP iLO baseboard management controller.
I don't think that stuff qualifies as "laptop-class".
I have an HP laptop with a 3rd-gen i5, fairly low-powered, repurposed with Linux and Plex as a home server. I use it as a media server, as file storage with hard disks attached, and as the primary Tailscale host for a mesh of personal devices over the internet without a public IP.
I ran a Debian web / webdav server for 12+ years on a 2007 Asus netbook. It worked flawlessly the whole time.
I only gave it up because I wanted something faster than the 100mbps ethernet and I had other computers lying around. Shortly thereafter, I wound up switching to a cloud provider.
Talking about low power, I am currently running a headless HP laptop which I recovered from a friend. This thing sports an i3-5005U CPU (2GHz 2c/4t) and idles at 4W measured at the wall!
Well, it's not powerful by any stretch, but it can easily handle file sharing and torrenting (for my library of Linux ISOs, obviously) while staying powered on 24/7. The fan also turns off at idle, so it's totally silent.
What's funny is that this machine is currently resting on top of my decommissioned home server (Xeon E5-2697v2, 64GB RDIMMs, Supermicro X9) which idled around 100W...
Thinkpads are my laptops of choice (writing this on my X240), but I'd never use any laptop as a home server: if something breaks there are no spare parts, no PSUs to repair, no cards to add for more functionality (I simply don't trust USB for disks) or swap, etc., and the whole machine is not intended to be kept on 24/7 (think about the fan wearing out, not just the CPU).
I could only understand it if one needed to serve files in an emergency, and I did exactly that many years ago: a server died without notice at a workplace where downtime was not an option, so I copied all the documents onto my mini laptop (a Fujitsu Siemens Lifebook P7010, a netbook-sized marvel from well before the word netbook was invented), which became the Samba file server for almost two full days until the server came back up and I restored the modified files.
Building a home server from scratch isn't that hard or expensive; I built mine around an Atom Mini-ITX board that, despite only 4GB of RAM and a fairly weak CPU, does its job managing two ZFS mirrors under XigmaNAS (.org). On the ARM side, there are other interesting products such as the Rock Pi 4 and its associated Penta SATA HAT at radxa.com.
I also hope someone will one day take over development of the Helios64 platform at kobol.io. They ceased development last summer, but the hardware, albeit not yet stable, was very promising.
Old laptops still have uses for media playing, test machines, or lab instrumentation though; a decent audio card can turn any old laptop into a low frequency spectrum analyzer through Jaaa (http://kokkinizita.linuxaudio.org/linuxaudio/), etc.
> if something breaks there are no spare parts, no PSUs to repair or cards to add for more functionality (I simply don't trust USB for disks) or swap etc, and the whole machine is not intended to be kept on 24/7 (think about the fan wearing, not just the CPU).
I've used an X220 and an X260 as home servers for years. In both cases the fan turns off when the machine is idle (TPFanControl can even keep it off under light load). Both machines have two internal storage devices (mSATA + 2.5" and M.2 + 2.5" respectively). Both have gigabit Ethernet. Both can support external SATA storage (via ExpressCard and mini PCIe respectively). Both have battery drivers that allow them to be discharged periodically while plugged in.
Many laptops out there are indeed poor home servers, but the x2XX line certainly isn't.
Oddly specific title. Another 2012 laptop-at-home hoster reporting in, AMA I guess! I didn't have any thermal/fan issues the way OP described, but my laptop is also more powerful (an Asus rather than a ThinkPad), so it was presumably just made to handle more heat.
This was an upgrade in 2019 from an Asus EEE box which had similar power requirements, which was an upgrade in 2013 from a 2001 laptop.
Whether this works for you all depends on your goals, or more and more often, what software you use to achieve these goals:
- Compiling huge software suites like an OS or Firefox as a service is just too heavy.
- Average company website, personal blog, torrent host, personal vpn server, Factorio server, mail server, git server (gitea), and more all at once? Perfectly fine, that's exactly what I do.
Be mindful of thermal throttling, though: the noise (in the living room?) can be a nuisance. I bought an Intel NUC (https://henvic.dev/posts/homelab/) to use as a home server and kind of regret it. It's not a laptop, but it has the small form factor of one.
I don't know about the NUCs specifically, but I think there should be a YMMV disclaimer in this recommendation.
From what I've seen, these kinds of small PCs are basically a laptop in a box, but they usually pack a desktop-class CPU, which means the cooling system can sometimes have a hard time.
I personally have an HP EliteDesk 800 mini, which I initially liked for its compact size combined with room for 2 NVMe drives and one SATA. Boy, do I hate it. Even at idle, the fan makes a terrible noise. Throw any kind of work at it for more than 10 seconds, and it starts to ramp up.
My work ProBook from the same era has a joke of a cooler, yet it's still quieter most of the time. The CPU is noticeably slower (an i5-8xxxU), though for running a random home server it's probably good enough.
Yep, my old laptop worked great but boy did the fans get annoying in my office. Upgraded to a fanless raspberry pi setup and it’s both better performing AND completely silent.
I'm using an Acer laptop that must be 15 years old for pfSense; it never misses a beat, and it has a built-in UPS! I've got a second laptop, a MacBook Air from 2012, running a bunch of containers for Plex, hassio, etc. Works fine. I've got it all in a 19" rack I built out of wood. World's cheapest homelab. Haven't changed it in years.
Funny that I have been trying to get my solar inverter data into Home Assistant and have already burned through 3 Raspberry Pi 3B+ units for some reason. I don't know if it's because of a bad RS-232-to-USB cable or because I'm using a mobile charger to power the Pi itself. I have a 2B, but the Wi-Fi signal on the USB dongle isn't great, so for the time being nothing is working. I don't want to use a full desktop/laptop because of power considerations.
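If the cable is suspect, parsing the serial stream defensively at least turns corrupted reads into dropped fields instead of crashes. A minimal sketch; the text frame format, port, and baud rate here are made-up placeholders, since many inverters use proprietary binary protocols, so check the manual:

```python
def parse_inverter_line(line: bytes):
    """Parse a hypothetical 'V:230.1;A:5.2;W:1196' frame into floats.

    The key:value layout is an assumption for illustration, not a real
    inverter protocol.
    """
    fields = {}
    for part in line.decode("ascii", errors="replace").strip().split(";"):
        key, _, value = part.partition(":")
        try:
            fields[key] = float(value)
        except ValueError:
            pass  # skip corrupted fields, common with flaky USB adapters
    return fields

# With pyserial, the read loop would look roughly like:
#   import serial
#   with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as port:
#       data = parse_inverter_line(port.readline())
```

Feeding `port.readline()` into this means a bad adapter produces partial dicts rather than exceptions, which Home Assistant can then ignore or retry.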
People should really stop using SD cards for anything halfway important with Pis. They have been proven again and again to be ticking time bombs. With newer Pis able to boot from USB just fine, get an SSD even if it costs more.
I slowly ran out of use cases for a "home server" as everything moved to "the cloud" (hand wave); streaming and Apple took most of it away. Keeping a static IP for mail and web got more expensive than Google Apps for Domains & Netlify.
Now I just have a RockPro64 with a 256GB SSD I had lying around running InfluxDB and Grafana. You do need 4GB of RAM and a disk for this and the RockPro64 is better and more available than a 4GB Pi 4. Seriously love that thing.
My Netgate router streams metrics to it. I have a Raspberry Pi running the Ubiquiti management software for my APs and monitoring my UPS with NUT. Telegraf is easy to extend. I do need to scrape metrics from the Ubiquiti APs eventually.
Every once in a while I mess with some other little ARM board and environmental sensors, but that's it.
No fans. Less power than my kids' tablets. Just Ansible for config management and keeping them up to date.
Until I wanted Plex to start transcoding video during the pandemic, I happily used a Mac mini from 2008 as my home server, which itself was an upgrade from the Intel Geode FitPC I had been using for years before that. Like most things, it depends on what you're doing with it.
Yep, a lot of my content at rest doesn't work with the individual Plex clients; what's supported directly varies by client. I have, for instance, quite a collection of old DivX and Xvid content, and I don't think any of the clients support that natively.
I don't understand the need to run home servers in 2022, at least not for most people and most problems.
This is coming from someone who has been running a home server for decades, hosted everything from websites and email, photos, music and movies, and various cloud services like Nextcloud, Gitea, etc.
As a learning experience, it's probably better than anything else, but for day to day work, it's simply not worth the trouble.
NOTHING you can dream up at home will be as safe and secure as a cloud offering from one of the major cloud providers, at least not on a "home budget". We're talking redundant power, redundant internet, server-grade hardware and spare parts, fire/flood protection, physical access control, dedicated security operations, and multi-geo redundancy, meaning you get all of that across multiple data centers.
I've long since abandoned the chores that come with hosting anything from home, and instead moved everything to the cloud. If i need privacy, Cryptomator (https://cryptomator.org/) handles end-to-end encryption. Most other services have been migrated to cloud offerings, which in many cases are free, like GitHub or Bitbucket, and my (static) website runs on Azure for free.
The best part is I'm actually saving money. Before I moved to the cloud, I was running a 4-bay NAS as well as a server running Proxmox; power consumption was around 250W, and even with normal electricity prices here (€0.35/kWh normally, currently around €0.75/kWh), it was costing me about €8.50 every month. That's without the hardware cost, which will easily amount to as much over 5 years.
For comparison, a "Microsoft 365 Family" subscription can be had for about €75/year (€6.25/month) and offers the above advantages on 6 accounts, each with 1TB of storage, so if I were using OneDrive for file storage, I would be saving about €2 every month on electricity alone.
Assuming the hardware costs as much as the electricity (€510 over 5 years), you could instead get a VPS in the cloud, still host websites and more, and still save money.
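For what it's worth, the arithmetic checks out against the figures quoted above (the plan price is as stated in the comment, not independently verified):

```python
# Figures as quoted in the comment above.
electricity_per_month = 8.50                      # EUR, home server power bill
electricity_5yr = electricity_per_month * 5 * 12  # EUR 510 over 5 years

family365_per_year = 75.0
family365_per_month = family365_per_year / 12     # EUR 6.25

saving_per_month = electricity_per_month - family365_per_month
print(f"5-year electricity: EUR {electricity_5yr:.0f}")     # EUR 510
print(f"Family365 monthly:  EUR {family365_per_month:.2f}") # EUR 6.25
print(f"Monthly saving on electricity alone: EUR {saving_per_month:.2f}")
```

So the "about €2/month" saving is really €2.25, before counting the hardware itself.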
That being said, I do still have a small "home server", but its main purpose is to act as a content cache for data stored in the cloud, making it appear local when accessed from the LAN, as well as to make backups (to another cloud) of my cloud data. My firewall is now completely closed, and I sleep better at night :)
This philosophy can be applied to everything, and unfortunately, for me, it's incredibly poisonous. I was very passionate about programming and coding, about that unique property of it where one can bring to life some abstract idea or concept that existed only as thoughts.
Unfortunately "not worth the trouble" mantra infected me and now I'm unable to do anything, most ideas are dismissed easily.
Want to write some kind of software? Makes no sense: it's probably written already, and if not, it will be replicated in the blink of an eye by a more resourceful entity if it's worth anything, probably even open-sourced.
Want to write a novel? Makes no sense: authors with the help of GPT-3 will churn out 20 times more, etc. etc.
I agree that it's easy to dismiss a lot of ideas like this, and the fact that there are so many people out there working on all these ideas can seem a bit daunting.
What has helped me is knowing that I am doing all of this for myself, and because it's fun and interesting to _me_. Yeah, sure, other solutions exist, but then I'd be trading more money for less fun, and that's not... fun.
I hope that you can get your spark back. It all depends on external factors and life events as well and with those I cannot help, but don't lose hope.
> Unfortunately "not worth the trouble" mantra infected me and now I'm unable to do anything
I'm sorry for your loss.
Personally, I was spending 1-2 hours every day making sure servers were running fine, drives were good, backups ran OK, software was updated, and logs were checked. None of those tasks interested me in the least. I've run servers professionally for close to 2 decades, so I get enough of that at work, and considering that I can actually save money by letting someone else do it, for me it was simply not worth it.
Instead, I use my "additional free time" to dive deeper into areas that actually interest me.
Do you exclude NAS functionality? Because multiple TB of cloud storage are definitely not affordable.
Similarly, yeah, you could store movies in the cloud, but streaming them all the time is a huge waste of bandwidth and quite a high load for many home internet connections.
Oh, and I also want my smart home to continue working when there is no internet connection.
It's not just that multiple TB are not affordable; I can also get >1Gbps throughput on my local network and essentially zero latency, whilst my internet connection is at most 1Gbps, effectively a little less, and with more significant latency.
When using the file share for something like Lightroom it makes a significant difference.
If your needs involve handling large amounts of photos/videos for work or hobbies, then a NAS probably makes sense, but if you're just hoarding a bunch of movies/TV shows, does that really need RAID5/6? A single drive would probably be adequate, considering that you can probably "recreate" the contents.
Personally, I use a content cache on my LAN that caches data from the cloud, meaning my data is available at LAN speeds on my LAN but still stored in the cloud.
But my original comment was mostly about self hosting services.
I don't think that's enough to qualify as a privacy measure. Those encrypted files will still be on your account paid through your credit card and authenticated by phone. They know when you connect and how much data you have, download and send.
As for the rest of your comment, I pretty much like having my own stuff and I dislike giving money to hostile corporations like Microsoft and Google.
And what about redundancy? Geographical redundancy? With (major) cloud providers, your data is stored in multiple geographically separate data centers, meaning that even if a data center burns down, your data is still available, with redundancy quickly being restored.
How about malware protection? Most (major) cloud providers offer n days of x versions (OneDrive keeps 30 days of unlimited versions), so if your files end up as encrypted garbage, you can simply roll back to an earlier version. Unless your home server is airgapped, ransomware could still hit it.
You're probably also lacking redundancy in power and internet (though if services are only available on the LAN, that's not a problem), as well as fire protection/prevention and flood protection.
Correct me if I'm wrong, but isn't hosting your own mail these days nigh impossible because if you're not one of the major whitelisted providers everything you ever send will get thrown into spam, for obvious reasons?
It is doable, but certainly not something I would recommend. I quit doing it a decade or so ago.
There are a bunch of hoops you need to run through to keep your server whitelisted, though having a static IP in a non-residential IP block helps a lot. Sadly, most people attempting to self host email will (static IP or not) have a residential IP, meaning you're almost certain to get blacklisted fairly quick. I'd much rather leave that to someone who knows what they're doing.
I'm aware that people self-host email for (supposed) privacy, but they somehow always seem to forget that most email threads have at least two parties, and despite all kinds of privacy measures, you cannot guarantee that your recipient is not using Google/Microsoft/whatever, or that they won't forward it to a person who is. In that light, your (clear-text) emails are not really confidential.
Instead, if you must use email for confidential information, use encryption, or better yet, use one of the multitude of newer "more better" platforms for sending your information. Most new platforms have encryption baked in, though in most cases you're trusting the service provider to handle your keys correctly.
(not sure who downvoted or why btw, even if I disagree on some points your comment is constructive and the conclusion, using something with more modern encryption, makes sense. If only people would comment when they downvote...)
> have a residential IP, meaning you're almost certain to get blacklisted fairly quick.
That's not how that works. Either they block residential IPs or they don't. It's not that they only notice you're hosting residentially by the distinctly green smell of your email and only then ban you.
I've not had much trouble myself with this, only a handful of systems across more than a decade of use by multiple people. From a residential IP.
> hoops you need to run through to keep your server whitelisted
Aside from sending spam, how does it become un-whitelisted? (I'm assuming you're referring to "off blacklists" when you say whitelisted btw, as I've never seen a whitelist implementation).
Typical use of a home server is to lower expenses when abroad or in the wilderness. Nobody here pirates movies (of course), but, for example, watching Joe Rogan's 3-hour YouTube sessions on a phone in Finnmark was quite impossible. You ordered your server to download them with youtube-dl and re-encode the audio at 32 kbit/s.
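The savings are real: at 32 kbit/s even a whole 3-hour session is tiny. A quick back-of-the-envelope, assuming 1 kbit = 1000 bits:

```python
bitrate_bps = 32_000        # 32 kbit/s audio, as re-encoded on the server
duration_s = 3 * 3600       # a 3-hour episode
size_mb = bitrate_bps * duration_s / 8 / 1_000_000
print(f"{size_mb:.1f} MB")  # roughly 43 MB for three hours of audio
```

youtube-dl's `-x --audio-quality 32K` options do roughly this server-side, though the exact behavior depends on your version and on ffmpeg being installed.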
FYI, that wattage is quite a lot; if you get it cheaper by hosting on someone else's server (aka cloud), then they've either got a more efficient server or they don't factor externalities into the price (at €.75/kWh it sounds like you're paying for Climeworks to compensate your CO2 emissions).
> at €.75/kWh it sounds like you're paying for Climeworks to compensate your CO2 emissions
It's due to the country's backup power being based on natural gas, which currently costs about €2/m3 due to the Ukraine/Russia war. During February it hit €1.14/kWh.
Prices are dropping though, with about 85% of our power currently coming from renewable sources.
Everybody makes mistakes. You just need to take a quick glance over on r/datarecovery to see that it's not only Cloud providers.
Just because you keep your data in the cloud doesn't mean you shouldn't make backups. I make local backups as well as backups to another cloud, and data I really care about (mostly family photos) is archived on M-Disc Blu-ray media as well.
Plus: bigcorp mistakes/hacks affect a million customers. You might have a less fancy door lock than the data center next door, or might still have a 1024-bit RSA public SSH key, but how likely is it that someone gets into your email by cracking your lock or key specifically?
I used to run my old laptop from ~2008, with an Intel P8600 CPU, 4GB of RAM, and two of the cheapest Kingston 120GB SSDs in RAID 0, for PostgreSQL dev work. The main reason was that I didn't want to bog down my main desktop when re-ingesting large data sets and then indexing or updating them. It worked very well even though it had SATA II; I guess random I/O on an SSD is no issue even for older hardware. Needless to say, it beat my main desktop's uptime thanks to the battery working as a great UPS. And it could always display htop/iotop on its screen as a nice indication that something was being worked on. Reusing old hardware is cool.
How about... Internet Service Provider pricing for home servers?
Mine in Texas, USA charges about $72 per month to allow me to host my own server and have 1 static IP. That would be in addition to my $60 residential connection, for a total of $132. Pretty expensive for me: $1,584 annually.
Anyone have a better (cheaper) solution?
Granted, with DigitalOcean I pay about $10/month ($120/year) for a tiny server... if I ran my own, the server itself would be much cheaper for the quality (i.e. several GB of RAM & storage space). It's just that the internet service is much more expensive :(
> I was able to work around most of the cooling-related issues by mounting the laptop vertically...
Have you happened to measure the thermal difference from vertically mounting your laptop? I also vertically mounted my home server laptop for the same reasons, but noticed the fans seemed much louder, so I returned it to horizontal like normal. It's as if they were designed so that the horizontal surface attenuates their high-frequency sounds. It's still annoying, but less so. I'm wondering if going vertical again, but throttling the CPU as you've done, is the way to go.
I don't have any measurements other than the general observations that I shared in the post. It should work better than having it lie down on a wooden desk, for example, as that greatly limits the cooling potential. You only have a tiny gap where the air can move in.
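If anyone wants numbers instead of going by fan noise, the kernel already exposes them. A small sketch that reads the standard sysfs thermal zones; which zones exist, and what they're called, varies per machine:

```python
import glob
import pathlib

def read_zone_temps(sysfs_root="/sys/class/thermal"):
    """Return {zone_type: temp_celsius} for each thermal zone found."""
    temps = {}
    for zone in glob.glob(f"{sysfs_root}/thermal_zone*"):
        zone = pathlib.Path(zone)
        try:
            zone_type = (zone / "type").read_text().strip()
            millideg = int((zone / "temp").read_text().strip())
        except (OSError, ValueError):
            continue  # some zones refuse reads or report garbage
        temps[zone_type] = millideg / 1000  # sysfs reports millidegrees C
    return temps
```

Log this every few minutes in each orientation and compare the averages; `sensors` from lm-sensors shows the same data interactively.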
It might be that your fan is old and worn out. As someone who uses older laptops and has given out T420/X230-type machines to family members, I've noticed that sometimes tilting the laptop during use results in vibrations coming from the fan, likely from it being worn out and the blades touching the heatsink and/or plastic.
The ThinkPad T430 is quite repairable in that regard. I've replaced the screen in 20 minutes, replaced the CPU, all the SSDs are easily accessible, and there are plenty of spare parts out there because it's a model that was once popular and bought in bulk by many businesses.
But yes, some models probably don't have the same level of second-hand parts market, making repair difficult.
Otherwise just restore the backup to a 5 year old laptop.
If your laptop server can run for 5 or 10 years, there is a good chance that during that time you'll spill coffee into the keyboard of another laptop, making it a good candidate for the new server.
I have a 12 year old laptop, that has been a server for 8 years.
I have a 2010 MacBook Pro sitting on my network as an AirPrint server for my aging Dell laser printer (purely because the drivers no longer work on modern OSes). It's hacky, but cheaper than buying a new printer, and the damn thing keeps on chugging as long as it's plugged in (the battery is long dead and lasts less than five minutes).
I use an old Wyse thin client that I picked up from eBay for ~£25. The built-in hard drive was something silly like 32GB, so I just boot from an external SSD, which gives me 1TB to play with.
Low power, fanless and reliable.
I have it running home assistant, nightly offsite backups from my co-located servers, some uptime monitoring and various other CRONy bits.
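For anyone curious what the nightly backup bit can look like, here's a minimal dated-tarball sketch (the paths and retention count are placeholders, and a real offsite job would push the archive somewhere remote, which this doesn't show):

```python
import datetime
import pathlib
import tarfile

def nightly_backup(src, dest_dir, keep=7):
    """Tar `src` into dest_dir/backup-YYYYMMDD.tar.gz, pruning old archives."""
    dest_dir = pathlib.Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().strftime("%Y%m%d")
    archive = dest_dir / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=pathlib.Path(src).name)
    # Keep only the newest `keep` archives (names sort by date).
    for old in sorted(dest_dir.glob("backup-*.tar.gz"))[:-keep]:
        old.unlink()
    return archive

# Run from cron, e.g.:  15 3 * * *  /usr/bin/python3 /home/user/backup.py
```

For data too big to tar nightly, rsync with `--link-dest` (or a tool like restic) is the usual next step.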
My Wyse has served me well for years before I got a rack server, and even today it acts as my fallback server as well as a secondary file backup. I've modified the inside to fit a SATA SSD and have external HDDs connected over USB for file storage.
I’ve used laptops for this purpose in the past but I always disliked having my HDDs connected over USB.
I was concerned that all my storage felt active all the time, even when not being accessed; I always worried my drives would burn out faster, but I never had any data to back that up.
I ran a D620 24/7 as a server from 2010-ish to 2019. For some messed-up reason I loaded it with Windows Server 2008 R2 or some crap like that. Never bothered to re-provision it. When I eventually replaced it with 2 RPis, the closet where it lived dropped quite a bit in temperature.
I've been using a 2007 17" MacBook Pro as my main machine at my desk for the last couple of weeks. Honestly, it's great. The 17" screen is still bright and high enough resolution for me (1920x1200), and the CPU is perfectly fine for web browsing and some light VS Code (2.5GHz Core 2 Duo). I have Mac OS 10.13 on it, which is still supported and receiving updates for every app I use. The keyboard on these old MacBook Pros is fantastic as well.
I even loaded up Final Cut and dropped in some 1080p h.264 footage and it edits it just fine.
I think the one good thing about Intel's CPU monopoly all those years is that these old machines are still powerful enough today. That is likely going to change now that CPUs are getting significant performance boosts every year again. A dual core CPU that is usable today I suspect will be unbearably slow for the web in ~5 years.
> A dual core CPU that is usable today I suspect will be unbearably slow for the web in ~5 years.
I'll go out on a limb and say that it's not CPUs that are ageing poorly; it's the web that is getting too bloated for everyone. Hacker News is the signature example of how much utility you can get out of simple HTML/CSS. I really see little need for the behemoths that are Facebook's and Twitter's modern interfaces (which in turn make old laptops feel slow, even when they're not).
Most of those web devs have fast workstations and fiber, so the sites feel responsive to them in testing. It's easy (but incorrect) to assume that your site will run well for everyone if it runs well in your environment.
How? Apple officially supports macOS High Sierra (10.13) on Mid 2010 or newer MacBook Pros. Even this patcher [0] to install it on older machines only supports 2008 and newer Macs.
Same here, but with a 2009 17". And I run Zorin OS (Linux) on it just for simplicity and low overhead, since I'm not really using it graphically. It was my main work machine for years and still plugs along just fine.
Massive is a pretty strong word. Unpatched vulnerabilities, perhaps, but AFAIK most of those are of the "you can potentially read memory if you already have local execution" sort, which is unlikely to matter for an internal server (i.e. not running untrusted code, or likely even being exposed to untrusted clients). Depends on your threat model, of course.
I'm fairly certain that the T430 got a BIOS update for Spectre; there are Intel microcode updates that can be loaded at boot, and other general mitigations for CPU vulnerabilities in the Linux kernel as well.
Nothing's impossible, but it would have to be a pretty targeted attack to exploit the CPU.
That said, it's still a good idea to keep stuff behind a VPN if you don't need to share it publicly with the entire internet.
- I have a scanner that saves scans to a network share. Having a linux-based SMB server on the local network is convenient and privacy-friendly, especially when the disks are encrypted. Doubt you can get privacy like that with a cloud service to which your printer can connect.
- If you follow the 3-2-1 style of backups, having a fileserver at home is convenient for a solid backup regime across all devices.
- You can use a server for serving local media to watch on the TV or any other device
- In some cases, you can use the same server for advanced security, privacy, ad-blocking, and combining multiple internet connections in order to gain fail-over redundancy and increased bandwidth.
I am able to do all of the above with a really simple home server setup [1].
There is no necessary reason to be alive, yet here we are.
I like hosting at home, gives me peace of mind because it's all mine and controlled by me. No latency across LAN and cheap storage (one could say it's at cost price) are other reported advantages.
It's not necessary; sure, I could host my email with Google like the masses and it would be functionally the same, but it's my preference.