Show HN: A low power 1U Raspberry Pi cluster server for inexpensive colocation (github.com/pawl)
175 points by postpawl on July 17, 2021 | 119 comments



I dunno... a Mac mini M1 idles at 7 watts. This cluster does 36 watts idle, 43 watts in normal use, or 48 watts idle with the fan, and probably 62 watts peak load with high fan.

The M1 uses 26 watts under load and 39 watts peak load (the 16GB version with the 2TB SSD, according to Apple). Measured with a Kill A Watt, I've read 4 watts idle and 20 watts full load. So I think the 26 watts is probably the more realistic value.

16GB version: $899. 1TB USB-C SSD: $109. 1U rackmount: $199.

This will give you 16GB RAM and 1.2TB of SSD for $1207. The rackmount can fit two Mac minis. As the purpose was 1U with max 1A/120V, you'd be able to run 3 Mac minis in 1U (or 4 if you make sure they don't exceed 30 watts each).

In terms of performance, the Pi does 295 single-core and 830 multi-core. 5 RPis is still 295 single-core (single-core doesn't increase with 5 RPis!) and 4150 multi-core. The M1 does 1748 single-core and 7660 multi-core. So one M1 gives you about twice the 5-Pi cluster's CPU performance (ignoring the super fast 256GB SSD, the other hardware features the M1 has, and the overhead of running a 5-node cluster).

A rack shelf for 4 Mac minis is $359. If you'd actually put in 3 (because of the 1A limit), you'd have about 6x the CPU performance for somewhere between $3000 and $3300 (depending on the storage you want). All still in 1U.
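
A quick sanity check of that math, using only the rough numbers quoted in this thread (a back-of-the-envelope sketch, not measurements):

    # figures quoted above; treat them as ballpark
    pi_cluster_multi = 4150      # 5x RPi 4, multi-core score
    m1_multi = 7660              # one M1 mini, multi-core score
    power_budget_w = 1 * 120     # colo limit: 1A at 120V
    watts_per_mini = 40          # pessimistic per-mini draw

    minis = power_budget_w // watts_per_mini    # -> 3 minis fit the budget
    print(minis * m1_multi / pi_cluster_multi)  # -> ~5.5x, i.e. "about 6x"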

If power usage can be higher, you could actually fit up to 8 Mac minis (36" depth) at 2.6A @ 120V. The comparison table in the article compares this setup with a Dell using 2.08A ("not at peak?"). But if you assume a max of about 20 watts per M1, you can host 4-6 M1s within 1A.

Also, you won't have to deal with failing SD cards, the heat, the noise, or the wiring mess, and the M1 has many hardware features that an RPi doesn't. You also won't have to deal with the storage, memory and CPU overhead of running something distributed.

Still, it's fun to tinker around with RPis. Although I stopped using them, as storage is really flaky because of the SD cards. Other SBCs offer built-in SSD storage.


Yes, M1 minis are super attractive in this regard. I wonder if it'd be possible to simply run them as Linux servers instead of macOS. They even come with 10G networking. If they boot Linux, they'd make super efficient Linux servers that satisfy many use cases very nicely.


There is a port of the Linux kernel for the M1, so it shouldn't take much longer for distros to be available: https://github.com/corellium/linux-m1


I don't have direct experience, but I'm under the impression that MacOS' hypervisor is pretty good.

Edit: I was thinking of Boot Camp, which requires an Intel processor. The M1 is still in its early days and definitely not the right choice if you want to run existing x86 code on an OS other than macOS.


As an avid Docker user on M1, it's actually not bad. Docker lets you choose between Apple's virtualization and qemu. Apple's performs better in terms of CPU, but qemu is miles ahead in terms of I/O performance. Neither solution is anything to write home about, but it works well enough. Better than a Pi cluster, though? I think it'd be close. If your goal is to run Linux, M1 might not be the best solution for now.

That assumes you’re running ARM images—I haven’t tried emulating x86, but I hear the performance is abysmal.


I was surprised the M1 was only 6x or so the RPI in performance. I thought it would be much more. I guess the RPI 4 is more advanced than I thought.


I use arm servers in production and regularly benchmark them.

On my application the M1 is 11x the speed of a rpi4 in single core performance.

Ampere Altra running at 3ghz is a bit above 4x a rpi4. Single core.

I'm moving to Ampere instances in the cloud.


Multi-core it's more. And this is just the CPU - not the GPU, memory bandwidth and latency, or many of the accelerators.


The 4 and 400 are among the few cheap SBCs I've seen that have a Cortex-A72. That's probably a big factor in the Pi 4/400 punching above their weight.


That's why I hope the Pi 5 gets built-in SSD storage + an M.2 slot and a significant performance boost, to come closer to M1 speeds.


> to come closer to M1 speeds.

The M1 is on 5nm tech while the RPi 4 is on 28nm, i.e. multiple process nodes behind. Still... it could be close depending on how far out the RPi 5 is.


The point is that the Raspberry Pi -- any version -- has always been underpowered and overhyped. Wishing for the next version to be better is nonsense. Pick any recent SBC, and it'll be better bang for the buck. This has been true ever since the first RPi.


I think you underestimate community support. Other SBCs have nowhere near as much community support as the Pi has. There are so many things you can do with the Pi, thanks to community efforts, that it is not worth it (in many cases) to switch to a less supported but more powerful SBC. And there is an ever increasing number of Pi HATs that let you extend the Pi's possibilities for your use case. To get some inspiration, I can recommend https://www.jeffgeerling.com/blog


Other SBCs are starting to catch up now that there are open-source drivers for ARM GPUs. Once all the drivers are in place, the Raspberry Pi is actually an evolutionary dead end because of its VideoCore-based architecture. They'll have to redo the RPi 5 SoC entirely and probably get rid of 32-bit Raspbian.


The parent wasn't comparing RPis to smaller/worse products, I think.


The real power of the pi is not computing prowess but price and access to hardware.


> The point is that the Raspberry Pi -- any version -- has always been underpowered and overhyped.

This sort of comment is extremely short-sighted and totally oblivious to what the Raspberry Pi is for. A Raspberry Pi was always, from the very first day, an inexpensive (even expendable) ubiquitous computing platform aimed at being the infrastructure for educational projects.

Do you want to toy around with a robot? You spend 20 or 30€, add a few actuators, perhaps a camera, slap in some Python code, and you're good to go.

Do you want to toy around with LEDs to make them blink? You spend 20 or 30€, add a few LEDs, slap in some Python code, and you're good to go.

Do you want to toy around with distributed systems? You spend 60 or 80€ on three or four RPis, plug them into a router, and you're good to go.

Do you want to toy around with wifi mesh networks? You spend 60 or 80€ on three or four RPis, configure them, and you're good to go.

Do you want to toy around with a media center? You spend 20 or 30€ on an RPi, configure it, and you're good to go.

What's the common denominator? Enough pocket money to get your hands on an RPi, and you do not care if it gets blown up, because you can get another one just as easily.

Raspberry Pis flatten the learning curve to the point where kids in middle school can go at it without any concern. It's just a computer running Linux that does not have to deal with arcane devkits or resource constraints.

Do you know what it would cost to do the same thing before the Raspberry Pi created this whole sector? Between 300 and 500€ for a single devboard with similar specs.


> You spend 60 or 80€ on three or four RPis,

No. I agree with you that the RPi is a great experiment board, but usually you don't get them for below 30€ (20 for the Zeros). Then you need an SD card (10€) and probably a power supply. Add a case and some electronics if you're fancy, then add 5€ shipping.

I like the RPi, don't get me wrong. But something like an RPi cluster easily ends up in the 150-200€ range, even in its most minimal form. The GP is right that, unless you're going specifically for having a cluster or GPIO, other options simply give you more performance for your money.


> No. I agree with you that the RPi is a great experiment board, but usually you don't get them for below 30€ (20 for the Zeros).

You should really look up those prices because your assertions don't match reality.

I am, right now, looking at the page of a distributor listed on the Raspberry Pi site which sells the Raspberry Pi 3+ for 28€ plus 4€ shipping. Amazon sells 16GB SD cards for 5€, and a charger is really the least of your budget problems.

The same store sells Raspberry Pi Zero W for 11€ a pop, and Raspberry Pi Zeros for 5€.

Hell, a while ago there were magazines giving away raspberry pi zeros with the mag.


I'd like to see that reseller. A German comparison site lists 37+4 Euros for the 3B+ [0] and 39+4 for the 4 2GB [1]. Reichelt, an official reseller, lists 37 Euros for the 3 [2]. Amazon lists 66 Euros for the 4 4GB [3].

[0] https://geizhals.de/raspberry-pi-3-modell-b-a1785657.html

[1] https://geizhals.de/raspberry-pi-4-modell-b-a2081127.html

[2] https://www.reichelt.de/de/de/raspberry-pi-3-b-4x-1-2-ghz-1-...

[3] https://www.amazon.de/-/en/Raspberry-ARM-Cortex-A72-WLAN-ac-...

> Amazon sells 16GB SD cards for 5€, and a charger is really the least of your budget problems.

A 5 port USB charger is 25 Euro: https://www.amazon.de/-/en/Anker-PowerPort-Port-Charger-Sing...

You can go a bit cheaper, but not too much, in order to avoid brown-outs (very possible with the Pi 3/4, but 1A chargers struggle even with the older models).

You can save on the SD cards, if disk performance does not matter, I give you that.

Still, these small amounts add up quickly. When we're talking about a sub-100 Euro project, throwing in 20 Euro for the charger and another 20 for SD cards quickly drains your budget. Even with your pricing, a Pi 3+ cluster would run us (28+5)*3+4+20 = 123 Euros for three Pis, or 156 for four - close to double the original estimate, and without any cables whatsoever.

I don't mean to say the Pi is overly expensive - it's good value for what it is. But these small projects do add up quickly, and when you don't play to its strengths, you end up in a situation where you pay more money for the Pi project than more powerful hardware would have cost you in the first place.


Well said. Yes, the Raspberry Pi family has its foibles, but let's not let the perfect be the enemy of the good. (My main beef: the techbro "let's change your configuration behind your back" shenanigans with the Microsoft repository, but that's a software thing.) They've been upfront about their educational mission since day one, and it's just a bonus that it took off with hobbyists. On the other hand, the widespread hobbyist adoption is undoubtedly a big factor in its success.

If nothing else, it gave ARM board makers a boot in the behind and got them to build boards that don't cost an arm and a leg. We wouldn't have sub-$100 Jetson Nanos if the Pi hadn't blazed a trail.


Which SBC did you settle on? Didn't realise some had built-in SSD storage.


The Odroid H2 has an M.2 slot for NVMe (or SATA SSD, I forget), takes SODIMMs, and has built-in gigabit. For a fully decked out H2, though, it's about $300 or so, which starts to creep into "NUC" territory. The good thing about the H2 is that it's Intel (J5, IIRC), which may or may not matter depending on the use case. For a cheaper option, the Atomic Pi has built-in MIMO wifi/BT, 1Gbit ethernet, a single USB3 port, and <10W guaranteed TDP. Fully loaded, the Atomic Pi is about $55. The H2 can use 12-20VDC; the Atomic Pi demands 5VDC, which makes its PSU more expensive.


Also, the RPi doesn't support suspend (i.e. sleeping in a low power mode). See https://github.com/raspberrypi/linux/issues/1281

But for the 1U colocation use case, the Mini is not relevant. You could start a project for housing it in 1U, I guess - that would be an interesting post.



I'm really hoping a blade like the Uptime.Lab concept [1] makes it to production, because you could cram 15-20 Compute Module 4s, with native NVMe boot, into a 1U chassis, for around 100 ARM cores, plus over 100 GB of RAM and whatever amount of SSD storage you can afford.

I have been in contact with Ivan, the creator of the board, and I'm currently testing his beta '0.6' version. It really makes racking Pis for simple clusters easy, compared to trying to mash Pi 4s into a rack.

There's also the Scalenode [2], but I haven't heard much about it since the first post to GitHub.

[1] https://www.instagram.com/p/CRZMXmRLRma/

[2] https://github.com/antmicro/scalenode


Is there any more content on the RPi rack that's not on Instagram? Seems I can't even view the pictures on their mobile site without signing up :(


He also posts a lot on his Twitter:

https://twitter.com/Merocle


Thank you


In addition to his Twitter (which has some, but not all, of the Instagram pics), I hope to post a video on my channel featuring the board soon. I've been messing with it for a few days. It has some interesting features that I think some people will really like, and that make it a great board to deploy remotely / colocated.


I've really enjoyed your videos about all the new boards coming out for the CM4. I can't wait until it's easier to get CM4s! I checked Newark today and it said they're backordered out to 2022-02-28.


It is a shame that the CM4 with its BCM2711 SoC does not support the optional ARMv8 crypto extensions. They could make a lot of difference in that kind of application.

I always wished for more SBCs on the Marvell Armada 3700 platform but nothing much materialized beyond the maligned ESPRESSObin. Instead it was a race to the bottom as usual.

At least some Rockchip and Allwinner SoCs have them, in exchange for other warts.


People have been terminating session encryption at the load balancer for a couple of decades now. A little heterogeneity isn't a crime.


Been watching progress of the Blade for the last few weeks and am very excited for it to hit Kickstarter soon. My only wish would be for it to include IPMI like functions, or at least remote PSU capabilities (although I guess if you’re doing PoE you have remote PSU).

And Jeff, thanks for your videos, they are always a highlight of my day whenever a new one comes out. Happy to be a patron!


If your wish is not granted and need an add-on kvm, check out one of these:

https://tinypilotkvm.com/

https://pikvm.org/download.html


When are we gonna get some hardware-accelerated OpenGL running on the Pi? Keep up the amazing work!


This is a nice build. One tiny change I'd make, because I'm paranoid: the pokey power switch lever. It might be at eye height when someone is wrestling to get a heavy server onto rails, or similar.

You can get aircraft-like guards that fit over that design of switch [1], or you can get less-pokey rocker or pushbutton switches. (Note that rocker and pushbutton switches can be easier to actuate accidentally, such as when a neighboring server is being racked/unracked, but you can get those switches with guards, too.)

[1] Example: https://www.ebay.com/itm/292009956841


Be aware that the rocker cover is designed to prevent accidentally turning the switch on. You have to leave the guard up while the switch is in the on position.

I think I'd go for a typical rocker switch or guarded rocker if worried about eyes or eyes and elbows.

https://www.digikey.com/en/products/detail/e-switch/R6ABLKBL...

https://www.digikey.com/en/products/detail/c-k/D102J12B215PQ...


You can just mount the switch (or cover) the other direction so the cover stays up while the switch is off but is lowered and guarding while it’s on.


That's valid, but I think that operation makes it more of a hazard, however mild, if you leave it powered off when trying to mount the server.

More likely, it's confusing: the server gets mounted with the cover closed, then the person doing this forgets or doesn't realize that it's switched on by default and plugs it in, at which point it will be immediately powered.

Why not use a less confusing kind of switch, or leave it be with the toggle?


Good points. I should've pointed to an example of other switches, rather than emphasizing an easy mod for the existing switch.


No worries. I think the server would look boss with airplane switch covers.

I found a few images of translucent covers, and I think it would render the cover floppy if you were to dremel out the barrier that holds the switch in the off position, as there's a pretty large coil spring in there.

I looked around some but couldn't find this style of cover without the off-when-closed function.

https://sinolec.co.uk/1348-thickbox_default/translucent-red-...

https://www.atomsindustries.com/assets/images/items/asd1637/...


> when someone is wrestling to get a heavy server on rails, or similar

Don't do that !!!!

Save your back, and reduce the risk of damage to the server and rails... use a server lift.

Seriously.

Trust me, when I made the change from wrestling to lift, it was a real "what the hell was I doing before" moment.

If your employer or facility does not have a lift, do what you need to do to get one. At its most basic, it is, without doubt, a serious health & safety at work issue for your employer or the facility if they don't provide one.


That's if you do it regularly. If you rarely put in servers, there's little chance someone will buy a lift.


> If you rarely put in servers, there's little chance someone will buy a lift.

No excuse I'm afraid. The basic lifts are not that expensive. And you can always rent one, most of the manufacturer resellers have a rental service too.

It's all fun and games thinking you can get away with it because you "rarely do it"... until you or your colleague are off work with a bad back, or you find you've just thrown $10k worth of server onto the floor.

We might be in agreement that a 1U server is feasible without a lift, as long as it is below head height. Although frankly, I still prefer using a lift even with 1U servers, because even if the weight is not a problem, the rails are so much easier to do when you have a lift supporting the server - especially if you are using the moving target that is telescoping rails!

But once you go above 1U or above head height, you're just asking for trouble.


Is there an official name for this device other than "lift"? I'm looking around for prices but I can't seem to find anything here in Europe on eBay, geizhals.eu, or anywhere else for that matter.


No idea. Perhaps you could use a drywall lift (meant for ceiling work)?


Fair enough. And it also sounds like you have plenty more experience with rack installation than most people. I never had an issue mounting, though - maybe my height helps! 6'6".


Great suggestion! Added it to the list of things for V2.


Better yet, a push button and a latching flip-flop?


A latching relay would be perfect for this kind of thing, good current capacity and they're pretty reliable.


I accidentally powered down a SAN once by knocking its rocker switch with my elbow :-)


- redundant power supplies

- redundant ethernet connections

- data across redundant disks

... and all of it bypassed by an accidental elbow :)


This is a cool idea, and I was really interested to see it done, but I wouldn't actually do this. There are a lot of parts here just to get cheap colo, and I would be worried about the failure of components like the USB hubs taking down the cluster.

You can get a Mac mini colocated for $54/mo with 24/7 remote hands, a dedicated IP, remote power management, and gigabit uncapped bandwidth, which is much better hosting than that $30/mo offering, so you're not paying more for nothing. An M1 Mac mini with 16 GB of RAM and a 1 TB SSD is $1300 not on sale. From a cursory Google of benchmarks, an M1 on Geekbench is about 11x more powerful than a Raspberry Pi 4B, so you are getting much more compute.

Also, the 16 GB of RAM is directly integrated with the processor, there is no need to manage multiple nodes, the hardware is battle-tested, it has true gigabit Ethernet, and the people at the data center understand the setup and can repair it for you easily.

I guess what I’m saying is - super cool that you did this (and I have a cluster of pis at home as well to play with, I love them) but if I was going to pay for colo, I would just colo a Mac mini.


Is it possible to run Linux (Debian?) like normal on an M1 Mac yet? Agreed, an M1 Mac mini is great hardware and power for the price. Yeah, it looks like you can rent them for $75/month.



I guess the initial support is not "Linux as normal"? Does any regular install medium work?


> the hardware is battle-tested

It's been out for like 6 months, what do you mean "battle tested"?


> Mac Mini colocated for $54/mo

Interesting, who is the provider for this?


I believe MacStadium pricing is in that ballpark.


MacStadium will rent you an Intel Mac Mini for $59/month, or an M1 Mac Mini for $109/month. They do not offer colo.



I sit corrected, I don't know how I missed that on their page.

EDIT: s/stand/sit/


It's not really something as prominent as the other things they offer.


I've built several 1U servers to colocate. One nice thing when you only have one feed and one static IPv4 address is that you can use IPv6 for management.

For example, here's a server I built several years ago (pre Raspberry Pi 4):

https://www.klos.com/~john/anath.jpg

It has an NVIDIA Jetson TX2 as the main system, plus a Raspberry Pi 3 that provides a serial console and has a relay connected to a GPIO for remotely resetting the TX2. The Pi can be reached directly via a static IPv6 address, so no additional IPv4 is required.
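
The relay trick is simple to drive from the Pi side. A minimal sketch with the RPi.GPIO library, assuming an active-high relay on (hypothetical) BCM pin 17 and a half-second reset pulse - both assumptions, not details from the post:

    import time
    import RPi.GPIO as GPIO

    RELAY_PIN = 17  # hypothetical: whichever GPIO the relay board is wired to

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)
    try:
        GPIO.output(RELAY_PIN, GPIO.HIGH)  # close the relay: hold reset
        time.sleep(0.5)                    # assumed pulse length
        GPIO.output(RELAY_PIN, GPIO.LOW)   # release: the board boots again
    finally:
        GPIO.cleanup()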

I also have a 1U server with an 8 gig Raspberry Pi 4 and two 8 TB drives set up as a mirror. This system is also significantly below 100 watts, both peak and average:

https://twitter.com/AnachronistJohn/status/12876287129810493...


Can you explain your combined use of IPv4 and IPv6? Is the Pi only reachable directly via IPv6, or can IPv4 requests be tunneled through the Jetson to reach the Pi?


I'm astounded to learn that colo goes as low as $30/month. I had no idea.


When I started the build, NextArray was also $30/month for 5 IPs. They've increased their prices recently, though. You might occasionally find deals on the WebHostingTalk forums.


Yeah I was really surprised by that as well. Unfortunately, that doesn't seem to be the case where I live. 90 € / month (before tax) for 2U with 4 IPv4 and 100Mbps "best effort".

You can almost rent a small office for that price and just throw the server in there. I've run the numbers and if you need the standard 3U worth of space (a 2U storage and 1U compute server), you're better off renting an office and a residential 600/300Mbps fibre connection.


Where are you based? In a capital city that doesn't sound right but I haven't run the numbers like you have.


It's not a capital city, and we've had a decline in business lately (even before covid, but especially now). 150€/mo gets you a 17m² office pretty much in the centre, and there's currently a listing for 15m² outside the centre for 100€/mo. Both of those are too small for most things, so there isn't much of a market for them (if all you need is a desk and computer, most people can work from home).

Power is fairly cheap as well, and you can get a very good Internet connection for 40€ or so. It would probably be a bit over the ~200€ for colocation, but you have an office in a secure building, not just two servers in a rack somewhere, so it might be worth it.


90€ a month is a fraction of the rent I was paying for a room in a shared house in 1999 in a cheap city. A 90€ a month office must be in Southeast Asia or somewhere like that.


This is really neat, and I appreciate how much work the author did to describe the build process in so much detail.

A couple of general questions:

1. Wouldn't compute modules allow a higher density of Pis? I appreciate that the power management would be trickier, though. I don't know whether there are backplanes available that would address this.

2. Do colos commonly accept self-built kit like this? With single-strand wire and electrical tape on the GPIOs, etc.

(No disrespect intended to the author. As I said, I think this is a great piece of work.)


> 2. Do colos commonly accept self-built kit like this?

My (UK) colo agreement says:

    In exercising the rights granted in this Agreement, the Customer agrees to maintain the Apparatus to a standard which ensures that at all times it is safe and complies with all applicable health and safety standards.
So I guess it depends on how you interpret those kinds of provisions. Personally I wouldn't choose to risk installing a homebrew design, because the colo provider would likely attempt to hold me liable for consequential losses if it started a fire.


That's kind of what I thought.

> Personally I wouldn't choose to risk installing a homebrew design, because the colo provider would likely attempt to hold me liable for consequential losses if it started a fire.

Even if a fire occurred and your kit didn't start it, it might be difficult to prove that. And thinking about what happened at OVH, the other colo customers might pile in too.


We put a dishrack with GPUs ziptied to it in our colo, and they let it go for several months, then decided it was "a fire hazard" - probably the power usage was the issue more than anything else. If you have a cage/rack(s), what's the difference?


I went through an exercise like this on paper, but it stalled out because the provider had a clause that only "one device" is allowed per 1U. This kind of density violates that clause.

I've also been disappointed in the stability of Pi 4s under load. Mostly I/O issues. I would only try again with PCIe I/O. No more USB for me.


It's worth figuring out what your colo provider's power usage policy is. Mine was pretty lax (nominally 0.5A at 240V, but I had a PowerEdge 1950 there which would definitely draw more at startup!). I think they were mostly concerned about average rather than maximum current draw, and they certainly wouldn't power anything off if I drew a bit more current...


Note (for anyone interested but new to colocation facilities) that they're trying to fit within a 1A power limit (and maybe short-depth rack limit). If anyone wants to build a bigger SBC cluster in a 1U height, you might be able to go 3x to 4x as deep.

Though, one downside of the full-depth hobby servers is that they're less flexible for also using at home, since they don't fit nicely in most furniture (and the fans tend to be crazy-loud). In the past, I've wanted this flexibility for evolving needs, and when I kept a cold spare of a deep 1U colo server at home (in case it needed to be run over to the colo to swap in for a hardware failure).


For home use cases, an even lower-budget option is to just give up on full-depth server racks, buy a bigger power/Internet box, and throw the Pi(s) in instead. That way you don't need a chassis or many of the other components listed in the article. And since these boxes are usually installed outside the "living space", the fan noise should not be a big concern.

I have a super low-budget setup for my two SBCs (not Pis) in my parents' place [0]. It's been there for more than 3 years now, running a Docker cluster. I'd say it's fairly stable for non-production use.

Of course, the box in my case does not allow fan installation, so the cluster runs fairly hot sometimes during summer. Currently it runs around 60°C (140°F) under light-to-medium load. In the winter, the temperature can drop to as low as around 20°C.

[0]: https://imgur.com/a/bfBlUb4


Some Intel NUCs look to be 38mm tall, short enough to possibly fit in a 1U rack at 44.5mm. There's also the Minisforum for Ryzen, or the Odroid H2+.


Yes, the slim -DNKE NUCs do fit into 1U, three of them next to each other; you can find mounts like these to do that: https://www.myelectronics.nl/us/nuc-minipc-19-rackmount-kit-.... If you have 3U to work with, you can fit 12 of them.

For the thick (-DNHE) NUCs, you either need 1.5U for three of them, or you can fit 8 of them into 3U.


Colocation providers seem to charge an additional $5-10 per Ethernet port. That’s part of why I ended up cramming all the Pis into the 1U and using the network switch. There’s a similar rack mount for Pis, but I would have needed to pay for the additional Ethernet ports monthly.


That makes sense. BTW, fun project write up! I'm torn between doing the colo or just upgrading to 1G fiber at home.

To run 3-4 NUCs, I'd use 2-3 120V-to-14V DC modules in parallel. Then if one power supply failed, you'd have redundancy! They run off 12-19V nicely, and you can source switches that run off 12-19V as well. It's even possible to put in a LiFePO4 battery for backup; I'll probably do that at home to skip the UPS. All in all, you can have a decent server cluster with top-of-the-line redundancy. Well, and stick a Pi 4 in for remote management.

I use Nerves for flashing my remote devices, then use Docker or containers to run the software. One project I wanted to toy around with (and it'd work with Pi 4s) would be a secure cluster setup without k8s. There's apparently support in KVM to run a VM with hot-swap (live migration) if you can set up shared storage. It seems KVM supports Gluster (or Ceph?) for shared storage. I'm a bit surprised it's not already "a thing" for small businesses.
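
For reference, the live migration itself is only a few lines with the libvirt Python bindings - a minimal sketch assuming shared storage both hosts can reach, with hypothetical host and VM names:

    import libvirt  # pip install libvirt-python

    # source and destination KVM hosts, reached over SSH (names are made up)
    src = libvirt.open('qemu+ssh://node1.local/system')
    dst = libvirt.open('qemu+ssh://node2.local/system')

    dom = src.lookupByName('my-vm')  # the running guest to move
    flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST
    dom.migrate(dst, flags, None, None, 0)  # disks must be on shared storage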


If you want to play with live migrations, try a cluster running Proxmox. It supports both Gluster and Ceph, and for smaller clusters ZFS (yes, you can live migrate VMs with local disks backed by ZFS, once you set up replication). Also note that distributed filesystems (Ceph at least) want to own an entire block device; thus you need something else to boot and run the system from (which is going to be a problem for the slim NUCs above, which have only a single M.2 slot).

Ceph and Gluster are not a thing for small businesses because, for them to be practical, they want cluster sizes that are beyond the needs of a small business. It is too big a gun for them.


Thanks, I'll have to check out Proxmox. For the NUCs, I've been booting from USB drives using Nerves. It works well! That leaves the entire M.2 for the application.

Nerves targets embedded, but it works perfectly for setting up a cluster of small devices. Using ZFS could be a good method, hmmm.

Ceph and Gluster aren't meant for small offices, sure. But small businesses still spend a lot to set up IT servers, and a system with reliable replication costs even more. So if one of those file systems could provide redundancy, it'd be worth the extra disks. Of course, many IT staff won't know anything about those file systems, much less how to run them. I'd like one just to have a storage cluster for resilient backups of personal data.


Yeah, running in colo has other constraints than running a homelab. In my case, a switch with an appropriate number of ports was just a few Us above, for free ;).

What I was wondering: why an unmanaged switch without PoE (OK, a managed one would need an additional IP address), no PoE HATs, and instead having to play around with relays (doesn't controlling them require an IP?)? With a managed PoE switch, you would get control of PoE and could turn the Pis off and on.
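
For what it's worth, power-cycling a PoE port over SNMP is only a few lines if the switch exposes the standard Power Ethernet MIB. A rough sketch with pysnmp; the OID is the standard pethPsePortAdminEnable, but the switch address, community string, and port number are assumptions:

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, setCmd)
    from pysnmp.proto.rfc1902 import Integer

    def set_poe(switch, port, enabled):
        # pethPsePortAdminEnable (1.3.6.1.2.1.105.1.1.1.3), group index 1;
        # TruthValue: 1 = enable the port's PoE, 2 = disable it
        oid = f'1.3.6.1.2.1.105.1.1.1.3.1.{port}'
        err_ind, err_stat, _, _ = next(setCmd(
            SnmpEngine(), CommunityData('private'),  # assumed write community
            UdpTransportTarget((switch, 161)), ContextData(),
            ObjectType(ObjectIdentity(oid), Integer(1 if enabled else 2))))
        if err_ind or err_stat:
            raise RuntimeError(str(err_ind or err_stat))

    set_poe('192.0.2.10', 3, False)  # cut power to the Pi on port 3...
    set_poe('192.0.2.10', 3, True)   # ...and bring it back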


The NUCs and Minisforum would be fewer cores for the price (4 cores per $400-500), but I'm sure they'd be way faster. Excellent idea.


They're pretty fast, with a few hexacore options. Quite a bit more than an RPi 4, but still under 25W each. There are a few NUCs for $350 or so.


They have a small fan, which is quite annoying when the NUC is under load.

Not a problem in colo, but annoying in homelab. Not a deal breaker, but something to think about.


NUCs are a bit difficult to source right now, but I'd be super interested in coloing ~4 of them + a switch + a NAS of some sort in a 1/2U chassis running k3s.


I have some passively cooled i3-7100u boxes I recently got, each with an M.2 slot and 2 RAM modules, and they idle at around 2W each when booted into Debian.

I've never felt like Pis are that great for power vs performance for most applications, and make more sense in places where you can make use of all that GPIO and compact size.


A super cool project, but it's so easy to self-host a Raspberry Pi at home now with a public IP - even behind NAT and firewalls.

I'm the creator of inlets (self-hosted tunnels - L4/L7)

https://docs.inlets.dev/#/

But Argo Tunnels from Cloudflare, and reverse SSH (for those with too much time on their hands) also work.

For more on a teeny K3s cluster: https://blog.alexellis.io/state-of-netbooting-raspberry-pi-i...


I remember when there were several providers offering "Pi Colo." I had a Pi at PCextreme in Amsterdam for years before they ended the service. This is pretty clever, I like it.


I used to do colo, but surprisingly it’s usually a better deal to pay monthly for an unmanaged server, with standard hardware replacement warranty.


This is only true if you don't need a lot of storage. Stuffing a few U full of hard disks is still cheaper.


How much storage are we talking about? Hetzner storage servers are pretty cost-effective.


Are there any providers similar to Hetzner that are a bit closer to the US? I have a few VPS instances with them, but the high latency is a major bummer.


Well there's rsync.net but they're something like 5x more expensive.


Well, the multi-TB range. But Hetzner has no servers, or competition, in the USA as far as I can tell.


I’m interested to see what breaks first. I’m mainly hoping it’s not the switch or the power supply.


Last time I checked, for the RPi to really be unlocked it needs additional PCIe lanes. IIRC, right now there's just one, and the consumer board uses it for USB. The blade edition makes it available on the connector, but then you have to choose between network and storage. It'd be ideal if there were three PCIe lanes: one for USB, one for network, one for storage.


Or you need a PCIe switch on the base-board, and those are practically impossible to get without a giant NDA.

My kingdom for a PCIe switch chip that's as easy to use as the TUSB series of hub chips.


I think the Pi 4 and above don't have that issue any more? I could be wrong


If you use a 2U and a much deeper system, you could fit them standing instead of lying down. It should be possible to add a huge number that way.

May I also suggest using a small 3D printed part to hold them in place.

How about the Raspberry Pi M.2 shield? Or how about using a 24x 2.5" chassis to support hot-swap drive replacement?


If I were to do a V2, I'd be really interested in Pine64's Clusterboard and the Turing Pi 2. However, the Clusterboard is out of stock, the Turing Pi 2 isn't out yet, and it's very difficult to get Raspberry Pi CM4s right now. Fitting all the Pis on one board would make this way easier.

The 3D printed part, 24x drives, and M.2 shield all sound like awesome ideas. If I had a use for 2U worth of Pis (that would be a ton!), I'd definitely consider it.


I hope your cage is fireproof. Shoving all that consumer gear in a 1U seems… risky.


Why would a 1U be different than a 2U or 6U?


Raspberry Pis are not the right hardware for this. At a minimum you would want an SBC with onboard storage instead of paying this much to run your storage on thumb drives or SD cards.


What is this actually useful for?


You'd better have good liability insurance if someone hosts this for you.


Dear God, those USB controllers are ticking time bombs!

I believe it should be possible to make a sub-120W machine using an ITX or laptop board with more reliable PCIe and SATA connections for the drives.


So much this.

I've used an RPi 4 as a home server and it croaks under load. It burns through SD cards if you go that route. The USB, while better, similarly does not operate well under load, and I've likewise killed USB controllers when I subject them to too many IOPS. This thing is a maintenance nightmare.

If you're just doing a one-off server system, you can pick up older HP thin clients with better specs than a Pi 4 and real I/O for a similar price point.

T630s can be had for around $60 shipped off eBay with 4GB RAM and 8-32GB of M.2 SATA, upgradable to 64GB RAM (the spec sheet says 32GB, but some have made 64 work). The CPU supports ECC, but I haven't tried that yet, and you can add as much storage as you want to stuff into the SATA slots (1 22x80 and 1 22x47). You're not going to be able to cram as many into a 1U, but you will have an actually stable system. I'd put the CPU at the near equivalent of 2 Pi 4s - better if you're doing lots of AES.


With high-speed internet even at home, what benefit is "10 Mbit Ethernet" in today's terms? What kind of projects can you use this speed with? I would be interested to know...

I use a 30 Mbps home internet connection with somewhat dynamic IPv6. I use DuckDNS to get around that, and it works wonderfully. I have a couple of Pi 3B+s and Zeros plugged into the router's own USB ports, which is where they get their power from. They get internet via wifi or a small patch cable. This means that as long as the wifi router has internet, my machines will run. Granted, they are all dangling and flopping around, but still...

I have started getting into self-hosting and other similar stuff, and I find the Pis on the local network with this power work great. I would be interested in this.

Oh, and the next big thing(tm): are there any hosting providers who do managed Pi as a service? Like, using a setup similar to yours - the way Amazon does the Mac mini, does someone do the Pi? You'd get a managed Pi for cheap, and they could host a tonne of users in a small space.


If you have a CDN in front of your website and a good packet filter / firewall / DDoS protection, and you mostly serve cacheable (static) content, then 10 Mbit Ethernet can do more than one might think.
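
Rough math on why, with an assumed ~100 KB per page and a 95% CDN cache-hit ratio (both figures are assumptions, not from the thread):

    # what a 10 Mbit/s uplink can serve as a cache-backed origin
    uplink_Bps = 10_000_000 / 8  # ~1.25 MB/s leaving the origin
    page_bytes = 100 * 1024      # assumption: ~100 KB per page
    cache_hit = 0.95             # assumption: CDN hit ratio

    origin_rps = uplink_Bps / page_bytes       # ~12 pages/s served directly
    client_rps = origin_rps / (1 - cache_hit)  # ~244 pages/s seen by clients
    print(origin_rps, client_rps)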


I ran an image / file hosting service off an old computer in my parents' basement when I was in high school. You can do more than you might think, even with ~1 Mbps upload.



Nice. So you get to use a Pi 4 8GB, which costs £81.90 on Amazon UK, and they charge you 125 for a year of storage, power, bandwidth, and the device. Nice.


Some ISPs are stricter than others about not running servers on residential internet accounts. Sometimes they just plain block ports, especially for email, so they don't have to deal with virus-laden spam relays on customers' PCs. And if you're on cable, you might only be guaranteed 5 Mbit/s upload, and actually using that much all the time would contribute to congestion on your node (shared with your neighbors).



