I find myself mentioning EKS Anywhere to anyone who mentions "vendor lock-in" with AWS EKS. Like, my spidey sense automatically goes "You don't know EKS..." Keep it up! It's extremely useful.
I always liked the Cube form factor. Having the internals so easily accessible is great design.
The LP-179 motherboard is interesting as well. I've been looking for a NUC replacement, but the newer models have lost the small form factor, and this looks like it might be a good alternative.
The Pico-ITX standard is not popular though. It was introduced by VIA way back in 2007, and hasn't had much industry traction. Case in point: I can't find a good case for it. Can someone recommend one? Or maybe I could retrofit my ancient 4"x4" NUC for it...
You might be interested in the LattePanda boards [1] or UP boards [2]. They're cheaper than the LP-179 but also less powerful and customizable. However, they do have standard cases and are more popular than the LP-179.
Thanks! I was familiar with the LattePanda, but was looking for something more powerful. The UP Xtreme looks like it might fit the bill though. It's a shame the NUCs abandoned that form factor, and that the Pico-ITX standard never took off. I just want a SFF PC I can build myself. :)
I had one of these cubes for a few years at a job back then. That latched handle, the smoothness of the extraction of the core, for something that no ordinary user would ever have need to actuate... so good. The static power switch, though, something that every user would use and misuse by accidentally brushing it... so bad.
I can't deny how cute this is, but I just made a 4-node Kubernetes cluster in a drawer with 2x Asus PN51, 1x Asus PN50, and an Intel NUC i3 for MUCH LESS money than $6,000. Jesus...
But it's like saying that you've just bought a Toyota Corolla that runs as fast as a 26-wheel limo [1] while also being rather cheaper and easier to drive.
With so many LEDs all over the place, computation is not the point of the "cubernetes" device.
Yeah, the overwhelming majority of the cost was SBCs that are way overboard for what seems to be the intended use of this thing as a "learning" tool. Could have shaved $4,500+ off the price right in one whack. There are a few other places where some cost could be trivially shaved in that parts list too.
What I like best about this project, besides it being in a G4 Cube, is that it's like a physical metaphor for Kubernetes. A classic monolith was cracked into a bunch of microservices which from the outside doesn't look like any kind of improvement over the original.
Interesting project, though it might be easier to just have 3-4 Dell OptiPlex Micros stacked. I've seen some decent ones go for ~$300 with 9th/10th gen chips and 8-16GB RAM each.
For rackmount, Dell R210 II and later R2x0 seem nice short-depth 1U, sometimes for even less money.
For quieter and less Watt-hungry rackmount in the living room, I ended up moving towards Atom-based ones, currently from Supermicro, with fans replaced with Noctua. Plus a 4U chassis for GPU, with a consumer motherboard (non-ECC RAM).
I recently again almost redid a home K8s production cluster using RasPi 4 8GB (and also looked at products from Pine64 and Turing Pi and others), but product availability is terrible right now. Meanwhile, used x86-64 hardware is easily available, and easily runs official Debian Stable perfectly.
Thanks for the info. I wasn't aware firefox would block the images. I don't have any specific tracking on the images, but I'll look into why that might be a problem.
I have tracking protection set up on the stricter blocklist and that tends to block certain Cloudfront URLs, probably because a lot of them are used for tracking scripts. Others are reporting that the default settings work fine, so it must be because I altered the tracking protection settings.
It's probably not something you should be necessarily worrying about.
That's a pretty cool setup, but $6310 is about as much as I make in 3 months (Latvia), so it's definitely out of my price range. That said, it's pretty great that you can run containers and orchestrators (at least in some configuration) even on Raspberry Pis or other SoC boards.
Right now I just have two local servers that have x86 200 GE's (35W TDP), some value RAM and a bunch of Seagate HDDs because that kind of storage is way cheaper than cloud offerings, and a few VPSes in the cloud, alongside WireGuard.
Though personally I'd most likely go for something boring like Ubuntu LTS, K3s and Portainer/Rancher for management. Actually Docker Swarm also works decently.
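For what it's worth, both of those options come up in just a few commands. A hedged sketch (the server hostname, token, and IP are placeholders, not from any real setup):

```shell
# K3s: one-line install on the first (server) node
curl -sfL https://get.k3s.io | sh -

# Join an agent node; the token lives in
# /var/lib/rancher/k3s/server/node-token on the server
curl -sfL https://get.k3s.io | \
  K3S_URL=https://server-host:6443 K3S_TOKEN=<token> sh -

# Docker Swarm equivalent: init on one node, then run services
docker swarm init --advertise-addr 192.168.1.10
docker service create --name web --replicas 3 nginx
```

Either way you end up with a working scheduler in minutes, which is most of the appeal for a home lab.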
As long as you're settled on the OCI standard, it's a pretty great landscape out there!
> I know there are cheaper ways to build a home lab Kubernetes cluster. My goal wasn’t to build a cheap cluster—see my previous post for a 4 node cluster under $100. [0]
Speaking of which... the author's other post talks minikube - does anyone have any idea what the recommended minimum system requirements might look like for EKS Anywhere? Can't find it on their downloads/docs pages.
I work on EKS Anywhere and am familiar with those options, but my answer will be biased.
EKS Anywhere provides a CLI, packaged Cluster API, and other tools (CNI, GitOps) on top of raw Kubernetes. K8s, k3s, k0s are binaries you have to manage and are similar to EKS Distro [1] which we publish and build on top of.
EKS Anywhere is designed to give you clusters you can manage long term using Cluster API, plus a full suite of tools for how we think Kubernetes clusters should be run based on our experience running EKS. It is a closer comparison to Rancher's RKE or VMware Tanzu for provisioning clusters, but some features and implementation details are different.
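For anyone who hasn't tried it, the basic EKS Anywhere workflow looks roughly like this (a sketch; the cluster name is a placeholder, and the docker provider is just the easiest way to kick the tires on one machine):

```shell
# Generate a starting cluster spec for a local docker-based cluster
eksctl anywhere generate clusterconfig my-cluster \
  --provider docker > my-cluster.yaml

# Create the cluster from that spec; Cluster API does the
# provisioning work under the hood
eksctl anywhere create cluster -f my-cluster.yaml
```

The generated YAML is where you'd swap in a different provider or tune machine counts before creating anything.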
I don't use EKS (I use ECS, so similar) but Fargate is a great feature of both - I don't need to worry about individual nodes or whatever, I can just tell AWS how many I want and it deals with provisioning, patching etc.
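To illustrate the "just tell AWS how many I want" point on the ECS side, a hedged sketch with the AWS CLI (cluster, service, task definition, and subnet names are all placeholders):

```shell
# Run 3 copies of a task on Fargate; AWS provisions and patches
# the underlying capacity -- there are no nodes to manage
aws ecs create-service \
  --cluster my-cluster \
  --service-name web \
  --task-definition web:1 \
  --desired-count 3 \
  --launch-type FARGATE \
  --network-configuration \
    "awsvpcConfiguration={subnets=[subnet-abc123],assignPublicIp=ENABLED}"

# Scaling later is just a change to the desired count
aws ecs update-service --cluster my-cluster \
  --service web --desired-count 5
```

That desired-count knob is basically the whole operational surface, which is the appeal versus managing instances yourself.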
I'd previously thought about using a Turing Pi board for this but never got around to it. Plus of course your limitation of not using ARM makes that a non-starter at the moment (and the official compute modules do not reach this level of performance but the Turing modules get close).