My £60 ARM server (ewanleith.com)
141 points by EwanToo on May 23, 2012 | 71 comments



Personally I wouldn't trust a microSD card for any production role beyond being a bootloader. You'll be paged one day, make a time-consuming visit to your co-lo, and find that all your data is irretrievably gone.

The solution to this is to have one "real" PC acting as a NAS and serving iSCSI. You would have to divide the extra cost of this PC across all the ARM servers you have.
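On each ARM box the client side would look roughly like this with open-iscsi (the portal IP and IQN are placeholders, not a specific setup):

    # discover targets exported by the NAS, then log in
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node -T iqn.2012-05.lan.nas:arm01 -p 192.168.1.10 --login
    # the LUN then appears as an ordinary block device (e.g. /dev/sdb)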


Micro SD cards are notoriously bad for server usage; they're designed to store infrequently changed files for a while, like an mp3 collection, cameraphone snapshots, apps, etc. Going with a brand name might help.

In order to extend their lifespan, you might want to minimize writes by disabling some logs and the "last access" updates on files (`noatime` for "no access times").

Some examples: http://tombuntu.com/index.php/2008/09/04/four-tweaks-for-usi...
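A minimal sketch of the `noatime` tweak, assuming the root filesystem lives on the usual SD partition (device name and filesystem type are just typical values, adjust to taste):

    # /etc/fstab - mount root with noatime to avoid a write on every read
    /dev/mmcblk0p1  /  ext4  defaults,noatime  0 1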


Instead of disabling the logs, you can put them on a tmpfs. You'll lose them if the machine crashes or reboots, but it's still better than not having them at all.
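For example, a rough sketch of the fstab entry, assuming you're happy to cap the logs at a small size (the size value is arbitrary):

    # /etc/fstab - keep /var/log in RAM; contents are lost on reboot
    tmpfs  /var/log  tmpfs  defaults,noatime,size=32m  0 0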


I run a Dreamplug pretty much 24/7 as my "homebrew" backup and NAS solution; the SD card is used to boot and then mount an external RAID enclosure connected via eSATA, which holds the real data. Works quite well, in a fraction of the space that a "real" PC (or even a Mac Mini) would need.


I love these Marvell plugs, my only gripe is that the community wiki is just spam now :(

What site do you get your OS images from?


Usually from links on http://www.newit.co.uk/forum/


You could also get an "industrial grade" SD card; they claim a lifetime of years instead of months. They're usually using SLC Flash and (possibly) more advanced wear leveling compared to consumer cards.


I wouldn't count on that; Google's study of hard drive lifetime/durability indicated that "enterprise" hard drives had almost identical lifetimes and performance characteristics to consumer gear (barring, of course, stuff like 15k RPM drives).

In other words, "enterprise" is more or less exactly the same stuff, but at a premium. Price discrimination strikes again.


In the Flash case there's a big physical difference between MLC and SLC. You're probably about as likely to get struck down by firmware failure in both cases, though.


You're probably right, it's certainly the component I'd be least surprised about if it failed.

I do nightly backups off the box, and if the machine does die, I'll post down a newly configured micro-SD card for insertion before doing much investigation.


SD is my main concern with these hacker boards; I mentioned it in a previous item. They shouldn't really be relying on this sort of basic flash memory, but I know why: it's accessible and cheap, plus many uses won't be 24/7. Time will tell, but it feels like a time bomb that will affect people who don't know about it (i.e. those who don't know about configuring these devices for minimal writes).


"[...] ARM servers are a big part of the future [...]"

Has anyone actually compared x86 and ARM doing some sort of comparable task, and measured how much power they both consume performing the same task? I know ARM uses a lot less power than x86, but if it uses a tenth of the power yet takes 20 times as long to complete the same task as an x86 server chip, it still uses more energy overall. I'm not saying it isn't more power efficient than x86 (it certainly has that potential, because of all the old cruft in x86), but I just haven't seen any evidence that supports this claim.

Or am I missing the point? Is power efficiency not the main reason for ARM servers?


In many server applications you're not CPU-bound but rather IO-bound. In those applications using the lowest-power CPU makes sense. These are the applications ARM seems to be targeting first. Examples: the $5/month static + PHP-only website hosting market, front-end load balancing, static file serving, etc.

The other possible target is embarrassingly parallel workloads where you really just want to cram as many cores into a given space as possible. Usually the limit on how many cores you can put in a rack is either power or cooling, not space.

Where you probably won't see ARM in the near term is on workloads that are highly single-threaded and performance critical. I think that's exactly the type of situation you're describing.


See Google's paper 'Brawny cores still beat wimpy cores, most of the time'. It concludes that 'Slower but energy efficient “wimpy” cores only win for general workloads if their single-core speed is reasonably close to that of mid-range “brawny” cores.'

http://research.google.com/pubs/archive/36448.pdf

HTML version at http://bit.ly/KOhENi


ARM boards are generally more power efficient for any given compute unit you want to benchmark, but it's not a huge advantage.

And note that there are Intel solutions in the "cheap, low power" market too. There are ~$80 Cedar Trail boards (I forget the part numbers off-hand) which take 2GB of DDR3 memory and run on 15W or so at full CPU utilization.


It all depends on the workload, but one public comparison (from the vendor and in a press release, but with some figures):

http://www.low-latency.com/blog/cantor-evaluating-calxeda-ar...


The Ubuntu team was talking at SELF a few years back about using ARM clusters as web servers because of the low cost, high efficiency, and high core density you can achieve with the low-heat ARM chips.


I've also been using a small ARM server (SheevaPlug) for about two years, and it easily holds up to VPS hosting. It runs everything I would run on a "normal" server (Lighty, Dovecot IMAP, etc.), but needs nearly no power in comparison.


The SheevaPlug is cool, but it seems like the newest ones have gone a slightly different route and thrown everything you could possibly want at it - wifi, bluetooth, audio, etc?

That's all nice stuff, but just adds to the power and price for what should be a dirt cheap, almost disposable, piece of equipment.

Though I would definitely appreciate gigabit ethernet on the beaglebone...


You can disable most of the extra stuff... I think all the options make these devices much more flexible.

If you wanted a dirt-cheap, more modular component you'd be looking at Arduino or Raspberry Pi, not a plug.


The AM3359 SoC on the BeagleBone supports (dual) gigabit Ethernet, but the PHY in use is 10/100 only.


I think so too - you normally don't need wifi etc. on such a device. The one I use is the original first one and I wouldn't use the newer ones. If I remember correctly they even need active cooling...


I built a couple of ARM "servers" out of PandaBoards (dual core + 1 GB RAM) and a self-made case (laser-cut):

http://www.flickr.com/photos/rmoriz/sets/72157629535790241

http://www.thingiverse.com/thing:18645

They're running Linaro, the Ubuntu ARM development fork by ARM/Linaro.org.


Whoa! It's like a micro-server and a makerbot mated. Nice!



Another site that is unreadable on a mobile device because the options to share the content are hiding the content itself.

Is there any browser, or browser extension, to remove that... stuff? Maybe an adblock filter, a Greasemonkey script, or something similar?


Someone else pointed that out; it's not good, is it?

I've disabled the AddThis bar while I look into it (the caches might keep it alive for a while though), hopefully there's a suitable solution.


I plan to do a very similar thing with my Raspberry Pi.


I want to replace my router with a Raspberry Pi, but it seems impossible to buy one.


Just to make you feel bad, I've just been notified mine got shipped and I'll get it soon :)

They are getting sent, just slowly.


Yeah. I've been on the waiting list for months :(


Anyone else blocked by the stupid CloudFlare scam page claiming that not having 3rd-party cookies enabled means you have a virus on your computer, and preventing you from reading the article?


Huh? I browse constantly with third party cookies disabled in Safari and Chrome and never have a problem. I can see that site perfectly. And that's certainly not how CloudFlare decides that your machine is likely to be infected.

What are you seeing when you visit the site?


I was browsing with Firefox with 3rd-party cookies disabled, and I saw this message a few times. What irks me and triggered my kvetching is that I "played the game" and submitted the captcha 3 times and it still didn't work. Twice in Firefox and once in Safari in private browsing mode.

Here is a screenshot, with my I.P. address at the bottom:

http://imgur.com/n9xFG

It looks scam-y too with the "google pays me $x a day" and "automated profit package" ads.

P.S.: I remember seeing it on a blog post on jgc.org at least once, as well as on joyoftech a while ago, but not today.


You are not seeing that page because your cookies are blocked.

You are seeing that page because CloudFlare believes your IP address is behaving badly, for whatever reason. ( see https://www.google.com/webhp#q=202.72.107.83 )

Instead of outright blocking all traffic from known bad IP addresses, they have a mechanism to let actual users go in. That mechanism relies on a captcha flow, and on setting a cookie in the user's browser to bypass the IP block.

Disclaimer: I am inferring all of this from your screenshot. CloudFlare's actual process and intent may vary.


Thanks. I'll take this up with the folks in San Francisco and see why you were seeing that.


Thank you for looking into it.


I didn't know Cloudflare was that harsh, that's frustrating, sorry about that :(


I had to read your fine post from the Google cache, missing out on the pretty pictures:

http://webcache.googleusercontent.com/search?q=cache:hUwlhkh...


Hmm, I might need to disable cloudflare then, frustrating really as it's a clever service.


Set the CloudFlare security level to low. I found anything above low results in too many false positives, which ends up frustrating legit users like nico. I don't know why 3rd-party cookies would matter, though.


Cool for a personal server, but there's one large downside—everything's soldered on. Upgrading a server like this isn't really possible. And what if something breaks? Unlike commodity hardware of the past, you can't just take the server out, fix the broken component, and stick it back. Before ARM servers take off, manufacturers will have to make components replaceable.


I think as long as the price remains under $100, the manufacturers will consider the whole device a replaceable part on its own.

It'd be a bit like sending out a component of a hard drive for an engineer to replace: when the drive itself costs $100, it's just not worth the effort for 99% of installations.


Perhaps, but for now, it's cheaper to make a data center out of traditional commodity hardware.

Let's say that a hard drive (or whatever storage device) has a failure rate of 1% over 1 year. With traditional hardware, for every 100 computers you have, you'll replace 1 hard drive. A new hard drive will cost a fraction of the machine's initial price, let's say 1/5. So maintenance costs for storage devices are 1/500 the initial costs per year, maybe a bit more if you factor in the cost of the labor.

Now if you make the same cluster of servers out of ARM hardware, let's say you'll need 4x the number of machines to get the same processing power. That's 400 machines. If the storage devices on these machines have the same failure rate, you'll need to replace 4 machines per year. However, since you're buying a whole new machine, you don't get to pay for just the storage device. It costs you 4/400, or 1/100 of the initial cost, to maintain the storage devices for your ARM cluster per year.

A huge assumption here is that both types of hardware have similar initial costs. So the point here is that it'll always be cheaper to compartmentalize your losses, unless ARM devices are much cheaper than traditional commodity hardware. Intuitively, throwing away a whole machine when something breaks is going to be a lot more expensive than just replacing the broken part. I don't imagine that it's extremely difficult to make parts replaceable on ARM boards, and it would definitely save some money, so I don't see why it shouldn't be done.

(Also, I realize that the Beaglebone uses SD cards, which are cheap and replaceable. But imagine that instead of storage devices, I'd used memory for the example)
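To make the arithmetic concrete, a quick back-of-the-envelope check using the figures assumed above (1% yearly failure rate, a replacement drive at 1/5 of a machine's cost, 4x as many ARM machines):

    # yearly storage maintenance as a fraction of initial cluster cost
    echo 'scale=4; (1 * 1/5) / 100' | bc   # x86: 1 drive per 100 machines  -> .0020 (1/500)
    echo 'scale=4; 4 / 400' | bc           # ARM: 4 whole machines per 400  -> .0100 (1/100)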


I use my two Open Pandoras as build servers. They work just like any other PC in the house, with the difference that they can easily be unplugged and pocketed. Fantastic little machines, and once you get a development environment up and running, they function extremely well.


I'd love to see an Aubrey Isle-like processor based on ARM cores - with many ARM cores sharing a 2 GB pool of fast memory.

The Intel cores on the Aubrey Isle chip are fairly large and take up most of the silicon on the die. An ARM-based design would be much smaller and cheaper to manufacture.

Now that I've said it, I wonder how much more expensive a RAM chip incorporating an ARM core would be versus pure memory. It would be interesting to have "smart memory" that could do things like "sha-384 this range and ping me back when you're done". Assuming other threads are not using that same component for other activities, it could be done basically "for free".


Best thing about ARM? It's cheap, so you can just pick and choose between distributed or shared memory models.


If it had a lot more memory that would be interesting.

Though really you can get a 256MB VPS for £5/month from http://prgmr.com/xen/ and you don't have a setup charge or unreliable hardware.


ARM has strange limits on the amount of memory. The Versatile architecture supports up to 256 MB max. The Versatile Express architecture has a 2GB limit (and no PCI!)

Then you have to remember that ARM is presently 32 bit, so you're going to have trouble going over 4 GB at all, which is currently a very small amount for a serious server. 64 bit ARM isn't going to be widely available for at least another 2 years, and at what cost we just don't know (but it's not likely to be £60/server).


The Cortex-A15 can address 1TB of RAM; it's still 32-bit, so there is the requisite per-process limit. Last I heard, Cortex-A15 SoCs will be available in volume this year.


I was thinking about this as well... maybe not in comparison to cheap providers whose performance is variable, but for £12/month from Linode you can get a 512MB VPS with 20GB of disk space. How does the performance of this ARM box compare to a Linode?


Well, for a ballpark figure:

md5sum of the 3.4 kernel tarball (average of three runs):

512MB Slicehost, 2.2GHz quad-core AMD Opteron 2374 HE = 0.275 seconds

512MB BeagleBoard xM, 1GHz Cortex-A8 = 2.023 seconds
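(For reference, the test was essentially just the following, run three times with the 'real' times averaged - the exact tarball filename is an assumption:)

    time md5sum linux-3.4.tar.gz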


Where was the file stored on the beagle? That could explain a good part of the difference in the results you're getting.


    time dd if=linux-3.4.tar.bz2 of=/dev/null

is 0.7 seconds on the BeagleBoard; Slicehost is more variable but comes down to 0.5 seconds after a few goes (from 1 second initially). I'd expect it to be read from the page cache anyway.

However, I remembered that the BeagleBoard is using BusyBox, and the systems really are quite different from each other (different distributions, for example).

Update: BusyBox md5sum is about 20% slower than regular md5sum.


There have been mentions on the OpenStack mailing list of various experiments NTT is doing with ARM servers. I don't think there is much information public about it yet though.


HP has their Project Moonshot which is a rack server solution using multiple Calxeda ARM servers attached to a main board for power and networking.

[HP] http://h17007.www1.hp.com/us/en/iss/110111.aspx

[Engadget] http://www.engadget.com/2011/11/02/hp-and-calxedas-moonshot-...


On a related note, VIA makes small motherboards with embedded CPUs. They're not as cheap, but they act more like "real" PC hardware, are i386/686 compatible, and use very little power.

I've run a fan-less EPIA board at home for a couple of years, and it's nice. Took some time to find the right board, though.

http://www.viaembedded.com/en/products/boards/


Check out the TonidoPlug2 (http://www.tonidoplug.com/tonido_plug.html). It has 512MB of DDR3 RAM, Gigabit Ethernet, WiFi b/g/n, and a built-in enclosure for a SATA II HDD. You can boot it directly from the SATA drive instead of the internal flash. All the right things you need in a personal server. It also runs Debian Squeeze.


I run a GuruPlug with filesystems on an LVM volume built from e-sata and USB hard drives. It works very well for my personal needs, except for the GuruPlug's crazy heat output. What I hope is that some manufacturer will build ARM boards with loads of SATA controllers. That way one could build very affordable large NASes.
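Roughly, such a volume is built along these lines (the device names are just placeholders for the e-sata and USB disks):

    pvcreate /dev/sda /dev/sdb               # mark both disks as LVM physical volumes
    vgcreate storage /dev/sda /dev/sdb       # pool them into one volume group
    lvcreate -n data -l 100%FREE storage     # one logical volume using all the space
    mkfs.ext4 /dev/storage/data              # filesystem of choice on top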


If someone (looking at you, SanDisk) were to make a range of SD cards for servers and lightweight ARM desktops, that would be of real interest, because the current stuff out there for phones and cameras is of variable quality and is about the worst thing about these new lightweight ARM systems.


Interesting article, but please fix the website for mobile readers: http://twitpic.com/9o9q35

I would have commented on your blog but on Firefox the "about me" floats over the disqus column so I can't.


I've noticed most HN posts from blogs suffer this issue... I was thinking of a "Your Site Doesn't Work On Mobile" notification service...


YES DO IT (I would help if I had spare time, I'll just wish you good luck instead)


Ouch that's ugly isn't it?

It's obviously not good enough. Will see about sorting it, thanks.


"Micro-SD card performance is highly variable, and generally rubbish"

Agree with that. I use a PandaBoard as a build server, and I found it a lot more efficient to use NFS than running off an SD card.

Ewan, have you tried USB HDD instead of SD card?
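The NFS setup is nothing special - a sketch of the client-side mount, with the server name and paths as placeholders:

    # /etc/fstab on the PandaBoard
    nas:/export/builds  /home/builder  nfs  rw,noatime  0 0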


I think that's probably the next experiment. I'm not sure if I'd go for an HDD or a very small USB SSD to keep the power usage down - at the moment it's under 0.5W, and an HDD will be several times that.


Pandaboards run much faster with USB hard drives instead of SD cards. That's what I hear from lots of knowledgeable people, anyway. I've not actually run any benchmarks though.


Is anyone eyeing the new Atom for servers? Thinking about a bunch for a modest low power cluster. http://goo.gl/fWdwd


Very cool, I've been wanting to do something similar with a personal server for some time.


Excellent idea!


Using Marvell's SheevaPlug is better for this. The BeagleBone is really for industrial control, the BeagleBoard is for general-purpose use, while the PandaBoard is for mobile computing. SD cards are made for consumer-class cameras; you can buy industrial-grade SLC SD cards for $50 per 4GB, but at that price I would just go with an SSD.



