Personally I wouldn't trust a microSD card for any production role beyond acting as a bootloader. You'll get paged one day, make a time-consuming visit to your co-lo, and find that all your data is irretrievably gone.
The solution to this is to have one "real" PC acting as a NAS and serving iSCSI. You would have to divide the extra cost of this PC across all the ARM servers you have.
MicroSD cards are notoriously bad for server usage; they're designed to store infrequently changed files for a while, like an mp3 collection, cameraphone snapshots, apps, etc. Going with a brand name might help.
In order to extend their lifespan, you might want to minimize the writes by disabling some logs and the "last access" updates on files (`noatime` for "no access times").
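For example, something along these lines in /etc/fstab (the device name and filesystem type here are just placeholders):

    # mount the SD card's root filesystem with noatime
    /dev/mmcblk0p2  /  ext4  defaults,noatime  0  1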
Instead of disabling the logs, you can put them on a tmpfs. You'll lose them if the machine crashes or reboots, but it's still better than not having them at all.
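A minimal /etc/fstab entry for that, assuming /var/log is what you want in RAM (the size is just an example):

    # keep /var/log on a RAM-backed tmpfs; contents are lost on reboot
    tmpfs  /var/log  tmpfs  defaults,noatime,size=32m  0  0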
I run a Dreamplug pretty much 24/7 as my "homebrew" backup and NAS solution; the SD card is used to boot and then mount an external RAID enclosure connected via eSATA, which holds the real data. It works quite well, in a fraction of the space that a "real" PC (or even a Mac Mini) would need.
You could also get an "industrial grade" SD card; they claim a lifetime of years instead of months. They're usually using SLC Flash and (possibly) more advanced wear leveling compared to consumer cards.
I wouldn't count on that; Google's study of hard drive lifetime/durability indicated that "enterprise" hard drives had almost identical lifetimes and performance characteristics to consumer gear (barring, of course, stuff like 15k RPM drives).
In other words, "enterprise" is more or less exactly the same stuff, but at a premium. Price discrimination strikes again.
In the Flash case there's a big physical difference between MLC and SLC. You're probably about as likely to get struck down by firmware failure in both cases, though.
You're probably right, it's certainly the component I'd be least surprised about if it failed.
I do nightly backups off the box, and if the machine does die, I'll post down a newly configured micro-SD card for insertion before doing much investigation.
SD is my main concern with these hacker boards; I mentioned it in a previous item. They shouldn't really be relying on this sort of basic flash memory, but I know why: it's accessible and cheap, plus many uses won't be 24/7. Time will tell, but it feels like a time bomb that will affect people who don't know about it (i.e. those who don't know about configuring these devices for minimal writes).
"[...] ARM servers are a big part of the future [...]"
Has anyone actually compared x86 and ARM doing some comparable task, and measured how much power each consumes performing it? I know ARM draws a lot less power than x86, but if it draws one-tenth the power yet takes 20 times as long to finish the same task as an x86 server chip, it still uses more energy overall (see the quick arithmetic sketch below). I'm not saying it isn't more power efficient than x86 (it certainly has that potential, because of all the old cruft in x86), but I just haven't seen any evidence that supports the claim.
Or am I missing the point? Is power efficiency not the main reason for ARM servers?
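To make the power-versus-time point concrete, here's a back-of-the-envelope sketch (all figures are made up, purely for illustration):

    # energy = power * time; drawing 1/10 the power but taking 20x as
    # long still means using 2x the energy (all numbers hypothetical)
    x86_power_w, x86_time_s = 50.0, 1.0    # hypothetical x86 chip
    arm_power_w, arm_time_s = 5.0, 20.0    # 1/10 the power, 20x the time

    x86_energy_j = x86_power_w * x86_time_s   # 50 J
    arm_energy_j = arm_power_w * arm_time_s   # 100 J
    print(arm_energy_j / x86_energy_j)        # 2.0 -> twice the energy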
In many server applications you're not CPU-bound but rather IO-bound. In those applications, using the lowest-power CPU makes sense, and these are the applications ARM seems to be targeting first. Examples: the $5/month static + PHP-only website hosting market, front-end load balancing, static file serving, etc.
The other possible target is embarrassingly parallel workloads where you really just want to cram as many cores into a given space as possible. Usually the limit on how many cores you can put in a rack is either power or cooling, not space.
Where you probably won't see ARM in the near term is on workloads that are highly single-threaded and performance-critical. I think that's exactly the type of situation you're describing.
See Google's paper 'Brawny cores still beat wimpy cores, most of the time'. It concludes that 'Slower but energy efficient "wimpy" cores only win for general workloads if their single-core speed is reasonably close to that of mid-range "brawny" cores.'
ARM boards are generally more power-efficient for any given compute unit you want to benchmark, but it's not a huge advantage.
And note that there are Intel solutions in the "cheap, low power" market too. There are ~$80 Cedar Trail boards (I forget the part numbers off-hand) which take 2GB of DDR3 memory and run on 15W or so at full CPU utilization.
The Ubuntu team was talking at SELF a few years back about using ARM clusters as web servers because of the low cost, high efficiency, and high core density you can achieve with the low-heat ARM chips.
I've also been using a small ARM server (a SheevaPlug) for about two years, and it easily holds up for VPS hosting. It runs everything I would run on a "normal" server (Lighty, Dovecot IMAP, etc.), but needs almost no power in comparison.
The SheevaPlug is cool, but it seems like the newest ones have gone a slightly different route and thrown everything you could possibly want at it - wifi, bluetooth, audio, etc?
That's all nice stuff, but just adds to the power and price for what should be a dirt cheap, almost disposable, piece of equipment.
Though I would definitely appreciate gigabit ethernet on the beaglebone...
I think so too - you normally don't need wifi etc. on such a device. The one I use is the original first one and I wouldn't use the newer ones. If I remember correctly they even need active cooling...
Anyone else blocked by the stupid CloudFlare scam claiming that not having 3rd-party cookies enabled means you have a virus on your computer? And prevented from reading the article?
Huh? I browse constantly with third party cookies disabled in Safari and Chrome and never have a problem. I can see that site perfectly. And that's certainly not how CloudFlare decides that your machine is likely to be infected.
I was browsing with Firefox with 3rd-party cookies disabled, and I saw this message a few times. What irks me and triggered my kvetching is that I "played the game" and submitted the captcha 3 times and it still didn't work: twice in Firefox and once in Safari in private browsing mode.
Here is a screenshot, with my I.P. address at the bottom:
Instead of outright blocking all traffic from known bad IP addresses, they have a mechanism to let actual users go in. That mechanism relies on a captcha flow, and on setting a cookie in the user's browser to bypass the IP block.
Disclaimer: I am inferring all of this from your screenshot. CloudFlare's actual process and intent may vary.
Set the CloudFlare security level to low. I found anything above low results in too many false positives which ends up frustrating legit users like nico. I don't know why 3rd party cookies would matter though.
Cool for a personal server, but there's one large downside—everything's soldered on. Upgrading a server like this isn't really possible. And what if something breaks? Unlike commodity hardware of the past, you can't just take the server out, fix the broken component, and stick it back. Before ARM servers take off, manufacturers will have to make components replaceable.
I think as long as the price remains under $100, the manufacturers will consider the whole device a replaceable part on its own.
It'd be a bit like sending out a hard drive component for an engineer to replace: when the drive itself costs $100, it's just not worth the effort for 99% of installations.
Perhaps, but for now, it's cheaper to make a data center out of traditional commodity hardware.
Let's say that a hard drive (or whatever storage device) has a failure rate of 1% over 1 year. With traditional hardware, for every 100 computers you have, you'll replace 1 hard drive. A new hard drive will cost a fraction of the machine's initial price, let's say 1/5. So maintenance costs for storage devices are 1/500 the initial costs per year, maybe a bit more if you factor in the cost of the labor.
Now if you make the same cluster of servers out of ARM hardware, let's say you'll need 4x the number of machines to get the same processing power. That's 400 machines. If the storage devices on these machines have the same failure rate, you'll need to replace 4 machines per year. However, since you're buying a whole new machine each time, you don't get to pay for just the storage device. It costs you 4/400, or 1/100 of the initial cost, to maintain the storage devices for your ARM cluster per year.
A huge assumption here is that both types of hardware have similar initial costs. So the point here is that it'll always be cheaper to compartmentalize your losses, unless ARM devices are much cheaper than traditional commodity hardware. Intuitively, throwing away a whole machine when something breaks is going to be a lot more expensive than just replacing the broken part. I don't imagine that it's extremely difficult to make parts replaceable on ARM boards, and it would definitely save some money, so I don't see why it shouldn't be done.
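A quick sketch of the arithmetic above, using the same hypothetical numbers (and the same assumption that both clusters cost the same in total):

    machines_x86, machines_arm = 100, 400
    failure_rate = 0.01       # 1% of storage devices fail per year
    drive_cost = 1 / 5        # a drive costs 1/5 of one x86 machine

    # x86: only the failed drives get replaced
    x86_fraction = (machines_x86 * failure_rate * drive_cost) / machines_x86   # 1/500
    # ARM: a failure means replacing a whole (soldered) machine
    arm_fraction = (machines_arm * failure_rate * 1.0) / machines_arm          # 1/100

    print(x86_fraction, arm_fraction)   # 0.002 vs 0.01 of the cluster cost per year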
(Also, I realize that the Beaglebone uses SD cards, which are cheap and replaceable. But imagine that instead of storage devices, I'd used memory for the example)
I use my two Open Pandoras as build servers. They work just like any other PC in the house, with the difference that they can be easily unplugged and pocketed as well. Fantastic little machines, and once you get a development environment up and running, they function extremely well.
I'd love to see an Aubrey Isle-like processor based on ARM cores, with many ARM cores sharing a 2 GB pool of fast memory.
The Intel cores on the Aubrey Isle chip are fairly large and take up most of the silicon on the die. An ARM-based design would be much smaller and cheaper to manufacture.
Now that I've said it, I wonder how much more expensive a RAM chip incorporating an ARM core would be versus pure memory. It would be interesting to have "smart memory" that could do things like "sha-384 this range and ping me back when you're done". Assuming other threads are not using that same component for other activities, it could be done basically "for free".
ARM has strange limits on the amount of memory. The Versatile architecture supports 256 MB max. The Versatile Express architecture has a 2 GB limit (and no PCI!)
Then you have to remember that ARM is presently 32 bit, so you're going to have trouble going over 4 GB at all, which is currently a very small amount for a serious server. 64 bit ARM isn't going to be widely available for at least another 2 years, and at what cost we just don't know (but it's not likely to be £60/server).
The Cortex-A15 can address 1 TB of RAM, but it's still 32-bit, so there is the requisite per-process limit. Last I heard, Cortex-A15 SoCs will be available in volume this year.
I was thinking about this as well... maybe not in comparison to cheap providers whose performance is variable, but for £12/month from Linode you can get a 512 MB VPS with 20 GB of disk space. How does the performance of this ARM box compare to a Linode?
The md5sum test takes 0.7 seconds on the beagleboard; slicehost is more variable but comes down to 0.5 seconds (from 1 second) after a few goes. I'd expect the file to be read from the page cache anyway.
However, I remembered that the beagleboard is using busybox, and really the systems are very different from each other (different distributions, for example).
Update: busybox md5sum is about 20% slower than regular md5sum.
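If anyone wants to repeat a rough comparison without worrying about which md5sum binary (busybox vs. coreutils) is installed, a quick sketch with Python's hashlib works too (the file path is just a placeholder):

    import hashlib, time

    path = "/tmp/testfile"   # placeholder test file
    start = time.time()
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):   # read in 1 MiB chunks
            h.update(chunk)
    print(h.hexdigest(), "in %.2f s" % (time.time() - start))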
There have been mentions on the OpenStack mailing list of various experiments NTT is doing with ARM servers. I don't think there is much information public about it yet though.
On a related note, VIA makes small motherboards with embedded CPUs. They're not as cheap, but they act more like "real" PC hardware, are i386/686 compatible, and use very little power.
I've run a fan-less EPIA board at home for a couple of years, and it's nice. Took some time to find the right board though..
Check out the TonidoPlug2 (http://www.tonidoplug.com/tonido_plug.html). It has 512 MB of DDR3 RAM, Gigabit Ethernet, WiFi b/g/n, and a built-in enclosure for a SATA II HDD. You can boot it directly from the SATA drive instead of the internal flash. All the right things you need in a personal server. It also runs Debian squeeze.
I run a GuruPlug with filesystems on an LVM volume built from eSATA and USB hard drives. It works very well for my personal needs, except for the GuruPlug's crazy heat output. What I hope is that some manufacturer builds ARM boards with loads of SATA controllers; that way one could build a very affordable large NAS.
If someone (looking at you, SanDisk) were to make a range of SD cards for servers and lightweight ARM desktops, that would be of real interest, because the current stuff out there for phones and cameras is of variable quality and is about the worst thing about these new lightweight ARM systems.
I think that's probably the next experiment. I'm not sure if I'd go for an HDD or a very small USB SSD to keep the power usage down; at the moment it's under 0.5W, and an HDD will be several times that.
Pandaboards run much faster with USB hard drives instead of SD cards. That's what I hear from lots of knowledgeable people, anyway. I've not actually run any benchmarks though.
Marvell's SheevaPlug is a better fit for this. The BeagleBone is really for industrial control, the BeagleBoard is for general-purpose use, while the PandaBoard is for mobile computing.
SD cards are made for consumer-class cameras, but you can buy industrial-grade SLC SD cards for $50 per 4 GB; at that price I would just go with an SSD.