
They aren't; as VLM points out, they just aren't something you see in large quantities. For some applications you can achieve 100% uptime with networks and clusters; for others you still use doubly or triply redundant processor networks, and various manufacturers make specialty chips for those markets (generally Health, Life, Safety (HLS) type systems).

Sometimes people developed redundant but not non-stop systems. I talked with a VP at Citibank while working at NetApp, and they had a number of systems which ran on alternating schedules: one would process transaction records for a while, then another would take over, and so on. They had three identical systems where one was essentially a hot standby for the other two. New versions of code would be deployed on one system, which would run the same transaction records as the current version, and they would check for identical output, so they could do a 'walking' upgrade of the software. Back when Tandems and big Sun iron ruled the roost, those machines were too expensive to have an extra one that was essentially a spare. These days, however, it's much more economical to do that.
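The validation step of that 'walking' upgrade could be sketched roughly like this. Everything here (the record format, `process_v1`, `process_v2`, `shadow_compare`) is illustrative, not Citibank's actual system: the candidate version replays the same transaction records as the current version, and it's only promoted if every output matches.

```python
def process_v1(record):
    """Stand-in for the current production transaction logic."""
    return record["amount"] * 2

def process_v2(record):
    """Stand-in for the candidate version being validated."""
    return record["amount"] * 2  # expected to match v1 on the same input

def shadow_compare(records, current, candidate):
    """Feed both versions the same records; collect any divergences."""
    mismatches = []
    for i, rec in enumerate(records):
        a, b = current(rec), candidate(rec)
        if a != b:
            mismatches.append((i, a, b))
    return mismatches

records = [{"amount": n} for n in range(5)]
# An empty mismatch list means the candidate is safe to promote.
assert shadow_compare(records, process_v1, process_v2) == []
```

The appeal of the pattern is that the comparison runs on real production workload rather than a test suite, so a behavioral regression shows up as a concrete diverging record before the new version ever takes over.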





