
"continuous deployment" is not a panacea to the issue. There's two primary ways of doing it, from what I understand:

Continuous deployment to a dev/test environment is the easiest for most organisations to move to. Because the live environment is mission-critical, they can't afford the risk of any degradation of service. So you push regularly (after the test suites pass) to a test environment, get a small subset of users working against it, and at some point promote that build to live. But I suspect that isn't what you're referring to, as it is too similar to typical change management.
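
For what it's worth, that first model tends to look something like this rough sketch (Python; every function name, the pilot size, and the promotion gate are made up for illustration, not any particular CI/CD tool's API): each green build goes to the test environment automatically, while promotion to live stays a separate, gated step.

    def run_test_suite(build: str) -> bool:
        # Placeholder: would run the real automated test suite for this build.
        return True

    def pilot_healthy(build: str, pilot_size: int) -> bool:
        # Placeholder: would check error rates / feedback from the small
        # subset of users working against the test environment.
        return True

    def deploy(build: str, environment: str) -> None:
        print(f"deploying {build} to {environment}")

    def promote(build: str) -> None:
        if not run_test_suite(build):
            return                                 # never deployed anywhere
        deploy(build, environment="test")          # continuous and automatic
        if pilot_healthy(build, pilot_size=50):
            deploy(build, environment="live")      # gated: the change-management step

    promote("build-1234")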

The alternative definition of continuous deployment is constantly pushing to live: initially for a subset of users, then rolling out gradually from there, but always on the live environment. In many large, often 'cloud-y' solutions, that subset might even be all users. Except you can't have a public transport ticketing system fail at peak times. A hospital patient record system must stay available to staff, and give sufficient notice before any possible impact to service so that manual processes can be used. Payroll, accounts, HR systems... all of these have failure modes that carry financial penalties at best.
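
The "subset of live users" part is usually a percentage-based gate along these lines; a minimal sketch, where the hashing scheme, feature name, and user id are purely illustrative:

    import hashlib

    def in_rollout(user_id: str, feature: str, percent: int) -> bool:
        # Hash the (feature, user) pair so every user lands in a stable bucket
        # from 0-99; the new code path is served to buckets below `percent`.
        digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < percent

    # Ramp a change from 1% -> 10% -> 100% of live users by raising `percent`,
    # watching error rates at each step; drop it back to 0 to "roll back".
    if in_rollout(user_id="rider-4821", feature="new-ticketing-flow", percent=10):
        print("serve the new code path")
    else:
        print("serve the existing, known-good path")

Even with that kind of gate, the users in the rollout bucket are still on the live system, which is exactly the risk being objected to above.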

Hence continuous deployment to live only makes sense when you can afford to put the service at risk, possibly with significant impact.




It helps a lot, though. One of many small incremental changes, or one big monster change: which do you think is more likely to break a system? And which is going to be easier to diagnose and fix?



