Everything is preventable (or more correctly, predictable) when you know everything about the system. If we knew everything about the quanta of the universe, we'd know when solar flares and neutrinos and alpha particles were going to affect our satellites and other electronics.
Since we don't know everything (not even everything about our own creations, like stock exchange markets and software systems), it becomes risk management. After writing software for decades, I still have no idea whether these circumstances were predictable, let alone preventable, but perhaps they could have spent more time considering the minutiae of the system, writing more tests, or whatever. Yes, the mistake was the fault of the humans designing and implementing the system. But that could be said of any (non-natural, human-made) system that fails.
At some point, someone has to decide that code should just be shipped. We're not perfect enough as people to build bug-free systems. Risk comes with releasing "unproven" code. And yes, it's embarrassing when the fit hits the shan in front of the whole world. But I try to avoid becoming one of those jaded people who can't tolerate mistakes by others and can't admit their own.
I think I came off a bit too grumpy. I know full well that not every bug can be eradicated before a launch, and I don't hold any ill will against BATS for it; I was more trying to comment on the persistence of the idea that computers are magic that sometimes rises up to act against its creators. As another commenter said, I guess my interpretation of "backfire" is a bit too narrow.
Heisenberg doesn't say that we wouldn't have absolute predictability or control if we knew everything. He says we can't know everything. He's correct, but that doesn't negate my statement.
And it's compounded by quantum effects, multi-body problems, and emergent phenomena.
Your requirement is that we 1) have absolute knowledge of a state of the universe, and that 2) all later states can be predicted from this a priori state.
Heisenberg says "you can never have absolute knowledge".
Numerous other results argue that even where absolute knowledge is available, it's not possible to predict future states with certainty, or in less than real time.
So, no, everything is not foreseeable.
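To illustrate the second point, here's a minimal sketch using the logistic map as a stand-in for the multi-body and emergent cases mentioned above (the map and the 1e-10 perturbation are just illustrative choices, not anything from the original discussion): two states that agree to ten decimal places diverge completely within a few dozen steps, so even near-perfect knowledge of the present buys only a short horizon of prediction.

```python
# Sensitive dependence on initial conditions, shown with the logistic map
# x_{n+1} = r * x_n * (1 - x_n) in its chaotic regime (r = 4).
r = 4.0
x_a = 0.2
x_b = 0.2 + 1e-10  # "almost perfect" knowledge of the initial state

for step in range(1, 51):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: x_a={x_a:.6f}  x_b={x_b:.6f}  "
              f"diff={abs(x_a - x_b):.2e}")
```

By around step 40 the two trajectories bear no resemblance to each other, even though the initial uncertainty was one part in ten billion.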
Mind: some risks are predictable in a probabilistic way, though generally these are only good for statements like "within a period of time T, there's probability P of event X occurring", and that's a far cry from saying that event X will happen at time T. Much of my life revolves around clarifying the distinction between these two statements.
An instance that comes to mind: Schneier's blog has mentioned that the 9/11 attacks were statistically probable given terrorism trends, as has been the absence of similar-scale follow-on attacks since. Though with time, a similar-magnitude attack becomes a near certainty.
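To make that distinction concrete, here's a minimal sketch assuming the events arrive as a Poisson process with a purely hypothetical average rate of one per 20 years (both the model and the rate are assumptions for illustration, not anything claimed above): the probability of at least one event within a window T is 1 - exp(-rate * T), which climbs toward certainty as T grows, yet says nothing about the specific time any event will occur.

```python
import math

def prob_at_least_one(rate_per_year: float, years: float) -> float:
    """Probability of at least one event in a window of `years`,
    assuming events arrive as a Poisson process at `rate_per_year`."""
    return 1.0 - math.exp(-rate_per_year * years)

# Hypothetical rate: one large-scale event every 20 years on average.
rate = 1 / 20

for horizon in (1, 10, 50, 100):
    print(f"P(at least one event within {horizon:3d} years) = "
          f"{prob_at_least_one(rate, horizon):.2f}")
```

The output rises from about 0.05 at one year to about 0.99 at a century: near certainty over a long enough window, with no prediction at all of when.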