> Many great solutions come from people with "crazy" thinking and I would expect they could have caused great damage (or perhaps have - jet engines) but otherwise we would be moving very slowly as a planet?
I agree. There's something I like to call Asimov's principle[1]: knowledge almost never does harm; the answer to poor or incomplete knowledge is almost always more knowledge (corrections, extensions, alternatives), not forgetting.
And if some piece of knowledge did turn out to be absolutely harmful, there is still the option of trying to forget it -- so the downside is practically bounded while the upside is practically unbounded. If you try to hide something harmful, it's always possible it will be rediscovered later at a very poor time, without the benefit of countermeasures. If we refrain from discussing AI safety for fear of derailing some holy discussion among the wise sages (what criteria would make anyone qualified enough to interrupt it?), it seems more likely that when potent general AI emerges we won't be ready.
If Einstein had tried to hide mass-energy equivalence, or indeed all of his theories because of it, a later independent discovery could have been much worse -- if the atomic bomb had been discovered during the Cold War (discoveries usually start unilaterally), one side could very well have started WW3 (in fact the US seriously considered war with the USSR in the short period when it was the sole possessor of the bomb). The fact is that it's extremely hard to predict the impact of any individual action, while it seems quite safe to say that thoughtful action is in general benign -- which suggests a strong benefit to discussing and discovering knowledge rather than hiding it in fear.
An important principle I would suggest instead is commitment to truth. You can make poor arguments, you can be wrong, but as long as you're committed to truth even the incorrect arguments might prove useful -- they might lead to stronger counterarguments, elucidation of fundamentals, etc.
To exemplify: one of the people who perhaps most advanced our understanding of Quantum Mechanics was, again, Einstein, who was a great critic of it -- his criticisms all turned out to be wrong, but they were so strong (so intuitively right-seeming) that they brought to light the most interesting features, the 'weirdness', of the theory. Even for relativity, one of the most useful ways of grasping the theory and its implications is by examining "paradoxes" -- which are essentially failed counterarguments.
This failure of commitment to truth is where climate change deniers ("""skeptics""") go wrong -- it's not trying to prove the consensus wrong that's harmful, it's the failure to adjust in the face of mountains of evidence. Reasonable skepticism probably wasn't so harmful back when the picture was less clear -- we improved our models, measurements, etc. to address it.
[1] He makes essentially this argument in a foreword to some of his short stories; I can't remember which one exactly. I believe it was more or less along these lines: science and technology (brought about by knowledge and discovery) can often be used for great good or great harm, but to reliably avoid the great harm (which could also come from inaction) we usually need more knowledge and more discussion.
And note that my examples involve very impactful work on the brink of wars and political instability, and even there discussion and knowledge seem to have been positive. How many people really should worry about triggering a catastrophe from their daily jobs? Some of the points in the article might be applicable in very restricted cases -- basically if you're dealing with catastrophic scenarios (and how many people are routinely exposed to those?). If you ever find a flashing red button while alone in a power station, or discover a break in a major cryptographic protocol, you want to triple-check with specialists and be very careful. Otherwise it can turn into futile paralysis by fear (which is harmful to yourself and others).