I did some heavy research in various forms of machine learning and AI in grad school 10 years ago, and the more experiments and tools I created, the more I saw this "digital nuclear arms race" and didn't want to be part of it.
We don't know how many other people aren't working on this for moral or ethical reasons. Of course, 99.9% of the world could be wary of genetic engineering, but the remaining 0.1% is enough to pursue the research, get VC investment, and drag the rest of us into that uncertain future.
To continue the nuclear analogy: the Manhattan Project was probably the most impressive engineering program in history, but it was driven by survival in a world war. They didn't build and drop the atomic bombs for fun. You'd think that work on a limitless virtual brain would have similarly serious motivations, not just examples like "how do I drive to X but also shop for Y?" or "is that a monkey?".
I know there are much grander societal goals for AI, and the world really could become a "better place", but please sell society those goals, not solutions to the usual first-world problems. We already have enough people trying to destroy the world without these extra tools.
Perhaps the general availability of AGI is antithetical to the notion of information privacy in the 21st century, and not just for individuals, but for governments as well. I can imagine that control will only be possible with very deep, widespread monitoring.