While I'm not on this "who's-who" panel of experts, I call bullshit.
AI does present a range of theoretical possibilities for existential doom, from the "gray goo" and "paperclip maximizer" scenarios to Bostrom's post-singularity runaway self-improving superintelligence. I do see this as a genuine theoretical concern that could potentially even be the Great Filter.
However, the actual technology extant, or even on the drawing boards, today is nothing even on the same continent as those threats. We have a vast (and expensive) set of probability-of-occurrence vectors that amounts to a fancy parlor trick producing surprising and sometimes useful results. While some tout the clustering of vectors around certain sets of words as the artificial creation of concepts, it's really nothing more than an advanced thesaurus; there is no evidence of concepts being wielded in relation to reality, tested for truth/falsehood value, etc. In fact, the machines are notorious (and hilarious) for hallucinating in a highly confident tone.
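To make the "advanced thesaurus" point concrete: ranking words by vector proximity is pure geometry over co-occurrence statistics, with no truth test anywhere in the loop. A minimal sketch with hand-made toy vectors (real models learn hundreds of dimensions from corpora; these numbers are invented purely for illustration):

```python
import math

# Toy 3-d "embeddings", hand-made for illustration only.
# Real models (word2vec, transformer embeddings) learn these
# from co-occurrence statistics in text.
vecs = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.9],
    "man":   [0.2, 0.9, 0.1],
    "woman": [0.2, 0.8, 0.9],
    "apple": [0.05, 0.1, 0.95],
}

def cosine(a, b):
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def neighbors(word):
    # Rank every other word by similarity -- a thesaurus lookup,
    # not a test of whether any statement about kings is true.
    return sorted((w for w in vecs if w != word),
                  key=lambda w: cosine(vecs[word], vecs[w]),
                  reverse=True)

print(neighbors("king"))  # "queen" ranks first, "apple" last
```

The lookup surfaces related words because their vectors happen to point in similar directions; nothing in the computation models reality or checks a claim against it.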
We've created nothing more than a mirror of human works, and it displays itself as an industrial-scale bullshit artist (where bullshit is defined as expressions made to impress without care one way or the other for truth value).
Meanwhile, this panel of experts makes its proclamation without the slightest hint of what type of threat would require urgent attention, only that some threat exists on the scale of climate change. They mention no technological existential threat (e.g., runaway superintelligence), nor any societal threat (deepfakes, inherent bias, etc.). This is left as an exercise for the reader.
What is the actual threat? It is most likely the one described in the Google "We Have No Moat" memo[0]. Basically, once AI is out there, these billionaires have no natural way to protect their income and create a scalable way to extract money from the masses, UNLESS they get cooperation from politicians to prevent any competition from arising.
As one of those billionaires, Peter Thiel, said: "Competition is for losers"[1]. Since they have not yet figured out a way to cut out the competition using their advantages in leading the technology or their trillions of dollars in deployable capital, they are seeking a legislated advantage.
Bullshit. It must be ignored.
[0] https://www.semianalysis.com/p/google-we-have-no-moat-and-ne...
[1] https://www.wsj.com/articles/peter-thiel-competition-is-for-...