This is a breathless, half-baked take on "AI Risk" that does not cast the esteemed signatories in a particularly glowing light.
It is 2023. The use and abuse of people at the hands of information technology and automation now has a long history. "AI Risk" was not born yesterday. The first warning came as early as 1954 [1].
The Human Use of Human Beings is a book by Norbert Wiener, the founding thinker of cybernetics and an influential advocate of automation; it was first published in 1950 and revised in 1954. The text argues for the benefits of automation to society; it analyzes the meaning of productive communication and discusses ways for humans and machines to cooperate, with the potential to amplify human power and release people from the repetitive drudgery of manual labor, in favor of more creative pursuits in knowledge work and the arts. The risk that such changes might harm society (through dehumanization or subordination of our species) is explored, and suggestions are offered on how to avoid such risks.
Dehumanization through abuse of tech is already in an advanced stage, and this did not require emergent, deceptive, or power-seeking AI to accomplish.
It merely required emergent political and economic behaviors: deceptive and power-seeking humans applying whatever algorithms and devices were at hand to help dehumanize other humans. Converting them into "products", if you absolutely need a hint.
What we desperately need is a follow-up book from Norbert Wiener. Can an LLM do that? Even a rehashing of the book in modern language would be better than a management consultancy bullet list.
We need a surgical analysis of the moral and political failure that will incubate the next stage of "AI Risk".
This topic clearly touches a nerve with the HN community, but I strongly agree with you.
To be honest, I've been somewhat disappointed with the way AI/DL research has proceeded over the last several years, and none of this really surprises me.
From the beginning, this whole enterprise has been detached from basic computational and statistical theory. At some level this is fine — you don't need to understand everything you create — but when you denigrate that underlying theory you end up in a situation where you don't understand what you're doing. So you end up with a lot of attention paid to things like "explainability" and "interpretability" and less so to "information-theoretic foundations of DL models", even though the latter probably leads to the former.
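To make that contrast concrete, here is a rough sketch of the kind of information-theoretic quantity I have in mind: estimating the mutual information I(T; Y) between a hidden activation T and the labels Y. Everything here is illustrative and assumed by me (the toy data, the crude binned estimator, the function name); real work on a trained network would use a proper estimator, but the point is only to show what "information-theoretic foundations" would actually measure.

    import numpy as np

    def binned_mutual_information(t, y, n_bins=16):
        """Estimate I(T; Y) in bits for a 1-D activation t and integer labels y."""
        # Discretize the continuous activation into equal-width bins.
        edges = np.histogram_bin_edges(t, bins=n_bins)
        t_bins = np.digitize(t, edges)              # bin index per sample
        joint = np.zeros((n_bins + 2, int(y.max()) + 1))
        for tb, yb in zip(t_bins, y):
            joint[tb, yb] += 1
        joint /= joint.sum()                        # empirical joint p(t, y)
        pt = joint.sum(axis=1, keepdims=True)       # marginal p(t)
        py = joint.sum(axis=0, keepdims=True)       # marginal p(y)
        nz = joint > 0
        return float((joint[nz] * np.log2(joint[nz] / (pt @ py)[nz])).sum())

    # Toy stand-in for a trained network: one "hidden unit" whose activation
    # is a noisy copy of the class label, so it carries partial information about y.
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=5000)
    t = y + rng.normal(scale=0.8, size=y.size)
    print(f"I(T; Y) ~ {binned_mutual_information(t, y):.3f} bits")

A quantity like this is checkable and has known estimation pitfalls, which is exactly the kind of foundation that claims about "interpretability" could rest on instead of hand-waving.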
If you have a community that considers itself above basic mathematical, statistical, and computational theory, is it really a surprise that you end up with rhetoric about it being beyond our understanding? In most endeavors I've been involved with, there would be a process of trying to understand the fundamentals before moving on to something else, and then using that to bootstrap into something more powerful.
I probably come across as overly cynical, but a lot of this seems like a self-fulfilling prophecy: a community of individuals who have convinced themselves that if something is beyond their understanding, it must be beyond anyone's understanding.
There are certainly risks to AI that should be discussed, but it seems these discussions and inquiries should be more open, probably involving other people outside the core community of Big Tech and associated academic researchers. Maybe it's not that AI is more capable than everyone, just that others may be more capable of solving certain problems — mathematicians, statisticians, and yes, philosophers and psychologists — than those who have been involved with it so far.
> mathematicians, statisticians, and yes, philosophers and psychologists — than those who have been involved with it so far.
I think mathematicians and statisticians are hard to flummox, but the risk with non-mathematically trained people such as philosophers and psychologists is that they can easily be sidetracked by vague and insinuating language that allows them to "fill in" the gaps. They need an unbiased "interpreter" of what the tech actually does (or can do), and that might be hard to come by.
I would add political scientists and economists to the list. Not that I have particular faith in their track record of solving any problem, but conceptually this is also their responsibility and privilege: technology reshapes society and the economy, and we need to have a mature and open discussion about it.
Do you have any stories of how AI/DL has ignored foundational scientific problems?
I do know that my old EECS professors who have pivoted toward AI, coming from adjacent/tangential research areas, are specifically interested in the theoretical and scientific clarification of the properties of neural networks. One of them has basically been trying to establish Theoretical Machine Learning as a new discipline, an approach that is sorely needed.
i think if AI figures took their "alignment" concept and really pursued it down to its roots -- digging past the technological and into the social -- they could do some good.
take every technological hurdle they face -- "paperclip maximizers", "mesa optimizers" and so on -- and assume they get resolved. eventually we're left with "we create a thing which perfectly emulates a typical human, only it's 1000x more capable": if this hypothetical result is scary to you then exactly how far do you have to adjust your path such that the result after solving every technical hurdle seems likely to be good?
from the outside, it's easy to read AI figures today as saying something like "the current path of AGI subjects the average human to ever greater power imbalances. as such, we propose <various course adjustments which still lead to massively increased power imbalance>". i don't know how to respond productively to that.
[1] https://en.wikipedia.org/wiki/The_Human_Use_of_Human_Beings