
> First, I’m inclined to think that longtermism is an invalid and harmful ideology

It is, but (aside from serving as a sort of sociological explanation of the belief in AGI risk) that’s mostly beside the point when discussing the problems with AGI x-risk. The problem with AGI x-risk is that it is an entirely abstract concern which does not flow concretely from any basis in material reality, cannot be assessed with the tools we use to assess material reality, and exists as a kind of religious doctrine surrounded by rhetorical flourishes.

> The external factors, such as corporate interests and 1% wealthy interests/prejudices, are not a good reason to dismiss AGI concerns.

They are a way of understanding why people who seem (largely because they are) intelligent and competent are trying to sell such hollow arguments as AGI x-risk. They aren’t, you are correct, a logical rebuttal to AGI risk, nor are they intended as one; the only rebuttal is the complete absence of support for the proposition. They are, however, a tool operating outside the realm of formalized debate, one that addresses a natural and useful cognitive bias which likewise sits outside that realm: the heuristic that “smart, competent people don’t tend to embrace hollow positions”.

> Second, even from just a reading comprehension level: one side says AGI is a problem, then the other side cannot just say, “No, AGI is a false problem and here are the real problems”.

1. If they couldn’t, it wouldn’t be a “reading comprehension issue”, and

2. They can, for the simple reason that there is no material support for the “AGI is a real problem” argument.

> Bender et al. are just sidestepping the moot point,

A point being moot, in the sense in which AGI x-risk is moot, is a reason to sidestep it. (That’s the danger of using auto-antonyms: “moot” can mean either “open to debate” or “of no practical significance”.)

> I think honest experts could simply say, ultimately we don’t really know what will happen.

To the extent that is accurate, it is exactly what the Bender/Gebru/Mitchell group does. The problem is treating “we don’t have any information to justify any belief on that” as if it cut only one way, so that the utility of AGI lies somewhere between 0 and the negative infinity the x-risk crowd calculates as (some unspecified non-zero finite probability) times (infinite cost). In reality, we have as much reason to believe that AGI is the only solution to an otherwise certain existential calamity as to suppose it will cause one. The utility is somewhere between positive infinity and negative infinity.
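
To make that expected-value arithmetic concrete, here is a toy sketch in Python (my own illustration, not anything from the letters under discussion; the probabilities are made-up placeholders). It shows why multiplying any non-zero probability by an infinite cost forces the one-sided conclusion, while the symmetric calculation yields no conclusion at all:

    import math

    def one_sided_expected_utility(p_doom):
        # The x-risk-style calculation: only an infinite downside is counted.
        # Any non-zero p_doom, however tiny, yields negative infinity.
        return p_doom * -math.inf if p_doom > 0 else 0.0

    def two_sided_expected_utility(p_doom, p_salvation):
        # The symmetric version: an infinite upside (AGI averting some other
        # certain calamity) gets the same non-zero-probability treatment.
        # inf + (-inf) is mathematically indeterminate, which IEEE floating
        # point reports as NaN.
        return p_salvation * math.inf + p_doom * -math.inf

    print(one_sided_expected_utility(1e-12))         # -inf
    print(two_sided_expected_utility(1e-12, 1e-12))  # nan

The NaN is the point: once both infinities are admitted, the arithmetic licenses no belief in either direction.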




A point being moot is a reason to agree to disagree, if the two sides can't agree on the premise. But they need to say that explicitly. If I were writing such a letter, I would say it, because that's just being sensible.

This isn't about logical debate. This is about reasonable, non-sophistic writing at the college level or higher. And there are basic standards: if they don't know the future, then they must explicitly acknowledge that. Not rhetorically "do that" in the essay; literally write it out in sentences. They didn't.

I can think of three examples of such explicitness. Chomsky's letter gave explicit reasons why AGI is a false issue (and he was pilloried for it). My computer science professors, in their deep learning classes and in their theoretical machine learning research seminars, have literally acknowledged that we know almost nothing about the fundamentals or the future. That scientific humility and level of intellectual conscientiousness is needed, and it is absent in this discourse between the experts. And note: by that I also include the 22-word "letter", which doesn't actually explain why Hinton and the rest of the signatories think AGI is an existential risk, or what their specific reasons (your "material evidence") for that are.



