She (Emily Bender) has contributed to many academic papers on large language models and has a better technical understanding of how they work, and of their limitations, than most signatories of this statement or of the previous, widely hyped “AI pause” letter, which referenced one of her own papers.

Read her statement about that letter (https://www.dair-institute.org/blog/letter-statement-March20...) or listen to some of the many podcasts she’s appeared on talking about this.

I find her and Timnit Gebru’s arguments highly persuasive. In a nutshell: the capabilities of “AI” are hugely overhyped, and concern about sci-fi doom scenarios is disingenuously being used to frame the issue in ways that benefit players like OpenAI and divert attention away from much more real, already-occurring present-day harms, such as the internet being filled with increasing amounts of synthetic text spam.




Thanks for the link; I read it with interest, but philosophically the argument is flawed (IMHO). It amounts to a non sequitur, for the following two reasons.

First, I'm inclined to think that longtermism is an invalid and harmful ideology, but I also acknowledge that AGI / existential risk is something that needs to be looked at seriously. The external factors, such as corporate interests and the interests/prejudices of the wealthiest 1%, are not a good reason to dismiss AGI concerns. I'd like to imagine there's a reasonable way to address short-term issues as well as long-term issues. It's not an either-or debate.

Second, even from just a reading-comprehension level: if one side says AGI is a problem, the other side cannot just say, “No, AGI is a false problem and here are the real problems”. The reasonable argument is to say: AGI is a false problem because <of such and such reasons>. Bender et al. are just sidestepping the moot point, and rhetorically this is not an okay move. I think honest experts could simply say, ultimately we don't really know what will happen. But that would be boring to say, because it would require acknowledging that multiple issues are valid.

(There's a well-known chart, the hierarchy of disagreement. The most sophisticated disagreements point out what's wrong with the argument itself. Less sophisticated disagreements do things like point out alternatives without identifying the critical mistake. The critical mistake in this case hinges on whether the premise of AGI is true or not; that's the crux of the disagreement. Substituting that with short-term issues, which are valid in themselves, is an example of a lower level of argumentation. Even lower levels are things like bad-faith readings and so forth; I forget the details, but the chart had several levels. It's funny that professional academics nevertheless don't practice this and so get into endless, intellectually unsatisfactory debates.)

So I think this is actually an example of different factional experts constantly talking past each other. It's funny that famous intellectuals/experts constantly do this, letting their egos get the better of them rather than having a real intellectual conversation, and making basic debate mistakes, like non sequiturs, that any college student should be able to point out.


> First, I’m inclined to think that longtermism is an invalid and harmful ideology

It is, but (aside from serving as a sort of sociological explanation of the belief in AGI risk) that’s mostly beside the point when discussing the problems with AGI x-risk. The problem with AGI x-risk is that it is an entirely abstract concern which does not flow concretely from any basis in material reality, cannot be assessed with the tools used to assess material reality, and exists as a kind of religious doctrine surrounded by rhetorical flourishes.

> The external factors, such as corporate interests and the interests/prejudices of the wealthiest 1%, are not a good reason to dismiss AGI concerns.

They are a way of understanding why people who seem (largely because they are) intelligent and competent are trying to sell arguments as hollow as AGI x-risk. They aren’t, you are correct, a logical rebuttal to AGI risk, nor are they intended as one; the only rebuttal is the complete absence of support for the proposition. They are, however, a tool operating outside the realm of formalized debate, one that addresses the natural and useful cognitive bias (itself outside that realm) which says “smart, competent people don’t tend to embrace hollow positions”.

> Second, even from just a reading-comprehension level: if one side says AGI is a problem, the other side cannot just say, “No, AGI is a false problem and here are the real problems”.

1. If they couldn’t, it wouldn’t be a “reading comprehension issue”, and

2. They can, for the simple reason that there is no material support for the “AGI is a real problem” argument.

> Bender et al. are just sidestepping the moot point,

A point being moot in the sense that AGI x-risk is moot is precisely a reason to sidestep it. (The danger of using auto-antonyms.)

> I think honest experts could simply say, ultimately we don’t really know what will happen.

To the extent that is accurate, that is exactly what the Bender/Gebru/Mitchell group does. The problem is the one-sided thinking that “we don’t have any information to justify any belief on that” means the utility of AGI lies somewhere between zero and the negative infinity that the x-risk crowd calculates from (some unspecified non-zero finite probability) times (infinite cost). In reality, we have as much reason to believe that AGI is the only solution to an otherwise certain existential calamity as to suppose it will lead to one. The utility lies somewhere between positive infinity and negative infinity.
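
To make that expected-utility arithmetic concrete, here is a minimal sketch in Python; the probabilities (p_doom, p_rescue) are made-up placeholders, since neither side of the debate supplies actual numbers:

    import math

    # Illustrative, assumed numbers -- nothing in the discussion pins these down.
    p_doom = 0.01      # some non-zero probability that AGI causes catastrophe
    p_rescue = 0.01    # an equally unsupported probability that AGI averts one

    # The x-risk framing counts only the downside tail:
    # any non-zero probability times an infinite cost yields -inf expected utility.
    one_sided = p_doom * (-math.inf)
    print(one_sided)   # -inf

    # Counting both unsupported tails symmetrically leaves the sum undefined,
    # i.e. the expected utility could land anywhere between -inf and +inf.
    symmetric = p_rescue * math.inf + p_doom * (-math.inf)
    print(symmetric)   # nan

The point is only that the “non-zero probability times infinite cost” move works just as well in the other direction, so by itself it pins the expected utility down to nothing.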


A point being moot is a reason to agree to disagree, if they can't agree on the premise. But they need to say that. If I were writing a letter, I would say it because that's just being sensible.

This isn't about logical debate. This is about reasonable, non-sophistic writing at the college level or higher. And there are basic standards: if they don't know the future, then they must explicitly acknowledge that. Not rhetorically "do that" in the essay; literally write it out in sentences. They didn't.

I can think of three examples where such explicitness was done. Chomsky's letter gave explicit reasons why AGI is a false issue (and he was pilloried for it). My computer science professors, in their deep learning class and in their theoretical machine learning research seminars, have literally acknowledged that we know almost nothing about the fundamentals or about the future. That scientific humility and level of intellectual conscientiousness is needed, and it is absent in this discourse between the experts. And note that by this I also include the 22-word "letter", which doesn't actually explain why Hinton and the rest of the signatories think AGI is an existential risk, or what their specific reasons (your "material support") for that are.



