
OK, I completely agree that if you feel I invoked "obviousness" in an attempt to browbeat you and the GP with what is in fact a social taboo, you should be extra skeptical (I'm not sure that was the point the GP was trying to make, though).

> If your argument consists primarily of "everyone knows that" then this is a good indication you might be wrong.

It doesn't though, does it? There's strong empirical evidence that AI systems are making rapid progress in many domains that previously only humans were good at, and at a pace that surprised almost everyone. I gave a list of arguments in another thread for why AI is uniquely powerful and dangerous. Which of these do you disagree with, and why?




I don't think I saw your other post, but here's my response to the list of AI risks on the website we're discussing:

https://news.ycombinator.com/item?id=36123082#36129011

Arguments like yours are very subjective. What is "rapid"? What is "surprising"? I don't find them particularly surprising myself - cool and awesome, yes - but I was amazed by language modelling ten years ago! The quality kept improving every year. It was clear that if that kept up, eventually we'd have language models that could speak like people.

So the idea of a surprising change of pace doesn't really hold up under close inspection. LLM capabilities do seem to scale linearly, with the idea of emergent abilities coming under robust attack lately. To the extent that big LLMs surprised a lot of people, that happened primarily because a previously implausible quantity of money was thrown at building them, and because OpenAI released one of them from the lab prison other companies were keeping theirs in, not because of any major new breakthrough in the underlying tech. The progress was linear, but the visibility of that progress was not. The transformers paper was five years ago, and GPT-4 is basically an optimization of that tech combined with RL, just executed very carefully and competently. Transformers in turn were an improvement over prior language models that could speak like a human; they just weren't as good at it.
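
For what it's worth, here's a toy sketch of the metric-choice point behind the "emergent abilities" critique (all numbers are made up for illustration): if per-token accuracy improves smoothly with scale, an all-or-nothing exact-match score over a multi-token answer can still jump from near 0 to near 1 quite abruptly, because it compounds per-token success.

    # Toy illustration (hypothetical numbers): smooth per-token gains can
    # look like a sudden "emergent" jump under an all-or-nothing metric.
    import math

    for log_params in range(6, 13):  # model scale: 1e6 .. 1e12 parameters
        # per-token accuracy improving smoothly with log(scale)
        per_token = 1 - 0.5 * math.exp(-0.9 * (log_params - 6))
        # exact match requires getting all 50 answer tokens right
        exact_match = per_token ** 50
        print(f"1e{log_params} params: per-token {per_token:.3f}, "
              f"exact-match {exact_match:.4f}")

The smooth curve and the "jump" are the same underlying progress, just measured differently.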

> It doesn't though, does it?

It does. Arguments that consist of "everyone knows that" are also called rumours or folk wisdom. It's fine to adopt widely held beliefs if those beliefs rest on something solid, but what we have here is a pure argument from authority. This letter is literally one sentence long and the only reason anyone cares is the list of signatories. It's very reliant on the observer believing that these people have some unique insight into AI risk that nobody else has, but there's no evidence of that and many signers aren't even AI researchers to begin with.


> Arguments like yours are very subjective. What is "rapid"? What is "surprising"?

https://twitter.com/heyBarsee/status/1654825921746989057

Two of the three deep learning Turing Award winners (Hinton and Bengio) are sufficiently shell-shocked by the rate of progress to be thrown into existential doubt (Hinton is very explicit about the fact that progress is much faster than he thought just a few years ago; Bengio speaks of how an "unexpected acceleration" in AI systems has radically shifted his perspective). Plenty of knowledgeable people in the field who were not previously AI doomers are starting to sound a lot more concerned very recently.

As to the "oh, it's just linear scaling of out-of-sight tech" line: of course that itself was surprising. Gwern pushed the scaling hypothesis earlier than most, and from what I remember even drew pretty nasty attacks from AI insiders for it. Here's what he wrote three years ago: "To the surprise of most (including myself), this vast increase in size did not run into diminishing or negative returns, as many expected, but the benefits of scale continued to happen as forecasted by OpenAI."
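
(For reference, and assuming I have the right paper: the OpenAI forecast Gwern refers to is the Kaplan et al. 2020 "Scaling Laws for Neural Language Models" fit, where test loss falls as a smooth power law in parameter count N, roughly

    L(N) ≈ (N_c / N)^α,  with α ≈ 0.076

i.e. the prediction was precisely that there is no wall: loss keeps falling predictably as N grows.)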

So sure, there's some subjectivity involved here, but I'd like to see you propose some reasonable operationalization of "surprise at progress" that doesn't class most laymen and insiders as surprised.

>> It doesn't though, does it?

> It does.

We seem to be miscommunicating: what I was trying to express is that my argument does not really require any appeal to authority. Trusting your lying eyes (to evaluate the progress of stuff like Midjourney) and judging the quality of arguments should be enough (I spelt out some reasons here: https://news.ycombinator.com/item?id=36130482, but I think hackinthebochs makes the point better here: https://news.ycombinator.com/item?id=36129980).

In fact, I would still be pretty concerned even if most top AI guys were like LeCun and thought there was no real risk.

I will not deny, of course, that the fact that well-known researchers like Hinton and Bengio are suddenly much more alarmed than they previously were, while the ones like LeCun who are not alarmed seem mostly to make exceptionally terrible arguments, doesn't exactly make me more optimistic.


I agree that these statements from long-term researchers about being surprised by the rate of progress are themselves surprising.

To clarify my own thinking here, it's totally reasonable to me that people are surprised if:

1. They weren't previously aware of AI research (surely 99% of the population?)

2. They were but had stopped paying attention because it was just a long series of announcements about cool tech demos nobody outside big corps could play with.

3. They were paying attention but thought scaling wouldn't continue to work.

My problem is that people like Sam Altman clearly aren't in any of those categories, and Hinton shouldn't have been in any, although maybe he fell into (3). I personally was in (2). I wasn't hugely surprised that ChatGPT could exist because I'd seen GPT-1 and GPT-2, I'd seen surprising AI demos at Google years earlier, and so on. The direction things were going in was kinda clear. I was a bit surprised by its quality, but that's because I wasn't really paying close attention as new results were published, and the final InstructGPT step makes such a big difference to how the tech is perceived. The actual knowledge doesn't change much, but once the model is housetrained, it's suddenly so much easier to interact with and use that it makes a step change in how accessible the tech is and how it's perceived.

I think I was more surprised by the joining of LLMs with image generators and how well AI art works. It does feel like that happened fast. But maybe I just wasn't paying attention again.

So I guess where we differ is that I don't take their surprise at face value. The direction was too clear, and the gap between the threats they talk about in the abstract and their concrete proposals is too large and too lacking in obvious logical connection; it feels like motivated reasoning to me. I'm not entirely sure what's really going on, and perhaps they are genuine in their concerns, but if so it's hard to understand why they struggle so much to make a convincing case, given that they are certainly intellectually equipped to do so.

The two posts you linked are interesting and much better argued than the website this thread is about, so I'll reply to them directly.



