
> Arguments like yours are very subjective. What is "rapid". What is "surprising".

https://twitter.com/heyBarsee/status/1654825921746989057

2/3 of the deep learning Turing Award winners (Hinton and Bengio) are sufficiently shell-shocked by the rate of progress to be thrown into existential doubt (Hinton is very explicit that progress is much faster than he thought just a few years ago; Bengio speaks of how an "unexpected acceleration" in AI systems has radically shifted his perspective). Plenty of knowledgeable people in the field who were not previously AI doomers have started to sound a lot more concerned very recently.

As to the "oh it's just linear scaling of out-of-sight tech" line, well, of course that itself was surprising. Gwern pushed the scaling hypothesis earlier than many and, from what I remember, even drew some pretty nasty attacks from AI insiders for it. Here's what he wrote 3 years ago: "To the surprise of most (including myself), this vast increase in size did not run into diminishing or negative returns, as many expected, but the benefits of scale continued to happen as forecasted by OpenAI."

So sure, there's some subjectivity involved here, but I'd like to see you propose some reasonable operationalization of "surprise at progress" that didn't class most laymen and insiders as surprised.

>> It doesn't though, does it?

> It does.

We seem to be miscommunicating; what I was trying to express is that my argument does not really require any appeal to authority. Trusting your lying eyes (to evaluate the progress of stuff like Midjourney) and judging the quality of the arguments should be enough (I spelled some reasons out here https://news.ycombinator.com/item?id=36130482, but I think hackinthebochs makes the point better here: https://news.ycombinator.com/item?id=36129980).

In fact, I would still be pretty concerned even if most top AI researchers were like LeCun and thought there was no real risk.

I will not deny, of course, that the fact that well-known researchers like Hinton and Bengio are suddenly much more alarmed than they previously were, while the ones like LeCun who are not seem mostly to make exceptionally terrible arguments, doesn't exactly make me more optimistic.




I agree that these statements from long-term researchers about being surprised by the rate of progress are themselves surprising.

To clarify my own thinking here, it's totally reasonable to me that people are surprised if:

1. They weren't previously aware of AI research (surely 99% of the population?)

2. They were, but had stopped paying attention because it was just a long series of announcements about cool tech demos that nobody outside big corps could play with.

3. They were paying attention but thought scaling wouldn't continue to work.

My problem is that people like Sam Altman clearly aren't in any of those categories, and Hinton shouldn't have been in any of them, although maybe he fell into (3). I personally was in (2). I wasn't hugely surprised that ChatGPT could exist because I'd seen GPT-1 and GPT-2, I'd seen surprising AI demos at Google years earlier, and so on. The direction things were going in was kinda clear. I was a bit surprised by its quality, but that's because I wasn't paying close attention as new results were published, and the final InstructGPT step makes such a big difference to how the tech is perceived. The actual knowledge doesn't change much, but once the model is housetrained, it's suddenly so much easier to interact with and use that it makes a step change in how accessible the tech is and how it's perceived.

I think I was more surprised by the joining of LLMs with image generators and how well AI art works. It does feel like that happened fast. But maybe I just wasn't paying attention again.

So I guess where we differ is that I don't take their surprise at face value. The direction was too clear, and the gap between the threats they talk about in the abstract and their concrete proposals is too large and too lacking in obvious logical connection; it feels like motivated reasoning to me. I'm not entirely sure what's really going on, and perhaps they are genuine in their concerns, but if so it's hard to understand why they struggle so much to make a convincing case, given that they are certainly intellectually equipped to do so.

The two posts you linked are interesting and much better argued than the website this thread is about, so I'll reply to them directly.



