> It's not clear at all that we have an avenue to super intelligence
AI already beats the average human on pretty much any task people have put serious effort into, often by a very wide margin, and we are still seeing exponential progress that even the experts can't really explain. But yes, it is possible this is a local maximum and the curve will flatten out again.
But the absence of any visible fundamental limit on further progress (or can you name one?), coupled with the fact that we have barely begun to feel the consequences of the tech we already have (assuming zero breakthroughs from now on), makes me extremely wary of concluding that there is no significant danger and nothing to worry about.
Let's set aside the if and when of a superintelligence explosion for now. We ourselves are an existence proof of a lower bound on intelligence, and that level, amplified by what computers can already do (performing many of the things we used to take intellectual pride in better than we can, many orders of magnitude faster, and with vastly better replication and coordination ability), already seems plenty dangerous and scary to me.
> The scary doomsday scenarios aren't possible without an AI that's capable of both strategic thinking and long term planning. Those two things also happen to be the biggest limitations of our most powerful language models. We simply don't know how to build a system like that.
Why do you think AI models will be unable to plan or strategize? Last I checked, language models weren't trained or developed to beat humans at strategic decision making, but humans already aren't doing too hot in games of adversarial strategy against AIs developed specifically for that domain.
I dispute this. What appears to be exponential progress is, IMO, just a step function that made a few jumps as the transformer architecture was applied to larger problems. I am unaware of research that moves beyond this in a way that would plausibly lead to superintelligence. At the very least, I foresee issues with ever-increasing computational requirements that outpace improvements in hardware.
We’ll see similar jumps when other domains begin employing specialized AI models, but it’s not clear to me that these improvements will continue increasing exponentially.
Right, and if someone can join the two, that could be something genuinely formidable. But does anyone have a credible path to combining these different flavors into a unified system that actually works?
Even if someone does, I don't think it's an "existential risk". So, yes, I'm willing to make the bet. I'm also willing to bet that Santa never delivers nuclear warheads instead of presents. It's why I don't cap my chimney every Christmas Eve.
Between Covid, bank failures, climate change, and AI, it's like everyone is looking for something to be in a panic about.