> Talking about AI (and AGI) as if it's some xenomorph lurking somewhere in silicon, waiting for its inevitable escape from its human prison.
> AI will not bootstrap itself with some emergent property if somebody spends gazillions of dollars and Watts to estimate petazillions of parameters.
Not right now, though recent breakthroughs look worryingly close. But the problem isn't AGI creating itself ex nihilo, but rather us creating it. And if you look at how the world reacted to LLMs, millions of minds and billions of dollars are being thrown towards making that happen - because for the first time in history, it looks like we have an actual "angle of attack", and neither curious minds nor greedy businesses will leave it on the table.
> Further progress is not going to come unless some very human brain and intelligence opens up completely new algorithmic vistas.
All we need to do is get to roughly average human intelligence in higher cognitive functions, running in pure software. At that point, you can have an AI programmer and an AI ML researcher with occupational capability similar to their human counterparts'. But this is software we're talking about - if you can have a pair of them, you can keep scaling horizontally until you have thousands or millions of them. And if you let them work, directly or indirectly, on their own code... they won't stay human-level for long. You get a feedback loop that is to our technological progress what our technological progress is to biological evolution.
Counterpoint 2:
Maybe it's clearer if, instead of using the word "intelligence", we distill it into "optimization". After all, at a high level, that's what intelligence is - a powerful optimization process. Now, we humans are no strangers to optimization processes that get out of control. Corporations, arguably, are one. Markets definitely are, and the aggregate market economy is a powerful optimization process that's already out of control - it's self-directing and it owns us. Culture is arguably one too, though a weak one.
Consider what it means every time people say things like "if only we weren't so short-sighted and greedy, we would've solved poverty/war/climate change". If you take seriously the sociological and macroeconomic fact that humans follow incentives, then this really becomes "the system/economy/market makes it impossible for us to solve those problems", and... well, what is that system if not a self-directed optimization process that arose, unintentionally, from our individual needs and desires? Well, semi-unintentionally - free market proponents use exactly this strong-optimizer nature to argue why the market is fair and awesome and better than any kind of central planning.
Now, as powerful an optimizer as our economic system is, it's still kind of dumb. It averages away individual human intelligence. But what if it didn't? What if it had an actual human-level mind of its own, or at least a human-level mind able to perceive and act on the market faster than any one human or group of humans could? Literally getting inside our collective OODA loop? That influence wouldn't get cancelled out - at that point, we'd be screwed.
And that's just one possible scenario - AI tech giving a mind to the economy itself. But the broader point is this: we've already established that we can accidentally create powerful, generic optimizers that end up controlling us. The ones we deal with today are slow, because they run on humans and human interaction in physical space. But when we create an optimizer that runs on our digital infrastructure, that one will be faster.
"AI" overlaps with the more profound and (potentially) far more dangerous problem of the "algorithmization" of society [1].
"Out-of-brain" information technology is something that started long time ago with the agricultural revolution and the first cuneiform tablets. It has been shaping our societies ever since. Once the automation inherent in these constructs is baked-in it doesn't require over-complex black box models to create complete havoc.
[1] Defined as the institutionalization of intermediating information technology artifacts that humans cannot opt out of if they want to be part of society.
Profound, yes. Far more dangerous? Well, it's really the same thing. Automation slotting itself in between people and institutions can be harmful at various scales, but if that automation gets sophisticated enough - that is, once the automation is (equivalent to) powerful AI - we'll be facing an extinction-level threat.