
I think we’re talking past each other.

The point of the architecture example is not that it'd be done this way or that it's practical; it's just to point out that there's nothing magical about us, and that if you took an existing model and ran it faster (without biological constraints) you'd get something smarter than us. It's a simple possibility argument built on something that already exists.

In practice, AGI likely won't be built that way, for the reasons you suggest.

I'm not sure why you'd think AGI is not possible when there's general intelligence all around us and nothing magical about biology. Current methods may not get us there, but what we have is already superhuman in some domains and does generalize a bit (AlphaZero).

Maybe it’s impossible for some unknown reason, but I’d bet against that. If it’s possible then the goal alignment problem is a real issue. With an unknown timeline it makes sense for some people to work on it now before we need it.




I think you're right that we're talking past each other.

I'm not saying that AGI is impossible, I'm saying that it's impractical to try to discuss AGI without some understanding of what intelligence is as a physical process. As an analogy for why: we understand fusion, to the extent that we know how to make fusion bombs. In theory, we could keep adding stages to a fusion weapon to make a bomb large enough to crack the planet in half, but in practice, other factors start to dominate the practical explosive yield well before we ever hit that point. So while it's theoretically possible for us to make a planet-ending weapon, it's not practical in any sense. And as it turns out, not only are planet-ending weapons not practical, bombs today have lower explosive yields than they did in the heyday of the nuclear arms race, because those weapons turned out to be impractical as well.

Talk of safe AI seems to be dominated by dark genies, when we don't even have the AI equivalent of a theory of nuclear physics, or of lift. We just don't know what the practicalities of building superintelligent AI even are, so it seems premature to be raising alarm bells that we might be building Beelzebub in someone's basement. If there's one thing I am sure of, it's that we won't summon Skynet by accident; it'll almost certainly be the end result of a fundamental theory of intelligence (or equivalent) and the accumulated work of engineers and scientists, probably over decades. You wouldn't expect scientists to accidentally build ITER, for example, without a theory of nuclear physics, even if they did have some notion of "hot rocks". Superintelligent AI seems at least as hard, and I think that by the time we're in a position to build such an entity, we'll also have some idea of what its limitations are likely to be, how much of a risk it actually presents, and ways to constrain or mitigate those risks.

Note that this isn't the same thing as saying that the current field of AI safety isn't important, because it is. Narrow AI is still dangerous in the same way that we don't need planet-ending weapons for fusion bombs to be dangerous. But I'm less concerned with paperclip maximizers (because again, the notion that an AI could somehow turn the planet into paperclips raises serious and fundamental questions about the nature of embodiment that such proposals never actually grapple with) than I am with systems that enshrine and enforce social inequalities, or take power away from the average citizen, because the latter are things that we know are possible, even through negligence. More to the point, we don't need a theory of intelligence to assess the ways in which such systems could pose a threat.


I think all of this is reasonable and we'd probably have an interesting in-person discussion.

I think flight is a good comparison. Before human flight it was possible to speculate about the risks. The machines we built turned out to share underlying principles with birds, but ultimately we do it differently: we can supply far more power and, as a result, operate at a scale not present in the natural world.

Maybe intelligence is a special case with special constraints, but I suspect it isn't. So it makes sense for some people to try to solve alignment now, because if intelligence turns out not to be constrained, then by the time we need a solution it'll be too late to work one out.

Maybe we'll get lucky (as we did with nuclear weapons, which can't be made by any random person in their backyard), but I wouldn't bet on it.



