
There's no false dichotomy here, but a very real one. One problem is current and pressing; the other is a fantasy. I don't need to support that with any argument: the non-existence of superintelligent AGI is not disputed, nor do any of the people crying doom claim that they, or anyone else, know how to create one. It's an imaginary risk.



I agree that superintelligent AGI does not exist today and that, fortunately, nobody presently knows how to create one. Pretty much everyone agrees on that. Why are we still worried? Because the risk is that this state of affairs could easily change. The AI landscape is already changing rapidly.

What do you think your brain does exactly that makes you so confident that computers won't ever be able to do the same thing?


>> Because the risk is that this state of affairs could easily change.

Well, it couldn't. There's no science of intelligence, let alone of artificial intelligence. We are no more likely to create AGI by accident, without any relevant science, than we would have been to build the atomic bomb without physics.

I kind of understand why people are so excited, but ChatGPT is not a breakthrough in AI, or even in language modelling. People hear that it's an "AI" and that it has "learned language", and take those terms in their common sense, when they are really trade terms that don't mean what people assume. For example, "Large Language Models" are models of text, not of language.

Another thing to keep in mind is that we have been through basically the same hype cycle ever since the 2012 ImageNet results, except this time the hype has gone viral and reached people who had no idea where the technology stood before. Inevitably, those people are surprised by it and misunderstand its capabilities.

Here's how close we are to AGI. You cannot, today, take a robot, stick an untrained neural net in its "brain", give it a good shove out into the world, and expect it to learn anything at all. First, because most events it observes will happen only once, and neural nets don't learn from single examples. Second, because neural nets don't learn online like that: you have to carefully train them "in the lab" first, and then use the trained model in whatever environment (i.e. dataset) you choose. And you can't change the environment the neural net operates in, or it will simply break.
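To make that last point concrete, here's a minimal sketch in Python (assuming numpy and scikit-learn; the toy data and the "drift" are invented purely for illustration). A classifier trained "in the lab" on one data distribution does fine there, and drops to chance when the environment shifts:

  import numpy as np
  from sklearn.linear_model import LogisticRegression

  rng = np.random.default_rng(0)

  # "In the lab": two well-separated clusters, one per class.
  X_train = np.vstack([rng.normal(0, 1, (500, 2)),
                       rng.normal(4, 1, (500, 2))])
  y_train = np.array([0] * 500 + [1] * 500)
  clf = LogisticRegression().fit(X_train, y_train)
  print("lab accuracy:", clf.score(X_train, y_train))    # ~1.0

  # "Out in the world": same labels, but the data has drifted
  # so the classes are no longer where the model learned them.
  X_world = np.vstack([rng.normal(2, 1, (500, 2)),
                       rng.normal(2, 1, (500, 2))])
  y_world = np.array([0] * 500 + [1] * 500)
  print("world accuracy:", clf.score(X_world, y_world))  # ~0.5, chance

Nothing in the trained model notices that the world has changed; it just keeps applying the decision boundary it learned in the lab.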

That's a level of capability as far from AGI as it is from the capability of a simple animal, like a cricket or a cockroach. Spider-level intelligence is right out of the question. Small furry rodent intelligence is a complete fantasy. Everything else is just twitter fodder.

So, no, we're not going to "easily change" the current state of affairs, because we haven't got a clue: we have neither the science nor the tech to do it.

>> What do you think your brain does exactly that makes you so confident that computers won't ever be able to do the same thing?

Who said I think that? Cause I sure didn't.



