
It's depressing to go over all this ultra-shallow chit-chat that has short-circuited any intelligent discussion about the role and trajectory of information technology (let alone any more serious problem or opportunity of the current times).

Talking about AI (and AGI) as if it's some xenomorph lurking somewhere in silicon, waiting for its inevitable escape from its human prison.

AI will not bootstrap itself with some emergent property just because somebody spends gazillions of dollars and watts to estimate petazillions of parameters.

Further progress is not going to come unless some very human brain and intelligence opens up completely new algorithmic vistas.

The future of AI is literally tied to the future development of human mental (mathematical) models around information, knowledge and its digital representation.

If this is not intuitively obvious, the history of mathematical thought is crushing evidence that it follows its own dynamics, over timescales that span centuries.




"We have to learn the bitter lesson that building in how we think we think does not work in the long run. The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning.

...

The second general point to be learned from the bitter lesson is that the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world. They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity. Essential to these methods is that they can find good approximations, but the search for them should be by our methods, not by us. We want AI agents that can discover like we can, not which contain what we have discovered."

http://www.incompleteideas.net/IncIdeas/BitterLesson.html


I'm a little bit tired of the Bitter Lesson copypasta.

It doesn't seem to me like points 1 to 3 have been proven true about ConvNets. Sure, scaling convnets further helps; I don't think anyone would argue that training a bigger model on more data wouldn't yield at least some improvement.

I also think there are irrefutable lower bounds on computation that are important to consider when building these meta-methods. If your problem requires N steps of computation for an input of size N, but your method can only ever perform at most K steps, no amount of data or compute you throw at it will result in the right solution.
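
A toy sketch of that point in Python (the names and the hash-chain task are mine, purely illustrative, not tied to any particular architecture): an iterated hash chain is believed to admit no shortcut, so each link costs one sequential step, and a method with a fixed budget of K steps cannot reproduce an N-step result for N > K, no matter how much data it has seen.

    import hashlib

    def iterated_hash(seed: bytes, n: int) -> bytes:
        # Each step consumes the previous output, so (as far as we know)
        # there is no shortcut: reaching link n takes n sequential hashes.
        digest = seed
        for _ in range(n):
            digest = hashlib.sha256(digest).digest()
        return digest

    def budgeted_method(seed: bytes, k: int) -> bytes:
        # Stand-in for any model whose forward pass does a fixed amount
        # of sequential work, independent of the task's true depth.
        return iterated_hash(seed, k)

    N, K = 1000, 10
    target = iterated_hash(b"seed", N)
    # False: K < N steps cannot match an N-step chain.
    print(budgeted_method(b"seed", K) == target)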


I don't think the parent comment is irreconcilable with the Bitter Lesson. I don't think transformers are a path to AGI, but that doesn't mean we need to go back to symbolic AI. We need to iterate on the meta-methods we use to search for and encapsulate the complexity of intelligence.


I think the bitter lesson has a lot more to do with the physical limits of von Neumann computation and the structure of computing as we currently understand it than with the direction AI research should go.

That the emergent deep learning truthers are on the ascent should not be an excuse to completely discount knowledge based approaches. We're currently grappling with a number of limitations in the latest generation of ML models, particularly surrounding the cost of computation required to train and run them, and the fact that they have absolutely no way to verify the quality or correctness of their output. Clearly there are drawbacks to relying on a system we barely understand to produce our programs for us.


Do you think the human brain has bootstrapped itself with an emergent property in a way that is impossible in silicon?


There is an entirely theoretical possibility for this to happen in some universe. It doesn't particularly concern me. It's arguing about how many angels can dance on the head of a pin.

The messianic prophecy of the imminent in-silico birth of AGI is based on extreme, simplistic mental models: the brain is a computer, everything is input and output, there is some neural-net black box "learning" inside, etc.

The stance reminds me of the arrogant ignorance displayed by Laplace's demon [1]. Laplace postulated that if someone (the demon) knows the precise location and momentum of every atom in the universe, their past and future values for any given time are entailed - they can be calculated from the laws of classical mechanics.

Alas, he completely missed quantum mechanics, the very basis of how the universe works. It was discovered a hundred years after he died.

In fact, Laplace's sin of overconfidence is probably orders of magnitude smaller than the one breathless AGI enthusiasts currently commit: classical mechanics and quantum mechanics at least share some formal mathematical similarities [2].

It feels very unlikely that anything resembling AGI will share much mathematical form with current machine learning models. The evolutionary, in-principle agnostic and general-purpose "wiring" of the human brain, assuming it can be abstracted somehow, lives on a different mathematical plane.

[1] https://en.wikipedia.org/wiki/Laplace%27s_demon

[2] In fact the process of conjuring up the formal apparatus required for quantum mechanics is one of the most dramatic examples of the versatile abstraction powers of the human brain.


The answer actually might be no - why do you assume silicon can actually do what is required?

Considering we've never seen a brain made only from silicon do much, it's likely not the right stuff.

Maybe we can brute-force things that act like a brain in silicon, but considering that a bee and an ant are intelligent, communicate, and solve problems in groups while running on almost zero energy compared to an entire datacenter, you have to admit something isn't quite there.

I know this is a sensitive topic for many, so I'm not being hostile - I genuinely mean it, though.

In my honest opinion, if ants or bees had thumbs and hands, they'd evolve to do some amazing things.


Counterpoint 1:

> Talking about AI (and AGI) as if its some xenomorph lurking somewhere in silicon, waiting for its inevitable escape from its human prison.

> AI will not bootstrap itself with some emergent property if somebody spends gazillions of dollars and Watts to estimate petazillions of parameters.

Not right now, though recent breakthroughs look worryingly close. But the problem isn't AGI creating itself ex nihilo, but rather us creating it. And if you look at how the world reacted to LLMs, millions of minds and billions of dollars are being thrown towards making that happen - because for the first time in history, it looks like we have an actual "angle of attack", and neither curious minds nor greedy businesses will leave it on the table.

> Further progress is not going to come unless some very human brain and intelligence opens up completely new algorithmic vistas.

All we need to do is to get to roughly average human intelligence in higher cognitive functions, running in pure software. At that point, you can have an AI programmer and an AI ML researcher of similar occupational capability to their human counterparts. But this is software we're talking about - if you can have a pair of them, you can keep horizontally scaling until you have thousands or millions of them. And if you let them directly or indirectly work on their own code... they won't stay human-level for long. You get a feedback loop that's to our technological progress what our technological progress is to biological evolution.

Counterpoint 2:

Maybe it's clearer if, instead of using the word "intelligence", we distill it into "optimization". After all, at a high level, this is what intelligence is - a powerful optimization process. Now, we humans are no strangers to optimization processes that get out of control. Corporations, arguably, are one. Markets are definitely one, and the aggregate market economy is a powerful optimization process that's already out of control - it's self-directing and owns us. Culture arguably is too, but it's a weak one.

Consider what it means every time people say things like "if only we weren't so short-sighted and greedy, we would've solved poverty/war/climate change". If you take seriously the sociological and macroeconomic fact that humans follow incentives, then this really becomes "the system/economy/market makes it impossible for us to solve those problems", and... well, what is that system if not a self-directed optimization process that arose, unintentionally, from our individual needs and desires? Well, semi-unintentionally - free-market proponents use exactly this strong-optimizer nature to argue why the market is fair and awesome and better than any kind of central planning.

Now, as powerful an optimizer as our economic system is, it's still kind of dumb. It averages away individual human intelligence. But what if it didn't? What if it had an actual human-level mind of its own, or at least a human-level mind that's able to perceive and act on the market faster than any one human or group of humans could? Literally getting inside our collective OODA loop? That influence won't get cancelled out - at this point, we'd be screwed.

And that's just one possible scenario - AI tech giving a mind to the economy itself. But the broader point here is, we've already established we can accidentally create powerful generic optimizers that end up controlling us. The ones we deal with today are slow, because they run on humans and human interaction in physical space. But when we create an optimizer that runs on our digital infrastructure, that one will be faster.


"AI" overlaps with the more profound and (potentially) far more dangerous problem of the "algorithmization" of society [1].

"Out-of-brain" information technology is something that started long time ago with the agricultural revolution and the first cuneiform tablets. It has been shaping our societies ever since. Once the automation inherent in these constructs is baked-in it doesn't require over-complex black box models to create complete havoc.

[1] Defined as the institutionalization of intermediating information technology artifacts that humans cannot opt-out from if they want to be part of society.


Profound, yes. Far more dangerous? Well, it's really the same thing. Automation slotting itself in between people and institutions can be harmful at various scales, but if that automation gets sophisticated enough - that is, once the automation is (equivalent to) powerful AI - we'll be facing an extinction-level threat.



