
This is one of the most delusional and speculative books I've ever read. The author comes up with elaborate analytical models resting on slippery, loosely defined terms: clever with the algebra while totally disconnected from any technological grounding. It's the kind of stuff VPs and Bill Gates like to read, and one of the reasons for the current bubble.



My annoyance is that _anyone_ who knows absolutely nothing about a technical subject would write a book on it. LLMs aren't a philosophical concept; they're a software mechanism with myriad constraints and design limitations built in. Understanding their future demands a deep understanding of those mechanisms. So why on earth would an academic who knows zero about engineering, software, or AI techniques have the temerity to write a book suggesting he can see farther into the evolution of LLMs than, say, a carpenter or bricklayer? At least those trades know something about physical mechanisms and engineering constraints. But not Bostrom.

The continued interest in a book of bold uninformed argumentation that's so obviously insubstantial just goes to show how bad humans are at telling the difference between useful knowledge and wild speculation. It's almost as silly as caring whether the prognostications of Rod Brooks (or worse still, Ray Kurzweil) come true. As if guessing right actually meant something...


> LLMs aren't a philosophical concept

There aren’t any non-philosophical computer science concepts.


I've re-skimmed it recently as well, and found it to be extremely zeerusted and needlessly alarmist in retrospect. A lot of it is written from the perspective of "a handful of scientists build a brain in a bunker, a la the Manhattan Project," which is so far from our actual reality that 90% of the concerns don't even apply.

Exponential runaway turned out not to be a thing at all: progress is slow (on the order of years), competitors are plentiful, alignment is easy, and everything is more or less done in the open, with papers published every day. We're basically living out the best possible option of all the ones outlined in the book.


Looks like the real-world risks of AI are, predictably, AI being used to avoid responsibility/liability/regulation or for plain copyright laundering (which, likewise predictably, is only a temporary loophole until the laws catch up), and companies like Google reversing all the progress they made in reducing their emissions by doubling down on resource-intensive AI.

"Avoiding regulation" as a Service of course has a huge market potential for as long as it works, just like it did for crypto and the gig economy. But it is by definition a bubble because it will deflate as soon as the regulations are fixed. GenAI might have an eventual use but it will in all likelihood look nothing like what it is used for at the moment.

And yeah, you could complain that what I said mostly applies to GenAI and LLMs, but that's where the current hype is. Nobody talks about expert systems because they've been around for decades and simply work, while being very limited and "unsexy" because they don't claim to give us AGI.


Corporate needs you to find the differences between this picture:

- layout of an expert system's components

and this picture:

- an agentic framework that uses an LLM as its reasoning system

They're the same picture :)


The problem starts with talking about "AGI" and LLMs/GenAI in the same breath. LLMs are not and cannot be AGI. They are impressive, but they are glorified autocomplete. When ChatGPT lets you "correct" it, it doesn't backtrack; it takes your response into consideration along with what it said before and generates what its model suggests could come next in the conversation. It's more similar to a Markov chain than to an expert system.
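
To make that concrete, here's a minimal toy sketch (plain Python; generate_reply is a made-up stand-in for autoregressive sampling, not anything from ChatGPT's actual internals). A "correction" is just another message appended to the context the model conditions on:

  # Toy sketch: the conversation is one growing context; a "correction"
  # never rewinds anything, it is simply appended before the next generation.
  def generate_reply(context):
      # stand-in for autoregressive next-token sampling over the full context
      return "<reply conditioned on %d prior messages>" % len(context)

  context = ["user: Is Sydney the capital of Australia?"]
  context.append("assistant: " + generate_reply(context))  # whatever it said stays in context
  context.append("user: No, it's Canberra.")               # the "correction"
  context.append("assistant: " + generate_reply(context))  # just the next continuation
                                                            # of the whole conversation log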


> it doesn't backtrack

The UI doesn't let you do that*; the underlying model does. (And so would an actual Markov chain.)

* EDIT: not in the middle of a response at least, but it does allow you to backtrack to a previous message and go again from there.
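
To illustrate (toy sketch only; generate_reply here would be a hypothetical stand-in for sampling, not any real API): backtracking at the model level is just re-running generation from a truncated prefix of the conversation.

  # "Backtracking" = truncate the context to an earlier prefix, then sample again.
  history = ["user: q1", "assistant: a1", "user: q2", "assistant: a2"]
  prefix = history[:2]  # rewind to just after the first exchange
  # prefix.append("assistant: " + generate_reply(prefix))  # hypothetical: regenerate from here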


I'm going to play devil's advocate here, but I think LLMs are the closest we can get to AGI, because AGI is a silly concept. Literally nobody on earth has "General Intelligence"; nobody is generally capable in all things, so why do we expect software to be?

Still, the average cutting-edge LLM does a hell of a lot better at a great many things than a great many humans. I know, it's just a computer, but what is the average skill level? We just keep moving the goalposts.


You have to distinguish between intelligence and knowledge/experience.

Nobody knows everything or has experienced everything, but we do have "general intelligence", which is the ability to reason over whatever we DO know and HAVE experienced in order to make successful plans/predictions around it and actually use this knowledge rather than merely recall it.

Of course some people are more intelligent than others, but nonetheless our brain reflects the nature of our species as generalists who can apply our intelligence to a wide/unlimited number of areas.

There are at least two things fundamentally missing from LLMs that disqualify them from deserving the AGI label.

1) LLMs have an extremely limited ability to plan and reason, even over the fixed knowledge (training set) that they have. This is a limitation of the simplistic transformer neural network architecture they are based on, which is just a one-way conveyor belt of processing steps (transformer layers) from input to output. No looping/iteration, no working memory, etc. - they just don't have the machinery to reason/plan in an open-ended way (see the sketch below the list).

2) LLMs can't learn. They are pre-trained and just have a fixed set of knowledge and processing templates. Perhaps we should regard them as having a limited type of intelligence ("crystallized intelligence") over what they do know, but it can't be described as general intelligence when it excludes novel reasoning/planning ("fluid intelligence") as well as the ability to learn anything new.
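
A minimal sketch of the "one-way conveyor belt" point, in PyTorch (a toy stack with made-up sizes, not any production LLM): the input passes through a fixed number of layers exactly once, and at inference the weights are frozen, so nothing new is learned.

  import torch
  import torch.nn as nn

  # Toy transformer stack: a fixed pipeline of layers applied once per forward pass.
  vocab, d_model, n_layers = 1000, 64, 4
  embed = nn.Embedding(vocab, d_model)
  layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
  stack = nn.TransformerEncoder(layer, num_layers=n_layers)  # the "conveyor belt"
  unembed = nn.Linear(d_model, vocab)

  tokens = torch.randint(0, vocab, (1, 16))     # the prompt so far
  with torch.no_grad():                         # inference: weights frozen, no learning
      h = stack(embed(tokens))                  # input -> layer 1 -> ... -> layer N, once
      next_token_logits = unembed(h[:, -1, :])  # predict the next token, then repeat

The only "iteration" is appending the sampled token and running the same fixed pass again; there is no loop inside the stack and no state carried over other than the text itself.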

We will eventually design human-like "general" intelligence (there's no magic about it that prevents us from doing so), so LLMs are not as good as it gets, but LLMs (and upcoming enhanced LLMs) may be as good as it gets for a while - AGI may well require a brand-new architecture built around the ability to learn continuously. This isn't going to happen in the next 5-10 years.


Personally, I think you are wrong about both 1 and 2.

They may not be able to reason as well as a programmer or a mathematician, but they can do so better than a LOT of humans I know.

Also, they can learn; we'd just have to feed them data to do so, and we don't… we just don't.



