The reality is that o1 is a step away from general intelligence and back towards narrow AI. It is great at solving the kinds of math, coding, and logic puzzles it has been designed for, but for many kinds of tasks, including chat and creative writing, it is actually worse than 4o. It is good at the specific kinds of reasoning tasks it was built for, much like AlphaGo is great at playing Go, but that does not mean it is more generally intelligent.
AGI currently is an intentionally vague and undefined goal. This allows businesses to operate towards a goal, define the parameters, and revel in the “rocket launch”-esque hype without leaving the vague umbrella of AI. It allows businesses to claim a double pursuit: not only are they building AGI, but all their work will surely benefit AI as well. How noble. Right?
Its vagueness is intentional and lets you stay blind to the truth and fill in the gaps yourself. You just have to believe it’s right around the corner.
"If the human brain were so simple that we could understand it, we would be so simple that we couldn’t." - without trying to defend such business practice, it appears very difficult to define what are necessary and sufficient properties that make AGI.
That doesn’t seem accurate, even if you limit it to mental tasks. For example, do we expect an AGI to be able to meditate, or to mentally introspect itself like a human, or to describe its inner qualia, in order to constitute an AGI?
Another thought: the way humans perform tasks is affected by involuntary aspects of the individual mind, in ways where the involuntariness is relevant (for example, being repulsed by something, or something not crossing one’s mind). If it is involuntary for the AGI as well, then it can’t perform tasks in all the different ways that different humans would. And if it isn’t involuntary for the AGI, can it really reproduce the way (all the ways) individual humans would perform a task? To put it more concretely: for every individual, there is probably a task that they can’t perform (with a specific outcome) that another individual can. If the same is true for an AGI, then by your definition it isn’t an AGI, because it can’t perform all tasks. On the other hand, if we assume it can perform all tasks, then it would be unlike any individual human, which raises the question of whether this is (a) possible, and (b) conceptually coherent to begin with.
The biggest issue with AGI is how poorly we've described GI up until now.
More to the point, I think an AI that can do any (intellectual) task a human can would be far beyond human capabilities, because even individual humans can't do everything.
One AI being able to do every task every human can do would be superhuman. But it is much more likely that, at least at first, AIs would be customized to narrower skill sets, like mathematician or programmer or engineer, due to resource limitations.
> For example, do we expect an AGI to be able to meditate, or to mentally introspect itself like a human, or to describe its inner qualia, in order to constitute an AGI?
Do you mind sharing the kinds of descriptive criteria for these behaviors that you are envisioning for which there is overlap with the general assumption of them occurring in a machine? I can foresee a sort of “featherless biped” scenario here without more details about the question.
> For example, do we expect an AGI to be able to meditate, or to mentally introspect itself like a human, or to describe its inner qualia, in order to constitute an AGI?
How would you know if it could? How do you know that other human beings can? You don’t.
> For example, do we expect an AGI to be able to meditate, or to mentally introspect itself like a human, or to describe its inner qualia, in order to constitute an AGI?
...Yes. This is what I think 'most' people consider a real AI to be.
You’re right that ‘Beingness’ and ‘meditation’ are hard to define with precision, but the essence of meditation isn’t about external markers—it’s about an inner, subjective awareness of presence that can’t be fully reduced to objective measures.
Lots of debate since ChatGPT and Stable Diffusion can be summarised as:
A: "AI cheated by copying humans, it just mixes the bits up really small like a collage"
B: "So like humans learning from books and studying artists?"
A: "That doesn't count, it's totally different"
Even though I am quite happy to agree that differences exist, I have yet to see a clear answer as to what people even mean when asserting that AI learning from books is "cheating", given that it's *mandatory* for humans in most places.
I just think that language is a big part of the puzzle, but it is not the only one. Simply generating tokens may sometimes look like thought, but as you feed the output back into the model, it quickly devolves into repeating nonsense and looks nothing like introspection. A self-sufficient mind would reliably form new ideas and angles.
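To make the "feed the output back" point concrete, here is a toy sketch of the loop I mean. The `generate` function is a made-up stand-in, not a real model API; the interesting part is only the loop structure and how quickly it collapses into repetition when nothing external comes in:

```python
# Toy sketch: feeding a generator's output back in as its next input.
# `generate` is a hypothetical stand-in for an LLM call, not a real API.

def generate(prompt: str) -> str:
    # Stand-in behaviour: echo the tail of the prompt, the way a poorly steered
    # model tends to latch onto its own recent tokens.
    words = prompt.split()
    return " ".join(words[-5:]) if words else "..."

def feedback_loop(seed: str, rounds: int = 10) -> list[str]:
    outputs = []
    text = seed
    for _ in range(rounds):
        text = generate(text)   # the model's output becomes its next input
        outputs.append(text)
    return outputs

for i, out in enumerate(feedback_loop("Write one genuinely new idea about intelligence.")):
    print(i, out)
# With no external input, successive outputs converge on the same few tokens,
# which is the "devolves into repeating nonsense" behaviour described above.
```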
It must be wonderful to live life with such supreme unfounded confidence. Really, no sarcasm, I wonder what that is like: to be so sure of something when many smarter people are not, when we don't know how our own intelligence fully works or evolved, and when we don't know if ANY lessons from our own intelligence even apply to artificial ones.
Social media doesn't punish people for overconfidence. In fact social media rewards people's controversial statements by giving them engagement - engagement like yours.
Technically, the models can already learn on the fly. It's just that what they can learn is limited to the context length. They cannot, to use the trendy word, "grok" it and internally adjust the weights in their neural networks yet.
To change this you would either need to let the model retrain itself every time it receives new information, or have such a great context length that there is no effective difference. I suspect even meat models like our brains still struggle to do this effectively and need a long rest cycle (i.e. sleep) to handle it. So the problem is inherently more difficult to solve than just "thinking". We may even need an entirely new architecture, different from the neural network, to achieve this.
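A toy sketch of the distinction, in case it helps. The `Model` class below is a made-up stand-in, not any real library; it only illustrates that context-based "learning" evaporates once the prompt changes, whereas a weight update persists:

```python
# Hypothetical toy model: facts live either in the prompt (context) or in the
# weights. Nothing here is a real LLM API; it only shows the contrast.

class Model:
    def __init__(self) -> None:
        self.weights = {"sky_color": None}   # nothing baked in yet

    def answer(self, prompt: str) -> str:
        # "In-context learning": the model can use a fact if it appears in the prompt.
        if "the sky is green" in prompt:
            return "green"
        # Otherwise it falls back to whatever the weights contain.
        return self.weights["sky_color"] or "unknown"

    def update_weights(self, key: str, value: str) -> None:
        # Stand-in for retraining: the fact now persists across future prompts.
        self.weights[key] = value

m = Model()
print(m.answer("What colour is the sky?"))                          # unknown
print(m.answer("Note: the sky is green. What colour is the sky?"))  # green (context only)
print(m.answer("What colour is the sky?"))                          # still unknown: nothing retained
m.update_weights("sky_color", "green")
print(m.answer("What colour is the sky?"))                          # green (now in the weights)
```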
All words only gain meaning through common use: where two people mean different things by some word, we influence each other until we're in agreement.
Words about private internal states don't get feedback about what they actually are on the inside, just about what they look like on the outside*: "thinking" and "understanding" map to what AIs give the outward impression of, even if the inside is different in whatever ways you regard as important.
* This is also why people with aphantasia keep reporting their surprise upon realising that scenes in films where a character is imagining something are not merely artistic license.
I understand the hype. I think most humans understand why a machine responding to a query like never before in the history of mankind is amazing.
What you’re going through is hype overdose. You’re numb to it. I can get it if someone disagrees, but it’s a next-level lack of understanding of human behavior if you don’t get the hype at all.
There exist living human beings, whether young children or people with brain damage, with intelligence comparable to an LLM, and we classify those humans as conscious but we don’t classify LLMs that way.
I’m not trying to say LLMs are conscious, just that the creation of LLMs marks a significant turning point. We crossed a barrier two years ago somewhat equivalent to landing on the moon, and I am just dumbfounded that someone doesn’t understand why there is hype around this.
The first plane ever flies, and people think "we can fly to the moon soon!".
Yet powered flight has nothing to do with space travel, no connection at all. Gliding in the air via low/high pressure doesn't mean you'll get near space, ever, with that tech. No matter how you try.
And yet, the moon was reached a mere 66 years after the first powered flight. Perhaps it's a better heuristic than you are insinuating...
In all honesty, there are lots of connections between powered flight and space travel. Two obvious ones are "light and strong metallurgy" and "a solid mathematical theory of thermodynamics". Once you can build lightweight and efficient combustion chambers, a lot becomes possible...
Similarly, with LLMs, it's clear we've hit some kind of phase shift in what's possible - we now have enough compute, enough data, and enough know-how to be able to copy human symbolic thought by sheer brute-force. At the same time, through algorithms as "unconnected" as airplanes and spacecraft, computers can now synthesize plausible images, plausible music, plausible human speech, plausible anything you like really. Our capabilities have massively expanded in a short timespan - we have cracked something. Something big, like lightweight combustion chambers.
The status quo ante is useless to predict what will happen next.
>By that metric, there are lots of connections between space flight and any other aspect of modern society.
Indeed. But there's a reason "aerospace" is a word.
>No plane, relying upon air pressure to fly, can ever use that method to get to the moon
No indeed. But if you want to build a moon rocket, the relevant skillsets are found in people who make airplanes. Who built Apollo? Boeing. Grumman. McDonnell Douglas. Lockheed.
I feel like aeronautics and astronautics are deeply connected. Both depend upon aerodynamics, 6dof control, and guidance in forward flight. Advancing aviation construction techniques were the basis of rockets, etc.
Sure, rocketry to LEO asks more in strength of materials, and aviation doesn’t require liquid fueled propulsion or being able to control attitude in vacuum.
These aren’t unconnected developments. Space travel grew straight out of aviation and military aviation. Indeed, look at the vertical takeoff aircraft from the 40s and 50s, evolving into missile systems with solid propulsion and then liquid propulsion.
I thought your point about aerospace was terrible. And since you're insisting I follow you further into the analogy, I think it's terrible here too.
LLMs may be a key building block for early AGI. The jury is still out. Will an LLM alone do it? No. You can't build a space vehicle from fins and fairings and control systems alone.
O1 can reach pretty far beyond past LLM capabilities by adding infrastructure for metacognition and goal seeking. Is O1 the pinnacle, or can we go further?
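To be clear about what I mean by that kind of infrastructure, here is a crude sketch. O1's actual machinery isn't public, so the propose/critique/revise loop and the `call_model` function below are purely illustrative assumptions, not OpenAI's method:

```python
# Illustrative only: a bare-bones "metacognition" wrapper around a base model.
# `call_model` is a hypothetical stand-in, not a real API.

def call_model(prompt: str) -> str:
    # In practice this would be an actual LLM call; here it just echoes a placeholder.
    return f"[model response to: {prompt[:40]}...]"

def solve_with_reflection(goal: str, max_rounds: int = 3) -> str:
    answer = call_model(f"Goal: {goal}. Propose an answer.")
    for _ in range(max_rounds):
        critique = call_model(f"Goal: {goal}. Critique this answer: {answer}")
        answer = call_model(f"Goal: {goal}. Revise the answer using this critique: {critique}")
    return answer

print(solve_with_reflection("find the flaw in this proof"))
```

Whether wrapping a model in a loop like this counts as a step toward general intelligence, or just better narrow search, is exactly the open question.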
In either case, planes and rocket-planes did a lot to get us to space-- they weren't an unrelated evolutionary dead end.
> Yet powered flight has nothing to do with space travel, no connection at all.
The relationships you are describing are why airflight/spaceflight and AI/AGI are a good comparison.
We will never get AGI from an LLM. We will never fly to the moon via winged flight. These are examples of how one method of doing a thing will never succeed at another.
Citing all the similarities between airflight and spaceflight makes my point! One may as well point out that video games run on a computer platform and LLMs run on a computer platform and say "It's the same!", as say airflight and spaceflight are the same.
Note how I was very clear, and very specific, and referred to "winged flight" and "low/high pressure", which will never, ever, ever get one even to space. Nor allow anyone to navigate in space. There is no "lift" in space.
Unless you can describe to me how a fixed wing with low/high pressure is used to get to the moon, all the other similarities are inconsequential.
Good grief, people are blathering on about metallurgy. That's not a connection, it's just modern tech, has nothing to do with the method of flying (low/high pressure around the wing), and is used in every industry.
I love how incapable everyone has been in this thread of concept focus, incapable of separating the specific from the generic. It's why people think, generically, that LLMs will result in AGI, too. But they won't. Ever. No amount of compute will generate AGI via LLM methods.
LLMs don't think, they don't reason, they don't infer, they aren't creative, they come up with nothing new, it's easiest to just say "they don't".
One key aspect here is that knowledge has nothing to do with intelligence. A cat is more intelligent than any LLM that will ever exist. A mouse. Correlative fact regurgitation is not what intelligence is, any more than a book on a shelf is intelligence, or the results of Yahoo search 10 years ago were.
The most amusing is when people mistake shuffled up data output from an LLM as "signs of thought".
Your point is good enough about spaceflight, despite some quibbling from commenters.
But I haven't seen where you make a compelling argument why it's the same thing in AI/AGI.
In your old analogy, we're all still the guys on the ground saying it'll work. You're saying it won't. But nobody has "been to space" yet. You have no idea if LLMs will take us to AGI.
I personally think they'll be the engine on the spaceship.
From where I sit, I don't even see LLMs as being some sort of memory store for AGIs. The knowledge isn't reliable enough. An AGI would need to ingest and then store knowledge in its own mind, not use an LLM as a reference.
Part of what makes intelligence intelligent is the ability to see information and learn on the spot, and further, to learn via its own senses.
Let's look at bats. A bat is very close to humans, genetically. Yet if somehow we took "bat memories" and were able to implant them in humans, how on earth would that help? How would you make use of bat memories of using sound to navigate, to "see"? Of flying? Of social structure?
For example, we literally don't have the brain matter to see spatially the same way bats do. So when we access those memories, they would be so foreign that their usefulness would be greatly reduced. They'd be confusing, unhelpful.
Think of it. Ingress of data and information is sensorially derived. Our mental image of the world depends upon this data. Our core being is built upon this foundation. An AGI using an LLM as "memories" would be experiencing something just as foreign.
So even if LLMs were used to allow an AGI to query things, they wouldn't be used as "memory". And the type of memory store that LLMs exhibit is most certainly not how intelligence as we know it stores memory.
We base our knowledge upon directly observed and verified fact, but further upon the senses we have. And all information derived from those senses is actually filtered and processed by specialized parts of our brains before we even "experience" it.
Our knowledge is so keyed in and tailored directly to our senses, and the processing of that data, that there is no way to separate the two. Our skill, experience, and capabilities are "whole body".
An LLM is none of this.
The only true way to create an AGI via LLMs would be to simulate a brain entirely, and then start scanning human brains during specific learning events. Use that data to LLM your way into an averaged and probabilistic mesh, and then use that output to at least provide full sense memory input to an AGI.
Even so, I suspect that may be best used to create a reliable substrate. Use that method to simulate and validate and modify that substrate so it is capable of using such data, thereby verifying that it stands solid as a model for an AGI's mind.
Then wipe and allow learning to begin entirely separately.
Yet to do even this, we'd need to ensure that the sensory input enables, at least to a degree, the same sort of sense experience. I think Neuralink might be best placed to enable this, for as it works at creating an interface for, say, sight and other senses, it could then use that same series of mapped inputs for a simulated human brain.
This of course works best with a physical form that can also taste the environment around it. And who is also working on an actual android for day-to-day use?
You might say this focuses too much on creating a human-style AGI, but frankly it's the only thing we can realistically try to build on the way to a true AGI. We have no other real-world examples of intelligence to use, and every brain on the planet is part of the same evolutionary tree.
So best to work with something we know, something we're getting more and more apt at understanding, and, with brain implants of the calibre and quality that Neuralink is devising, something we can at least understand in far more depth than ever before.
> The first plane ever flies, and people think "we can fly to the moon soon!".
> Yet powered flight has nothing to do with space travel, no connection at all.
You eventually said winged flight much later-- trying to make your point a little more defensible. That's why I started explaining to you the very big connections between powered flight and space travel ;)
I pretty much completely disagree with your wall of text, and it's not a very well reasoned defense of your prior handwaving. I'm going to move on now.
> Yet powered flight has nothing to do with space travel, no connection at all. Gliding in the air via low/high pressure doesn't mean you'll get near space, ever, with that tech. No matter how you try.
Winged flight == "low/high pressure" flight, it's how an airplane wing works and provides lift.
Maybe you just said what you wanted to say extremely poorly. Like "wing technology doesn't get you closer to space." I mean, of course, fins and distribution of pressure are important, but a relatively small piece.
On the other hand, powered flight and the things we started building for powered flight got us to the moon. "Powered flight" got us to turbojets, and turbomachinery is the number one key space launch technology.
> Maybe you just said what you wanted to say extremely poorly.
Or maybe you didn't read closely? You claimed I didn't mention winged flight, yet I mentioned that and the method of winged flight. Typically, that means you say "Oh, sorry, I missed that" instead of blaming others.
I have refuted technology paths in prior posts. Refute those comments if you wish, but just restating your position without refuting mine doesn't seem like it will go anywhere.
And if you don't want a reply? Just stop talking. Don't play the "Oh, I'm going to say things, then say 'bye' to induce no response" game.
You gave a big wall of text. You made statements that can't really be defended. If you'd been talking just about wings, you could have made that clear (and not in one possible reading of a sentence that follows an absolutist one).
> Just debate fairly.
The thing I felt like responding to, you were like "noooo, I didn't mean that at all."
> > > > > Yet powered flight has nothing to do with space travel, no connection at all.
Pretty absolute statement.
> > > > > Gliding in the air via low/high pressure doesn't mean you'll get near space, ever, with that tech.
Then, I guess you're saying that sentence was trying to restrict it to "airfoils aren't enough to get to space", and not talking about how powered flight led directly to space travel... through direct evolution of propulsion (turbomachinery), control, construction techniques, analysis methods, and yes, airfoils.
I guess we can stay here debating the semantics of what you originally said if you really want to keep talking. But since you're walking away from what I saw as your original point, I'm not sure what you see as productive to say.
That’s not true. There was not endless hype about flying to the moon when the first plane flew.
People are well aware of the limits of LLMs.
As slow as the progress is, we now have metrics and measurable progress towards AGI, even when there are clear signs of limitations in LLMs. We never had this before, and everyone is aware of it. No one is delusional about it.
The delusion is more around people who think other people are making claims of going to the moon in a year or something. I can see it in 10 to 30 years.
> That’s not true. There was not endless hype about flying to the moon when the first plane flew.
I didn't say there was endless hype, I gave an example of how one technology would never result in another... even if to a layperson it seems connected.
(The sky, and the moon, are "up")
> People are well aware of the limits of LLMs.
Surely you mean "Some people". Because the point in this thread is that there is a lot of hype, and FOMO, and "OMG AGI!" chatter running around LLMs. Which will never ever make AGI.
You said you didn’t comprehend why there was hype and I explained why there was hype.
Then you made an analogy, and I said your analogy is irrelevant because nobody thinks LLMs are AGI, nor do they think AGI is coming out of LLMs this coming year.
Actually, plenty of people think LLMs will result in AGI. That's what the hype is about, because those same people think "any day now". People are even running around saying that LLMs are showing signs of independent thought, absurd as it is.
And hype doesn't mean "this year" regardless.
Anyhow, I don't think we'll close this gap between our assessment.
And yet, the overall path of unconcealment of science and technological understanding definitely traces a line from the Wright brothers to Vostok 1. There is no reason to think a person from the time of the Wright brothers would find it a simple path, easily predicted by the methods of their times, but I doubt that anyone who worked on Vostok 1 would say that their efforts were epochally unrelated to the efforts of the Wright brothers.
This is kind of true. I feel like the reasoning power of O1 is really only available on the kinds of math/coding tasks it was trained on so heavily.