Wait and see. You're not paying attention now, but it's not too late to start.
Go to your favorite programming puzzle site and see how you do against the latest models. If you can beat o1-pro regularly, then you're right, you have nothing to worry about and can safely disregard all of this. Same proposition that was offered to John Henry.
LLMs are rules-based search engines with higher-dimensional vector spaces encoding related topics. There is nothing intelligent about these algorithms, except the trick one plays on oneself by interpreting well-structured nonsense.
It is stunting kids' development, as students often lack the ability to intuitively reason about when they are being misled. "How many R's in 'Strawberry'?" is a classic example exposing the underlying pattern-recognition failures. =3
I have never understood why the failure to answer the strawberry question has been seen as a compelling argument about the limits of AI. The AIs that suffer from this problem have difficulty counting; that has never been denied. Those AIs also do not see the letters of the words they are processing, so it is quite unsurprising that they fail at counting the letters in a word. I would say it is more surprising that they can perform spelling tasks at all. More importantly, the models where such weaknesses became apparent all come from the same period in which models advanced so much that these weaknesses became visible only after so many greater weaknesses had been overcome.
People didn't think that planes flying so high that pilots couldn't breathe exposed a fundamental limitation of flight, just that their success had revealed the next hurdle.
The assertion that an LLM is X and therefore not intelligent is not a useful claim to make without both proof that it is X and proof that X is insufficient. You could say brains are interconnected cells that send pulses at intervals dictated by a combination of the pulses they sense, and that there is nothing intelligent about that. The premises must be true and you have to demonstrate that the conclusion follows from those premises. For the record, I think your premises are false and your conclusion doesn't follow.
Without a proof you could hypothesise reasons why such a system might not be intelligent and come up with an example of a task that no system that satisfies the premises could accomplish. While that example is unsolved the hypothesis remains unrefuted. What would you suggest as a test that shows a problem that could not be solved by such a machine? It must be solvable by at least one intelligent entity to show that it is solvable by intelligence. It must be undeniable when the problem is solved.
> The AIs that suffer from this problem have difficulty counting.
Nope, it's not a counting problem. It's a reasoning problem. Thing is, no matter how much hype they get, the AIs have no reasoning capabilities at all, and they can fail in the silliest ways. Same as with Larry Ellison: don't fall into the trap of anthropomorphizing the AI.
Is that like 80% LLM slop? The allusion to failures to improve productivity in competent developers was cited in the initial response.
The Strawberry test exposes one of the many subtle problems inherent to the tokenization approach LLMs are built on.
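For illustration, here is a minimal sketch (assuming Python with the tiktoken package, and the cl100k_base encoding used by several OpenAI models) of why letter-level questions are awkward for token-level systems. The exact token split is illustrative and depends on the encoding:

    # Minimal sketch: a BPE tokenizer chunks text into multi-character tokens.
    # Assumes `pip install tiktoken`; cl100k_base is one example encoding.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")

    # The model consumes integer IDs, not letters; printing the byte chunks
    # shows why "count the R's" is not directly visible at this level.
    for tok in tokens:
        print(tok, enc.decode_single_token_bytes(tok))

    # Counting characters is trivial when you do operate on letters:
    print("strawberry".count("r"))  # -> 3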
The clown car of PhDs may be able to entertain the venture-capital folks for a while, but eventually a VR girlfriend chat-bot convinces a kid to kill themselves, like last year.
Again, cognitive development, like ethics development, is currently impossible for LLMs, as they lack any form of intelligence (artificial or otherwise). People have patched directives into the models, but these weights are likely statistically insignificant given the cultural sarcasm in the data sets.
You suspect my words of being AI generated while at the same time arguing that AI cannot possibly reason.
It seems like you see AI where there is none; this compromises your ability to assess the limitations of AI.
You say that LLMs cannot have any form of intelligence, but for some definitions of intelligence it is obvious they do. Existing models are not capable in all areas, but they have some abilities. You are asserting that they cannot be intelligent, which implies that you have a different definition of intelligence and that LLMs will never satisfy that definition.
What is that definition for intelligence? How would you prove something does not have it?
That is a very open-ended detractor question, and is philosophically loaded with taboo violations of human neurology. i.e. it could seriously harm people to hear my opinion on the matter... so I will insist I am a USB-connected turnip for now... =)
"How would you prove something does not have it?"
A receiver operating characteristic (ROC) no better than chance, within a truly randomized data set. i.e. a system incapable of knowing how many Rs are in "Strawberry" at the token level... is also inherently incapable of understanding what a strawberry means in the context of perception (currently not possible for LLMs).
> A receiver operating characteristic (ROC) no better than chance, within a truly randomized data set. i.e. a system incapable of knowing how many Rs are in "Strawberry" at the token level... is also inherently incapable of understanding what a strawberry means in the context of perception (currently not possible for LLMs).
This is just your claim, restated. In short it is saying they don't think because they fundamentally can't think.
There is no support as to why this is the case. Any plain assertion that they don't understand is unprovable, because you can't directly measure understanding.
Please come up with just one measurable property that you can demonstrate is required for intelligence that LLMs fundamentally lack.
We are at a logical impasse... i.e. a failure to understand that the noted ROC curve is often a metric that matters in ML development, and that LLMs are trivially broken at the tokenization layer:
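For what it's worth, here is a minimal sketch (assuming numpy and scikit-learn) of the "no better than chance" criterion: scoring a truly randomized data set should produce an ROC AUC of roughly 0.5, i.e. indistinguishable from guessing:

    # Minimal sketch of the "ROC no better than chance" criterion.
    # Assumes numpy and scikit-learn are installed.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=10_000)  # truly randomized labels
    y_score = rng.random(size=10_000)         # scores carrying no real signal

    # AUC ~= 0.5: the classifier cannot be distinguished from chance.
    print(roc_auc_score(y_true, y_score))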
Note, introducing a straw-man argument and/or bot slop on an unrelated topic is silly. My anecdotal opinion does not really matter on the subject of algorithmic performance standards. yawn... super boring like ML... lol
(Shrug) If you're retired or independently wealthy, you can afford that attitude. Hopefully one of those describes you.
Otherwise, you're going to spend the rest of your career saying things like, "Well, OK, so the last model couldn't count the number of Rs in 'Strawberry' and the new one can, but..."
Personally, I dislike being wrong. So I don't base arguments on points that have a built-in expiration date, or that are based on a fundamental misunderstanding of whatever I'm talking about.
Every model is deprecated in time if science is done well, and hopefully replaced by something more accurate. There is no absolute right/correctness unless you are a naive child under 25 cheating on structured homework.
The point was there is nothing intelligent (or AI) about LLMs except the person fooling themselves.
In general, most template libraries already implement the best available algorithms from the 1960s, tuned with architecture-specific optimizations. Knowing when each finite option is appropriate takes a bit of understanding/study, but gives results far quicker than fitting a statistically salient nonsense answer. Several studies of senior developers are already available, and they show LLMs provide zero benefit to people who know what they are doing.
Note, I am driven by having fun, rather than some bizarre irrational competitiveness. Prove your position, or I will assume you are just a silly person or chat bot. =3
I have no position on whether or not CamperBob is a chat-bot, but they are definitely not being silly. Their point, as I take it, is that it's dangerous to look at the state of "AI" as it is today and then ignore the rate of change. To their stated point from above:
> Otherwise, you're going to spend the rest of your career saying things like, "Well, OK, so the last model couldn't count the number of Rs in 'Strawberry' and the new one can, but..."
That's a very important point. I mean, it's not guaranteed that any form of AI is going to advance to the point that it starts taking jobs from people like us, but if you fail to look forward, project a little, and imagine what they could do with another year of progress... or two years... or five? I posit that that kind of myopia could leave one very under-prepared for the world one lands in.
> The point was there is nothing intelligent (or AI) about LLMs except the person fooling themselves.
Sure. The "AI Effect". Irrelevant. It doesn't matter how the machine can do your job, or whether or not it's "really intelligent". It just matters that if it can create more value, more cheaply, than you or I, we are going to wind up like John Henry. Who, btw, for anybody not familiar with that particular bit of folklore "[won the race against the machine] only to die in victory with a hammer in hand as his heart gave out from stress."
The limitations of tokenization do not stop with LLMs, it seems, for Bob.
Please don't down-vote the kid's karma, as for me it is more important that people feel comfortable having conversations (especially when they are almost 99% sure I'm a turnip connected to a USB port). =3
Where do you see anything about "excitement" about anything? Quit making up bullshit strawman arguments and deal with the issue in a realistic way already. Sheesh.
I'm not arguing for any specific outcome, mind you. But a refusal to acknowledge "rate of change" effects, and to assume that the future will be like today, is incredibly short-sighted.
Speculative fiction is entertaining, but not based in reality...
"they are definitely not being silly", that sounds like something a silly person would say. =)
" I posit that that kind of myopia could leave one very under-prepared for the world one lands in." The numerous study data analysis results says otherwise... Thus, still speculative hype until proven otherwise.
Not worried... initially suckered into it as a kid too... then left the world of ML years later because it was always boring, lame, and slow to change. lol =3
"they are definitely not being silly", that sounds like something a silly person would say. =)
Ya know, it's fine to disagree with something. But hand-wavy, shallow dismissals of what someone has to say, with no willingness to even attempt to engage with the content on a rational basis, is unbecoming.
I can't help but have the thought that the Joel_McKay in this conversation is itself an LLM that has been prompted to flippantly disregard and downplay mentions of AI, and LLMs specifically.
I'm not saying it is true, but I am saying it made the tone and content of his messages in this thread seem a lot more self-consistent and explainable when I re-read them with that context in mind. :-)
(@Joel_McKay: apologies for downplaying your sapience - human, LLM or otherwise.)
Then I provided instructions on how to present facts, and still await the data.
Then immature folks showed up to try to cow people with troll content.
I don't have to prove anything, as the evidence was already collected and reported in peer-reviewed journals. People just prefer to ignore the cited evidence that proves they are full of steaming piles of fictional hype. =3
There's your "speculative fiction" you seem so fond of.
Now, if your argument is no more than "who cares about benchmarks, 'AI' still isn't 'Real AI'" then all you're doing is repeating the 'AI Effect'[1] thing over and over again and refusing to acknowledge the obvious reality around you.
The AIs we have today, whether "real AI" or not, are highly capable in many important areas (and yes, far from perfect in others). But there is a starting point to talk about, and yes, there is reason to think in terms of "rate of change" unless you have some evidence to support a belief that AI progress has reached a maximum and will progress no further.
> I don't have to prove anything, as the evidence was already collected and reported in peer-reviewed journals.
Again, evidence for what, exactly? What are you even claiming? All I see Bob claiming, and what I support him(?) in, is the idea that there is legit reason to worry about the economic impact of AI in the near('ish?) future.
Indeed, I gather you did not comprehend the thread's topics, and instead introduced a straw-man, arguing that at some point in the future LLM proponents will be less full of steaming piles of fictional hype.
Assertions:
1.) LLMs mislabeled as “AI” are dangerous to students, biasing them with subtle nonsense; a VR girlfriend chat-bot convinced a kid to kill themselves. Again, the self-referential nature of arguing that the trends will continue toward AGI is nonscientific reasoning (a.k.a. moving the goalposts because the “AI” buzzword lost its meaning to marketing hype), but this is economically reasonable nonsense rhetoric.
2.) Software developers can be improved or replaced with LLMs. This was proven to be demonstrably false for experienced people.
3.) LLMs are intelligent or will become intelligent. This set of arguments show a fundamental misunderstanding of what LLMs are, and how they function.
4.) Joel may be a USB-connected turnip. While I can’t disprove this self-presented insight, it may be true at some point in the future.
I still like you, even if at some point you were reprogrammed to fear your imagination. =3
Failure to back up the assertion about "AI" existing in LLMs means there is no meaningful conversation to be had, but I offered to wait for a coherent argument in a time-bound manner. =3
Self-driving cars are a social and political problem, not a technical one. If it were a technical problem, it would have been considered largely solved even in the pre-2017 era of ML.
Imagine a tramway train car that branched its rail-line at every intersection, person, animal, lane, or obstacle it detected.
1. What would that rail system look like?
2. How reliable would that service become after 6^(4+p+a+l) branches per km (see the toy sketch after this list)?
3. Given #2, how much computing power is needed to evaluate that always-mutating SLAM environment state?
4. Ok... now throw out GPS (doesn't work in cities), Lidar (doesn't work in direct sunlight), and machine-vision cameras (fooled by weather and environmental surfaces)...
5. People's wishful thinking tends to hijack common sense when someone else pays the price.
Easier to redefine what "autonomous vehicle" means with "levels", rather than to recognize how difficult the problem actually is inside an unconstrained system.
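To make the combinatorial point concrete, here is a toy sketch in plain Python. The values of p, a, and l (hypothetical counts of persons, animals, and lanes detected per km) and the six response options are illustrative assumptions, not figures from any real system:

    # Toy sketch of the 6^(4+p+a+l) branches-per-km formula from the list above.
    # p, a, l are hypothetical per-km detection counts; 6 assumed response options.
    def branches_per_km(p: int, a: int, l: int, options: int = 6) -> int:
        """Branches per km if each detected entity multiplies the
        state space by `options` possible responses."""
        return options ** (4 + p + a + l)

    for p, a, l in [(0, 0, 1), (2, 0, 2), (5, 1, 3)]:
        print((p, a, l), branches_per_km(p, a, l))
    # Even modest scene complexity blows past what any planner can
    # exhaustively evaluate in real time.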
I would never claim to know anything about the subject, but I did help build working platforms when I was more interested in robotics for a time.
Making them "safe" is a whole different problem domain. =3