
>But superhuman AI seems now only few years away

Seems unreasonable. You are afraid because marketing gurus like Altman made you believe that a frog that can make a bigger leap than before will be able to fly.


Plus it’s not even defined what superhuman AI means. A calculator sure looked superhuman when it was invented. And it is!

Another analogy is breeding and racial biology, which used to be all the hype (including in academia). The fact that humans could create dogs from wolves looked almost limitless with the right (wrong) glasses. What we didn't know is that the wolf had a ton of genes that played a magic trick: a diversity we couldn't perceive was there all along, in the genetic material, and we just helped make it visible. I.e., a game of diminishing returns.

Concretely for AI, it has shown us that pattern matching and generation are closely related (well, I have a feeling this wasn't surprising to neuroscientists). And also that they're more or less domain agnostic. However, we don't know whether pattern matching alone is “sufficient”, and if not, what exactly “the rest” is and how hard it is. AI to me feels like a person who had a stroke, concussion or some severe brain injury: it can appear impressively able in a local context, but they forgot their name and how they got there. They're just absent.


No, because we have seen massive improvements in AI over the last few years, and all the evidence points to this progress continuing at a fast pace.


I think the biggest fallacy in this type of thinking is that it projects all AI progress into a single quantity of “intelligence” and then proceeds to extrapolate that singular quantity into some imagined absurd level of “superintelligence”.

In reality, AI progress and capabilities are not so reducible to singular quantities. For example, it’s not clear that we will ever get rid of the model’s tendencies to just produce garbage or nonsense sometimes. It’s entirely possible that we remain stuck at more incremental improvements now, and I think the bogeyman of “superintelligence” needs to be much more clearly defined rather than by extrapolation of some imagined quantity. Or maybe we reach a somewhat human-like level, but not this imagined “extra” level of superintelligence.

Basically the argument is something to the effect of “big will become bigger and bigger, and then it will become like SUPER big and destroy us all”.


Extrapolation of past progress isn't evidence.


You don't have to extrapolate. There's a frenzy of talent being applied to this problem, it's drawing more brainpower the more progress that is made. Young people see this as one of the most interesting, prestigious, and best-paying fields to work in. A lot of these researchers are really talented, and are doing more than just scaling up. They're pushing at the frontiers in every direction, and finding methods that work. The progress is broadening; it's not just LLMs, it's diffusion models, it's SLAM, it's computer vision, it's inverse problems, it's locomotion. The tooling is constantly improving and being shared, lowering the barrier to entry. And classic "hard problems" are yielding in the process. It's getting hard to even find hard problems any more.

I'm not saying this as someone cheering this on; I'm alarmed by it. But I can't pretend that it's running out of steam. It's possible it will run out of money, but even if so, only for a while.


The AI bubble is already starting to burst. The Sam Altmans of the world over-sold their product and over-played their hand by suggesting AGI is coming. It's not. What they have is far, far, far from AGI. "AI" is not going to be as important as you think it is in the near future; it's just the current tech buzz, and there will be something else that takes its place, just like when "web 2.0" was the new hotness.


It's gonna be massive because companies love to replace humans at any opportunity and they don't care at all about quality in a lot of places.

For example, why hire any call center workers? They already outsourced the jobs to the lowest bidder and their customers absolutely hate it. Fire those people and get some AI in there so it can provide shitty service for even cheaper.

In other words, it will just make things a bit worse for everyone but those at the very top. Usual shit.


This is getting too abstract. The core issue of LLMs that others have pointed out is the lack of accuracy, which is how they are supposed to work: in a proper chatbot system they would be paired with a knowledge representation system.

We've been trying to build a knowledge representation system powerful enough to capture the world for decades, but this is something that goes more into the foundations of mathematics and philosophy than it does into the majority of engineering research. You need a literal genius to figure that out. The majority of those "talented" people and funding aren't doing that.


> There's a frenzy of talent being applied to this problem, it's drawing more brainpower the more progress that is made. Young people see this as one of the most interesting, prestigious, and best-paying fields to work in. A lot of these researchers are really talented, and are doing more than just scaling up. They're pushing at the frontiers in every direction, and finding methods that work.

You could have seen this exact kind of thing written 5 years ago in a thread about blockchains.


Yes, but I didn't write that about blockchain five years ago. Blockchains are the exact opposite of AI in that the technology worked fine from the start and did exactly what it said on the tin, but the demand for that turned out to be very limited outside of money laundering. There's no doubt about the market potential for AI; it's virtually the entire market for mental labor. The only question is whether the tech can actually do it. So in that sense, the fact that these researchers are finding methods that work matters much more for AI than for blockchain.


Really? Because I remember an endless stream of people pointing out problems with blockchain and crypto and being constantly assured that it was being worked on, would be solved, and that crypto was inevitable.

For example, transaction costs/latency/throughput.

I realize the conversation is about blockchain, but I say my point still stands.

With blockchain the main problem was always "why do I need this?", and that's why it died without being the world-changing, zero-trust, amazing technology we were promised and constantly told we need.

With LLMs the problem is they don't actually know anything.


Amount of effort applied to a problem does not equal guarantee of problem being solved. If a frenzy of talent was applied to breaking the speed of light barrier it would still never get broken.


Your analogy is valid, for the world in which humans exceed the speed of light on a casual stroll.


And the message behind it still applies even in the universe where they don't.


I mean, a frenzy of talent was applied to breaking the sound barrier, and it broke, within a very short time. A frenzy of talent was applied to landing on the moon and that happened too, relatively quickly. Supersonic travel also happens to be physically possible under the laws of our universe. We know with confidence that human-level intelligence is also physically possible within the laws of our universe, and we can even estimate some reasonable upper bounds on the hardware requirements that implement it.

So in that sense, if we're playing reference class tennis, this looks a lot more like a project to break the sound barrier than a project to break the light barrier. Is there a stronger case you can make that these people, who are demonstrating quite tangible progress every month (if you follow the literature rather than just product launches), are working on a hopelessly unsolvable problem?


I think it looks more like a speed of light than a speed of sound problem.


I do think the digital realm, where the cost of failure and iteration is quite low, will proceed rapidly. We can brute-force our way to success with a lot of compute, and the cost of each failed attempt is low. Most of these models are just large brute-force probabilistic models in any event - efficient AI has not yet been achieved, but maybe that doesn't matter.

Not sure if that same pace applies to the physical realm, where costs are high (resources, energy, pollution, etc.) and the risk of getting it wrong could mean a lot of negative consequences. E.g. I'm handling construction materials and the robot trips on a barely noticeable rock, leaking paint, petrol, etc. onto the ground, costing not just the initial price of the materials but the cleanup as well.

This creates a potential future outcome (if I can be so bold as to extrapolate, with the dangers that has) where this "frenzy of talent", as you put it, will innovate themselves out of a job, with some cashing out in the short term and closing the gate behind them. What's left, ironically, is the people who can sell, convince, manipulate and work in the physical world, at least for the short and medium term. AI can't fix the scarcity of the physical that easily (e.g. land, nutrients, etc.). Those people who still command scarcity will get the main rewards of AI in our capital system, as value/economic surplus moves via relative price adjustments to the resources that are scarce and hold an advantage.

Typically people had three different strengths - physical (strength and dexterity), emotional IQ, and intelligence/problem solving. The new world of AI, at least in the medium term (10-20 years), will tilt value away from the latter and toward the former (physical) - IMO a reversal of the last century of change. It may make more sense to get good at gym class and get a trade rather than study math in the future, for example. Intelligence will be in abundance and become a commodity. This potential outcome alarms me not just from a job perspective, but in terms of fake content, lack of human connection, lack of value of intelligence in general (you will find people with high IQs lose respect from society in general), social mobility, etc. I can see a potential return to the old world where lords who command scarcity (e.g. landlords) command peasants again - reversing the gains of the industrial revolution as an extreme case, depending on general AI progress (not LLMs). For people whose value is more in capital or land vs labor, AI seems like a dream future IMO.

There's potential good here, but sadly I'm alarmed because the likelihood that the human race aligns to achieve it is low (the tragedy-of-the-commons problem). It is much easier, and more likely, that certain groups use it to target people who are of economic value now but have little power (i.e. the middle class). The chance of new weapons, economic displacement, fake news, etc. for me trumps a voice/chat bot and a fancy image generator. The "adjustment period" is critical to manage, and I think climate change and other broader issues sadly tell us how likely we are to succeed at that.


Do you expect the hockey-stick graph of technological development since the industrial revolution to slow? Or that it will proceed, only without significant advances in AI?

Seems like the base case here is for the exponential growth to continue, and you'd need a convincing argument to say otherwise.


That's no guarantee that AI continues advancing at the same pace, and no one has been arguing that overall technological progress will slow.

Refining technology is easier than the original breakthrough, but it doesn't usually lead to a great leap forward.

LLMs were the result of breakthroughs, but refining them isn't guaranteed to lead to AGI. It's not guaranteed (or likely) to improve at an exponential rate.


Which chart are you referencing exactly? How does it define technological development? It's nearly impossible for me to discuss a chart without knowing what the axes refer to.

Without specifics, all I can say is that I don't acknowledge any measurable benefits of AI (in its current state) in real-world applications. So I'd say I am leaning towards the latter.


Past progress is evidence for future progress.


Might be an indicator, but it isn't evidence.


Not exactly. If you focus in on a single technology, you tend to see rapid improvement, followed by slower progress.

Sometimes this is masked by people spending more due to the industry becoming more important, but it tends to be obvious over the longer term.


That's probably what every self-driving car company thought ~10 years ago or so; everything was moving so fast for them back then. Now it doesn't seem like we're getting close to a solution for this.

Surely this time it's going to be different, AGI is just around a corner. /s


Would you have predicted in the summer of 2022 that a GPT-4-level conversational agent was a possibility in the next 5 years? People have tried for the past 60 years and failed. How is this time not different?

On a side note, I find this type of critique of what the future of tech might look like the most uninteresting one. Since tech by nature inspires people about the future, all tech gets hyped up. All you gotta do then is pick any tech, point out that people have been wrong, and ask how likely it is that this time is different.


Unfortunately, I don't see any relevance in that argument. If you consider GPT-4 to be a breakthrough, then sure, single breakthroughs happen; I am not arguing with that. Actually, the same thing happened with self-driving: I don't think many people expected Tesla to drop FSD publicly back then.

Now, a chain of breakthroughs happening in a small timeframe? Good luck with that.


We have seen multiple massive AI breakthroughs in the last few years.


Which ones are you referring to?

Just to make it clear, I see only one breakthrough [0]. Everything that happened afterwards is just an application of this breakthrough with different training sets / to different domains / etc.

[0]: https://en.wikipedia.org/wiki/Attention_Is_All_You_Need


Autoregressive language models, the discovery of the Chinchilla scaling law, MoEs, supervised fine-tuning, RLHF, whatever was used to create OpenAI o1, diffusion models, AlphaGo, AlphaFold, AlphaGeometry, AlphaProof.


They are the same breakthrough applied to different domains, I don't see them as different. We will need a new breakthrough, not applying the same solution to new things.


If you wake up from a coma and see the headline "Today Waymo has rolled out a nationwide robotaxi service", what year do you infer that it is?


Does it though? I have seen the progress basically stop at "shitty sentence generator that can't stop lying".


The evidence I've been seeing is that progress with LLMs has already slowed down and that they're nowhere near good enough to replace programmers.

They can be useful tools to be sure, but it seems more and more clear that they will not reach AGI.


They are already above average human level on many tasks, like math benchmarks.


They really aren't better than humans at math or logic; they are good at the benchmarks because they are hyper-optimized for the benchmarks lol. But if you ask LLMs simple logical questions they still get them wrong all the time.


Yes, there are certain tasks they're great at, just as AI has been superhuman in some tasks for decades.


But now they are good or even great at way more tasks than before because they can understand and use natural languages like English.


Yeah, and they're still underdelivering on their hype, and the improvements have vastly slowed down.


So are calculators …


If you ignore the part where their proofs are meandering drivel, sure.


Even if you don't ignore this part they (e.g. o1-preview) are still better at proofs than the average human. Substantially better even.


But that does not prove anything. We don't know where we are on the AI-power scale currently. "Superintelligence", whatever that means, could be 1 year or 1000 years away at our current progress, and we wouldn't know until we reach it.


50 years ago we could rather confidently say that "Superintelligence" was absolutely not happening next year, and was realistically decades away. If we can say "it could be next year", then things have changed radically and we're clearly a lot closer - even if we still don't know how far we have to go.

A thousand years ago we hadn't invented electricity, democracy, or science. I really don't think we're a thousand years away from AI. If intelligence is really that hard to build, I'd take it as proof that someone else must have created us humans.


Umm, customary, tongue-in-cheek reference to McCarthy's proposal for a 10-person research team to solve AI in 2 months (over the summer) [1]. This was ~70 years ago :)

Not saying we're in necessarily the same situation. But it remains difficult to evaluate effort required for actual progress.

[1]: https://www-formal.stanford.edu/jmc/history/dartmouth/dartmo...


> If an elderly but distinguished scientist says that something is possible, he is almost certainly right

- Arthur C. Clarke

Geoffrey Hinton is a 76-year-old Turing Award* winner. What more do you want?

*Corrected by kranner


This is like a second-order appeal to authority fallacy, which is kinda funny.


Hinton says that superintelligence is still 20 years away, and even then he only gives his prediction a 50% chance. A far cry from the "few years" claim. You must be doing that "strawberry" thing again? To us humans, A-l-t-m-a-n is not H-i-n-t-o-n.


> superintelligence is still 20 years away, and even then he only gives his prediction a 50% chance

I don't know the details of Hinton's probability distribution. If his prediction is normally distributed with a mean of 20 years and a SD of 15, which is reasonable for such a difficult and contentious prediction, that puts over 10% of the probability in the next 3 years.
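A quick sketch of that arithmetic (reading "20 years" as a normal distribution with SD 15 is my assumption here, not anything Hinton said):

    # Assumption: model "20 years away" as Normal(mean=20, sd=15)
    # and ask how much probability mass falls within the next 3 years.
    from scipy.stats import norm

    p = norm.cdf(3, loc=20, scale=15)
    print(f"P(within 3 years) = {p:.1%}")  # ~12.9%, i.e. over 10%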

Is 10% a lot? For sports betting, not really. For Mankind's Last Invention, I would argue that it is.


You don't know because he did not say. He said 20 years, which is more than a few.


> Geoffrey Hinton is a 76 year old Nobel Prize winner.

Turing Award, not Nobel Prize


Thanks for the correction; I am undistinguished and getting more elderly by the minute.


Reality has now corrected my error, which was amongst the funniest possible outcomes.


Indeed! Your comment was the first thing I thought of when I heard the news, and I thought of replying too but assumed you might not have enabled notifications. Hilarious, all in all!


I'd like to see a study on this, because I think it is completely untrue.


When he said this was he imagining an "elderly but distinguished scientist" who is riding an insanely inflated bubble of hype and a bajillion dollars of VC backing that incentivize him to make these claims?


What are you talking about? How would Hinton be incentivized by money?


I'm talking about Altman.


It doesn't quite have the same ring to it: "If a young, distinguished business executive says something is possible, when that something greatly affects his bottom line..."


Wrong. I was extremely concerned in 2018 and left many comments almost identical to this one back then. This was based off of the first GPT samples that OpenAI released to the public. There was no hype or guru BS back then. I believed it because it was obvious. It was obvious then and it is still obvious today.


That argument holds no water because the grifters aren't the source of this idea. I literally don't believe Altman at all; his public words don't inspire me to agree or disagree with them - I just ignore them. But I also hold the view that transformative AI could be very close, because that's what many AI experts are also talking about from a variety of angles.

Additionally, when you're talking with certainty about whether transformative AI is a few years away or not, that's the only way to be wrong. Nobody is or can be certain; we can only have estimates at various confidence levels. So when you say "Seems unreasonable", that's being unreasonable.


> Because that's what many AI experts are also talking about from a variety of angles.

Wow, in that case I'm convinced. Such an unbiased group with nothing at all to gain from massive AI hype.


Flying is a good analogy. Superman couldn't fly, but at some point, when you can jump so far, there isn't much of a difference.


There is an enormous difference. Flying allows you to stop, change direction, make corrections, and target with a large degree of accuracy. Jumping leaves you at the mercy of your initial calculations. If you jumped in a way that you’ll land inside a volcano, all you can do in your last moments is watch and wait for your demise.


A volcano can't kill superman. Rebuttal rejected


Why do you have to use it? I don't get it. If you write your own book, you don't compete with anyone. If anyone finished The Winds of Winter for R.R. Martin using AI, nobody would bat an eye, obviously, as we have already experienced how bad a soulless story is when it drifts too far from what the author had built in his mind.


Same thing the hype bros told us 2 years ago; it won't happen.


Rarely using it at work; seems you are overestimating.


Is there a new drug we need to know about?


Life? I'm just thinking about what we can move on to now that the mundane tasks of life recede into the background. Things like artistry and craftsmanship, and exploration.


Ah yes, just like stocks can only go up. No one will feel like hit by a ton of bricks.


You are in a bubble, hyping this up way too far


Ah, the passionate binge-watchers and recreational drug users, who doesn't know them? Many people have no passion after work.


> Many people have no passion after work.

That’s trained / learned behaviour though. No kid grows up wanting to sit in a cubicle all day


It will be difficult to keep up proper levels of intelligence and education in humanity, because this time it is not only social media and its mostly negative impacts, but also tons of trash content generated by overhyped tools that will impact lots of people in a bad way. Some have already stopped thinking and instead consult the chat app under the guise of being more productive (whatever that means). Tough times ahead!


Buy more shovels buddy

