It's not magic, though. If AI can do the work of a human, it can do the work of a human. It's a trivial statement, and the inability to see it is a hard cope.

Are you going to take a bet that "AI won't be able to do X in 10 years" for some X that people can learn to do now? If you're unwilling to bet, then you believe AI would plausibly be able to perform any human job, including the job of an AI researcher.




At the end of the day it can only get as far as the data it has. Let's say you want to make a drug that inhibits a protein. The AI can generate plausible drugs, but to see if one actually works you need to test it in the lab, then in animals, and so on. You could have an AI with a perfect understanding of how a drug interacts with a protein, but such data isn't available in the first place. Without it you can't simply scale GPT-type models.


‘Doing the work of a human’ is something that is very hard to define or quantify in many cases. You sound very confident, but you don’t address this at all; you simply assume it’s a given.

Relevant: https://www.jaakkoj.com/concepts/doorman-fallacy


Yeah, I think we're in kind of a vicious recursive cycle of imperfect-metric reinforcement (reward hacking, I suppose, though often implemented in economics as well as code) rather than one of recursive self-improvement in a more holistic sense. Optimization is really good at quickly turning small problems of this nature into big ones. A toy sketch of what I mean is below.
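
To make that concrete, here's a minimal, made-up sketch (mine, not anything from OpenAI) of proxy-metric optimization: a greedy optimizer keeps improving the reward it can measure while the true objective it was meant to stand in for quietly gets worse. The functions and numbers are arbitrary placeholders.

    # Toy reward-hacking sketch: the optimizer only sees proxy_reward,
    # which counts keywords; the "real" goal also penalizes length.
    def true_quality(summary_len, keyword_count):
        # Hypothetical real objective: concise text, keywords help only a little.
        return 10 * min(keyword_count, 3) - summary_len

    def proxy_reward(summary_len, keyword_count):
        # Imperfect metric actually optimized: keyword stuffing always pays.
        return keyword_count

    state = {"summary_len": 50, "keyword_count": 1}
    for _ in range(20):
        # Greedy hill-climbing on the proxy: add a keyword, text gets longer.
        candidate = {"summary_len": state["summary_len"] + 5,
                     "keyword_count": state["keyword_count"] + 1}
        if proxy_reward(**candidate) > proxy_reward(**state):
            state = candidate

    print("proxy reward:", proxy_reward(**state))   # keeps rising
    print("true quality:", true_quality(**state))   # has collapsed

The point isn't the specific numbers; it's that any optimizer pointed at a measurable stand-in for the thing you actually want will happily widen the gap between the two.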


I don't claim it's impossible, just that there isn't a clear path from what exists now to that reality, and that the explanation presented by the above commenter (and, I suppose, OpenAI's website) does not clarify what they think the path is.


What will AutoGPT look like if we have 100x more compute and another 10 years of research breakthroughs? It will be pretty damn good. If it can do the cognitive work of an AI researcher, well, there's your recursive self-improvement, at least on the research front (not so much on the hardware/energy front, physical constraints are trickier and will slow down progress in practice).

I don't know the exact path there, because if I did I'd publish and win the Turing Award. But it seems to be a plausible outcome in the medium-term future, at least if you go with Hinton's view that current methods are capable of understanding and reasoning, and not LeCun's view that it's all a dead end.


I won't comment on whether I believe those researchers hold those views as you describe them, but as you describe them, I think both descriptions of the state of AI research are untrue. The capabilities demonstrated by transformer models seem necessary but not sufficient for understanding and reasoning, meaning that while they're not necessarily a "dead end", it is far from guaranteed that adding more compute will get them there.

Of course, if we allow for any arbitrary "research breakthrough" to happen, then any outcome that's physically possible could happen, and I agree with you that superhuman artificial intelligence is possible. Nonetheless, it remains unclear what research breakthroughs need to happen, how difficult they will be, and whether handing a company like OpenAI lots of money and chips will get that done. It remains even more unclear whether that is a desirable outcome, given that the priorities of that company seem to shift considerably each time its budget is increased (as is the norm in this economic environment, to be clear; it is not a problem unique to OpenAI).

Obviously OpenAI has every reason to claim that it can do this and that it will use the results in a way designed to benefit humanity as a whole. The people writing this promotional copy and the people working there may even believe both of these things. However, based on the information available, I don't think the first claim is credible. The second claim becomes less credible the more of the company's original mission gets jettisoned as its priorities align with those of its benefactors, which we have seen happen rather rapidly.


We can reason about it without knowing the path. E.g., somebody in the 1950s could say "if you have enough compute, you can do photorealistic computer graphics". If you asked them how to build a GPU, they wouldn't know. Their statement is about possibility in principle.


Yes, and there are lots of predictions about when that would happen that turned out to be very wrong. Even when there is a clear path and specific people assigned to a task, it is famously difficult for those people to correctly estimate how long it will take. Forgive me for being skeptical of random laypeople giving me timelines for an unknown unknown, based on an ill-specified objective, for work on something that currently carries a lot of marketing hype.



