The word "smarter" masks some of the complexity here. If you define it as "higher performance on a task widely considered to require intelligence", then we've had computers that are smarter than humans for at least decades. If you define it as "higher performance on every task widely considered to require intelligence", then I'll take that bet, please.
We have beaten humans at every single Atari game by at least an order of magnitude, and we do so consistently; it only took about 5 years from the first solution that provided tangible results. It has only been 8 or 9 years since GPGPUs were first used for ML research.
We are also seeing models that are able to generate code given prompts.
Given enough representational power, I don't see why a model that learns to solve games can't figure out how to generate good enough subroutines for itself.
So I am taking the other side of this bet.
We will see ML models surpass humans at every task in 30 or so years.
I will find you and buy you dinner in October of 2051.
> We have beaten humans at every single Atari game by at least an order of magnitude
There's mechanical skill involved; it's not purely intelligence.
> We are also seeing models that are able to generate code given prompts.
This has been discussed a lot, but the generated code is nowhere close to good enough for large projects where you really need intelligence.
> Given enough representational power, I don't see why a...
Except that it's not linear scaling. The larger NLP models consume absurdly large resources; it's not straightforward to just "get enough representational power".
Also, most models fail to adapt to new tasks outside of their narrow training scope, which is a massive problem. Even if you make the models large, you will find that getting data covering all the edge cases is exponentially expensive.
> This has been discussed a lot, but the generated code is nowhere close to good enough for large projects where you really need intelligence.
> Except that it's not linear scaling. The larger NLP models consume absurdly large resources; it's not straightforward to just "get enough representational power".
When you let maximizers run wild, as in reinforcement learning, they will find hidden solutions, and when a model can express an action as a dense representation, it can also use code-generation models with much more precision than we do, because it can skip the encoding step.
> Also, most models fail to adapt to new tasks outside of their narrow training scope, which is a massive problem. Even if you make the models large, you will find that getting data covering all the edge cases is exponentially expensive.
We are still only 6-7 years in. DeepMind's latest paper on general agents has them generalizing to new tasks relatively easily. It's still not there, but we are miles ahead of where we were 5 years ago.
>> We have beaten humans at every single Atari game by at least an order of magnitude, and we do so consistently; it only took about 5 years from the first solution that provided tangible results. It has only been 8 or 9 years since GPGPUs were first used for ML research.
Actually, only the 57 games in the Arcade Learning Environment, not "every single Atari game". It's an impressive achievement and there's no need to oversell it.
If AI surpasses humans at either comedy or film (by total hours of content viewed, or some other metric you propose) by January 2050, I'll buy you a fake-meat dinner.
Here's a possibly related bet I have with a friend: "If, in ten years' time (from March 2021), self-driven cars outnumber manually-driven cars on US roadways, I will buy you dinner. (And vice versa: he buys if they don't.)"
I think it's a substantially harder task than playing chess for assessing whatever it is we mean by 'intelligence', and my bias is that the difficulties remain under-appreciated by the technical optimists among us. But I could be wrong.
I think you made an insanely good bet, considering the average age of all vehicle types in the US except SUVs is greater than 10 years. Even if every new car sold in the US were fully self-driving, it would be a tight bet, assuming current trends hold.
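To make the turnover math concrete, here is a minimal back-of-the-envelope sketch in Python. The fleet-size and annual-sales figures are rough assumptions of mine (not numbers from this thread), and it ignores scrappage and fleet growth:

    # Back-of-the-envelope fleet turnover: how much of the US fleet could be
    # self-driving by 2031 if some share of new sales were fully autonomous?
    # All figures below are rough assumptions, not data from this thread.
    fleet_size = 280e6                # assumed registered vehicles in the US
    annual_sales = 15e6               # assumed new vehicles sold per year
    years = 10                        # March 2021 -> March 2031
    autonomous_share_of_sales = 1.0   # extreme case: every new car sold is self-driving

    autonomous_fleet = annual_sales * years * autonomous_share_of_sales
    print(f"self-driving share of fleet: {autonomous_fleet / fleet_size:.0%}")
    # ~54% in the extreme case; any realistic share of new sales keeps
    # self-driving cars well under half the fleet by 2031.

Even granting every new sale to self-driving cars, they only just outnumber the rest of the fleet, which is why the bet looks strong.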
I'd probably be willing to concede to a weaker form of the bet, say if more than 20% of taxi service miles driven were by self-driving cars. But we don't need to tell my friend that.
That sounds like a commercial-acceptance bet rather than a technology one, but yeah, I would take the same bet. But I'd bet that by 2031 driving a car will not be the peak of AI.
I take commercial acceptance as a proxy for the acceptance and practicality of artificial intelligence at the 'everyday' task of driving. I would take the absence of commercial acceptance as evidence that artificial intelligence is not up to the 'everyday' task of driving.
> I would take the absence of commercial acceptance as evidence that artificial intelligence is not up to the 'everyday' task of driving.
The most interesting thing Tesla is doing to make "acceptable" FSD possible is to open an insurance company.
So commercial acceptance is a proxy for capability, but it is not immune to regulatory moat-building (or, alternatively, going the other way, like forcing 80+ year olds to drive enhanced cars).
There are some ways around it, but that problem isn't a technology one.
Momentum is still a thing regardless of where the technology is.
The current fastest production car in the world (and of all time) is an electric car, but most cars are still not electric. That doesn't mean gas cars are "better cars" than electric ones.
Sure, there are technological and commercial (and regulatory, and, and, and...) aspects to the bet. The thing I'm after in my car bet is that I'm using 'replaces humans at something that requires intelligence' as the bar to clear for what you describe as 'smarter computers in our lifetime.'
Can you give a purer example of a bet that would demonstrate what you believe here?
The special part is the lived experience of social animals and everything that goes with that. Stuff that we have to work hard to get machines to understand, since they're not conscious biological creatures that have to survive, and thus we have to train or program them in a way that is somewhat different from being born and raised as a human.
In 1979 Douglas Hofstadter wrote Gödel, Escher, Bach, in which he predicted that computers would eventually be able to beat humans at chess, but that those computers would say, "I'm bored of chess now, I want to talk about poetry." The history of AI has been the hard work of really describing a problem like playing chess or reading letters or recommending songs and then applying AI techniques to it, but I don't think anybody has ever tried to work on a computer that discovers new problems to solve.
We don't know yet if there is anything special about human intelligence, or what the limitations of general intelligence might be. Are animals and plants a slower form of the same intelligence, or is there something qualitatively different? Can rocks and liquids be considered intelligent, since they led to life, which eventually led to us?
Current AI/ML does not appear to have any of the properties of "life intelligence": for example, you can put an animal or even a plant into an unfamiliar situation and it will often figure out a way to survive. AI/ML in the evaluation phase is often pretty dumb and needs new training if anything changes. Reinforcement learning is probably the closest, but it still seems pretty limited.
I don't think the current approaches will lead to general intelligence. However, I do suspect that when the right theoretical breakthrough is made, AI will rapidly become superhuman and humans will not be in control of what it does; it will simply iterate too quickly for us to compete in any way intellectually.
I don't think that's a useful framing. There is nothing special about stars, either, but the chance of human engineering resulting in some power generator that can equal the output of a star any time soon is pretty low. The fact that nature has solved a problem means it can be solved, but it doesn't mean we can trivially figure out how to do it in a manufactured device.
Aside from that, there is no "we" here. Some people reading this are 20 and some are 70. The scope of what it means for something to happen in one's lifetime is quite different for those two groups.
To help improve the framing a little more, with rough orders of magnitude:
* 10^10 watts: electrical power generation of the Itaipu Dam[0]
* 10^26 watts: luminosity of the Sun[0]
* 10^10 neurons simulated on the Japanese supercomputer K last year[1]
* 10^10 neurons in the human brain[2]
I'm not claiming that a simulation of the human brain with equivalent capability is just around the corner, only that it is misleading to point to the scale difference between artificial and natural energy sources with the implication that brain simulation is beyond our reach in the same way.
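For what it's worth, here is a minimal sketch of that scale comparison, taking the rough figures listed above at face value:

    import math

    # Rough orders of magnitude quoted above, taken at face value.
    itaipu_watts = 1e10        # electrical power generation of the Itaipu Dam
    sun_watts = 1e26           # luminosity of the Sun
    simulated_neurons = 1e10   # neurons simulated on the K supercomputer
    brain_neurons = 1e10       # neurons in the human brain

    # Gap between the artificial and the natural scale, in orders of magnitude.
    power_gap = math.log10(sun_watts / itaipu_watts)
    neuron_gap = math.log10(brain_neurons / simulated_neurons)
    print(f"power gap:  {power_gap:.0f} orders of magnitude")   # ~16
    print(f"neuron gap: {neuron_gap:.0f} orders of magnitude")  # ~0

A sixteen-order-of-magnitude gap versus essentially none is the difference the star analogy glosses over.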
I would not place a bet on replicating the result of billions of years of iteration and refinement, however under-optimized that process was. (Who knows, it might even be the most efficient one.)