AI Is No Match for the Quirks of Human Intelligence (mitpress.mit.edu)
89 points by anarbadalov on Oct 4, 2021 | 81 comments



There's an entire genre of articles arguing that computers can never do X because X is ill-defined. Thing is, half of the time X is only ill-defined because the writer neglects to define X well, and for the other half, the fact that the writer can't define X well is no proof that X is impossible to define well.

Here, however, if we take the argument about representations in a technical sense, then it's just wrong, since a neural network's topology is itself data that we can fit along with the rest of the network.

Or, on a slightly more charitable interpretation: watching robots, from time to time they arrive at really surprising solutions, so at the very least one would need to distinguish between a numerical algorithm finding a surprising solution and 'insight.'

Now, I actually don't want to be too hard on the article; I think it is a genuinely interesting hodgepodge of facts about human cognition. I just think it should be rewritten without the "computers will never do X" thesis.


Completely agree. The history of this dates back to philosopher Hubert Dreyfus who, observing the halting progress of text recognition research at the time, made open-ended declarations that computers can't do X. He said computers couldn't compete at human levels in chess because chess required "insight." This claim did not age well.

I find these to be confused, hodgepodgey, not clearly centered on any coherent argument, and often just plain shallow and wrong while masquerading as deep insights. I am glad that, at least, the recent advances in AI are repudiating this genre, or at least forcing articles to add new caveats as to why recent advances don't count.


I don't necessarily disagree, but humans seem capable of solving problems even when they are poorly defined, by understanding some implicit context. Sure, if we encode all the context, the computer can solve it too, but isn't that fundamentally what the article is trying to convey? That we haven't really taught computers to read between the lines yet?


In my experience developing a custom chatbot, X = a business model or internal process that contains logic or a workflow.

In short, X = logic.


>> Psychologists have found that people make different choices when presented with the same alternatives, depending on how the alternatives are described.

Encapsulated into an old joke:

Two monks, both avid smokers, go to the pope for spiritual advice.

The first monk goes in, then after a while comes out looking dejected. The other monk asks him:

"What happened? What did he say?".

"He said no: we are not allowed to smoke during prayer".

The other monk smiles and nods knowingly. "Let me try" he says.

He goes in. After a while he comes out smiling triumphant.

"Wow, he said yes?" asks the other monk, amazed. "What did you say to him?"

"I said: Holy Father, are we allowed to pray while we smoke?".


> different choices when presented with the same alternatives

I've never been comfortable with the assumption from the social sciences that people don't change their preferences when confronted with seemingly unrelated extra options, the principle known as Independence of Irrelevant Alternatives[0].

Now it seems that there is an even weaker assumption, which perhaps should be called Independence of Relevant Alternatives, which is also not a reasonable axiom to hold. The assumption could be expressed as "A person's preference for X over Y guarantees their preference for X' over Y', whenever X' is an equivalent option to X and Y' is an equivalent option to Y."

[0] https://en.wikipedia.org/wiki/Independence_of_irrelevant_alt...


At what point does the current AI hype machine crash?

It feels like these startups all sell solutions to the top 20 or so companies, competing for a very limited market. It's table stakes for everybody else, an "all-or-nothing" achievement for AI.

By contrast, even a small amount of connectivity improved things dramatically for huge swaths of industry. That's how the internet started small and grew up.

AI seems to rely on being everything or being nothing. There's no small victories.


I can't speak to cloud deep learning, but edge NNs have two practical applications: anomaly detection and object defect detection. (Keyword spotting is completely stalled. I've been watching Arm-based companies work on keyword spotting for years and they are still at 80% accuracy in a real setting, which is like talking to an old person with hearing issues.) Anomaly detection (vibration analysis, audio anomalies) is working fairly well. The same goes for assembly lines where defective parts are booted off, though NNs don't improve much over traditional methods there.

The hype machine is all the big sexy stuff. GPT, protein folding, and AlphaGo are truly amazing. But I haven't seen a big step function in deployable (github-ready) NNets in years, which, given the number of people researching this, is surprising; I expected at least linear progression. What have we seen since UNet, MobileNet, SSD, YOLO, BERT, ResNet-152, Inception ... ? That's worrisome, but it could also be the pendulum swinging back while people figure out exactly why these NNs work the way they do.

I think we need to wait another 20 years, which means lots of the hype money will dry up, but I don't think it will vanish. Neural nets as general function approximators may not deliver super-high accuracy, but it is amazing to see such a simple construct perform with such speed.


> edge NN's have two practical applications: anomaly detection and object defect detection

Do you have any links to companies working on this?

I also haven't seen any real innovations since adversarial neural networks in ~2016, which roughly fits your timeline. Not sure what happened to Geoff Hinton's capsule networks, which looked like they were going to be a big thing at the time. This is admittedly a very hot take though.


At Embedded World 2020, every embedded semiconductor and FPGA company had some kind of NNet running. One that was really impressive was Xilinx: they had multiple demos showing NNets for defect detection, person detection, and audio/vibration analysis with autoencoders, as did several other FPGA companies. I googled for links but what I found doesn't look like what I saw, so I'd recommend searching for Xilinx if you are interested in industrial applications.
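(For anyone wondering what "vibration analysis with autoencoders" amounts to in practice, below is a minimal sketch of the usual reconstruction-error approach. A linear autoencoder, i.e. plain PCA via SVD, stands in for a real neural net here, and the signal, window size, and threshold are all invented for illustration.)

  # Minimal sketch of reconstruction-error anomaly detection, the idea behind the
  # autoencoder demos mentioned above. A linear autoencoder (PCA via SVD) stands in
  # for a real neural net; the signal, window size, and threshold are made up.
  import numpy as np

  rng = np.random.default_rng(0)

  # "Normal" vibration windows: 1000 windows of 64 samples of a clean low-frequency
  # tone plus a little noise.
  t = np.linspace(0, 1, 64)
  normal = np.sin(2 * np.pi * 5 * t) + 0.05 * rng.standard_normal((1000, 64))

  # Fit the encoder on normal data only: keep the top-k principal components.
  mean = normal.mean(axis=0)
  _, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
  components = vt[:4]                       # 64 -> 4 dimensional bottleneck

  def reconstruction_error(windows):
      """Encode, decode, and return per-window mean squared error."""
      centered = windows - mean
      codes = centered @ components.T       # encode
      recon = codes @ components            # decode
      return ((centered - recon) ** 2).mean(axis=1)

  # Threshold chosen from the tail of the errors seen on normal data.
  threshold = np.quantile(reconstruction_error(normal), 0.99)

  # A faulty bearing shows up as extra harmonics / noise, so reconstruction error is large.
  faulty = np.sin(2 * np.pi * 5 * t) + 0.6 * rng.standard_normal(64)
  print(reconstruction_error(faulty[None, :])[0] > threshold)   # expected: True (flagged)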


many thanks!


It won't, because actual DL products solve problems that are not sexy but still work objectively better than traditional ML for most tasks.

The best example of this is OCR/document intelligence. The old ML approaches of the 1990s and 2000s are really no match for what we have now. It's gotten to the point where extracting handwritten text can be done reliably and very cheaply. The same goes for table and key-value extraction, which requires a higher level of understanding that simply wasn't available before.

So no, it won't crash and you'll keep seeing these pieces every month that "AI can't do X" until we actually achieve X and then the goalpost is moved.


> It's gotten to a point where extracting handwritten text is done reliably for very cheap.

Is that freeform handwritten documents, or handwritten text filled into boxes on a form?


Even free form handwritten text recognition is pretty good at this point and that's true for most languages. You can try with Google Lens if you have an Android phone.


> ... "AI can't do X" until we actually achieve X and then the goalpost is moved.

This gets said often, but I don't think anybody who's credible in the field actually makes statements like this.

Take chess, for instance. In the IBM Deep Blue documentary from ~2004, they quote journalists as saying "AI can't play chess as well as humans, but if it could, then AI would be solved." Why did the techniques from Deep Blue not seem to go anywhere?

I know for a fact that scientists _were not saying this_. The Lighthill Debates on AI specifically talk about why playing games well doesn't really prove anything. [0]

I do agree that OCR has improved greatly thanks to AI, but this feels very niche. Somebody mentioned defect/anomaly detection in another comment, which I was not aware of. All useful, for sure. Still, this doesn't amount to anywhere near the hype that was announced earlier last decade. Moreover, the economics of AI are mostly awful despite everybody's seemingly best efforts. [1]

Even if it's useful in some vague sense, it's not necessarily economically useful. Amazon has ~10,000 people working on Alexa. [2] Have they turned a profit on these endeavours? I understand they can absorb the costs, but it's not clear to me how the economics will work out here.

ML models haven't even been useful in places where statistical methods have reigned supreme, such as Renaissance Technologies and other hedge funds. No large companies are using neural networks in a significant capacity, to my knowledge.

Another big tell for me is the lack of any consumer products in the space. Where did they go? Why are they missing? This is what I mean by "everybody is competing for the top 20 or so customers."

This is compounded by the unstructured nature of most data. Most databases are still terrible, especially at the few institutions large enough to have the data and large enough for it to make a difference in their business. If anything, there should be more focus on this problem. A well-tuned, well-structured database will be many times more useful than a fancy model that needs constant retraining. But I guess it's not as cool, so nobody cares.

[0]: https://www.youtube.com/watch?v=03p2CADwGF8 -- highly recommended, with many of the arguments still resonating today.

[1]: https://a16z.com/2020/02/16/the-new-business-of-ai-and-how-i...

[2]: https://qr.ae/pGJUKk -- couldn't find a better source offhand.


>> The Lighthill Debates on AI specifically talk about why playing games well doesn't really prove anything.

For a bit of context, that is a televised debate between Sir James Lighthill, commissioned by the UK government to write a report (the "Lighthill Report") on the state of AI research, on one side, and John McCarthy [1], Donald Michie [2], and Richard Gregory [3] on the other. The Lighthill Report is widely considered a principal cause of the first AI winter, in the 1970s, which killed AI research dead for a good decade or so (until the next winter, in the 1980s). The debate at that point was basically just for show, as Lighthill had already submitted his report.

Now, I don't know which part of the televised debate you mean when you say that [the debate] talks about why playing games well doesn't really prove anything, but that sounds very much like Lighthill's opinion. On the other side, we have Donald Michie, of course, creator of MENACE [4], the first reinforcement learning system that played tic-tac-toe and was built out of matchboxes [5] [6]. Reinforcement learning is, of course, considered important today.

John McCarthy himself was critical of AI game playing research, particularly on chess. In his response to the Lighthill report [7], he has this to say:

> Lighthill had his shot at AI and missed [8], but this doesn't prove that everything in AI is ok. In my opinion, present AI research suffers from some major deficiencies apart from the fact that any scientists would achieve more if they were smarter and worked harder.

> 1. Much work in AI has the ``look ma, no hands'' disease. Someone programs a computer to do something no computer has done before and writes a paper pointing out that the computer did it. The paper is not directed to the identification and study of intellectual mechanisms and often contains no coherent account of how the program works at all. As an example, consider that the SIGART Newsletter prints the scores of the games in the ACM Computer Chess Tournament just as though the programs were human players and their innards were inaccessible. We need to know why one program missed the right move in a position - what was it thinking about all that time? We also need an analysis of what class of positions the particular one belonged to and how a future program might recognize this class and play better.

McCarthy absolutely did not think that "playing games well doesn't really prove anything". He believed that getting machines to play games[9] better than humans would illuminate the mechanisms of the human mind that allow humans to play chess, and to do other things besides. Chess was, for him, a model of human thinking, the "drosophila of AI" [10], much like drosophila is a model organism for biology research.

McCarthy would not have been happy with today's achievements in AI game playing, such as AlphaGo and family. He would have considered them symptoms of the "look ma, no hands" disease: results with no real scientific significance [11]. Michie, who coined the term "Ultra Strong Machine Learning" [12] to describe machine learning that improves the performance of its human user, would probably have thought the same about today's uses of reinforcement learning.

However, neither of them would have agreed that "playing games well doesn't really prove anything".

>> Why did the techniques from Deep Blue not seem to go anywhere?

Note that Deep Blue, IBM's chess-playing system that beat Garry Kasparov, did not use machine learning: only good old minimax and an opening book of moves compiled by chess grandmasters [13]. Minimax only works for board games, and then only for two-player, zero-sum games with complete information, so it cannot be used outside of chess, go, and other similar games. This is why it did "not seem to go anywhere". It was the kind of AI that McCarthy blasted as having no scientific value.
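For readers who haven't seen it, here is a bare-bones minimax sketch. The `legal_moves`, `apply`, and `evaluate` hooks are hypothetical placeholders, not Deep Blue's actual code; a real engine like Deep Blue added alpha-beta pruning, a handcrafted evaluation function, the opening book, and dedicated hardware on top of this core.

  # Bare-bones minimax for a two-player, zero-sum, perfect-information game.
  # legal_moves, apply, and evaluate are hypothetical hooks a real engine supplies.
  def minimax(state, depth, maximizing, legal_moves, apply, evaluate):
      moves = legal_moves(state)
      if depth == 0 or not moves:
          return evaluate(state), None        # static score of a leaf position

      best_move = None
      if maximizing:
          best_score = float("-inf")
          for move in moves:
              score, _ = minimax(apply(state, move), depth - 1, False,
                                 legal_moves, apply, evaluate)
              if score > best_score:
                  best_score, best_move = score, move
      else:
          best_score = float("inf")
          for move in moves:
              score, _ = minimax(apply(state, move), depth - 1, True,
                                 legal_moves, apply, evaluate)
              if score < best_score:
                  best_score, best_move = score, move
      return best_score, best_move

Because the recursion assumes a single opponent minimizing exactly the score you maximize, the technique has no footing in games with chance, hidden information, or more than two players, which is the sense in which it "did not seem to go anywhere."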

_______

[1] Like Donald Michie, but in the US.

[2] Like John McCarthy, but in the UK.

[3] I honestly have no idea. Probably important early pioneer of AI.

[4] The "Matchbox Educable Noughts And Crosses Engine".

[5] Michie didn't have access to a computer.

[6] Great material about MENACE here: https://rodneybrooks.com/forai-machine-learning-explained/

[7] "Review of ``Artificial Intelligence: A General Survey''" http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthi...

[8] Oops.

[9] Read: chess.

[10] http://jmc.stanford.edu/articles/drosophila/drosophila.pdf

[11] https://www.wired.com/2011/10/john-mccarthy-father-of-ai-and...

  "Computer chess has developed much as genetics might have if the geneticists
  had concentrated their efforts starting in 1910 on breeding racing
  Drosophila," McCarthy wrote following Deep Blue's win. "We would have some
  science, but mainly we would have very fast fruit flies."
[12] "Machine learning in the next five years" https%3A%2F%2Fdl.acm.org%2Fdoi%2F10.5555%2F3108771.3108781&usg=AOvVaw0rwP_cc1GnNGNs7dBa7Qao

[13] "AI: A Modern Approach" http://aima.cs.berkeley.edu/ See chapter 5 "Adversarial Search and Games".


I have largely been critical of machine learning for some time, though I don't think it will crash. I feel many people have been misled about what deep learning and machine learning actually do and therefore become confused about their limitations. Once you understand it as a general statistics-fitting paradigm, it is easy to ask yourself: can you solve this problem by fitting a complex pig pen around a dataset? Or, put another way, is your solution based around classification in some way? If the answer is no, then modern AI techniques won't be able to get you an answer.

As crude as it may seem, rules engines and long sheets of if-this-then-that heuristics tend to solve more complex systems that can't be statistically fit, but I feel machine learning is slowly approaching a point where it can deduce those if-then statements. The main problem is that as humans we cannot comprehend the wealth of connections that might lead to a certain heuristic, so it is very difficult to tell a modern AI that it is wrong in a more abstract or pattern-based sense, and these approaches tend to collapse back into statistical data fitting. I can easily communicate with you if your understanding is wrong, but it is hard to do that with an AI; mostly we can either fix the data or adjust the training parameters and wait. That is why it seems all-or-nothing: there isn't much control over the iterative learning process of a network apart from startup options and testing after the fact.

To me, the real breakthrough will come from creating models of abstract thought. Some models exist as heuristic systems, which require an exhaustive set of rules to model various aspects of thinking, and some are probabilistic in nature and can be taught to a machine learning application, but neither of these models everything, nor do they handle all of their topics equally well. Whichever paradigm can come up with a robust model first will likely come out on top in the AI paradigm wars, but that could be tomorrow, or it could be next century.
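To make the "deduce those if-then statements" point concrete, here's a minimal sketch using a decision tree, which is literally a learned stack of if/then splits. The loan-approval rule, feature names, and data are all invented for illustration, and scikit-learn is just one convenient way to fit a tree.

  # A hand-written rule sheet vs. rules "deduced" from data.
  from sklearn.tree import DecisionTreeClassifier, export_text

  # Hand-written heuristics: an engineer encodes the workflow explicitly.
  def approve_loan(income, debt_ratio, missed_payments):
      if missed_payments > 2:
          return False
      if debt_ratio > 0.45:
          return False
      return income > 30_000

  # "Deduced" heuristics: fit a tree on labeled examples and read the rules back out.
  X = [[25_000, 0.30, 0], [80_000, 0.20, 0], [55_000, 0.50, 1], [40_000, 0.10, 3]]
  y = [approve_loan(*row) for row in X]     # pretend these came from historical decisions
  tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
  print(export_text(tree, feature_names=["income", "debt_ratio", "missed_payments"]))

The printed tree is itself a sheet of if/then thresholds, which is the sense in which ML can "deduce" heuristics, and also why it collapses back into whatever the data happens to support.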


The gold standard for thinking about the limits of AI is the work of Hubert Dreyfus, specifically the books What Computers Can't Do: The Limits of Artificial Intelligence, published in 1972; its revision What Computers Still Can't Do: A Critique of Artificial Reason, published in 1992; and Mind over Machine, from 1986.


Perhaps you meant the fool's gold standard?


If by that you mean that AI hype is fool's gold, yes.


The pattern I've seen over many decades is to say "computers can't do X", then when computers do X, they move the goalposts and say well, X wasn't really that hard, it didn't require the kind of intelligence humans have.

A short list of examples:

Chess

Driving

Go

Human level bipedal locomotion

Facial recognition

The list goes on.


Driving isn't solved though. There are very limited areas where computers can drive. Apart from that, driving is done entirely by humans.

Driving would be a task where AI could become hugely important and help to solve labor shortages (see UK right now). But it's not nearly there yet. Apart from that, I don't see anything in that list that would make a big impact on humanity. Solving Chess and Go was impressive, but it doesn't really change people's lives. And facial recognition seems to have more malicious use cases than ethical ones.

And bipedal locomotion has improved but is it really solved? Are there robots that can reliably perform the range of motion that humans can? There are some very expensive robots that can jump or run but that's only a small part of what humans can do.

From my subjective observations, there's very little automation in the physical world. It's all still done by humans, no matter if it's picking up trash, stocking supermarket shelves, driving buses and trucks, or delivering the mail. Many of those (maybe except for driving) are extremely simple tasks for humans, yet they either haven't been reliably solved by machines or are far more expensive than hiring humans.


Humans haven't solved driving either though.


Anything that has state involved within a known boundary of conditions, AI will always best humans at.

Humans are really good at generating rules that create the dynamics of play. Whenever that set of rules is inscribed in a form of notation, AI will best it.

Humans create notation; AI consumes it. This can be mutually beneficial: AI depends on humans' illogical-logic for breaking novelty thresholds, and humans depend on AI for doing better at logic. AI solves a very important problem of the human condition, one that presents a huge attack vector on reason and logic. If we can somehow find harmony with AI, where it doesn't become a risk of manipulating the source of debugging logic and reason, then it'll be a very nice match!


I sometimes try to think about the "fast process" mentioned as intuition, which I see as fast likelihood estimation we do based on our previous knowledge. Which also makes me wonder what role memory plays in our intelligence.

BTW, I don't have much idea about the subject.


If true, then robot cars are scarier than I thought ... is AI going to be able to process the right scenario to avoid a deadly crash within the first decade or more of market adoption, or are the programmers going to learn and ship bug fixes for each person their prior code killed?

*I'm always downvoted for this thought, but is this not the true way forward with robot cars? Those who downvote me seem to think the first batch of market-adopted robot cars and their AI will be perfect and will never perform the wrong scenario ... that people aren't going to be killed by these things?


It's a legitimate question, but the picture of how it works is completely wrong. Many people will be killed by any given mass-market implementation of self-driving cars. Some will be killed in crashes that a human could not have avoided, some will be killed by errors few humans would make (e.g. "plowing into an overturned truck at full speed"), and some will NOT be killed, thanks to facets of the system that humans cannot replicate (never tired, never drunk, never angry).

Dozens, hundreds, or even thousands of people will die between "bug fixes". The only metric needed here is to determine if the system as a whole performed better or worse than a fleet of typical people. We know what the accident rate is for a fleet of typical people (for now) and it is pretty bad. People also occasionally drive into obvious trucks for no known reason. People are inattentive. People are emotional. People rush. People ignore things they shouldn't or react too late to things they should. The bar to operate more safely than people is high, but not impossibly high.

In response to market changes, world changes, and continued investment, the code will change, but so will the training samples and the labeling technology. Those changes will result in a measured change in the fleet's performance.

There will be cases where a single death results in some code change. Early on there will be many such cases. But as time goes on, those cases will become less and less frequent, as the cases where specific code points are needed become not only unnecessary but even an impediment to the proper mapping of world features to behavior output.


The important part here is to use the right benchmark for human drivers. Crash statistics are heavily skewed by drivers who are speeding or under the influence. That can be the right benchmark to show how AI driving can help in certain circumstances, but it doesn't show why most human drivers should be replaced.

Taking those out, and considering the assistance systems in new cars (which already work quite well), AI has to perform incredibly well to drive more safely, especially if it can only drive in easy weather conditions.


Far in the future, we will also see the effects of having full or near-full adoption of self-driving. If all the cars on the road are driven by the "same" or at least similar driver, the edge-cases will drop off significantly.


> and ship bug fixes for each person their prior code killed

My question is always, who is responsible for those deaths? Or will we just have so little care for life that we consider them "for the cause"?


A valid question, and one that deserves an answer.

However, it's also valid to note that 38k+ people die on US roads each year, and as I understand it, most of those are chalked up to being freak, unpreventable "accidents". So maybe there's a step here even ahead of autonomous vehicles where we commit to abandoning this way of thinking and insist that every road death is fully root-caused. Not just back to human error, but in the FAA sense, back to why equipment, processes, and infrastructure were in place that allowed a single moment of human inattention to be so deadly.


> Not just back to human error, but in the FAA sense, back to why equipment, processes, and infrastructure were in place that allowed a single moment of human inattention to be so deadly.

I'm for it - but I'll bet the average person will be against all the weeks of training we would soon require every year before you are allowed to touch a car. It will be even worse once people realize how high the dropout rate is (people who fail and suddenly can't get around).


Yeah. I think most advocates have long ago realized that the only way you could ever hope to have the requirements for vehicle piloting tightened up to where they should be is if you've built a society where 99% of people can drop out of the pool of drivers and survive, and that means some combination of a) robust mass transit, and b) machine pilots.

The current system is obviously the result of a century of symbiosis between car-centric development and driving being seen by most as a requirement to participate in society, and therefore a de facto right.


That's my sentiment on this .... progress will be a killer in this realm of software development, and on a large scale. Developers who get into robot cars: the deaths their AI causes don't or won't bother them? Just analyze the data, learn, and ship the fix; who cares about Johnny, Susie, and their kid killed by their AI's mistake, when they weren't even driving a robot car, just driving alongside one, like Uber's pedestrian-killing robot car.

Right now, the vast majority of the code we developers ship is just fixing bugs in business and consumer applications, where the risk of loss of life from what we fix and ship is almost nil.


Paul, some engineers already live in the world where mistakes can cost lives. Including automotive software engineers. In fact, the engineering world is full of people who make compromises knowing that the product could be safer, but they're aiming for safe enough because that's the only real way to move forward.


It depends on the kind of accident. Currently, accidents are also treated very differently depending on intent and on the likelihood that the same fault would have been made by other drivers. I assume software will be judged under the same circumstances.


I always think there's one thing we have to keep in the back of our heads -- "cars" as humans have done them are stupid. Very, very stupid. You go to LA and you see all the people going places, each in a separate car, when trains and subways were invented a long time ago, and it becomes clear.

So, sure, you have an interesting technical question of "robots being able to drive like people," but let's not take any of this too seriously if we're comparing to a concept of "intelligence." The collective stupid is far too overwhelming.


Well, autonomous cars are able to detect obstacles in a narrow lane. That is nowhere near general intelligence.


I wonder how they'd do on the L.A. freeway in average traffic that turns heavy and then average again, while a police chase is happening in a torrential downpour... I'm sure more scenarios that happen at once in real life can be added on... but don't worry, the AI knows all the millions of scenarios stacked on top of each other and will handle them like an attentive human driver would in such situations.


I am really new to this AI game, and I hear from many people that it is the future, while some say it is hype.

I think the future is on middle ground, but I'm not sure what that middle ground is. Can anyone tell, from their experience and/or expertise, what that middle ground looks like?


We have a way of allowing every variable that we perceive, from past to present, to affect our thinking (output). AI neural nets are one-dimensional in comparison.


At some point we started calling nonlinear regression AI, and now we're spending an inane amount of energy explaining that this AI is not really AI.

well, no shit.


"We" in your sentence are not the same groups of people.


If you were to postfix any “AI” criticism with “yet”, it would make the point being made more legitimate. We don’t know what we don’t know.


and while we will certainly learn more later, we may never learn it all

https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...


I somewhat understand how this applies to math, but is it generalizable to life?


What else is AI than math applied to life?

If Gödel's result binds math in general, it ought to bind especially well to math applied to a (constantly evolving) reality.


Why? We can't predict the future.


It's odd to take an anecdote about what a person did thousands of years ago (assuming Archimedes actually existed, I suppose we have decent evidence) and use that as the gold standard for how the human mind functions.


First, there's plenty of established historical evidence that supports Archimedes' existence. I'm not sure why you doubt this.

Second, the article uses the story of Archimedes' principle as one of several examples to illustrate, in a digestible manner, what an "insight problem" is, how it differs from a "path problem", and how that difference impedes AI from matching human intelligence. At no point does the author presume that Archimedes' "Eureka!" moment is a "gold standard" against which to compare the performance of modern-day, digital AI.

Third, consider that this blog post / essay targets a wider audience beyond the field of artificial intelligence. As the impressum in the footer of the site reads:

> Illuminating the bold ideas and voices that make up the MIT Press's expansive catalog. We publish thought-provoking excerpts, interviews, and original essays written for a general reader but backed by academic rigor.


That's being disingenuous to the point being made in the article. Archimedes' example is only given as an instance of an insight problem. I am sure you could construct such examples specific to any domain. That particular example was probably chosen because it would be familiar to a general audience, but the overall point is solid.

Humans have pulled off some rather remarkable feats of intellectual achievement. Could AI have discovered relativity or evolution, for instance?


Assuming there is a gold standard for intelligence, acting as if we can replicate what it took the universe millions of years to stumble into, by etching into silicon some algorithms that have meaning to us specifically, and then treating that as an autonomous, experience-acquiring organism, is odd.


Chess and Go were beaten, and the AI goalpost had to move /somewhere/...


Some day we may reach the point where we realise humans don’t qualify as AI either.


The "A" in AI stands for "artificial". Intelligence demonstrated by machines.

Human intelligence is natural intelligence.


There is nothing special about human intelligence. We will have smarter computers in our lifetime, and I'm willing to place bets.


The word "smarter" masks some of the complexity here. If you define it as "higher performance on a task widely considered to require intelligence", then we've had computers that are smarter than humans for at least decades. If you define it as "higher performance on every task widely considered to require intelligence", then I'll take that bet, please.


> higher performance on every task

Even just one task: bringing up a child to be a well-adjusted, productive adult.


We have beat humans on every single atari game by at least one order of magnitude, and we do that consistently, and it really only took 5 or so years since the first solution that provided tangible results. It has only been 8 or 9 years since GPGPUs were used for ML research.

We are also seeing models that are able to generate code given prompts.

Given enough representational power, I don't see why a model that learns to solve games can't figure out how to generate good enough subroutines for itself.

So I am taking the other side of this bet.

We will see ML models surpass Humans in every task in 30 or so years.

I will find you and buy you dinner in October of 2051.


> We have beat humans on every single atari game by at least one order of magnitude

There's mechanical skill involved, it's not purely intelligence.

> We are also seeing models that are able to generate code given prompts.

This has been discussed a lot, but the generated code is nowhere close to good enough for large projects where you really need intelligence.

> Given enough representational power, I don't see why a...

Except that it's not linear scaling. The larger NLP models consume absurdly large resources, it's not straightforward to "get enough representational power"

Also, most models fail to adapt to new tasks outside of their narrow training scope, that's a massive problem. Even if you make models large, you will find that getting data covering all edge cases is exponentially expensive.


These two go hand in hand

> This has been discussed a lot, but the generated code is nowhere close to good enough for large projects where you really need intelligence.

> Except that it's not linear scaling. The larger NLP models consume absurdly large resources, it's not straightforward to "get enough representational power"

When you allow maximizers to run wild, just as in reinforcement learning, they will find hidden solutions, and when the model can provide an action in the form of a dense representation, it can also use code-generation models with much more precision than we do, because it can skip the encoding part.

> Also, most models fail to adapt to new tasks outside of their narrow training scope, that's a massive problem. Even if you make models large, you will find that getting data covering all edge cases is exponentially expensive.

We are still only 6-7 years in. DeepMind's latest paper on general agents has them generalizing to new tasks relatively easily. It's still not there, but we are miles ahead of where we were 5 years ago.


>> We have beat humans on every single atari game by at least one order of magnitude, and we do that consistently, and it really only took 5 or so years since the first solution that provided tangible results. It has only been 8 or 9 years since GPGPUs were used for ML research.

Actually, only the 57 games in the Arcade Learning Environment, not "every single atari game". It's an impressive achievement and there's no need to oversell it.


I'll offer an even better deal:

If AI surpasses humans at either comedy or film (by total hours of content viewed, or some other metric you propose) by January 2050, I'll buy you a fake-meat dinner.


As in the script is generated by the AI?

Or the whole movie?

And comedy: could the AI do standup, where the jokes are generated by it rather than by a human?


Here's a possibly related bet I have with a friend: "If, in ten years' time (from March 2021), self-driven cars outnumber manually-driven cars on the US roadways, I will buy you dinner. (And vice versa: he buys if they don't.)"

I think it's a reasonably harder task than playing chess for assessing whatever it is we mean by 'intelligence' and my bias is that the difficulties remain under-appreciated by the technical optimists among us. But I could be wrong.

What sort of bets are you willing to place?


I think you made an insanely good bet, considering the average age of all vehicle types in the US except SUVs is greater than 10 years. Even if every new car sold in the US were fully self-driving, it would be a tight bet, assuming current trends hold.

https://www.bts.gov/content/average-age-automobiles-and-truc...


I'd probably be willing to concede to a weaker form of the bet, say if more than 20% of taxi service miles driven were by self-driving cars. But we don't need to tell my friend that.


Not to spoil your fun, but the average age of US cars is over 12 years. Edit: which means you already won, as on average cars last 24 years.

Oops, never mind, I thought you were for it.


He’s on the other side of the bet.


That sounds like a commercial-acceptance bet rather than a technology one, but yeah, I would take the same bet. Though I'd bet that by 2031 driving a car will not be the peak of AI.


I take commercial acceptance as a proxy for the acceptance and practicality of artificial intelligence at the 'everyday' task of driving. I would take the absence of commercial acceptance as evidence that artificial intelligence is not up to the 'everyday' task of driving.


> I would take the absence of commercial acceptance as evidence that artificial intelligence is not up to the 'everyday' task of driving.

The most interesting thing Tesla is doing to make "acceptable" FSD possible is to open an insurance company.

So commercial acceptance is a proxy for capability, but it is not immune to regulatory moat building (or, alternatively, going the other way - like forcing 80+ year olds to drive enhanced cars).

There are some ways around it, but that problem isn't a technological one.


Momentum is still a thing regardless of where the technology is.

The current fastest production car in the world (and of all time) is an electric car, but most cars are still not electric. That doesn't mean gas cars are "better cars" than electric ones.


Sure, there are technological and commercial (and regulatory, and, and, and...) aspects to the bet. The thing I'm after in my car bet is that I'm using "replaces humans at something that requires intelligence" as the bar to clear for what you describe as "smarter computers in our lifetime."

Can you give a purer example of a bet that would demonstrate what you believe here?


The special part is the lived experience of social animals and everything that goes with that. Stuff that we have to work hard to get machines to understand, since they're not conscious biological creatures that have to survive, and thus we have to train or program them in a way that is somewhat different from being born and raised as a human.


Why have you decided to trivialize our understanding of the human brain? Unless you believe you have discovered a way to boost IQ.


In 1979 Douglas Hofstadter wrote Gödel, Escher, Bach, in which he predicted that computers would eventually be able to beat humans at chess, but that those computers would say, "I'm bored of chess now, I want to talk about poetry." The history of AI has been the hard work of really describing a problem, like playing chess or reading letters or recommending songs, and then applying AI techniques to it, but I don't think anybody has ever tried to work on a computer that discovers new problems to solve.


We don't know yet if there is anything special about human intelligence, or what the limitations of general intelligence might be. Are animals and plants a slower form of the same intelligence, or is there something qualitatively different? Can rocks and liquids be considered intelligent, since they led to life, which eventually led to us?

Current AI/ML does not appear to have any of the properties of "life intelligence" - for example, you can put an animal or even a plant into an unfamiliar situation and it will often figure out a way to survive. AI/ML at evaluation time is often pretty dumb and needs new training if anything changes. Reinforcement learning is probably the closest, but it still seems pretty limited.

I don't think the current approaches will lead to general intelligence. However, I do suspect that when the right theoretical breakthrough is made AI will rapidly become superhuman and humans will not be in control of what it does - it will simply iterate too quickly for us to compete in any way intellectually.


I don't think that's a useful framing. There is nothing special about stars, either, but the chance of human engineering resulting in some power generator that can equal the output of a star any time soon is pretty low. The fact that nature has solved a problem means it can be solved, but it doesn't mean we can trivially figure out how to do it in a manufactured device.

Aside from that, there is no "we" here. Some people reading this are 20 and some are 70. The scope of what it means for something to happen in one's lifetime is quite different for those two groups.


To help improve the framing a little more, with rough orders of magnitude:

* 10^10 Watts: electrical power generation of the Itaipu Dam[0]

* 10^26 Watts: luminosity of the Sun[0]

* 10^10 neurons simulated on the Japanese supercomputer K last year[1]

* 10^10 neurons in the human brain[2]

I'm not claiming that a simulation of the human brain with equivalent capability is just around the corner, just that it is misleading to point to the scale difference between artificial and natural energy sources with the implication that brain simulation is beyond our reach in the same way.

[0] https://en.wikipedia.org/wiki/Orders_of_magnitude_%28power%2...

[1] https://www.frontiersin.org/articles/10.3389/fninf.2020.0001...

[2] https://en.wikipedia.org/wiki/List_of_animals_by_number_of_n...


I would not place a bet on our replicating the result of billions of years of iteration and refinement, however under-optimized that process was. (Who knows, it might even be the most efficient one possible.)



