This post feels like the author is insecure about his position and wants to establish some validity. Having gone through it all, it feels delusional at best.
Glorified pattern matching can only take us so far: it works only as long as there is a pattern. I wouldn't call it a general intelligence per se. There is no "juice" in these algorithms.
If we use these tools, we can immediately see where they fail and where they do not. These are just new tools in the software engineer's toolbox.
Definitely! There was a time when compilers were part of AI research. Now they are just another tool. Same with DL: it's an amazing tool. We need it, and it provides value if used correctly.
I just didn't want to call it "intelligent" and use that as the basis for defining "intelligence." We can call it something else. It's learning to do a specialized job as intended, in an "intelligent" manner, but it is not intelligence. Even a small ant is more intelligent than our current AI systems; though ants aren't sophisticated and can't perform human tasks, they are more intelligent than any AI system.
> I just didn't want to call it "intelligent" and use that as the basis for defining "intelligence." We can call it something else.
TL;DR We're also mostly brute forcing our way to discoveries. We're not that smart.
People, too, rely on cultural handouts; maybe most of our intelligence is also "something else". Before electricity was discovered we had superstitious ideas about electrical phenomena. Before germ theory was discovered we were getting sick and dying like animals, helpless. Not so smart, even though it was a life-and-death situation for us.
It's easy to be "intelligent" when you're given the solutions beforehand by culture. ML learns from the same culture, like the 99.99% of us who can't discover new things even to save our lives. And many of our discoveries are the gradual work of trial and error; we don't go directly to the target but stumble/brute-force our way to it.
There was a news story recently titled "Elegant Six-Page Proof Reveals the Emergence of Random Structure". The funny part is how the authors stumbled onto the amazing solution after many, many unsuccessful trials across the whole math community. Not a great sign of intelligence when you have to rely on chance so much and so many fail before one succeeds.
This tells me we're also mostly doing "something else". Intelligence means solving novel problems with few attempts, not spamming our attempts to death until something comes out. ML research looks more like spamming than intelligence too.
You know what else looks like spamming? Evolution. It's a blind search process that has been brute-forcing the problem of self-replication for billions of years. It created us and everything else in one run, but it's not very intelligent; it just spams a lot.
Pattern matching can solve everything, if given enough storage and training data. Memorizing trillions of sentences is basically what makes GPT-3 amazing.
You're absolutely correct that pattern-matching AIs won't ever be truly intelligent. But then again, many humans also never exceed what can be simulated with good pattern matching. And an AGI household robot only needs to be as smart as the maid it's replacing.
I'm optimistic that pure pattern matching will get us to usable AGI.
> Pattern matching can solve everything, if given enough storage and training data.
There's never going to be training data for "how things are going to be next year". A lot of large-scale systems involve emergence [1]: patterns which previously were not visible suddenly appearing. I think even today's AI can do things that are a bit beyond pattern matching (learning to learn, etc.). But pure matching as such is inherently limited.
I would be surprised if AI predictions for "how things are going to be next year" were worse than expert human predictions at the 90th percentile. I mean, most trends for next year will already be around this year; they'll just be too weak to notice.
I believe pattern matching is an important part, but intelligence comes from how you organize these patterns and relate them to one another. E.g., you can learn pictures of a dog by pattern matching, but you can't learn whether a dog can beat up a bear; if there's a bear outside, a human just knows not to let the dog out.
What we need is a pattern detector + the ability to create basically infinite ANNs (or be able to multitask on them) + an event loop that takes input feeds (from cameras, microphones, etc.), does some kind of reasoning, and then pushes to its output feeds (wheels, etc.).
I think you use pattern matching to extract unique objects, store each object as a node with its own simple neural net plus long-term storage that holds only pictures of this object and a dataset about it (e.g., how often you see it). You then organize the nodes into an object hierarchy. Each new object is compared against all other objects we've stored using their pattern matchers; the higher the output, the more weight we give their "connection." Each object is made up of sub-objects, so it is also the top of its own tree; you can run this pattern finder on the dataset of an individual object itself, and if you find new objects the tree recurses. You can then check these objects against existing ones, etc.
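Here's a rough Python sketch of that node structure, just to make the idea concrete. The per-object "pattern matcher" is stubbed as cosine similarity over a feature vector; in the design above it would be each object's own small neural net, and all names and thresholds here are made up:

    # Hypothetical sketch of the object-node idea above. The per-object
    # pattern matcher is stubbed as cosine similarity over features;
    # in the real design it would be a small per-object neural net.
    import math
    from dataclasses import dataclass, field

    @dataclass
    class ObjectNode:
        name: str
        features: list                                   # stand-in for stored pictures of this object
        stats: dict = field(default_factory=dict)        # dataset about it, e.g. {"times_seen": 3}
        sub_objects: list = field(default_factory=list)  # each node roots its own tree
        connections: dict = field(default_factory=dict)  # other node's name -> weight

        def match(self, features) -> float:
            # Stub pattern matcher: cosine similarity, clamped to [0, 1].
            dot = sum(a * b for a, b in zip(self.features, features))
            norm = math.hypot(*self.features) * math.hypot(*features)
            return max(0.0, dot / norm) if norm else 0.0

    def connect_new_object(new, hierarchy):
        # Compare a new object against every stored object; the higher
        # the matcher output, the more weight the connection gets.
        for node in hierarchy:
            w = node.match(new.features)
            if w > 0.5:                                  # made-up threshold
                new.connections[node.name] = w
                node.connections[new.name] = w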
A general intelligence does this constantly, in real time. Then it's a quick algorithm (see the toy sketch after this list):
1. Have I seen this object before
2. No, but it shares characteristics with animals (an object that groups together all things that look like animals.)
3. It’s much larger than my dog, and I’ve seen large animals attack small ones more often than not.
4. My dog is also a dick, and attacks other animals more often than not
5. It’s probably a threat
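And a toy version of those five steps, reusing the hypothetical ObjectNode sketch above (the priors and thresholds are invented for illustration):

    # Toy version of the five steps above, reusing the hypothetical
    # ObjectNode sketch. Priors and thresholds are invented.
    def assess_threat(obs_features, obs_size, known, animals, my_dog):
        # 1. Have I seen this object before?
        for node in known:
            if node.match(obs_features) > 0.9:
                return node.stats.get("threat", False)
        # 2. No, but does it share characteristics with animals?
        if animals.match(obs_features) < 0.5:
            return False
        # 3. It's much larger than my dog, and I've seen large animals
        #    attack small ones more often than not.
        larger = obs_size > my_dog.stats.get("size", 1.0)
        p_large_attacks_small = 0.7                      # learned prior, made up here
        # 4. My dog also attacks other animals more often than not.
        p_dog_starts_fight = my_dog.stats.get("aggression", 0.6)
        # 5. It's probably a threat.
        return larger and (p_large_attacks_small * p_dog_starts_fight > 0.4)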
Just scaling modern compute won't get you there unless you're willing to dedicate a few orders of magnitude more energy than a human being does. You need a completely different, distributed architecture if you are going to compare billions of objects against billions of objects, in real time, every time you see something new.
Machine learning is great, but it's only the learning part. Intelligence is reasoning about multiple things in relation to one another, not detecting a pattern. You might trick yourself into thinking you're getting there because pattern matching is powerful, but it'll get you to the intelligence of a microbe at best. Even then you need something that's driving the actions.
AGI is very unlikely to happen within the next 50 years. Dangerous limited AI exists now and it's going to get worse. I don't worry about malevolent AI, because we don't even know what consciousness is, nor the limits of a (presumably) nonconscious entity's attempts to emulate intelligence.
I worry quite a lot about what malevolent humans using enhanced technology will do (note that most things in technology, once accomplished, cease to be called AI). Authoritarian states and employers can already learn things about you (that may or may not be true) that no one should be able to know, from a basic Google search. This is going to get worse before it gets better, and if corporate capitalism is still in force 50 years from now, we will never achieve AGI in any case, because we will be so much farther along our path to extinction.
> I don't worry about malevolent AI, because we don't even know what consciousness is, nor the limits of a (presumably) nonconscious entity's attempts to emulate intelligence.
It doesn’t need to be malevolent. You’re made of atoms, and if the AI has uses for those atoms, goodbye you. There’s no reason to believe consciousness has any impact on the ability to maximize an objective function, i.e. to try for a goal.