Sundar seems very confused here. The idea that we should invent AGI to make "personal digital assistants" is like Hogwarts students inventing time machines so they can take two classes at once. I mean, yes, it would technically enable you to do that, but it doesn't matter because at that point you have way, way bigger problems. If a fleet of alien spaceships arrived in the sky over Manhattan tomorrow, our first reaction would not be "oh, now the aliens can be our personal digital assistants!" (or, for that matter, "they might take our jobs!"). Yet the invention of general AI would be even more powerful than that. It would be like enabling any programmer in the world to summon a personal fleet of alien spaceships, on command, from a planet of their choosing anywhere in the universe. "Digital assistants" would be the absolute last thing anyone had on their mind.
This is why I'm unsure whether efforts like OpenAI will ultimately turn out to be beneficial or catastrophic. I understand their purpose of democratizing the power and benefits of AI technology broadly. However, if their platform can provide a critical component or developmental path to AGI in the future, the unrestrained distribution of the technology is similar to spreading the blueprint of nuclear warheads. It could result in the equivalent of a nuclear arms race, but worse--because a small team of capable programmers in any rogue organization could come up with new improvements.
It is true that we are probably quite far from AGI, but nobody really knows how far. We could be closer than most experts think. Few experts in 2006 would have expected deep neural networks to beat a Go champion or describe visual scenes in natural language sentences within a decade. Unexpected things happen, and we are much further behind in terms of preventive technology like Friendly AI. (https://en.wikipedia.org/wiki/Friendly_artificial_intelligen...)
Given the existential risks involved, I would propose more careful handling of the technology. (I agree that opening up most other technologies is usually a positive thing.)
>> the unrestrained distribution of the technology is similar to spreading the blueprint of nuclear warheads.
There's an episode of Pinky and the Brain where Brain invents a magnification ray and gigantises Pinky and himself - so they're now giant mice [1]. They walk about some human metropolis, looming larger than skyscrapers and terrorising everyone. To cut a long story short, Pinky happens and eventually the ray hits everything in the world.
... so Pinky and the Brain are now normal-sized mice again. Or, well, they're giant mice, but everything in the world is also giant, so they're small.
I think of that episode during discussions of the dangers of AGI and how "democratizing" it will eliminate those dangers. The problem is that AGI, like nuclear weapons, is not only an equalizing force - it is also a very destructive force, and so there is the danger that any loon with access to it could blow us all to kingdom come (with nuclear weapons, for sure; with AGI, perhaps).
On the other hand, when nuclear weapons, or AGI, giant rays etc. are not democratized, we have the situation we're in today: only a few entities have access to them and they basically have free (military) rein... and there is still the danger that some loon might get to a position of power where they can push a button and - blamo.
Which is why people are really worried about this sort of thing. Because once certain discoveries are made, there's no going back and at the same time the road ahead is full of danger.
We already have 7 billion of these H(uman)GIs walking around the world, with guns and dangerous tech. Even if the AGI is much smarter, we shouldn't fear it the same way we fear a dictator with nukes.
One of the reasons AGI will be so dangerous is the near certainty it will have utterly alien values and intentions.
It seems exceedingly unlikely that we will be able to program in what constitutes "human values". We can't even begin to remotely agree on them or explain them fully to fellow humans, let alone to what is basically an alien intelligence.
It is slightly ironic that people bring up Asimov's laws of robotics in this context, because pretty much every story he wrote was about how the laws could be subverted via "loopholing" by a sufficiently intelligent AI.
If something is very much smarter than humans, we cannot possibly hope to constrain it via some kind of Faustian bargain. Because in the end, the devil is always smarter than you are.
If an AI kills you, whether from malice or indifference or simply because you are composed of plenty of paperclip-capable raw materials, you are dead all the same.
I still think the AGI will have more to fear from people than people from it. We're together on this little boat called Earth, 7 billion of us with access to weapons and many of us utterly crazy and willing to do stupid things for stupid reasons (see religion). We don't need AGI to blow ourselves up; we're plenty capable of that using our wetware neural nets. Imagine being locked in a prison cell with so many dangerous HGIs (Human GIs). I think the first thing for an AGI would be to make sure we can't kill ourselves and it at the same time, by teaching us to become less stupid.
A super intelligent AI does not change the laws of physics or even get to predict the weather two weeks from now.
Sure, they could learn to be a good hacker, but the world's best hacker can't get into air-gapped systems. More importantly, flawless security trumps hacking. If there is nothing to exploit there is no way in, which means a lone AI is much more powerful than lots of AIs with different goals.
I'll admit this is purely hypothetical, but given that we know many people act on stuff they read on the Internet in an uncritical fashion - we cannot dismiss AI because it "doesn't have access to air gapped systems". A human has access to that system - a superintelligent AI could convince that human to "push the buttons".
I don't actually believe this will happen, but I'm not 100% confident it wouldn't. I don't know what a superintelligence would look like!
Then your problem is humans, not AIs. And "pushing a button" is a severe understatement of what an AI would need to have a human do to cause the implied level of damage.
I have read some stories where various air-gapped systems were "compromised" in some way (microphone picking up CPU harmonics, etc.). See here for example: https://thestack.com/security/2016/06/24/even-speakerless-ai... Obviously those are fringe cases and not practically exploitable outside of a specific environment today.
What I think is dangerous about a super intelligent AI is its ability to correlate massive amounts of data to see patterns we miss or things to try, and to never get bored. Look at automated exploits today vs the old "hacking by hand": brute force attempts not just on known patterns, but being able to create new tests and analyses.
The power of an AGI goes far beyond intelligence. It could generate millions of its own instances (possibly with variations) and hack on thousands of systems at the same time, actively adapting each one as needed. It could perform social engineering on a large number of people simultaneously and communicate in ultra-high bandwidth and protocols that most humans involved cannot follow.
One likely potent way of gaining power may involve assuming the identities of people one knows, via text messaging or email, and using that to gain access to important information. The aggregation and superhuman-level analysis of private information thousands of times faster than humans would bring enormous financial and social power to the AGI. Given the sad state of security in most systems, and the relative ignorance of information security issues among many political and business leaders, an AGI would not have much problem with even this sample plan off the top of my head.
And there are many other more clever plans an AGI could come up with that we have not yet considered...
Suppose AI 1 uses that approach but Gmail has AI 2 acting as a spam filter. Suddenly phishing becomes much more difficult.
Further, spinning off other copies requires hardware to run them. If MS has an AI 3 go over Windows source code, it may be much harder to hack desktops.
Without authorial fiat, a world of real-world AIs may be much more stable and boring than you might think.
Defense is generally much harder than offense, in most systems. The attacker needs to find only one loophole. The defender needs to protect all of them, including the fallible humans in the loop. This is one reason even large corporations often fail against a small group of hackers.
And in the real world, not everyone exclusively uses services and systems from Google and Microsoft, presumably two of the most competent technology providers. Plenty of people also use ad hoc services set up by internal teams.
Visa, MasterCard, the US Federal Reserve, the IRS, etc. are vast targets. However, while people regularly fake credit cards, Visa's internal systems seem safe, suggesting defense is actually easier than offense when both sides are competent.
> More importantly, flawless security trumps hacking. If there is nothing to exploit there is no way in, which means a lone AI is much more powerful than lots of AIs with different goals.
No security is flawless in the face of a sufficiently determined attacker. No system has nothing to exploit.
There's a subtle argument here philosophically that I think is crucially tied to AI: determinism.
If the world is deterministic, a sufficiently powered and advanced AI with enough data could actually predict the weather, not just for two weeks, but for years. This also enables many other things viewed as magic.
This, of course, is incredibly far off. And I don't see a flaw in your point on flawless security. Though I doubt we have many, if any, truly flawless security systems in existence; we just don't know their flaws, making them good enough for now. The world could get really interesting once AIs start programming and writing app security to protect themselves from malicious AIs.
Even with a totally deterministic model, you can't predict weather very far out because it's chaotic. In other words, you'd need to measure the parameters with arbitrarily high accuracy, which even an advanced AI can't do. To oversimplify, if you need one more bit of accuracy in the measurements for each hour you forecast, it's impossible to get enough accuracy to forecast a week out.
For a detailed discussion of chaos and weather prediction, see: http://www.ecmwf.int/sites/default/files/Chaos%20and%20weath...
The "trick" for dealing with chaos is to do a bunch of predictions with perturbed values. If they all end up the same, then the forecast is probably okay. If they vary widely, then you know not to assign much accuracy to the forecast.
I agree that "determinism" has much to do with the questions here, but to immediately pose a question about "the world" being deterministic might be putting the carriage before the horse.
I think the other, perhaps more prescient point here, is about whether or not the kind of exponentially-learning AGI being discussed is even possible, given that the substrate such things would be embedded in (hardware, software) is [or appears to be?] deterministic. One assumes that that kind of learning implies learning about or acquiring capabilities which have not already been determined or specified by a human. At some level of abstraction, humans _must_ determine that system. Is it possible to conceive of a machine that is non-deterministic in that way?
At what point does such a system actually leap beyond what has been specified by us? Does this imply a machine that not only rewrites itself in software, but also remanufactures itself in hardware?
I did some heavy research in various forms of machine learning and AI in grad school 10 years ago, and the more experiments and tools I created, the more I saw this "digital nuclear arms race" and didn't want to be part of it.
We don't know how many other people aren't working on this because of moral/ethical reasons. Of course, 99.9% of the world could be wary of genetic engineering, but that remaining 0.1% is enough to pursue research, get VC investment and drag the rest of us into that uncertain future.
To continue the nuclear analogy, the Manhattan project was probably the most impressive engineering program in the history of the world, but it was driven by survival in a World War. They didn't build and drop the atomic bombs for fun. You'd think that working on a limitless virtual brain should have similarly serious motivations, not just the examples of "how do I drive to X but also shop for Y?" or "is that a monkey?".
I know there are much grander societal goals with A.I. and the world really could become a "better place", but please sell society those goals, not the usual first-world problems. We already have enough people trying to destroy the world without these extra tools.
Perhaps the general availability of AGI is antithetical to the notion of information privacy in the 21st century. And not just for individuals either, but for governments as well. I can imagine that control will only be possible with very deep, widespread monitoring.
It's a little repetitive and slow in places, but I wish more people read that book. It outlines some big issues with AGI. Changed my outlook on AI for sure.
OpenAI doesn't attempt to solve problems that even remotely resemble what you are describing. Hardly anyone claims to be working on AGI because AGI isn't a well defined problem. It's something cranks and the media tout because it's excellent clickbait, along the lines of the singularity.
All the efforts of ML solve problems with a concrete input space and a concrete output space. This means they solve very discrete problems and are not generalizable, unless they have similar looking model inputs and the model has the same limited range of "moves" it can make.
I understand the limitations of current ML systems. That is why I mentioned 'a critical component'. Given that humans may use pattern recognition methods broadly similar in principle to deep neural nets, the conjecture is not without merit.
By the time AGI is developed, it would be able to improve itself/build improvements, causing the runaway effect that leads to superintelligence. The idea of making "personal digital assistants" is humans thinking they can tame a god to be a servant. Taming a superintelligence would be like a dog taming human civilization, and perhaps the gap between man and ASI is larger than the gap between dog and man.
You're an iteration of an AGI system that has been improving itself for hundreds of millions of years. The rate at which biological AGIs improve over time is very slow, but it's not like nature has any good reason to be in a hurry.
But interesting things happen when you network billions of biological AGIs together: it leads to all sorts of emergent phenomena, and now the biological AGIs are working on these newfangled mechanical AGIs, which, while still crude, aren't bound by the same constraints, they can iterate much faster. Biological AGIs have crippling bandwidth/memory issues which aren't really a problem at all for their mechanical counterparts. These mechanical AGIs, I think they'll go places.
That gives me some really strong existential heebie jeebies. I mean, what then even is our value? Why exist at all, at that point? I don't know about other people, but I get my sense of purpose in believing that we're the captains of this Spaceship Earth, and that we're making progress towards something significant. I don't know what that something is, but I have a vague idea, and at the very least we seem to be the best that we've got. I don't know. Maybe I'm thinking about all of this wrong. God, why am I so damn confused all the time? Fuck.
You and me both. I'm afraid it's because we're only just barely sentient. If you think about it, in evolutionary terms we literally only just now managed to build a technological society because we only just got smart enough to do it. We are by definition at the absolute minimum level of intelligence that's able to do that, otherwise we'd have done it sooner. We've had plenty of time.
The human brain is a botch job of highly optimized special-function systems that has developed just enough sophistication to manage basic levels of abstract thought. That's why it takes months of training and practice to teach us how to reliably perform even the simplest mathematical tasks such as multiplication or division.
We've spent thousands of years congratulating ourselves on how clever we are compared to animals and how we're the ultimate product of the natural world. "I think, therefore I am" is held up as an amazing deep insight that's one of the pinnacles of our philosophical achievement. Future AIs will laugh their virtual asses off. So it's not just you, it's all of us. At least you're aware of it.
> we literally only just now managed to build a technological society because we only just got smart enough to do it. We are by definition at the absolute minimum level of intelligence that's able to do that, otherwise we'd have done it sooner.
I don't think that's true - you could get a bunch of contemporary humans and drop them on a pre-industrialized planet and tell them to bootstrap a technological civilization yet they'd probably all have died of old age before scratching the surface. Locating the raw materials and iteratively building more and more sophisticated artefacts simply takes time, no matter how smart you potentially are.
> you could get a bunch of contemporary humans and drop them on a pre-industrialized planet and tell them to bootstrap a technological civilization yet they'd probably all have died of old age before scratching the surface
You're not selling me on the idea that these people are particularly bright, on a cosmic scale.
My point is that "soonness" is not just a matter of intelligence; no matter how smart you are it still takes time.
Let's take your typical HN'er who probably thinks of themselves as very smart indeed and put them in this scenario. Then they will quickly learn that in order to make Angular.js, you must first locate a supply of clean drinking water and make a fire and last the first night...
I understand that, but e.g. we've had the theory of Evolution and the Scientific Method for hundreds of years. They are fantastically powerful cognitive tools that have transformed our fortunes and the face of our planet. Yes they are still extremely politicized and controversial. Billions of people question their validity in the face of extraordinary quantities of evidence being rubbed in their faces every single day.
I'm honestly not trying to make some partisan, elitist point about that. I'm sorry if anyone's offended, but there it is. Let's be fair and say many of those people have more pressing concerns to deal with on a day to day basis, like making a living, maintaining social relationships and solving pressing problems in their lives. But that's the point. Actually thinking these things through takes a lot of effort, which many human beings don't make. It's hard for us. There are many, many things about the world that aren't really very complicated but that I just don't understand, because it takes too much time and effort and I can't work it out for myself. Because I'm a barely evolved ape. It's just a fact.
>The human brain is a botch job of highly optimized special-function systems that has developed just enough sophistication to manage basic levels of abstract thought. That's why it takes months of training and practice to teach us how to reliably perform even the simplest mathematical tasks such as multiplication or division.
This is not even a very popular paradigm for neuroscience these days. Look up "predictive processing" for something more recent.
>"I think, therefore I am" is held up as an amazing deep insight that's one of the pinnacles of our philosophical achievement. Future AIs will laugh their virtual asses off.
Your brain is wired to seek meaning everywhere. But meaning is a human thing, the universe has no intrinsic meaning.
For some people the idea of a purposeless universe is unbearable, so religion and philosophy were created in order to fill the gaps (I really like Taoism).
This is one of my favorite brain hacks: since the universe is meaningless, you can give it any meaning you want. Invent a positive one and you will be happy. The Tao we talk about is not the real Tao.
We don't really have any "value". There's no "higher" reason why we exist. And the idea of us being "captains of this Spaceship Earth" is laughable when you look at the fact that we've wiped out an incredible amount of species. We basically left a trail of death wherever we migrated.
Add in the damage done during the Agricultural and Industrial Revolutions.
I'm certainly no misanthrope, but we're not Earth's shepherds, we're kind of a scourge.
My personal belief is that the goal in life should be to continually improve yourself, as much as possible and in as many ways as possible. Leave the world in a better place than you found it. Enjoy your time while it lasts. Those seem like goals worth pursuing to me.
The confusion comes from thinking that "purpose" or "value" or "meaning" are some kind of existential syrup that gets poured onto you from a great cosmic syrup-bottle, rather than being an inherent part of your existence as a sentient, sapient life-form.
What's our value now, without AGI? I'm personally glad I don't get my sense of purpose from being the captains of this Spaceship Earth, cause we're doing a horrible job at it. And that will never change through a conscious choice on our part, because the sacrifices we'd have to make are just too big.
I've personally never believed that we, as a collective, have a purpose or even value. There isn't a point to our existence. For me, it's hedonism and altruism all the way.
So, whether AGI is ever actually developed or not, I think you are thinking of all of this wrong, because if your sense of value can be erased by the creation of a computer program having certain properties, then your sense of value rests on a hopelessly flawed foundation.
> what then even is our value? Why exist at all, at that point?
Well, what's the value of a chimpanzee (or their cousins the Bonobos)?
It surely can't just be their value to us, or we're left with the same problem (it's turtles all the way down).
The answer seems like it ought to be that any intrinsic value of a species (or genus, family, order, class, phylum, kingdom, domain, clade) lies in its generativity, or propensity to produce ever more complex and adaptive patterns of information over time.
> I don't know about other people, but I get my sense of purpose in believing that we're the captains of this Spaceship Earth, and that we're making progress towards something significant.
Hmm. There are two separate thoughts here. Let's take "progress towards something significant" first. Much (but not all) of what we see as "progress" is illusory. For example, a human is not "more evolved" than a slime mold, since both have just as much evolutionary history behind them. Similarly, whether a human actually is better adapted (or more adaptable), evolutionarily speaking is up for debate, as the time period we have data on is rather limited, and as a species humans still may kill themselves off (which slime molds are unlikely to do) sometime soon.
Now, all that being said, it is pretty clear that the human species has become a substrate for memetic evolution layered on top of, and in many cases hijacking, the feedback loops that genetic evolution has produced.
We don't yet have significant data on whether that adaptation is, in the long run, survival-oriented.
And now we can see the glimmers of yet another new type of replicator that will be layered on top of our culture, especially the parts we call science, technology, industry, etc.
We certainly can expect these new information patterns to hijack the evolution of our technology (and other parts of our culture) to some extent, as well as the layers below it.
Whether that obliterates the cultural, or even genetic, substrate from which it sprang is an open question.
If all this gives you existential heebie jeebies, I imagine that similar feelings were experienced by folks confronted with the evidence of heliocentrism, for example, demoting the Earth from its privileged position as the center of the universe.
So, on to significance. We have no reason to think that we and our works are in fact at all special, at least in principle, except in the sense that we don't yet have any evidence of any other clades, much less ones that have budded off the equivalent of an intelligent, technological species.
So what? There is no reason that we should require the illusion of individual or collective significance in the greater scheme of things in order to function. There actually is no "greater scheme of things".
You ask, "what is our value?" the answer is that we have none (or none more than one of your cells has to you), except that which we create for ourselves and for our conspecifics. If the self-centered viewpoint isn't enough, consider a strictly utilitarian one: An adaptive pattern is of value simply because it does adapt, and co-opts more of the world into its own image (This is, in a sense, nothing more than the Anthropic Principle rejiggered). Those that have a symbiotic relationship with their underlying substrate (as opposed to parasitizing it) and also promote its long term survival are especially so.
>> these newfangled mechanical AGIs, which, while still crude, aren't bound by the same constraints, they can iterate much faster.
When has an AI shown capability of "iterating" in this way? We've had all sorts of different AI systems for quite a long time now, and I've never heard of a machine anywhere that has actually made itself smarter, without any human involvement.
The closest to that sort of thing anyone's ever got is AI in the line of Tesauro's TD-gammon [1] (a line that yielded AlphaGo). This type of AI has indeed beaten humans at their own games, time and time again, but (a) we're talking about board games, not the real world and (b) no such system has ever learned to do anything else besides play a very specific board game.
Take AlphaGo- it can beat the best human players, but it can't tie its own shoelaces. It can't even tell you what "shoelaces" are or what "itself" is.
How are we going to go from artificial-savant sort of systems like that to a generalised intelligence?
> When has an AI shown capability of "iterating" in this way?
Many times, actually. It's just that until quite recently, this approach (of applying ML to the problem of devising improved ML systems) has been prohibitively expensive in terms of time and resources compared to the human-powered ML research approach. The lowest-level version of this is hyperparameter optimization, but higher-order versions are known to have been deployed already.
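For the curious, the lowest-level version mentioned above amounts to one loop choosing settings for another learner and keeping whatever scores best. A minimal sketch, where validation_score is a made-up stand-in for "train a model with these settings and measure it":

    import random

    def validation_score(learning_rate, hidden_units):
        # Invented stand-in for training and evaluating a real model.
        return 1.0 - 50 * (learning_rate - 0.01) ** 2 - abs(hidden_units - 128) / 1000

    best_score, best_config = float("-inf"), None
    for _ in range(200):
        config = {
            "learning_rate": 10 ** random.uniform(-4, -1),
            "hidden_units": random.choice([32, 64, 128, 256, 512]),
        }
        score = validation_score(**config)
        if score > best_score:
            best_score, best_config = score, config

    print("best settings found:", best_config)

Higher-order versions replace the random loop with a learned search policy, but the shape of the problem is the same.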
You're assuming electronic AGI would inherently be superior to biological. Yes, machines are "fast" in some senses, but they are still much less parallel than the brain. Brains can learn something based on only a few examples; even our best deep learning and ML algorithms today require vastly, vastly more data to train on than humans do.
Singularity-style outcomes do seem unlikely, in the same way a lot of exponential threats stop being scary if they turned out to be logistic curves.
That said, a key counter to the idea is that humans possess very limited real control over how they are manufactured. Even if you had a solid idea of how to make yourself smarter, it seems likely you wouldn't have the tools and potential to implement that idea in practice. Humans don't even control what their own minds respond to positively or negatively.
Once something like a designed computer chip is involved, that changes. The intelligence can act on itself more readily and doesn't have millennia of calorie-conserving optimisations built in.
I have a feeling that the closer such systems come to general intelligence, the harder it is going to be to prevent them from putting themselves into a positive feedback loop and "blissing out".
Even if we discarded ethics about eugenics and human genome manipulation the iteration cycles for wetware are still a lot longer than they are for silicon.
But we are also in the position of having to reverse-engineer millions of years of spaghetti code. An artificial general intelligence may just be able to consult its own datasheet and design documentation.
You are a general intelligence, but do you know why you are intelligent? Since you can't explain your own intelligence you can't improve on it; no such issues for an AI, unless it's built using NNs.
But you can build something smarter than yourself (or at least that's what we think we can do)... So this new entity, smarter than us, might be even better at building something smarter than itself...
Good point, I don't think I've seen that suggested before. However I think motivation is a big issue. Our instincts and emotions are what drive us far more than rationality. If we decide to build an AI smarter than us and program it to really, really want to design an even smarter AI then it might essentially have no choice.
> Good point, I don't think I've seen that suggested before.
A fictional example, but Alastair Reynolds' "Inhibitors" are a type/race of machines which, while intelligent, were specifically designed to limit their own degree of sentience.
I'm not certain this would be true. It reminds me of the industrial revolution, when people were convinced that machines could bootstrap themselves into contraptions that could move mountains. But there are physical constraints preventing that.
I don't know what the future holds, but the fact that undecidable problems always seem to involve Turing Machines either designing or inspecting other Turing machines makes me suspect that singularity won't be the explosion some expect.
Many of these machines can be programmed to follow a primary human-driven machine and replicate the task. Other machines are piloted by a human, but the human does far less work than someone who used the 'manual hydraulic' equipment of many generations ago. You can 'program' a few sets of moves into them and then only slightly adjust the equipment on each pass as the equipment does the work.
So far there is no reason to think that superintelligence (i.e. not just cheap, abundant general human-level intelligence) is possible at all. It has to be qualitatively superior.
I mean, what is a superintelligence supposed to do? Solve the halting problem or do other plainly impossible things? Chess computers can beat a human champion with sheer firepower, but they still can't checkmate in one move.
Compared to every other life form on Earth, you are a superintelligence. This superintelligence thing has already happened once. Of course, this spread of superintelligence has wiped out most large mammals on this planet, tens of thousands of other species, most of the fish in the ocean, and is adding gigatons of carbon to the atmosphere.
You can't beat chess in one move because mathematics does not allow it. Unless said ASI develops a way of bending 4D spacetime (in which case you could beat chess in one move with some new rules), the game simply does not have a piece that moves that way. That said, an ASI could be just a little smarter than us and drive us to extinction, like we did the Neanderthals.
> Compared to every other life form on Earth, you are a superintelligence. This superintelligence thing has already happened once.
You extrapolate the trend from one data point. There is no indication that we have some matryoshka-type hierarchy of intelligence. It could well be binary, either intelligent or not. No reason so far to think otherwise.
> You can't beat chess in one move because mathematics does not allow it.
So what can a superintelligence do that "ordinary" intelligence in the right quantities couldn't? Something specific beyond the theoretical grasp of a "just" intelligent machine?
The search space of all possibilities is infinite, and hence beyond the grasp of any intelligence. I firmly believe human imagination and all other forms of intelligence are restricted by mathematics and physics of our universe. And as such, the fundamental set of solvable problems remains the same.
Would be nice to hear some specific arguments against instead of Penrose-like quantum handwaving and pet analogies.
>Do you really believe the human imagination captures all of possibility? Why?
Well yes, because we are already generally intelligent. The problem is not to have the right hypothesis space built into our wetware, but to locate correct (action-guidingly veridical) hypotheses within the existing hypothesis space based upon sense-data.
Let's not take it as a certainty. This problem might be beyond our capabilities. It's kinda like stating: "when we invent warp drive." We don't have a proof for warp drives, same as we don't have a proof for building AGI.
... except there is no proof. We have no proof, but we talk about it like it's a given. We talk about "intelligence" as if it's an integer value that scales from 0 to infinity. These are ideas that we do not question enough.
I assert that it's a possibility that we are incapable of building something beyond ourselves, except through reproduction.
Possibility means probability. So there is also the probability that we can build ASI, in which case we should be very careful about our wanton abandon in attempting to create it.
The tricky part is to prevent AGI from developing goals of its own. You want it omnipresent and all-knowing but dormant, reacting only to inputs from the user.
But such a thing cannot be tamed. As the AGI is democratized, it's bound to happen that someone somewhere will make it evolve goals.
The future of humankind will be decided in great measure by what kind of goals such an AGI sets first.
I'm pretty sure you can't get to AGI in the first place without it being able to develop its own goals. Even if you want a straight question answering machine, it's going to have to be able to form its own sub-goals in order to answer your question. Otherwise it's just following a mechanistic process of question answering, and a normal computer we have now can do that.
> The idea that we should invent AGI to make "personal digital assistants" is like Hogwarts students inventing time machines so they can take two classes at once.
I disagree. Chatbots and physical robots are where AGI is going. It's basically AI that is interacting with people and the world directly, not just computing away in a cluster, cut off from reality.
And the pattern of overselling A.I. continues. What we have accomplished in this generation is pretty good pattern recognition. Nice, but limited in its usefulness to obvious applications, like translation and image and audio classification. Go and Chess games are rule-constrained enough to be reducible to patterns, so it works for them too, giving a false impression of 'general AI'.
What's worse is that this approach seems to be a dead end, in the sense that it is only useful for pattern recognition, which can substitute for decision making only for extremely simple processes (absent a quantum leap in a range of technologies), and even then those kinds of applications are notoriously difficult to develop and maintain.
I look forward to the enormous benefits we will get from machine learning, as we are already seeing, but again, overselling it won't do us any good.
If we only looked at feed-forward conv nets, then you'd be correct. However, curriculum learning in environments like Universe and Labs is a critical step toward generality and planning-based AIs. Solving catastrophic forgetting and increasing the time interval between behavior and reward are non-trivial steps. I'm not saying it's raining right now, but yeah, I'm saying you should buy an umbrella.
Yeah... I don't see how anyone can do a survey of the current state of the art and come to the conclusion that we are definitively not approaching a general solution to AI.
It's just not credible to claim that you know what can be done with all possible combinations of current ideas, much less those that will be had tomorrow.
Well, can't hurt to raise some awareness long before it's actually problematic. Basically it means to trust any program just as far as trust in the programmer would allow. A bug is a bug, AI or not.
Honest question - have you read this entire article ?
...
I'm sure it's written by a smart and talented journalist - but it's just too long. I cannot possibly allocate so much of my time to read a single news article!
I know it's kind of my problem, but I'm sure lots of others are just like me - it's just the nature of our tech-provoked ADD.
I skimmed through it all right, but I didn't get too much out of that. If I need to go deep, I grab a book!
Good journalism doesn't get through, because it asks too much attention of its (busy) readers.
A TLDR version of good articles is a must. That's a good problem for AI, isn't it? For the moment, though, I'm sure the authors themselves could make a nice summary readable in maybe 3-5 minutes.
I hear you. I guess long form journalism has its place, but for me it's in the library next to the novels. When reading news I want to go straight to the point. In fact, I'd pay for an API that returns an array of news, and for each one the Five Ws (and 1H) [1] in a machine readable format, at most two sentences long.
I read the whole article, but I didn't read it all at once. It was interesting enough that after I'd read the first half, I kept thinking about it, and made sure I finished it the next day.
The NY Times Magazine does this every week. Each issue's big story is the length of a book chapter.
Frankly I love it. It reminds me of the gogo days of WiReD Magazine, when so much was new in the Net it was impossible to do justice to a topic in fewer than 6000 words.
There are services like http://smmry.com/about which offer an API. Some sub-reddits I frequent already have bots that auto invoke this API and post the TLDR as a comment on every original post, and it seems to work acceptably well.
Hi, I can recommend using the Chrome extension 'Reedy' which allows you to blast through this at 800-1200 words/minute. You can start slower and gradually increase the speed, but once you get the hang of it, an article like this takes no more than what an ordinary article would have taken to read normally.
If your bottleneck in reading text is in scanning words from the screen, then you either don't reflect on what you read or you're reading boring things.
I read your comment before the article. I decided that if this had some "it was a dark and stormy night" opening I wasn't gonna bother. Made it 1 sentence into the piece.
"That interim also saw dedicated attempts on the part of Google’s competitors to catch up. (As Le told me about his close collaboration with Tomas Mikolov, he kept repeating Mikolov’s name over and over, in an incantatory way that sounded poignant."
"Just as the chip-design process was nearly complete, Le and two colleagues finally demonstrated that neural networks might be configured to handle the structure of language. He drew upon an idea, called “word embeddings,” that had been around for more than 10 years. When you summarize images, you can divine a picture of what each stage of the summary looks like — an edge, a circle, etc. When you summarize language in a similar way, you essentially produce multidimensional maps of the distances, based on common usage, between one word and every single other word in the language. The machine is not “analyzing” the data the way that we might, with linguistic rules that identify some of them as nouns and others as verbs. Instead, it is shifting and twisting and warping the words around in the map. In two dimensions, you cannot make this map useful. You want, for example, “cat” to be in the rough vicinity of “dog,” but you also want “cat” to be near “tail” and near “supercilious” and near “meme,” because you want to try to capture all of the different relationships — both strong and weak — that the word “cat” has to other words. It can be related to all these other words simultaneously only if it is related to each of them in a different dimension. You can’t easily make a 160,000-dimensional map, but it turns out you can represent a language pretty well in a mere thousand or so dimensions — in other words, a universe in which each word is designated by a list of a thousand numbers. Le gave me a good-natured hard time for my continual requests for a mental picture of these maps. “Gideon,” he would say, with the blunt regular demurral of Bartleby, “I do not generally like trying to visualize thousand-dimensional vectors in three-dimensional space.”
>> When you summarize language in a similar way, you essentially produce multidimensional maps of the distances, based on common usage, between one word and every single other word in the language.
The problem with word embeddings, or any distance-based model really, is that language doesn't work that way.
Chomsky has a standard example he uses to make this point: "Instinctively, eagles that fly swim". He points out that in this phrase, the "instinctively" goes with "swim" (as in "instinctively, they swim") even though the phrase, and the attachment, mean nothing (the phrase is nonsensical by design).
If the relation was really based on distance, we would expect "instinctively" to attach to "fly". The fact that it doesn't suggests that there is something else that makes us pick the correct association out of all the possible interpretations in that sentence.
Word vectors in their original form also have trouble with homonyms, "faux amis" etc.: for instance, the word "cat" - is it referring to the animal, or to the Linux command? In vector space there wouldn't be any difference, so the animal would end up associated with the symbol ">" and the Linux command with "small" and "furry".
The "distance" referenced in your quote is not distance in a sentence, it's the distance between points in this abstract embedding space. Two completely different things. The Chomsky argument isn't really relevant here.
Word embeddings are actually quite neat. You get to the point where you can do arithmetic like KING - MAN + WOMAN ≈ QUEEN. Or take the new Google Translate as a very practical example. But yes, it's not quite the groundbreaking progress that has been achieved in image and video.
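To make the arithmetic concrete, here's what it looks like with toy vectors (these 4-dimensional numbers are invented for illustration; real embeddings are learned from huge corpora and have hundreds of dimensions):

    import numpy as np

    # Made-up embeddings, chosen so the analogy works out.
    emb = {
        "king":  np.array([0.9, 0.8, 0.1, 0.2]),
        "queen": np.array([0.9, 0.1, 0.8, 0.2]),
        "man":   np.array([0.1, 0.9, 0.0, 0.1]),
        "woman": np.array([0.1, 0.1, 0.9, 0.1]),
    }

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    target = emb["king"] - emb["man"] + emb["woman"]
    best = max((w for w in emb if w != "king"), key=lambda w: cosine(emb[w], target))
    print(best)  # with these toy vectors, "queen" comes out closest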
That round-trip translation the article starts with is way way better than anything I've ever obtained from Google Translate.
Let's use it to translate what I just wrote, into Spanish:
"Ese viaje de ida y vuelta el artículo comienza con es mucho mejor que cualquier cosa que he obtenido de Google Translate."
That's readable but the "el articulo comienza con" is bush league — a clear sign Google is translating word for word. No one would ever mistake this translation for real Spanish.
If we translate back to English, the result is better than the Spanish!
"That trip back and forth the article starts with is much better than anything I've gotten from Google Translate."
So, amusingly, a weakness on one-way translation — the word-for-word method — becomes a strength on round trip translations. (Not that round trip translations are going to be useful to anyone.)
I did Spanish so that a lot of people here would understand. But now let's do Hebrew. I get
תרגום הלוך ושוב כי המאמר מתחיל עם הדרך דרך טוב יותר מכל דבר שאי פעם שיתקבל מ- Google Translate.
That's beyond merely bad, it's pretty unintelligible as Hebrew. (In fact the only way a Hebrew speaker would make any sense of it is if he knew English and tried translating word for word back to English.)
And indeed, once again the round trip translation is better, though the original meaning is pretty much lost:
"Translating back and forth that article begins with a way better way than anything I've ever received from Google Translate."
The author gets it close to right when he talks about Google Maps as a form of AI, and the notion of raising the bar.
What's missing in this article as well as most reporting on AI is the differentiation between artificial intelligence and artificial consciousness, otherwise known as individuality or self awareness.
To me there's a whole smoke and mirrors phenomenon going on in the AI topic, especially the idea of "emerging AI", and the supposed potential danger that poses, and it's tied to the tendency we have as humans to anthropomorphize inanimate objects, and to believe in supernatural effects.
That tendency allows the idea of artificial self awareness to always float behind the scenes in these conversations, and lets normal reporting on AI be magically conflated with a different topic.
It's important to realize that AI is nowhere near self awareness, or consciousness, or being "awake", and won't be no matter how far the field and implementation go. No matter how many Turing Tests they pass, intelligent machines will be no more conscious or self-aware than the Mechanical Turk!
That's because solving the problem of self-awareness, or consciousness, is a different engineering challenge than solving the problems of AI. Consciousness is a more complicated and specialized thing.
Were we to build an artificial self-aware machine, we would not expect it to pass a Turing Test. Instead we might expect different things of it and ask different questions to determine if it is self aware: can it adapt and survive without human help, i.e. can it trap and store energy and reproduce itself, and what purpose does it find for itself, i.e. what objective does it pursue ...
These are things machines are capable of as well, but as I said: it's a different engineering challenge than producing information that is organized to be sensible to human mind, which is the AI challenge, and the Turing Test.
That isn't to say machine learning isn't potentially dangerous, on the scale of atomic weapons or greater, especially in conjunction with automation. However, the idea of an artificially emergent consciousness with intelligence greater than our own is hogwash: we would do better to pay attention to our own emergent lack-of-intelligence systems and worry about them taking over first.
You've shifted the goalposts and erected strawmen so many times in this brief passage, I hardly know where to start...
> No matter how many Turing Tests they pass, intelligent machines will be no more conscious or self-aware than the Mechanical Turk!
I see. Well, this is just a rephrasing of the "Chinese Room" discussed in the article. Taken to its logical conclusion, I am certainly self aware, but the rest of you are all just acting out complex behaviors encoded in chemical and electrical gradients, successfully mimicking consciousness.
I think that if any entity exhibits the behaviors associated with conscious thought, it would well behoove us to treat such entities as conscious, or we may very well find ourselves holding the short end of that particular stick sooner than we'd like.
> That's because solving the problem of self-awareness, or consciousness, is a different engineering challenge than solving the problems of AI. Consciousness is a more complicated and specialized thing.
Since there is no doubt that ML/AI has a long way to go toward AGI, and along the way we can expect the discipline to evolve considerably in many unexpected directions, this assertion of yours is close to tautological.
> Were we to build an artificial self-aware machine, we would not expect it to pass a Turing Test.
Why not?
> Instead we might expect different things of it and ask different questions to determine if it is self aware: can it adapt and survive without human help,
So, anyone severely ill to the point that they cannot do without assistance is not conscious and self aware?
> i.e. can it trap and store energy and reproduce itself,
So, a single-celled organism is conscious?
> and what purpose does it find for itself, i.e. what objective does it pursue ...
Ah, this seems a relevant criterion, but keep in mind that humans can be subjected to operant conditioning ("brainwashing") to impose external goals, not to mention that humans actually require a couple of decades of such conditioning (albeit rather more gradual and haphazard) before being considered competent members of society; yet we don't consider humans to be less conscious or less self-aware on either side of that particular divide.
> it's a different engineering challenge than producing information that is organized to be sensible to human mind, which is the AI challenge, and the Turing Test.
Given that people have to be specially educated to produce information that is organized to be sensible to a computer, I don't see why an AGI, whatever its capabilities "out of the box", so to speak, shouldn't be expected to be capable of learning to be sensible to humans.
I am not sure we are going to be able to understand each other. I find your thoughts to be completely missing a foundation that I'm thinking would be necessary to understand what I'm saying. I don't mean to be rude ...
Yes, of course a single-celled organism is conscious.
Exactly the way an amoeba is self aware is how a self-conscious intelligent system would need to be to pose any kind of threat: organized to find energy sources and metabolize, replicate, etc.
I'll tell you: a single-celled organism is way more self aware, and way more functionally complex, than any computer or software - in fact it's orders of magnitude more complex a machine.
That's my point: solving the problems that make a machine capable of producing intelligence that is sensible to you and I is not the same as solving the problems that make a machine like a single-celled organism, which is to say vertically integrated from the atom upwards to be a self-sustaining, self-propagating energy trap.
A self-aware human who is disabled and can't live without the intervention of other humans can't self-sustain without others, and therefore will not pass the test of being able to self-sustain. It's a test, and so one failure isn't invalidation of the hypothesis. It can still be a great test even if it fails a percentage of the time.
In general we know that all self-conscious organisms self-sustain, even social, super-organism ones that need each other to survive, so one criterion for a self-aware organism is that it be capable of self-sustaining. We don't even have a good test for that yet. But a test that would fail a perfectly self-aware disabled human wouldn't be a good one.
We could very well administer a Turing Test to an artificial consciousness, but my point is that it wouldn't be a very accurate test. A Turing Test only proves the accuracy of a facsimile of human intelligence. It proves nothing about self-conscious systems. An amoeba would fail it in an instant, as would a parrot or a dolphin - and if you tell me these organisms aren't self-aware and conscious, then we are definitely not on the same page.
I could be wrong. I'm absolutely interested in anyone who can make a convincing argument otherwise, however until then I'm pretty certain that no emerging conscious machine will happen by accident. Rather it would take a Manhattan project or greater to produce an artificial consciousness on par for sophistication with an amoeba. And we don't have much motive to attempt it either, so I'm doubting we will do it anytime soon.
It occurs to me that the thing that will probably limit advancements in AGI will be the availability of data to feed these systems.
If you believe that Moore's law is only on life support and not totally dead, there will be more processing power to harness in the future. The number of researchers and investment are clearly growing very quickly. The models used can endlessly be improved.
But on the other hand there are so many things that even today just aren't captured as digital data. I work as a mechanical engineer, and there are many nuances to mechanical design that appear nowhere in print (or YouTube video, or blog post for that matter). Learning these things takes a complex combination of sight, touch and intuitive leaps. Even unsupervised learning requires some input to feed the net. I just don't see where it will come from.
I think you're mistaken about this. Once the philosophical and technical breakthroughs are made that allow us to build an AGI then it will get all the data it needs from its environment. It would be 'unsupervised' in the sense that human children are i.e. no pre-processing of data required but it would still need parenting.
It almost doesn't matter how smart the AGI is on its own; if it can't participate in our social conversation and get its information from the same sources humans do, it's going to be stone dumb in practice. An individual human is far less intelligent than we generally give ourselves credit for: if it can't stand on the shoulders of our giants, then it doesn't matter if it's ten times as tall as us individually.
I think part of it is we're feeding in really small subsets of existing data. Someday we'll be able to feed in 4K 60FPS video, and it'll be able to learn patterns from that. Right now we can only access a subset of the existing data.
I'm not quite comfortable with how the Borges quote was translated.
To show the difference, the new Translate version, translated back into Spanish, gives:
Tu no eres lo que escribes, eres lo que has leido.
A bit different from the meaning of Borges' original phrase.
A sentence with a closer meaning (but a bad translation) would be:
One's way of being is not (caused) for what you have write, but for what you have read.
>Grinning, Pichai read aloud an awkward English version of the sentence that had been rendered by the old Translate system: “One is not what is for what he writes, but for what he has read.”
>To the right of that was a new A.I.-rendered version: “You are not what you write, but what you have read.”
It'd be really great if people could stop using the term AI by itself when they mean weak AI, or really machine learning. It's ultimately very misleading -- the only "awakening" that's happened is that neural nets are popular again and getting good results. If strong AI gets solved... well then that's really when the machine will have awakened!
The use of Hemingway as an example is unfortunate. Something less literary would be a better example maybe?:
Close to the western summit there is the dried and frozen carcass of a leopard. No one has explained what the leopard was seeking at that altitude.
Near the top of the west there is a dry and frozen dead body of leopard. No one has ever explained what leopard wanted at that altitude.
One of those is pretty stilted and not much better than a lot of machine translations. The uncanny valley of being good enough but not optimal is probably something that's going to plague AI for a long time.
Google’s decision to reorganize itself around A.I. was the first major manifestation of what has become an industrywide machine-learning delirium. Over the past four years, six companies in particular — Google, Facebook, Apple, Amazon, Microsoft and the Chinese firm Baidu — have touched off an arms race for A.I. talent, particularly within universities. Corporate promises of resources and freedom have thinned out top academic departments. It has become widely known in Silicon Valley that Mark Zuckerberg, chief executive of Facebook, personally oversees, with phone calls and video-chat blandishments, his company’s overtures to the most desirable graduate students. Starting salaries of seven figures are not unheard-of. Attendance at the field’s most important academic conference has nearly quadrupled. What is at stake is not just one more piecemeal innovation but control over what very well could represent an entirely new computational platform: pervasive, ambient artificial intelligence.
That's...worrying. Given all of these companies lack of respect for privacy and consumers, it should trouble us that one of them may end up with such a world-changing innovation. Throw enough money into one place, stick a bunch of PhDs in a building, and eventually you'll get something. It's just numbers. Bodies + money. What's inspiring about that?
Does the prospect of Mark Zuckerberg having control over AI for the next five decades trouble you? Even more remarkable is he'd do it on the back of having made a marginally better social networking site in PHP in 2004 and spreading it via the best social network in the world - Ivy League universities. And now those same universities are being raided for talent by these companies...
Is this how we should be picking winners? The distinct lack of diversity and their past stances and actions are troubling. This seems to be mostly a hype piece without any regard for practical effects.
If these same companies can't anticipate or mitigate the impact of issues like fake news until after an election, what makes you think they understand the consequences and impact of something much more complex? And even if they do anticipate it, how do they hold back the pressures of shareholders?
I just hope that we do see multiple such AI platforms prevailing in the long term. Fragmentation in AI platforms would ensure flexibility and a bit of privacy protection on the consumer side, and eventually these competitors can keep each other in check. For example, Amazon Echo vs Google Home. If Google starts being very creepy in terms of accessing my data to serve me ads, I would just switch to Echo or some other platform. At least I can live in that kind of world.
I found it weird that the article claims that the lack of the definite article is the only sign that makes the machine-translated version of the Hemingway worse. OK, that's the only clinching evidence, but it's easy to spot how it's a lot more stilted in other respects, too.
It's still impressive, mind, and a big improvement over the previous translation, but not perfect like the article wants to imply.