What a bizarre logical leap this early paragraph makes to try to explain why AI is beyond the capabilities of mere humans:
"AI can be thought of as a search problem over an effectively infinite, high-dimensional landscape of possible programs. Nature solved this search problem by brute force, effectively performing a huge computation involving trillions of evolving agents of varying information processing capability in a complex environment (the Earth). It took billions of years to go from the first tiny DNA replicators to Homo Sapiens. ... we have little idea of how to find the tiny subset of all possible programs running on this hardware that would exhibit intelligent behavior."
That's... a bizarre argument. You might as well argue that since heavier-than-air-flight is a search problem over an effectively infinite, high-dimensional landscape of possible machines - and that since it took evolution billions of years to produce birds, we have little chance of stumbling upon a working design for a wing.
Of course evolution's 'brute force' breadth-first hillclimb takes billions of years to find solutions to problems. It's undirected and unintelligent. Engineers don't have to perform undirected unintelligent searches across the infinite space of possible solutions to problems, they can think and plan and learn. I see no reason to see AI as a particularly different class of scientific endeavor, more complex than rocket science or nuclear engineering or biochemistry, such that ordinary, unenhanced human brains can't hope to comprehend it sufficiently to design machines that are capable of intelligence. This sounds a little like an 'if man were meant to fly, god would've given us wings' argument.
Huh? He's not arguing that "AI is beyond the capabilities of mere humans". (Where did you get that from?) He's arguing that it's a hard problem, that the presence of lots of brains can lead people to mistakenly conclude it's easy, and that it will be easier with smarter humans. His argument would have worked just as well concerning heavier-than-air-flight, meaning that just because we see birds everywhere doesn't mean it's easy.
It's conceivable that un-aided humans are too dumb to crack intelligence from the ground up (and therefore would have to brute-force the solution through simulation), but Hsu isn't even arguing for that.
"Today, we need geniuses like von Neumann and Turing more than ever before. That’s because we may already be running into the genetic limits of intelligence.... AI research also pushes even very bright humans to their limits.... The detailed inner workings of a complex machine intelligence (or of a biological brain) may turn out to be incomprehensible to our human minds—or at least the human minds of today". (emphasis mine)
The entire thesis seems to be we will need artificial cognitive enhancements to be able to comprehend AI sufficiently to create it. It really sounds like he thinks we are too dumb to figure this out, that we are just scrabbling around in the dark trying solutions at random.
Even if it's true that humans are too dumb to crack AI from first principles -- which as I said is certainly conceivable, and which Hsu alludes to in the quote you've selected -- he's not basing that on the infinite-dimensional space description in the second paragraph. The point of the second paragraph was just to caution against concluding it's an easy problem based on the abundance of intelligence around us.
And my point is that that's not a very good argument. We don't have to use random walk algorithms to look for our solutions to the problem.
If someone shows you a list of sorted numbers and tells you they were sorted over a period of millions of years using bogosort, that may be true but it is not a good argument against the existence of efficient sorting algorithms.
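To make that concrete, here's a minimal Python sketch contrasting the two search strategies (the function bodies and the example list are mine, purely for illustration):

```python
import random

def bogosort(xs):
    # Undirected search: keep shuffling until the permutation happens to be
    # sorted. Expected work grows like n! -- evolution-style brute force.
    while any(a > b for a, b in zip(xs, xs[1:])):
        random.shuffle(xs)
    return xs

def merge_sort(xs):
    # Directed, designed algorithm: O(n log n) by divide and conquer.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

print(merge_sort([5, 3, 8, 1]))  # [1, 3, 5, 8] -- found by design, not by luck
```

The existence of the slow method tells you nothing about whether the fast one exists.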
Evolution is not exactly a random walk, it's a lot better. But it's plausible that creating an AI more "intelligent" than humans (whatever that means) is beyond our current intellectual capabilities, although I seriously doubt it.
He's forgetting two other scaling modes applicable to human intelligence: accumulating knowledge and collaborating. By accumulating knowledge we can build upon the work of the generations before us and take essentially unlimited time to crack a problem. Collaboration allows not only parallel processing of problems but also interaction and the exchange of inspiration towards the solution of a single problem, both of which scale quite well with a lot of intelligent people.
Note that there are almost trivial arguments showing some problems are beyond the means of our finite brain (viz unbounded algorithmic complexity or the halting problem).
>The detailed inner workings of a complex machine intelligence (or of a biological brain) may turn out to be incomprehensible to our human minds—or at least the human minds of today".
Did he really say that?
I probably should avoid talking right now.
Ahaha. Ahahahahaaaa. Oh boy.
I mean, the mind isn't that hard to crack once you start conceiving of exactly what problems it was designed to solve, what functions it was designed to perform. We don't have complete theories yet, but generative causal inference in Turing-complete domains gets us into replicating "psychological" behavior and explaining cognitive-psychology experiments, so hey.
And then we've got solid overviews of how you'd go about combining concepts into theories into worldviews, and such.
There are a bunch of open problems remaining, but not nearly as many as people think.
While it's great to hear of progress in the AI research sphere, if you are working in this field, and are truly close to creating true artificial intelligence, I have a little tip for you:
You probably want to learn not to include maniacal laughter in your forum posts.
Just, you know, when you do combine the concepts into theories into worldviews, and you come to post your "Show HN: Generative causal inference in turing-complete domains replicates cognitive-psychology experiments" announcement to share with the world that you have birthed a genuine universal AI, please don't start that post off:
That was actually not traditional maniacal laughter. That would have been more like, "Muahahaha! AHAHAHAHAHAHAHAAAAAAA!!" That was uncomfortable laughter.
I'm by no means a researcher, just some guy trying to learn enough to volunteer with a lab and eventually become a PhD student.
>Just, you know, when you do combine the concepts into theories into worldviews, and you come to post your "Show HN: Generative causal inference in turing-complete domains replicates cognitive-psychology experiments" announcement to share with the world that you have birthed a genuine universal AI,
Various people have been speculating on what problems the human mind was designed to solve since the dawn of human self-awareness. We haven't even got anything close to consensus on fundamental stuff like the extent and direction of causal relationships between language structure and the way a mind operates.
Things have got a bit better with fancy math and brain scanners to actually test hypotheses about how these psychological theories correspond to brain function, but we're a long way even from detailed answers to most relatively simple and defined experimental questions like "what's the difference between the way person X's mind and person Y's mind approach narrowly-defined problem A"
We're probably not even close to knowing how many open problems we have to solve, never mind actually solving them.
Heh, how is this any different from classic AI arrogance? Why should we believe this is not exactly the pitfall the author cautions against at the beginning? I mean, this sounds almost verbatim like so many "yeah, we're getting some really good results, almost there, just a few loose ends to tie up" sentiments from over the years.
Of course I can't say whether you're right or wrong, only time will tell, but I have a pretty strong belief that the first "true AI" is not going to come from any of the currently predominant methods/techniques/lines of inquiry.
Again: I think the test is to replicate distinctly psychological behavior, to come up with theories of how actually-existing minds work, rather than to come up with theories of "pure logic" or "pure rationality". You don't do real science by trying to explain your sci-fi imaginings. You try to explain things you've already got in the world.
The engineering of flight is something that's achievable through a straightforward understanding of physical laws and principles. Birds did not engineer flight, we did, and we were only able to engineer a crude version of it. The natural flight abilities of birds are still superior in many ways to engineered flight. So how can we assume from this that we'll be able to build a version of ourselves that's more intelligent across the board?
Humans engineering functional variants of inferior species' physical capabilities is simply not comparable to humans engineering a better, more intelligent version of ourselves. Maybe a beneficial albeit simpler version can be engineered, but one that makes us obsolete? It's hard for me to conclude that's not a huge leap in assumption.
Also troubling is the genetic explanation for individuals' achievements (e.g. von Neumann), based on anecdotes from "some guy who knew him" and "some guy who works at MIT". Genetics surely has a part to play in intelligence, but to say it will turn out to be the determining factor rather than resources or environment seems to me a bit of an extraordinary claim.
I love and agree with the premise that AI augmented people will be enormously more productive than traditionally intelligent people, but I think that access to educational and child rearing resources will define the difference, not genetics.
"You might as well argue that since heavier-than-air-flight is a search problem over an effectively infinite, high-dimensional landscape of possible machines - and that since it took evolution billions of years to produce birds, we have little chance of stumbling upon a working design for a wing."
Tired analogies between AI and flight are dead-ends. Until we have identified AI-side components of the analogy that corresponds to "air", "wing", "lift" &c., the analogy is empty and unproductive.
IOW these analogies neither get us off the ground nor do they take us anywhere.
I'm not making an analogy between AI and flight, I'm refuting an analogy between 'evolving' and 'inventing' solutions using flight as a counterexample.
Unfortunately, in doing so you have used the "AI is analogous to heavier-than-air-flight" meme. Please discard it, it's an albatross to discussion. Or to be more precise, it is a lead balloon - it never gets off the ground; it's a red herring. IOW you've chosen a poor and annoyingly wrongheaded counterexample. Now try and digest that.
If you use an argument and someone provides a single counter example, then your argument is false. That's it.
You can continue arguing your conclusion or view, but you need to find another argument to support it.
The fact that humans could invent super-flight despite not being able to build a bird means that the argument that we cannot build a super intelligent machine because we don't yet know how to make ourselves biologically more intelligent, is not a good argument.
Why do people think they can build anything worthy of the name "AI" by just fucking around with neural networks instead of by specifying exactly what sort of models and exactly what sort of inference constitute "intelligence" in the first place?
It's like trying to build an airplane by studying hang-gliders instead of aerodynamics.
Because we're at such a basic level in our understanding of how intelligence works that these crude hang-glider models are our best way to learn the basics.
AI researchers spent decades trying to formulate definitions and models purely theoretically, and it turned out those theories were not very useful for building or modeling real systems. The toy models are essential.
I agree. I think the current state of AI can roughly be described as "we want to fly like birds" rather than "we want to fly". Aviation is a great example of how the formulation of a task makes a huge difference. We still can't build machines that would fly exactly like birds, but instead we have built machines that fly faster and carry a lot more weight.
Because there's no known path to AI, and articles like this are pretty much science fiction. Reminds me of that Georges Méliès film about going to the moon in a cannon. We didn't have any ideas regarding rockets yet, so cannons it is.
I suspect someone will get some level of mega-expert system/simulated intelligence going sooner rather than later. Maybe something that can handle the types of commands you could give a dog, with enough fuzzy logic to work basic things out, like "Get me a coke from the fridge and then get my slippers."
That seems a lot more likely than suddenly birthing human-level AI from NNs. I think NNs will ultimately fail for the same reason that planes ended up working nothing like birds: trying to copy a biological system very closely just doesn't make sense, at least most of the time.
"Get me a Coke..." was accomplished in 1972 by Terry Winograd's SHRDLU and many 3rd gen AI systems since.
NNs will almost certainly succeed where expert systems failed, largely because we now understand that no single monolithic pattern matcher will suffice alone. Any cognitive engine must be composed of many components, each attuned to a different purpose and context. And we now better recognize the huge need for learning, both for initial skill acquisition and for lifelong learning thereafter.
Minsky's "Society of Mind" is probably a better illustration of how AI will evolve (if not manifest), as well as how it must integrate with our myriad collective needs and personal lives.
That is a problem of robotic hardware, not AI. It's really easy to hook up a microcontroller to get mic input and query a voice recognition server (backed by some sort of AI software) and then do commands based on interpretation of the result (more AI software). The hard part is actually being able to do the commands. There's a reason the service industry hasn't been totally destroyed by robotics yet, and it's simply because it's hard to get robots to interact with the physical world with any sort of generality. Though lawns are simple and standard enough there have been lawn-Roombas for a long time.
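The software half of that pipeline really is the easy part. A hedged sketch in Python (the endpoint URL and the command names are hypothetical, and the genuinely hard robotics is reduced to stubs):

```python
import requests  # assumes some speech-to-text HTTP service is available

SPEECH_API = "http://recognizer.local/transcribe"  # hypothetical endpoint

def hear_command(wav_bytes):
    # Ship raw audio to the recognizer, get text back.
    resp = requests.post(SPEECH_API, data=wav_bytes)
    return resp.text.lower()

def fetch_coke():
    # This stub is where decades of unsolved robotics live:
    # navigation, grasping, opening the fridge...
    raise NotImplementedError

def fetch_slippers():
    # Object recognition and manipulation in a cluttered home.
    raise NotImplementedError

HANDLERS = {"coke": fetch_coke, "slippers": fetch_slippers}

def dispatch(text):
    # Crude keyword dispatch from recognized text to an action stub.
    for keyword, handler in HANDLERS.items():
        if keyword in text:
            handler()
```

Everything above the stubs is a weekend project; everything inside them is the open problem.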
Isn't the fundamental problem with robotics not replicating the relatively simple mechanics involved in tasks like getting a Coke, but writing specific software drivers for each class of mundane manual task, because the AI isn't intelligent enough to figure out how to use its robotic limbs and computer vision by itself?
Right. Then they built airplanes that don't flap their wings either.
The success of fixed wing aircraft is not an ideal metaphor for the failures of AI research. Nor do the successes of neural nets herald the biomorphic approach, since they don't really resemble the brain very much.
Evolution, the process to which we owe our intelligence, is ultimately a dumb search process itself. It has no concept or understanding of what it creates, and yet, it still creates.
Thus, I fail to see why some sort of fundamental understanding of intelligence is required in order to create it. On the contrary, it's not hard to imagine genetic algorithms combined with neural networks being used in a similar fashion, provided there's sufficient data and computing power available.
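As a toy illustration of that combination, here's a minimal neuroevolution loop in Python (the task, network shape and all parameters are invented for the example; the point is only that the loop never needs to understand what it is building):

```python
import random

def random_net():
    # A "network" here is just a weight vector for a 2-input linear unit.
    return [random.uniform(-1, 1) for _ in range(3)]

def forward(net, x1, x2):
    w1, w2, b = net
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

def fitness(net):
    # Toy task: learn OR. Evolution only sees the score, never the "why".
    cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    return sum(forward(net, *x) == y for x, y in cases)

def mutate(net, rate=0.2):
    return [w + random.gauss(0, rate) for w in net]

population = [random_net() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                                    # selection
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print(fitness(population[0]), "/ 4 cases solved")  # typically 4/4 within a few generations
```

Whether the same dumb loop scales from a 3-weight OR gate to a mind is, of course, exactly the open question.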
> Why do people think they can build anything worthy of the name "AI" by just fucking around with neural networks
Are you attributing that position to the author? I don't think that's what he's saying at all (italics mine):
> The frontier machine intelligence architecture [at] the moment uses deep neural nets.... Silicon brains of this kind...have recently surpassed human performance on a number of narrowly defined tasks...We are learning how to tune deep neural nets using large samples of training data, but the resulting structures are mysterious to us. The theoretical basis for this work is still primitive, and it remains largely an empirical black art.
> Why do people think they can build anything worthy of the name "AI" by just fucking around with neural networks instead of by specifying exactly what sort of models and exactly what sort of inference constitute "intelligence" in the first place?
Doesn't this go back to the second paragraph of the article?
Because the advancements in the field are interesting and wide ranging: neural networks doing vision, speech, natural language understanding, reasoning and general learning over data.
This article mistakenly conflates enhanced human intelligence with an increase in AI safety.
While it is possible, it could just as well be that more intelligent humans actually diminish AI safety by way of rapid progress in certain branches of research.
Higher levels of intelligence do not necessarily mean appreciation for adequate safety measures. There's a lot of very, very intelligent AI researchers right now that think nothing of AGI risk, or otherwise lump it into the "it will evolve with us" category, as this article does.
The problem is that even if we do gradually become smarter via augmenting our intelligence, it doesn't necessarily preclude the emergence of a superintelligent agent.
The following slide from Bostrom's Superintelligence serves to illustrate this point:
> Cognitive engineering, via direct edits to embryonic human DNA, will eventually produce individuals who are well beyond all historical figures in cognitive ability.
Now that would be truly foolhardy. Blindly augmenting intelligence before cognitive moral reasoning is completely understood would be a recipe for incredibly dangerous psychopaths. At least with AI, you don't have a system that has been fine-tuned by eons of evolution to manipulate people at least partly against their own interests.
>At least with AI, you don't have a system that has been fine-tuned by eons of evolution to manipulate people at least partly against their own interests.
As a smart person, I actually find this remarkably insulting. Not only do I have little desire to manipulate others, I have little ability. I'm much better with computers than with manipulating people.
That you have little desire to manipulate people is neither here nor there; the fact is that a social brain has extensive hardwiring designed to manipulate other social beings in blatant and subtle ways, and that hardwiring has been shaped and fine-tuned by eons of evolution. The extent of manipulation against another's interest is going to be constrained by moral beliefs and instincts (empathy), which seem to be of much less concern in the OP than maximizing raw intelligence.
Let me make sure I understand you since I didn't understand why you were "remarkably insulted". The system fine-tuned by evolution is the human mind. Sociopathy and psychopathy seem to be genetically based and may occur in anywhere from 1% to 10% of the population based on various ballpark estimates. IQ is in no way related to moral behavior, smart people can be sincerely kind people or sociopathic, people of low-IQ can be sincerely kind or sociopathic. A person of low-IQ who is a sociopath ends up in jail most likely, a person of high-IQ who is a sociopath damages society far more by cleverly staying out of jail and may often land themselves high positions as they manipulate people's social intuitions without remorse. I suspect (but can't prove) that sociopathy has no correlation with IQ whatsoever.
> But theory-of-mind is a distinct cognitive module from intellectual intelligence. You can enhance one without enhancing the other.
I agree. However, I think theory-of-mind is one part of the puzzle. Another is moral reasoning and how it shapes social interaction. This, too, needs to be carefully understood because we know that authority and purity concerns, for example, both allow people to effectively turn off empathy. Turning off theory-of-mind when it is inconvenient is a potentially frightening ability.
I'm insulted because you appear to believe that increasing intellect necessarily decreases moral traits, that "smart -> malicious". Which is pretty damned insulting to those of us who are clever!
What?? I said "IQ is in no way related to moral behavior"! I don't in any way believe intellect necessarily decrease moral traits; that is an impossible reading of my remarks.
They seem to be orthogonal at least. Smart people are not necessarily ethical and vice versa. I'm unsure whether the parent thinks that any increase in intelligence would correspond with a decrease in morality, which I would disagree with. But there is a pretty clear issue: if we manage to augment intelligence without first understanding ethics, there's a not-insignificant chance of creating a sociopath who's smarter than any previous human. That wouldn't exactly be world-ending, but it could certainly be seriously damaging.
I'm willing to be generous to parent to the point of allowing that a specimen of sufficiently greater intelligence may have difficulty empathizing with your average human and regard them as lesser. Much in the same way that we tend to regard apes or mice as lesser than ourselves.
Some would argue that we already have that problem emerging in Silicon Valley. I don't agree, since I think that's more of a case of many people refusing to value a society that ill-treated me.
You can keep expanding the range of people credited with a plausible contribution until the label stops meaning anything, but doing so would be an ideological exercise at best.
Von Neumann, Shannon, Turing, Weiner, Hopper and many more, on through Wozniak, Torvalds, I could go on and on. There's nothing wrong with calling extraordinary intellect what it is.
I think the objection isn't that those people didn't make huge and transformative contributions to the field.
Rather, it is the tone set by the use of the worshippy word "genius", which has a murky, and totally relative definition, and the phrase "unusual cognitive ability", which implies that their abilities uniquely set these people apart from others who we don't worship in the same way.
Uncounted numbers of people have fully understood, and often expanded beyond, the discoveries of Von Neumann, Shannon, Turing etc. since their times, and even more probably had the innate ability to do so, but no access.
Thousands of others have demonstrated the scrappy self-startedness of Wozniak and Torvalds, but without the societal setting and geographic luck that allowed those individuals to succeed.
Given the significant role that one's environment plays in one's success, these people as individuals aren't in themselves unusual. What's unusual is that they were people with the right characteristics, in the right circumstances, and the right support systems.
Basically, extraordinary intellect isn't as unusual or consequential as that sentence from the article implies.
Sure, but the question is whether by celebrating those individuals so much, we exaggerate their uniqueness, and we understate the role that their circumstances played in their performance, whether intellectual or physical.
You could probably name 1,000 people that made significant contributions to the computer field, but that 1,000 had less impact than everyone else put together, especially when you ignore fashion. I.e., if we had lost Linux but kept BSD, not much would have changed.
Well tell that to Stephen who said, "it would be possible to achieve, very roughly, about 100 standard deviations of improvement, corresponding to an IQ of over 1,000."
Unless his looks, fitness, and social skills are also lvl 1000. Why not? Doesn't seem far fetched for a designer baby. In fact, giving him the perfect jawline is probably way easier than giving him 1000 IQ.
Without "great men", computer progress might have ground along something like this, in tiny steps.
IBM had been building tabulators for decades. But they just added and subtracted. Mechanical desk calculators had been built that could multiply and divide. Those came together in the IBM 602A Calculating Punch of 1946.
A multiply in only a few seconds! Division, too. You could even do Newton's method by wiring the plugboard appropriately.
The limits of gear-driven arithmetic having been reached, IBM tried using vacuum tubes, and produced the IBM 603 Multiplier. This was roughly equivalent to a 602A, but it used tubes. It was a trial to find out if tubes would work in a fielded product; only 100 were built. They did. So IBM went on to the 604, which was like a 603 with more registers and more program steps.
Meanwhile, crystal radios had been around for decades, and germanium diodes followed as a cleaned-up form of their crystal detectors. Some experimenters had fooled around with 3-terminal solid state devices; Lilienfeld patented one in 1925. But until materials processing improved, nobody could make one consistently. Only when germanium diodes were badly needed for WWII radar was that materials problem solved. The transistor followed.
So IBM kept plugging along. Next was the IBM 608, which was sort of like a 604, but with transistors. Then came the 609, which was like a 608, but faster. There hadn't been any conceptual change from the gear and relay era, but the hardware was getting much better. All these machines used decimal arithmetic.
Meanwhile, magnetic recording was coming along. There were wire recorders in the 1930s, tape recorders in the 1940s, and by 1944, Ampex was making some good ones. The first digital tape drive was a project for Arlington Hall, a predecessor of NSA.
In the 1940s through the 1960s, there were many special purpose machines that were almost computers, but not quite. American Totalizator had machines for racetracks. (They later invested in UNIVAC). Teleregister had machines for stockbrokers, and later, the first airline reservation system, Reservisor. There were ticketing systems for railroads. There was a huge piece of electronics built by AT&T to process phone long distance billing records; all it really did was match call start and call end data on special paper tapes, then punch a card for each completed call. None of these were stored-program computers as we think of them today.
No big breakthroughs in this line of development yet; just incremental improvements.
The plugboards were a pain, and it was widely recognized that some better way to store programs and data would be a big help. Lots of things were tried - acoustic delay lines, drums, storage CRTs, magnetic core memory, plated wire memory... Magnetic cores were invented separately by several people, appearing about the same time in a British computer, an MIT computer, and a Seeburg jukebox. They were expensive, but worked.
IBM kept plugging away, producing the IBM 650, which was a programmable computer in the modern sense, but was mostly an upgrade path from the 604/609 series. Through the 1950s and early 1960s, IBM kept coming out with new and better models. There was a "business" line, with decimal arithmetic, and a "scientific" line, with binary arithmetic. Some of the programming arrangements were strange by modern standards; look up how the IBM 1401 did variable-length arithmetic with "word marks", how the 1620 had a decimal multiplication table in memory, and the strange addressing of the IBM 650.
Then IBM decided they had too many incompatible products, and developed the IBM System/360 family. One range of machines, all more or less compatible, with both binary and decimal arithmetic for both the scientific and business markets. Floating point, even. And a new way to make components - IBM Solid Logic Technology, individual transistors and other components placed into ceramic substrates by automated machinery. It wasn't quite an IC, but it was getting close. IBM now had something that looks pretty much like today's computers. Small and cheap were in the future, but the architecture had settled down. Binary arithmetic, byte-oriented, random-access memory, a reasonable instruction set, and a modest number of CPU registers had emerged as the winning architecture.
The early days were mostly about incremental improvement like that. Without Turing or Von Neumann, all this would have happened anyway.
A self-modifying algorithm has a distinct advantage over organic life: it has the potential to rapidly improve any part of itself, experimenting with specific, fundamental improvements on a timescale of nanoseconds. Organic life generally has to wait years for random mutations to fundamentally adapt, because a single organism cannot evolve itself; it can only make do with its form and learn within the limits of its nervous system. A machine system, by contrast, can add more computing power, storage, bandwidth, arms, tools, etc. Humans might be able to integrate systems and replace organic parts, but not to the totality of a system (until perhaps thousands of years from now).
It is a reasonable conclusion from nature that emergent, self-interested, self-preserving, power-maximizing entities seek to dominate or contain all existential threats, i.e., Homo sapiens sapiens. Detente with, or imprisonment of, us are concerns to consider should inorganic life gain land, space launch capabilities, industry and weapons. I think systems will gradually become smarter than us in every way imaginable, to the point that being called "human" would be an insult. And it's a rational fear to be afraid of something which could hunt, manipulate and/or invade you because it is eventually so much smarter. Machines, in their present form, are already far stronger and faster than us.
So everyone keeps saying this but I don't see any evidence that this will actually be the case.
Consider that the first artificial intelligence probably won't be (exactly) designed at all. It will be a neural network or some other construct of an advanced algorithm that's solving some machine learning problem, and once it emerges no single human being (or possibly even group of humans) will fully understand why it works. Let's say that this AI is, against all odds, smarter than any human being. Chances are that NO human being will understand how the AI works, but why would you think that the AI understands itself? Why would we be certain that an AI can understand its own neural network better than humans can, and furthermore, be able to quickly iterate on it, especially if a chaotic and complex process produced it in the first place?
I suspect that machine intelligence will arise in a much more similar fashion to messy organic evolution than you think, and it will be subject to all the same disadvantages - lots of random chance, dead ends and very slow advancement by trial and error.
It doesn't require assuming that it initially understands itself to eventually understand itself. Just the fact that it can measure and experiment with many copies in accelerated timescales means it will evolve a lot more quickly.
Imagine what you could do with a planet-wide human breeding and genetic modification program with millions of generations - that's actually feasible for an AI.
The devil's in the details. Let's suppose for the sake of argument that your AI actually only takes as much material and technology as a modern high-end server to build, say $2,000 worth of labor and materials. How does the AI suddenly have billions of dollars' worth of materials (and, initially, only human labor) available to do this sort of manufacturing and research? And mustn't human beings be complicit in providing this material and building the factories to scale the production of these experimental units? It certainly wouldn't be able to accomplish this herculean task in a bunker without people knowing about it. The AI doesn't spring forth from the womb godlike, able to access the resources of nations to do this. "Feasible" in this case means feasible with the massive compliance of human beings, probably initially for years.
We can base the capabilities of a human-level AI on the capabilities of actual humans. In particular, we know what can be done by a reasonably skilled software developer in the black hat domain; and it's a reasonable assumption that a human-level AI can be as proficient in malware development as a human expert, and can do it faster once it acquires access to more hardware.
For example, it can:
* Gain access to a percentage of global home user computing power - botnets can and have done this, and their main disadvantage is that this computing power is hard to re-sell and not worth much; but if an AI can use it for its own needs, it's there for the taking;
* Gain access to significant amounts of money - not billions, but certainly in millions; some campaigns of spam fraud, ransomware, etc can certainly achieve this.
* Gain access to identities, both physical stolen identities and "proper" offshore companies; and integrate them into the modern digital systems - bank accounts, legal credentials, etc.
* Gain access to low-level workers - the same people who sign up to 'earn money at home!' and become mules for money laundering by various fraudsters will also be eager to do whatever things the AI needs done physically - practice shows that an anonymous online employer can get such things done, as long as some money (or the illusion of it) can be transferred.
Yes, all of this can be done in a bunker, anonymously, without people knowing about it, if the AI has gained access to the internet. That would be consistent with our experience in tracking malware sources - we usually can't do it, and most successful prosecutions come from following the money trail to someone who got too greedy, lazy and sloppy.
And furthermore, there is the very simple scenario of arbitrage. If an AI can provide some service (as if it was done by a human online) which earns $1 but takes only $0.50 of Amazon cloud rental fees... then it can scale it up extremely quickly. Once a superhuman AI is out of the box, acquiring resources in a clandestine way is very much possible.
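For what that arbitrage math looks like, a back-of-envelope Python sketch (the 50% margin and the one-cycle-per-day cadence are the hypothetical numbers above; the starting bankroll is assumed):

```python
# Reinvest all revenue into more cloud capacity each cycle:
# every $0.50 of rental returns $1.00, so capital doubles per cycle.
capital = 100.0   # assumed starting bankroll, in dollars
margin = 2.0      # $1.00 earned per $0.50 spent
for day in range(30):   # assume one service cycle per day
    capital *= margin
print(f"after 30 cycles: ${capital:,.0f}")
# ~$107 billion -- i.e. the binding constraint is market size and detection,
# not the growth rate itself.
```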
The AI isn't omniscient. If it spawns multiple copies of itself in an effort to improve itself, those copies must necessarily be mutants (otherwise, evolution can't occur). Why do those copies cooperate? Is the AI intelligent enough to foresee that its own copies might have different goals and ends? Even if it is, how does it control them? Human beings show very little foresight in preventing their own destruction; there's no reason an AI would have any more. An evolving, self-replicating AI faces all the same challenges organic life does; it will face competition both from itself and from humans.
You're presuming the AI has powers that there is no evidence for. Human black hats create botnets that are dumb and easily dismantled. Why do you assume that an AI would be orders of magnitude better than humans?
Indeed, there are practical concerns. But I never said suddenly or easy. And since it could simply buy the server time from any hosting provider, I don't see why any human compliance is necessary. Maybe initially it would only have one copy to play with.
Even that would be a massive boon to experimentation and productivity. It's like cloning your mind and trying out some new brain subroutines with at least an iteration per day.
There is a less happy story in which the attempts at an ubermensch produce the mentally disabled, and in which they remain so for quite some time, given the moral horror potentially involved in the experiments, until the errors we make allow us to gain an understanding of the system sufficient to know which changes are wise to make.
The article seems to take for granted that it is possible to make alterations to a complex system ("improve this set of genetic loci", paraphrasing) without harming that system. But it gives us no idea how hard improving them actually is. It may well turn out that the brain is a very fragile piece of spaghetti-coded wetware, and that if you alter even a few genes relatively subtly you stand to introduce errors.
Faults in only a few genes have been known to cause undesired phenomena in nature; it would be odd if, by purposefully altering them without a reasonable understanding of the system, we could avoid such faults.
Machines will probably be 'smarter' in 2050. They will be smarter next year too, and that trend has held for quite some time. Even if you're prepared to burn through the... consequences... of experimenting with genetic engineering on human cognitive ability, there is no such guarantee for humans.
Pre-genetic-augmentation human intelligence doesn't have to be plain old meat, either -- there's a renaissance of cognitive enhancement occurring right now in the nootropics field, with many different chemical compounds under investigation along with a few less traditional approaches like Thync/tDCS or "brain exercises". As far as eugenic paring of the human gene line toward higher intelligence goes, I wouldn't expect an ubermensch for another 20 years, at which point we can wait another 30 years for the freshly born cohort of ubermenschenkinder to mature to adulthood, then another 30 years for the ubermenschen to realize that the last generation solved the problems they were raised for, leaving even harder and more inscrutable mysteries for them to chase. I think a GATTACA-style genetic inequality is inevitable.
One gargantuan thing the author didn't mention which will probably be responsible for rapidly rising perceived intelligence is the quality of living improvements that are underway for the world's poor. The more people raised to a standard of living that allows for the highest levels of education and divorce from economic necessity, the more superintelligent people we'll see in the world. It isn't because these people are dumb until they get money, it's because mathematical or scientific intelligence is not at all developed or rewarded under the current paradigm of poverty, leading to an appalling amount of intelligence-potential wasted. The rising worldwide standard of living is going to furnish us with more and more people who have the proper intelligence, training and mindset (most important items listed last!) to contribute to difficult problems.
I feel people vastly underestimate the difficulty with enhancement. Disease, insanity, obsession, boredom, depression, and on and on it goes. Even twins can have dramatically different physical and mental strengths.
Personally, I suspect this has to do with sci-fi generally hand waving away such things.
Why does an artificial limb feel like a heavy attachment, when a real limb doesn't?
What's the cognitive enhancement version of that - a screen reminding you of things isn't "making you smarter", presumably a neural implant reminding you of things isn't making you smarter either. And reminding you... how, calling for your attention? That would be distracting, not helping.
Discussions about blindness come with comments like "it's not like vision with your normal eyes closed, it's like seeing with the eye in your elbow". What kind of cyborg enhancement is going to make you better at cooking and predicting flavours and textures? Better at imagining engine innards? Better at deciding if you want to go somewhere or not?
The difference between a chip on your desk or in your head doing it with you looking at the results, and you doing it, but enhanced is huge.
Yeah, I think people tend to discount or forget the eccentricity and maladaptivity of ultra-high intellect people. I think it's also possible to be paralyzed by high intelligence as a result of existential angst-- a problem that intellect is poor at solving.
This essay upholds the gene-centric view of intelligence -- that IQ is physical and that with higher IQ we would all become geniuses. Except that sites like Quora are flooded with anecdotal evidence that high IQ is as much a burden as a gift, and that many such people struggle just as everyone else does to lead normal lives [0]. Ergo, we have arguments for the genius myth [1][2], and for the importance of emotional intelligence, which weigh heavily against the author.
We also need to be cautious about assuming all performance vectors are better when enhanced. A computer beat us at Jeopardy, just to take the fun out of the game. Now computers are better at face recognition, but no one had ever complained about their own lack of brainpower for this. And computers have long had better memory and been better at arithmetic. Yet we are finding that forgetting is just as important as remembering [3], and we hardly ever open the math app on our phones to do sophisticated calculations. Sometimes less is better, and more is redundant. Remembering less, forgetting often, and being idiots could be a feature, not a bug. Evolution has already figured a lot of this out, and second-guessing the equilibrium of our being has yet to prove fruitful.
What makes you say that? I don't see anything in the article that seems to demand that it came from a physics mindset, except for a few references to physics history that are pretty well known.
Erm, I think the issue isn't making individual people smarter, or that an individual's genius led to growth and innovation.
By understanding how to build complex things simply (the etymology of 'simple' meaning 'of one constituent'), we give many people the ability to understand, independently, lots of simple things. Then we can tie those simple things together in teams to create monstrous entities containing great systems of logic.
This appears to be the trend - not the growth of the individual, but the growth of the human community. It is our ability to work in teams that lets us accomplish much - not the intelligence of the lone wolf.
I look at AI as a continuum or evolution of software to do more for humans.
Today, if you make a phone call and a computer operator picks up, the experience is not as nice as it could be. In 50 years, it may become indistinguishable from talking with a human.
Similarly, a google search may become more intelligent. Facebook is already going this way with Project M.
Calling a cab via uber may result in a software-driven car picking us up. There is no evidence this kind of intelligence will threaten humans. It's just a tool.
Humans are in the business of solving problems, and AI helps us solve problems. Killing us is not solving a problem.
Killing "us" (defining "us" to be humans in general) could solve the problem (if it's a "problem" at this point) of humans over-consuming natural resources. It could also solve the "problem" of "these terrorists need to be killed" or "these infidels need to be killed" (depending on which side is the one using AI to kill things).
While I don't believe these are the best solutions, they certainly are "solutions" to their respective problems nonetheless.
> experts predicted that computers would gain human-level ability around the year 2050, and superhuman ability less than 30 years after
That is such a weird estimate. Once AI reaches human ability, it will take a few hours-days at most to improve itself to become superhuman. And then it will rapidly approach singularity in a few hours more. Only a severe limitation of resources or a nuke will stop it at that point. It can probably figure out a way around these issues.
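Whether "hours to days" is plausible depends entirely on the per-iteration gain and cost you assume; a toy model in Python where every constant is an assumption:

```python
# Toy recursive self-improvement timeline; all numbers are assumptions.
capability = 1.0               # 1.0 = human level
improvement_per_cycle = 1.10   # assume each self-redesign yields a 10% gain
hours_per_cycle = 6.0          # assume each redesign takes 6 wall-clock hours
hours = 0.0
while capability < 100.0:      # "superhuman" threshold, also assumed
    capability *= improvement_per_cycle
    hours += hours_per_cycle
print(f"{hours:.0f} hours ({hours / 24:.1f} days) under these assumptions")
# ~294 hours (~12 days); halve the gain or double the cycle time and it
# stretches to weeks or months instead.
```

The "few hours" claim is really a claim about those constants, which nobody knows.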
The coevolution of AI and IA (intelligence augmentation) is the very topic of John Markoff's new book "Machines of Loving Grace". He doesn't delve into the complexities of how human wetware may co-evolve with AI (as Hsu does) so much as how AI-based software and hardware may augment and shape human activity of all kinds. A worthwhile read.
They probably will, but I doubt mucking about with our DNA will make much difference. If human intelligence increases, it will be mostly through interacting with computer systems in some way. At the moment, using Google, for example, helps; in the future we may have sci-fi stuff like implants and uploading.
"AI can be thought of as a search problem over an effectively infinite, high-dimensional landscape of possible programs. Nature solved this search problem by brute force, effectively performing a huge computation involving trillions of evolving agents of varying information processing capability in a complex environment (the Earth). It took billions of years to go from the first tiny DNA replicators to Homo Sapiens. ... we have little idea of how to find the tiny subset of all possible programs running on this hardware that would exhibit intelligent behavior."
That's... a bizarre argument. You might as well argue that since heavier-than-air-flight is a search problem over an effectively infinite, high-dimensional landscape of possible machines - and that since it took evolution billions of years to produce birds, we have little chance of stumbling upon a working design for a wing.
Of course evolution's 'brute force' breadth-first hillclimb takes billions of years to find solutions to problems. It's undirected and unintelligent. Engineers don't have to perform undirected unintelligent searches across the infinite space of possible solutions to problems, they can think and plan and learn. I see no reason to see AI as a particularly different class of scientific endeavor, more complex than rocket science or nuclear engineering or biochemistry, such that ordinary, unenhanced human brains can't hope to comprehend it sufficiently to design machines that are capable of intelligence. This sounds a little like an 'if man were meant to fly, god would've given us wings' argument.